November 30, 2006
The Box
Welcome back. Now, while the game did not have a lot of spit and polish (no life meter, shots originating from the center of the ships instead of the nose, a horrid UI), it was still fun, even though it was just the original game of "Asteroids" flipped inside out.
This is the kind of thing that can spawn a successful game...taking another game and reversing the roles of the protagonists. However, it doesn't often happen in our industry. Why?
We try to pigeonhole our games into these set categories. (FPS, run-and-gun, stop-and-pop, RTS, RPS, RPG, MMO, match-3, card battle, top-down shooter, etc.) Mashups and reverses rarely fit into such narrow categories, but they can be so fun.
Even simple non-game time-wasters like "Kitten Cannon," with the infringetastic Daft Punk music, are difficult to categorize. (Yes, it isn't a real game...but it's damn fun.)
So the industry insists on pigeonholing games into set categories so they can be marketed, yet these uncategorizable games are fun. How do we get them out the door?
We have at least three choices. We can go full-bore with a full development budget and hope that when we ship, the public happens to be in the mood for feline artillery or rock-beats-ship. Or we can go cheap and build the fun game with the gameplay only and no assets to speak of, make no money because people don't like paying for something that looks like crap, and then watch a competitor slap a fresh coat of paint on our idea and have it take off.
Think outside the box for a second...what do you think the third option could be?
(And yes, while I did not take the cat picture, I captioned it. It seemed appropriate. You can get a metric litterbox-ton of cat pictures, including the source for the one I used, here.)
Closed Blog Now Live
Other QA Blogs To Check Out
For languages, look no further than Cory Smith over at AddressOf.com. Like me, Cory is a tireless advocate of Visual Basic. It's not a kid's language anymore, and hasn't been for quite some time.
For development platforms, check out Managed DirectX and XNA with Andy over at TheZBuffer.com. Performant, stable, and it makes it very easy to eliminate several common types of errors.
For other QA bloggers, you've got a choice. On the app side, check out The Braidy Tester. On the games side, GameQABlog. In the middle, check out Sam Kalman.
For people who think I'm full of shit, check out Francesco Poli.
Tell them "the future industry pariah who dared to open his mouth about that which should not be mentioned" sent you.
December Agenda
Sometime next week, I'm going to be posting part 4 of 5 of the Automating Games QA series. (Yes, I split part 4 in two.) Both parts will be posted prior to Christmas.
Beyond that, I intend to take a break from the blog for most of December. With XNA Game Studio Express Edition coming out December 11, my 12-year anniversary shortly after that, Christmas shortly after that, and a week off in there as well, my hope is to spend time reconnecting with my family, focusing on my work, and planning out the next year...well, as much as I can.
Oh, and the observant among you may have noticed some additional changes made to my blog today. The changes were made as an additional step towards integrating this blog into my primary site.
Apologies to Sony
Separation of Topics
Industry anecdotes, testing tips and discussion, and discussion about products will remain on this blog.
However, don't expect the removal of one topic to affect how far I go on these other topics. As this recent dust-up proves, people are not used to straight talk coming from this industry. Everyone assumes that we have an agenda, so we must be lying our asses off. One thing that caused issues for "SiN Episodes" was that we promised 4-5 hours of gameplay, so everyone assumed that it meant 2-3 hours because of "time inflation." Right now, our average playtime is 4h57m, so I'd say we were dead on.
I'm a very straightforward person. I do my best to say things as they are. While it may drive PR departments insane when I open my mouth, I'd be doing a disservice to myself and to quality assurance if I toned down my words, omitted more than was legally necessary, or intentionally misled people.
And as for an agenda, I do have one. My agenda is to bring quality assurance out of the basement and into the light. QA has become an army of disposable temps in this industry, and is seen as the invisible enemy of most development teams and the automatic scapegoat for most customers when something goes wrong. This perception will only fester and grow if nothing is said or done about it.
November 29, 2006
The Wrong Comparison
If I'm developing a game for the 360 and for the PS3 at the same time, I'm going to use the same assets. I'm going to be using the same shaders where possible. I'm not going to bust ass to uprez any of my assets. If anything, I'm going to drop the resolution of assets to get them to fit in memory or on the media if there is a problem. But if they're being developed at the same time, the assets are being tuned to the lowest common denominator as part of their creation.
This comparison is like putting the same ingredients into two separate sausages, cooking them separately, then comparing the tastes. What's the point?
Open Letter To QA
To all developers: QA should be an integral part of any development process, not an afterthought. This doesn't just mean developer testing (unit tests, integration tests, etc.). This doesn't mean the certification process by the platform holder. This means real testing with real testers. Playtesting should not be how you find bugs. Shipping the product should not be how you find bugs. There are people out there who excel at finding these types of problems before they pound your review scores to dust...get them, keep them happy, and put them to work. Most importantly, listen to them. Testing without action is masturbation.
To Sony QA: I realize that your staffing structure is a direct result of cost-cutting measures. However, several people in your in-house development houses joke about the bugs they receive. A big part of the reason they get laughable bugs is that when you're bringing that many people on for such a short period of time, the quality of the training the testers receive suffers, as does the quality of their bugs. Test leads do what they can to keep bad bugs from getting through, but there are only so many hours in the day, and the longer the hours your testers work, the more items slip through their fingers.
If you want to adjust the perception, bite the bullet. Hire great testers, bring them on full-time, work them a reasonable number of hours, pay the benefits. It takes time to change a culture, but change has to start somewhere. A defeatist attitude like "the 5% rule" I was told about only proves the culture's point. (And if testing is going on from day 1, 5% should never happen.)
To the Press: A lot of people place the blame for any bugs in a shipping product solely at the feet of quality assurance. Some people believe that bugs making it out are the result of QA sloppiness, or QA "not fighting hard enough" for the customer. To be honest, there are times when that is the case. However, knee-jerk accusations towards QA don't help anyone. In fact, it is reactions like that which have led many publishers to believe that since the highly-paid testers "missed this issue," they may as well employ "controller monkeys" instead. After all, they're cheaper, work longer hours, and are disposable.
And when you get an article like this, don't just take my word for it! While I stand by everything that I said, nowhere have I seen any attempts to contact Sony for a statement. Nowhere have I seen a response from Sony. The only responses I've seen have been from former Sony QA members who said, "Yep, sounds right." Please try to present a balanced viewpoint.
To my regular readers: Sorry for the distraction. I didn't think sharing my experiences would lead to such a hubbub.
SiN and USK (Part 1)
For the U.S. release, we had a couple of set behaviors for anyone who wasn't armed if gunfire went off near them or they witnessed an attack. They would run to safety, crouch and express fear. If they had no "safety zone," they would just crouch and express fear.
For the most part, our playtesters left the unarmed people alone. One person made a concerted effort to kill everyone, armed or not, but that was an anomaly.
In the final iteration of the game, there was only one civilian in the game after you received your magnum, and he is run over by a semi truck within a few moments, so very few people get a chance to try to kill him.
When we got our USK report back, one of the items discussed was that you could kill people who had expressed fear. The logic was that they felt it was inhumane to kill someone who was obviously afraid of you and was not attacking you.
At this point, we had two choices. We could take the easy road and just make civilians and unarmed enemies (like the U4 technicians) invincible. Had we done this, people who purchased the USK version would get an experience similar to other games: when they come across a civilian, they can't really do anything…the civilian is just for show. The second choice would be to make it so that civilians don't express fear to the player.
We decided to go the second route. We did this for two reasons. First, it seemed really dumb to us to have something in the game that couldn’t be shot, especially given how important gunplay was in "SiN Episodes." Second, there was one place in U4 Labs 02 where if you were unable to shoot the lab workers, the player could be blocked from progressing through the remainder of the game.
The final result for owners of the USK version is that civilians ignore threats caused by the player. They’ll still fear mutants in cutscenes, but they no longer beg for their lives before you kill them.
November 28, 2006
My Stance On OpenAL And Vista
I've got five major issues with OpenAL and Vista.
The first is that it is bypassing the HAL in Vista so we will yet again be at the mercy of the driver developers. With Vista, Microsoft has made system stability a cornerstone of the operating system. I don't like the idea of opening a backdoor to companies that can't even be bothered to fix blue-screen causing crashes in their current drivers.
The second is that while it started as a joint effort between Loki Entertainment and Creative Labs, it has turned into a vendor-driven API. Remember Glide? Vendor-driven APIs can be dangerous, especially when the vendor driving it has a stock price barely $1.50 over its 52-week low, more than $10 below its 5-year high, and is facing stiff competition on nearly every front.
Third, audio config testing efforts are going to go through the roof with OpenAL games. I go to Creative's developer forums and read things like, "Well, it works on a Creative card, but on this other card, it crashes," and the response back is, "I'm glad to hear that it works on a Creative card." Piss-poor support is what led to the creation of UAA in the first place.
Fourth, because OpenAL's functionality maps to resources available on the host sound card, there are additional issues with the API design. For example, there is no way to query the maximum number of allowed buffers. Instead, you have to try to create the buffers, and if the API doesn't cause an access violation because of a null pointer (a common occurrence), you check the return value; if the return code happens to be "that's too high," you decrement and start over...blech.
Finally, some of the titles listed may use OpenAL, but they also use Miles Sound System as a fallback for cards that don't support OpenAL. Why the hell would I want to use an audio API that doesn't work on several popular sound chipsets? If I have to license another sound library just to fill the config gaps in yours, I may as well use what I'm paying for to the fullest.
November 27, 2006
SiN and USK (Intro)
Our initial submission was denied a USK rating, which essentially meant that USK felt that BPjM would have indexed it. As a result, we had to go back to the drawing board and try to find the smallest possible changes we could make that would get the game past USK.
There were very few changes made to the USK version, and I'm going to be discussing six of those changes over the next few weeks. I'll be discussing the change itself, the reasoning behind the change, and show that there are no gameplay ramifications from the changes.
The six changes that we are going to be going over are ragdoll physics, fire, gibs, jetpack deaths, civilians/unarmed characters and applied forces. Saddle up, this is going to get interesting...
A few ground rules. First, I will not be posting any instructions on how to revert these changes. Second, comments linking to instructions on how to revert these changes will be deleted immediately. It's unfortunate, but these are the rules that I have to abide by in order to bring you this list.
Quality Assurance at Sony
1) I have nothing against Sony's QA department, contrary to what some reporters have said. I was commenting on the impression I got of how QA was perceived within Sony, not QA in Sony.
2) I talk about the impressions that I got seven months ago. Things may have changed, I don't know.
3) The department in question is the "last line of defense" inside Sony. From what I have been told, individual internal developers may have their own QA staffs on top of these.
4) These were my impressions, and are not necessarily the opinions of my past, present or future employers.
Sam Kalman made a post on November 22nd about a bug in Genji found by Chris Kohler, and it begs for the following story to be told.
Back in April, I was interviewed for an FPQA Manager position at Sony Computer Entertainment America's San Diego office. Sony was extremely nice. They flew me down and back first-class, took me out to lunch, etc.
Everyone I met there was a consummate professional, but there was a lot of underlying tension. I signed an NDA so I can't go into specifics, but there was talk about issues that only came up on production UMDs for PSP games, major friction between test and development teams with little to no management backing for test, little to no shared technology, extremely lax "user effect" bug metrics for determining whether or not to fix something, and a variety of other fairly hefty issues, not just from a process standpoint, but from an overall culture standpoint. Microsoft is known for giving QA a bit too much say in the products that are developed, but the feeling I got inside Sony was that QA was seen as nothing but a bunch of monkeys with controllers.
The straw that broke the camel's back came in the last hour of my interview. I was told that the way that Sony tests their games is that there are one or two test leads on a project starting at about six months out. At T-8 weeks, between 80 and 100 temporary testers are brought on to test the game for those eight weeks. That's it. This was done for financial reasons, and as a QA Manager, I would be expected to run test the same way. Obviously, I didn't feel that was a valid way of handling QA.
The following morning, I sent an E-mail to Sony removing myself from consideration for the position because I didn't feel that I could run test the way that they wanted me to.
At Microsoft, the stringent QA processes often strangle creativity. At Sony, the lax QA process allows creativity to squash quality. It's hard to walk a middle ground where QA and creativity work hand in hand, but it is a tightrope that this industry is going to have to learn to walk if it is going to succeed in the 21st century and beyond.
(Update: Welcome, visitors from Sony/Psygnosis and readers of the Escapist. Please don't take this as criticism of Sony, just of the practices as they were described to me. No company has QA perfected, and Sony has released some wonderful titles over the years. However, past success is not a guarantee of future success as this incident proves. Trust in Sony's ability to deliver is already shaken, not only from a consumer standpoint, but a developer standpoint as well. [Hell, I still haven't received my taxi fare reimbursement...]
First-party games are supposed to push the envelope with killer gameplay, crystal-clear graphics and first-rate quality. First-party games are supposed to sell not only the abilities of the console, but the promise of the platform.
Consider this a prod towards delivering the true promise of the platform: next-generation gaming for the masses. The masses don't like patching.)
No Respect...
Look closely at the linked picture, and tell me what department is missing. (Hint: Playtesting != QA)
November 25, 2006
Level 60
Today, after 16 days, 14 hours and 9 minutes /played, I finally reached level 60 with my first character.
Most of today was spent in Felwood killing Toxic Horrors. I told my wife what I was killing, but she thought I said "Toxic Whores." I riffed with that for a bit, saying, "Nah, they're over in Auberdine. Better be careful when dealing with them, though, or you'll end up with a bad case of sylphilis."
Thanks for making a game worth paying almost $17 a month for after tax, Blizzard. Looking forward to The Burning Crusade.
November 22, 2006
Runner-Up
They'll post the haiku on the GDC website shortly.
Update, 2:16p: Looks like the one haiku of mine they didn't post was the one that cost me first place. My original entries, with the omitted one bolded:
Annual absence
Has caused career turmoil
So I must attend
The keynote speeches
Lead to a renewed sense of
Purpose for our games
The featured classes
#define our purpose: Coding
Experiences
The panels bring out
The curmudgeon and the sheep
To battle it out
Casual Games Summit
Connects the hardcore with mom
To bring fun to all
IGF: Where next
Year's ripped-off gameplay is seen
Today in the flesh
GDC Awards:
We celebrate each other
But only one wins
So many classes...
So much information, but
Why ignore testing?
November 20, 2006
Escape of the Bird
Yvonne has a small cage that she puts her birds in so she can put them out on the balcony and they can enjoy the weather and interact with other birds. Last year, Elmo escaped and she was devastated. Last Christmas, I gave her the money to go get herself another bird, and she got Chris.
These two events have been really hard on Yvonne because she hasn't really been able to make any friends down here in Dallas. Elmo was the last physical link she had with Utah, and now with her out of work because of her fibromyalgia and really just being stuck in the house, she's going to be alone again.
I guess it's good that I'm going to be home for a 4-day weekend this weekend...I can help her cope with this loss.
November 17, 2006
Automating Games QA (part 3)
Most games nowadays have some sort of customization system, be it your character, your "crib," your vehicle, etc. Testing the entire gamut of combinations by hand can get to the point where it is flat-out impossible in the time available.
For example, let's say that you have a standard human avatar with a customizable shirt and pants. There are 10 different shirts available and 10 different pants available. That is 100 combinations right there. Let's add 10 hairstyles. That bumps it up to 1,000 combinations. Add 10 different fleshtones...10,000 combinations. Add a second gender...20,000 combinations. Add 5 different faces per gender...100,000 combinations. It adds up quickly.
Combination testing is designed to hit the two simplest types of bugs: single-value bugs (one setting is broken all on its own) and two-value bugs (a specific pair of settings breaks in combination).
Now, if you look at the example above, while there are 100,000 combinations, there are only a few individual settings: 10 shirts, 10 pants, 10 hairstyles and 5 faces per gender (leaving fleshtones aside). That can mean either 40 settings if you assume everything but the faces is shared between the genders, or 70 settings if everything is separate between the genders. An automation script that individually cycles each of these single-value settings can quickly help eliminate bad items, and if the script screencaps each item, manual verification of item appearance will go fairly quickly.
One last thing: while handling the single-item tests, check the amount of memory that each item uses. A good additional test is to set all of your settings to their most memory-intensive setting and play the game that way to check for borderline out-of-memory conditions.
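If it helps to see it, here's roughly what that single-value sweep looks like. This is a throwaway sketch, not production code: SendCommand and TakeScreenshot stand in for whatever hooks your game actually exposes, and the set_customization command is made up.

    using System;
    using System.Collections.Generic;

    // Hypothetical single-value sweep: force every option of every customization
    // setting one at a time and screenshot it for a quick manual eyeball pass.
    static class SingleValueSweep
    {
        public static void Run(IDictionary<string, string[]> settings,
                               Action<string> sendCommand,
                               Action<string> takeScreenshot)
        {
            foreach (KeyValuePair<string, string[]> setting in settings)
            {
                foreach (string option in setting.Value)
                {
                    // e.g. "set_customization shirt shirt_07"
                    sendCommand("set_customization " + setting.Key + " " + option);
                    takeScreenshot(setting.Key + "_" + option + ".png");
                }
            }
        }
    }

With the lists above, that's a few dozen screenshots to flip through instead of a hundred thousand combinations to dress up by hand.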
The second most-common type of bug comes from the interaction of two values. For example, you may have a hairstyle that clips through the geometry of a certain shirt. Even with automation, you're still going to have to manually verify the screenshots, so you want to minimize the number of shots you are looking at. This is where all-pairs testing (also called pairwise testing) comes in; there is a very in-depth example here.
This gets very easy if your combination lists are data-driven. Feed your lists into a tool like ALLPAIRS from James Bach, then pass the generated list of combinations into your framework and have at it.
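As a sketch of that last step: assuming the tool's output lands in a tab-delimited file with a header row of setting names and one combination per row, and reusing the same made-up hooks from the single-value sweep, the driver is about this much code.

    using System;
    using System.IO;

    // Hypothetical driver that replays a pairwise combination list through the
    // game and screenshots each combination for manual review.
    static class PairwiseSweep
    {
        public static void Run(string combinationFile,
                               Action<string> sendCommand,
                               Action<string> takeScreenshot)
        {
            string[] lines = File.ReadAllLines(combinationFile);
            string[] settingNames = lines[0].Split('\t');   // header row: setting names

            for (int row = 1; row < lines.Length; row++)
            {
                if (lines[row].Length == 0) continue;       // skip blank trailing lines

                string[] options = lines[row].Split('\t');
                for (int col = 0; col < settingNames.Length; col++)
                {
                    sendCommand("set_customization " + settingNames[col] + " " + options[col]);
                }
                takeScreenshot("combo_" + row + ".png");
            }
        }
    }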
Automation testing is all about getting the grunt work done by the machine so you can focus on the non-automatable tasks. Always be on the lookout for tasks like this that you can automate.
The next games QA automation column will be on automating content testing. It's probably the hardest type of automation, and it will not be a good fit for most studios, which is why I left it for the end. Stay tuned.
November 16, 2006
Game-Themed Wedding
Evidently, you make them play a video game in order to get married.
Devious.
Halfway There...
What a pain in the ass.
Cat-Scratch Fever
Cute cats.
That is all.
November 15, 2006
Pyramid Head
Silent Hill, Silent Hill,
Doing the things that a town will,
What's it like is not important
Silent Hill
Is a town or is it in hell?
When darkness comes, does it feel well,
Or does the fog come from a well?
Nobody knows, Silent Hill...
Pyramid Head, Pyramid Head
Pyramid Head hates Silent Hill
They have a fight, nobody wins
Pyramid Head
Konami Man, Konami Man
Designs the entire universe man
Likes to scare the common man
Konami Man
The clock puzzle has a minute hand
An hour hand and a second hand
When they lock, the sanity ends
Powerful man, Konami Man
Sunderland, Sunderland
"Killed his wife with a pillow" man
Lives his life in a foggy land
Sunderland
He is depressed, and he is a mess,
and feels totally worthless.
Silent Hill will judge the man,
Try to redeem Sunderland.
Pyramid Head, Pyramid Head
Pyramid Head judges Sunderland
They have a fight, Pyramid wins
Pyramid Head
November 14, 2006
Right/Wrong vs. Right/Left
There is a lot of potential if you come into the games industry believing that you can fight the good fight and win the war against poor quality crapware, but I've found that people who keep that attitude burn out fairly quickly. It isn't that it's a wrong attitude, but the way that the games industry works, "right vs. wrong" just isn't...right.
I bring this up because I like making fun of commercials. Recently, they've been showing a commercial for DVD boxsets for the old Superman TV series and the last season of Lois and Clark. In the commercial, Dean Cain as Superman states in a matter-of-fact fashion, "I stand for what is right!" I always reply saying, "I stand for what is left!" For the most part, my reply is a joke, but I started thinking about it, and it does actually apply to how people tend to survive in QA.
Everyone on a team can pretty much agree about the "right" bugs to fix. Everyone on the team can also agree on the "wrong" bugs to fix: the ones that will result in additional instability, the ones that nobody will ever see, the ones that only occur if you noclip out of the world, the stupid shitty bugs that never work. However, between the "right" and the "wrong" bugs are the bugs in the grey area...the bugs that are left.
As testers, we stand for the bugs that are left. We fight for the bugs that aren't slam-dunk "must fix," but will have a serious impact on our customers. We wade into the grey, and escort our issues into the light.
Shifting from a "right/wrong" mentality to a "right/left" mentality isn't easy, but it makes survival in this industry so much easier.
"We" vs. "IFQ"
To check the "we" index, strike up a conversation with a person about the company that they are at, and listen to how they verbally refer to their company.
If they say "we" a lot, they're happy with the company itself, and they feel like they belong. If they say "they" a lot, they're unhappy with the company, but still feel like they belong. If they say "I" a lot, they're happy with the company, but feel like they're alone in what they do. Finally, if they say the company's name a lot, they're not only unhappy with work, but they're extremely unhappy.
This works because the person who is speaking is completely unaware that they are doing it. It falls apart when they're writing E-mails, unfortunately, but in conversation, it can be an invaluable tool for gauging a person's state of mind about their employment.
Now, there is a way that an employee can roughly evaluate themselves on this scale using what is called the "IFQ" index, but there is a catch. People who thought they were unhappy because of work have started feeling optimistic; others who have tried this method and thought they were happy started looking for other jobs. If you are open to that sort of experience, then advance to the next paragraph. Otherwise, see you next post.
To determine your "IFQ" index, go home. Kick everyone out of the house for ten to fifteen minutes. Go into the restroom and look straight into the mirror. Finally, say in the most convincing way that you can, "I fucking quit." Now think about how you feel after saying that.
If a pit has formed in your stomach and/or you feel slightly ill, then you would be miserable if you left where you are. If you aren't happy at work but you feel ill at the thought of leaving, that generally means that there is a specific something that is dragging you down and if you can find and isolate it, you'll enjoy work a lot more.
If you felt like a major weight has been lifted off your shoulders, you have two choices. The first choice is you can start looking for another job. Something has pissed you off to the point where even saying "you quit" has made you happy. The other option is to try to figure out what you want to get away from and try to correct those issues.
If you felt no change, sorry, no answer at this time...try again later.
The Feed Solution
Okay, feeds have been broken for most of the last 48 hours for most people. The reason that they were broken is that Blogger used to put the feeds in my root folder, but now they're placed in my blog folder. In addition, Blogger in Beta no longer creates an RSS feed by default...only Atom.
So the issue I had was two-fold. First, since I'm on a managed hosting solution, how do I redirect people from [root]/atom.xml to [root]/blog/atom.xml? Second, how do I get an RSS feed for people who can't use Atom?
I can't use redirect pages. Any feed reader worth its salt won't execute JavaScript or meta tags in an HTML page it receives in place of a proper XML document.
What I ended up doing for the short-term was deleting atom.xml and rss.xml from the root of my site and creating subdirectories there. Those subdirectories are actually ASP.NET applications whose default pages redirect people to the blog using Response.Redirect.
Now when you ask for /atom.xml, the site redirects you to /atom.xml/Default.aspx, which then redirects you to /blog/atom.xml. It's inefficient, but it takes advantage of the HTTP standard to get the job done.
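For the curious, the default page in the atom.xml subdirectory doesn't need to be anything fancier than this bare-bones sketch (the target path is the one described above):

    <%@ Page Language="C#" %>
    <script runat="server">
        void Page_Load(object sender, EventArgs e)
        {
            // Response.Redirect sends the HTTP 302 that points the reader at the real feed.
            Response.Redirect("/blog/atom.xml");
        }
    </script>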
Now I need to get back to work on finishing the Atom/RSS conversion helper, and find out why Blogger thinks I'm a spam blog...
Feed Update
Update 1: Technorati fixed, now working on Bloglines.
Update 2: Submitted a trouble ticket to Bloglines. Old blog link doesn't update anymore, REALLY old blog link hasn't updated in awhile. New feed is working, but everything is highly truncated. Someone else handles my feed on LiveJournal, so I can't fix that one.
Update 3: Everything should now be fixed via a hack.
November 13, 2006
Feed Issues
I'm working on a fix. I appreciate your patience.
Automating Games QA (Part 2)
This post covers game flow testing. The role of game flow testing is to verify that the appropriate content loads and that the proper branches are followed. This is generally more complex than simple UI automation testing and can require additional hooks inside the game itself.
The basic flow of a game flow automation script is to start the game from a known state (generally "New Game") and pass level completion/failure states to the game so that the game will progress to the next state, and so the game will record which state it is in at each step.
For example, let's say that you are working on a linear first-person shooter with no inter-level transitions. (Think Doom 1.) Your script starts a new game and verifies that you are in E1M1. Your script then passes in a command to tell the game that you have beaten E1M1. It then verifies that E1M2 loads. If you have a failure case or an alternate test, you'd handle that in a separate script.
That's the simple answer in a nutshell. You aren't just loading each level, you're attempting to verify the links between the levels as well. If you have a data-driven level flow, you can often automate the creation of the scripts from that dataset.
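Here's roughly what one of those generated scripts boils down to for a strictly linear game. Everything in it is a placeholder sketch: getmap, completemap and newgame stand in for whatever commands and queries your game hookup actually supports, and a real script would poll/sleep until each load finished instead of asking immediately.

    using System;

    // Hypothetical delegates standing in for the game hookup described in part 1.
    delegate string GameQuery(string query);      // e.g. query("getmap") -> "E1M1"
    delegate void GameCommand(string command);    // e.g. command("completemap")

    // Walk a linear level flow: start a new game, then fake "level beaten" and
    // verify that each expected level actually loads, in order.
    static class GameFlowCheck
    {
        public static bool Run(string[] expectedLevels, GameQuery query, GameCommand command)
        {
            command("newgame");
            for (int i = 0; i < expectedLevels.Length; i++)
            {
                string loaded = query("getmap");
                if (loaded != expectedLevels[i])
                {
                    Console.WriteLine("FAIL: expected " + expectedLevels[i] + ", got " + loaded);
                    return false;
                }
                Console.WriteLine("PASS: " + loaded + " loaded");
                if (i < expectedLevels.Length - 1)
                {
                    command("completemap");   // tell the game the current level was beaten
                }
            }
            return true;
        }
    }

If your level flow lives in data, the expectedLevels array comes straight out of that same data, which is the whole point.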
One other nice part of this is that you're also checking for compounding issues. A lot of the time, issues won't manifest themselves in a level that is loaded anew because the issues are the result of memory fragmentation or memory leaks from a previous level. This is a nice, automated way of helping to bring those problems to the forefront.
Now, this does not remove the requirement that you still play through each path. This verifies that the links exist, not that the links can be triggered through normal gameplay. But if a link is broken in the script, you can save some time during testing by not playing through a path you already know is broken.
In the next installment, we'll be going over combination testing, and the final installment will go over content testing theory and practice.
Blogger Beta
(Note: If you are seeing this, then this is finally working.)
November 10, 2006
[Politics] Ahem...
If we are going to use the First Amendment as a shield, we shouldn't attempt to deny it to others.
November 9, 2006
[Contest] Win Free MSDN/VSTS While I Rant
The catch is that you have to say what you would do to get it.
As for me, I'm not sure what I'd do for it. I certainly wouldn't pay the overinflated single-seat license cost of $10,939 for it, which is as much as you would pay for premium dorm space at Stanford for one year. Hell, I'm lucky to have Visual Studio 2005 Standard.
I guess if I had to do one thing to get it, it would be to hack together the bastard child of SharpDevelop and XNA Game Studio Express Edition, bringing together the promise of .NET (language-independent development, platform-neutral coding) and XNA (game development for the masses) to show what .NET development really means. Just as all gamers don't speak English, not all game coders speak curly braces!
Gamedevs, cast off your unnecessary typecasts! Walk with me towards a bold future, where the language we code in is rendered irrelevant by the end result: interactive entertainment that can be created by anyone and enjoyed by anyone on the platform of their choice, be it Windows, Xbox 360, Zune, PocketPC or smartphones. Let your games live anywhere, whether or not they use Live Anywhere. Send these unnecessary restrictions into the void. The tools are there. Microsoft documented the interfaces for us to call. (Poorly, but the documentation and interfaces are there.) You can try to take away our language independence, but you'll never take away our freedom!
[XNA] Content Pipeline Pros/Cons
Pros
1. Strongly-typed loading. Each asset is serialized as a binary asset, and as a result, asset loading is not only quick, it's type-safe. The larger and more complex the asset, the better this is. (See the sketch after this list.)
2. Extensible. The content pipeline was designed to be extended in every which way possible. For most items, composite objects are going to be the name of the game because a lot of the commonly built functionality is already embedded into the framework. If you need extra functionality above and beyond what is there, for most things it will be easy enough to add it yourself.
3. (Mostly) self-configuring. It's nice to be able to add an asset and (most of the time) have the proper settings already set on the importer/converter.
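A quick illustration of the first point (the asset name here is made up): ask the ContentManager for a type and you either get exactly that type back or a clean load-time failure, never a mistyped blob of bytes.

    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.Graphics;

    // Bare-bones sketch of strongly-typed loading. "chair" names a compiled .xnb
    // asset; Load<T> deserializes it and hands back an object of the requested type.
    static class StronglyTypedLoadSketch
    {
        public static Model LoadChair(ContentManager content)
        {
            // Asking for Load<Texture2D>("chair") instead would fail at load time,
            // which is exactly the kind of error you want surfaced early.
            return content.Load<Model>("chair");
        }
    }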
Cons
1. .XNB. Whoever had the bright idea to make everything have the same extension and force you to individually rename assets inside the IDE needs to be strung up by their testicles and flogged in public. It is very common to have a model in a folder with the appropriate texture sharing the same base name (chair.fbx/chair.tga, for example), because if you don't have the files in the same folder, most modelling software freaks out over the texture paths.
2. No included packed file system. True, you can subclass the ContentManager to make your own packed file system, but the time savings from loading a strongly-typed serialized file are more than offset by slow individual file access times (due to security lookups, etc.). For small projects, this isn't that big of a deal, but once you start getting into asset counts in the 4- to 5-digit range, it matters.
3. Not team-friendly. Bear with me on this one. Converting an .FBX file to a model, or a .TGA to a texture takes little to no time, but when you have custom asset importers to handle certain items (lightmaps, pathfinding precalculations, visibility), the time it takes to process these items increases by several orders of magnitude. We aren't talking seconds...we're talking hours EACH. One of the primary purposes of any asset pipeline is that this time cost is paid one time per iteration of the asset. With the current XNA Content Pipeline, each person who gets the asset is going to have to pay the time cost unless they use source control, which leads me to...
4. Not source-control friendly. Now, Visual C# Express Edition does not support source control, but you can still use an external source control solution. The nice thing is that you can eliminate the extra time costs via source control. The downside is that to get this time savings, you have to check in not only the asset and the compiled asset, but the intermediate "temp" file as well. Then to do an iteration, you have to check out all three files, do the build, and then check them in again.
5. Documentation still fairly weak. Just try finding the information you need to write an image importer in the documentation...be prepared to delve through over a dozen help topics, and even then you'll end up working it out by trial and error.
6. Plug-ins. Don't get me wrong, I love the idea of plugins. My major issue here is that they seem to have missed one of the strengths of .NET. If I create a control in a Windows Forms project, I compile and then that control is available for use in my toolbox automatically. The IDE uses reflection on the compiled assembly and lets me use the control appropriately. If I'm writing plugins as part of my game, why do I have to then reference the plugin DLL through a setting?
7. Still haven't found a way to hack it to work outside of Visual C# Express Edition. I will find a way...
November 8, 2006
[Testing] Automating Games QA (Part 1)
However, automation testing has had a difficult time infiltrating game development houses for a few fairly hefty reasons. The first is that the number of bugs contained in the code is generally dwarfed by the number of bugs contained in the content. The second is that it is difficult, if not impossible, to automate most games: most games have an element of randomness to them, which makes it hard to determine the success or failure of a test case in an automated fashion. Finally, there is rarely, if ever, a standardized way of querying a game about the state it is in, or even of passing input to a game to trigger a response.
This isn't to say that automation is impossible in a game scenario, but automation does require an additional level of developer interaction and even imagination that other scenarios simply do not have. There are generally four areas where automation testing can be efficiently used in game development: User Interface, Game Flow, Combination and Content.
User Interface automation testing is where you are going to see the biggest initial gain from a QA standpoint, and is an excellent place to push for an automation start in any company. The goal should be that anyone in your QA department should be able to write automation test cases without much training. I'm going to describe a simple framework that you can share with your development team as a starting point.
A game UI automation framework generally consists of four separate components: the game hookup, the communication component, the use case library and the test cases themselves.
The "game hookup" is a piece of code inside the game itself that listens for commands and queries from the communication component. For example, on the Xbox, your game hookup may just be a background thread that sits and listens on the debug channel. On a PC game, it may listen on a named pipe or an IP address or some other similar item.
The "communication component" is generally going to be a COM component that sits on your PC, and is responsible for brokering communication between the use case library and the game hookup.
The "use case library" is a set of user actions in user-oriented-named subs written in VBScript. Each use case sends the appropriate commands to the communication component to execute a certain action, and requests information from the communication component to verify that an action executed correctly. This is usually jointly maintained by the developers and more technically oriented testers.
The "test cases" are the actual test cases that call the subs in the use case library to handle each individual test case. Let's hook all of these up and see how these would work and evolve over time.
You are working on an Xbox title. As a test to verify that UI automation is useful, the development team agrees to create the game hookup and communication component. The game hookup will only recognize a limited command set, "getscreen," "getcontrol," "getvalue," "nextcontrol," "prevcontrol," "cancel," and "activate." "Getscreen" returns the name of the currently active screen. "Getcontrol" returns the name of the currently active control. "Getvalue" returns the value of the current control, but can later be extended to let you query more data. "Nextcontrol" and "Prevcontrol" are simply tab-order style items, and act by sending the "up" or "down" input for the main controller. "Cancel" acts as the B button. "Activate" acts as the A button or START button. The communication component takes one argument when it is created: the name of the Xbox on the network that you want to control via automation. It has a sub for passing a command, a function for passing a query, a sub for sleeping for one tenth of a second, and a sub for restarting the console as a cold boot and launching into your game.
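Before we get to the use cases, here's roughly how small the PC flavor of the game hookup can be (the Xbox flavor would sit on the debug channel instead, as described above). The port number, the delegate, and the command handling are all placeholders for whatever your game actually wires up.

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    // The game supplies one callback that maps an incoming command
    // ("getscreen", "activate", ...) to the string it should answer with.
    public delegate string CommandHandler(string command);

    // Hypothetical PC-side game hookup: a background thread that accepts a test
    // connection and answers the line-based command set described above.
    public sealed class AutomationHookup
    {
        private readonly CommandHandler handleCommand;

        public AutomationHookup(CommandHandler handleCommand)
        {
            this.handleCommand = handleCommand;
        }

        public void Start()
        {
            Thread listener = new Thread(Listen);
            listener.IsBackground = true;   // never keep the game process alive by itself
            listener.Start();
        }

        private void Listen()
        {
            TcpListener server = new TcpListener(IPAddress.Any, 4026);   // port is arbitrary
            server.Start();
            while (true)
            {
                using (TcpClient client = server.AcceptTcpClient())
                {
                    StreamReader reader = new StreamReader(client.GetStream());
                    StreamWriter writer = new StreamWriter(client.GetStream());
                    writer.AutoFlush = true;
                    string command;
                    while ((command = reader.ReadLine()) != null)
                    {
                        // e.g. "getscreen" comes back as "main_menu"
                        writer.WriteLine(handleCommand(command.Trim().ToLowerInvariant()));
                    }
                }
            }
        }
    }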
You decide that all of your test cases are going to be designed to start from power-up. As a result, your first use case will be get to main menu. The flow at this point in development is no videos, just the "press START" screen and then the main menu. You write your use case similar to this pseudocode...
Sub GetToMainMenu(Xbox1 As TestControls.XboxControl)
    ' Cold-boot the console, then wait for the "press START" screen.
    Xbox1.Reboot
    For X = 1 To MaxTimeOut
        If Xbox1.Getscreen = "press_start" Then
            LogSuccess "At Start Screen"
            Xbox1.Activate
            ' Now wait for the main menu to load.
            For Y = 1 To MaxTimeOut
                If Xbox1.Getscreen = "main_menu" Then
                    LogSuccess "At Main Menu"
                    Exit Sub
                End If
                Xbox1.Sleep
            Next Y
            LogFailure "Could Not Get To Main Menu"
            Exit Sub
        Else
            Xbox1.Sleep
        End If
    Next X
    LogFailure "Could Not Get To Press START Screen"
End Sub

A sample test case that called this would be:
Sub TestSystemLinkMatchmaking
    Dim Xbox1 As New TestControls.XboxControl("TestXbox1")
    Dim Xbox2 As New TestControls.XboxControl("TestXbox2")
    GetToMainMenu Xbox1
    GetToMainMenu Xbox2
    ...
End Sub

As part of your nightly build process, you then hook up a couple of these scripts to be executed after the build is done and deployed to some Xbox test kits. When you come in the next morning, check the logs.
So, let's say that TestSystemLinkMatchmaking reports a failure. It's easy to open the test case and manually try the steps inside to verify the failure. If it does fail, you can even tell the developer to just run the automated test case to trigger the failure. It saves the developer time to repro and debug the problem. If your use cases are prolific enough, you can even write individually coded cases for regression testing and repros of manually found bugs.
It requires some effort to create a system like this, and changes to the UI require adjustments to the use case library. Major changes may even require changes to the test cases themselves. However, the time savings in comparison can be fairly hefty.
Obviously, this is an extremely simplified example, but hopefully it got you thinking.
In the next installment, we'll go over game flow testing.
November 7, 2006
[Humor] His First Name...Revealed
Her prescription was filled by Dr. Matthew Tran.
I wonder if he eats at Roybertito's?
[Industry] Microtransactions/Points
Microtransactions work from a business standpoint for a few reasons. First, credit card processing is expensive for merchants. For an online retailer to handle a Card-Not-Present transaction, it generally works out to about thirty cents plus three percent of the total plus a daily "processing" fee to handle all of the charges processed that day. So for a $20 charge, ninety cents go to the credit card company, plus whatever that transaction's share of the processing fee may be. It adds up.
Second, they help mitigate risk. For electronically distributed software and content, it's fairly well known and accepted that the primary loss factor is piracy...people who never bought it in the first place. The second largest loss factor, though, is the chargeback.
Let's say that I make a new game, GameX, and decide to sell it online. In order to sell it online, I take credit cards or PayPal. I decide that the best price balance for me to make money on top of the card fees and PayPal fees as well as encourage impulse purchases is $5. This means that forty-five cents out of every sale are going towards processing charges. Now, I'm targeting a niche market, and I only sell 1,000 copies. After charges (ignoring the processing fee), that means that my game cleared $4,550. (I'm excluding my bandwidth charges for the software here.)
Now, let's say that one customer who actually bought the game decides to contest this $5 charge on his credit card. (This is called "friendly fraud.") He received the game, played the game, and just decided to contest it for whatever reason. The moment it is contested, $5 (not $4.55) gets deducted from my account. Because I'm a small-volume e-retailer, I also get charged a $25 chargeback fee. I then have twelve days to challenge this charge being contested. Assuming I win, I get that money back, but I only have a 30-40% chance of that happening with electronically distributed software. If the charge is contested a second time, $30 goes away again and I have only two ways of trying to get it back. The first is to make an appeal through Visa/MasterCard. Doing that requires that I spend $150 to file the appeal, and pay them $250 to review the appeal. Obviously, that's not worth it, especially considering that retailers win their appeals less than 40% of the time. The other way is to send a collection agent after the customer, but given that most collection agents have a minimum charge of $30, that isn't worth it either.
Now, in this case, one customer wiped out the profit of over six sales. From what I hear from other e-tailers, in the above example, fifteen chargebacks are a more likely number, which reduces what I clear down to $4,100 after dispute charges. In other words, those fifteen chargebacks wiped out nearly 10% of my income. That isn't counting the manhours lost fighting the chargebacks in the first place, or other expenses necessary to keep a business going. In addition, because over 1% of my charges were charged back, I'm now seen as a "high-risk" e-tailer, and I start getting higher charges and extra charges per month.
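If you want to poke at the numbers yourself, the arithmetic above boils down to this little throwaway program (the figures are the ones from my example, not industry constants):

    using System;

    // Back-of-the-envelope chargeback math from the GameX example above.
    static class ChargebackMath
    {
        static void Main()
        {
            const double price = 5.00;                              // GameX sale price
            const int unitsSold = 1000;
            const double cardFeePerSale = 0.30 + (0.03 * price);    // $0.45
            const double chargebackCost = price + 25.00;            // lost sale + $25 fee

            double afterFees = unitsSold * (price - cardFeePerSale);        // $4,550
            double afterChargebacks = afterFees - (15 * chargebackCost);    // $4,100

            Console.WriteLine("Cleared after card fees:    {0:C}", afterFees);
            Console.WriteLine("After 15 chargebacks:       {0:C}", afterChargebacks);
            Console.WriteLine("Sales eaten per chargeback: {0:F1}",
                              chargebackCost / (price - cardFeePerSale));   // ~6.6
        }
    }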
Microtransaction services like Xbox Live Marketplace and the Wii's Virtual Console help by reducing/eliminating a lot of this risk for developers. Let's use Microsoft Points as an example.
When you spend your $20 on Microsoft Points, you get 1600 points to do with as you wish. The reason that it is 1600 instead of 2000 is that when you purchase your points, Microsoft takes their cut then. Their cut goes towards maintaining and upgrading the service, handling chargebacks, et cetera. Based on what you see above, that means that out of Microsoft's share of that 400 points, about 90 are going to the credit card companies, and potentially as much as 300 are going to chargebacks. Whatever remains after that goes to Microsoft to maintain the Points service (employees, servers, auditing, etc.).
Now the rest of this money is virtualized, and is immune from chargebacks. Now points can be utilized by services, like Windows Live Messenger, the Zune Marketplace or Xbox Live. So, let's say that I get GameX on Xbox Live Arcade. I decide on 400 points for my price point ($5 real money) and sell my 1,000 units. My end result, assuming no additional fees on my end, is $4,000. It's $100 less than I had after the chargebacks, but I didn't have to exert any manpower towards pursuing the friendly fraud. When I get that check at the end of the month, I don't need to worry about parts of it vanishing sometime in the next sixty days because of chargebacks. (The Live service itself takes a small amount of points to cover bandwidth costs for the content, but I'm excluding them because they are comparable to the bandwidth costs I excluded above.)
Now for the real upside of microtransactions. Let's say that I make a small expansion pack for GameX, say an extra 20 levels. I can drop it on Xbox Live Arcade for an obscenely low amount (say, 100 points) because I'm not having to pay the credit card processing charges, etc. For me to sell the same item outside of Live for real money and clear the same amount of money after processing fees and asshole chargebacks, I'd have to increase the cost to nearly $2. I can actually sell something on Live for one point if I choose to. There's no way I could do that with a credit card.
Services like Microsoft Points and Wii Points help eliminate a lot of the risk and headache associated with electronic distribution. As we transition to microtransactions, you're going to see some great examples (the Oblivion expansion pack on Xbox 360 for 800 points compared to $20 for the PC), some pathetic examples (*cough* horse armor *cough*), and some "what the fuck are they thinking" examples (*cough* GT:HD *cough*). Please try to be patient with us as we experiment here...we're as new to this concept as you are.
November 6, 2006
[Milestone] Torn
On the upside, I've met a lot of interesting people and had a lot of wonderful experiences as a direct result of this blog.
On the downside, there are a few opportunities that, if I take them, may reduce or eliminate my ability to blog in the near future.
On the upside, posts I've made have managed to spark debate about piracy and bring attention to issues directly related to QA in the games industry.
On the downside, most of the debate about piracy has been dominated by people whose idea of original speech is farts after two separate meals, and games QA is still stuck in a bit of a quagmire of "too much responsibility, too little authority, way too little pay."
On the upside, I'm still independent. I haven't sold out to anyone...my voice is still mine.
On the downside, my "not selling out" has actually counted against me recently.
So I am a bit torn. Over the next hundred posts, expect to see many thinly-veiled hypotheticals, a little incoherent ranting, maybe a post or two that were promised back in the 500's, and maybe some more stupid top 10 lists.
(Stupid trivia: My most ripped-off post is the Top 10 Signs You Are Dating A Tester.)
[Miscellaneous] Weekly Wrapup
I'm working on a response to Session-Based Testing. Brief summary of what I'm working towards: nice technique, but not a drop-in replacement for testing structure in an iterative gaming scenario by itself.
Microsoft released XNA Game Studio Express Edition (beta 2) last week. So far, major improvement over beta 1, but I hope that XNA GS Pro completely retools the content pipeline. While the pipeline in GSEE is ideal for individual developers, I see major problems integrating it into a team-based system. I'm working on writing a pair of custom importers at the moment (PCX files and Quake MAP files). I'll keep people posted.
[Science] The Ten-Foot Pole
First off, evidence of vestigial limbs on dolphins does exist as part of the fossil record. The evolution of dolphins has been researched on a regular basis. But that's beside the point. What I wanted to get to was this sentence...
"Remember Jacob, just because something appears in a book that does not make it true."

I dislike the sheer amount of hypocrisy contained in that sentence when taken in the context of the entire reply. "I don't want you to agree with the bulk of scientific knowledge, so I want you to rely on this translation of a 2,000-year-old book instead, but I don't want you to trust what you read in books...except this one."
Science and faith should supplement each other, not supplant each other. A belief can lead to a theory, but a belief in and of itself is not a theory. Even members of the Vatican have stepped forward and said as much.
Please, believe what you want to believe. But please be willing to submit your beliefs to the same level of scrutiny that you submit the beliefs of others to. After all, I believe that there may be a religious tenet or two about this...
If you believe the data presented by science is incorrect, then by all means, go forth and construct an alternate theory, create an experiment to prove that theory, and then share the experiment and the results with the world. After all, until several hundred years ago, it was common knowledge that the sun revolved around the earth. (Some people still believe it.) If you can back it up and your experiment survives peer review, who knows?
November 2, 2006
[Testing] Where Is Our XP?
First, some background. At Microsoft, testing has a level of power that is unheard of out in "the real world." While most test teams are kept in check by effective test leads and managers, some testers tend to abuse their power and use the bug database as a bully pulpit for design changes. On top of that, the process guides everything. There is a very rigid flow of specs, plans and test cases that have to be gone through for most items. The practical upshot of this power and process is that the test team becomes a laser-focused tool dedicated to beating the living shit out of your product and finding as many bugs as humanly possible in the short amount of time that they have.
However, there's a limitation to the Microsoft process, and it's a fairly major one...what happens if the project changes dramatically tomorrow? The Microsoft testing process is ideally suited to a waterfall development system, but does not adapt well to more iterative development methodologies. The level of prep-work that is done for a Microsoft-level testing scenario can turn out to be wasted effort after a single day of pair programming working on the core of the system during a Scrum run.
This led to the big question. We have agile development systems. We even have agile content creation systems. Where is our agile testing system, our Extreme Testing? Most testing systems are based on the work done by Kaner. What about Whittaker? While his work streamlines testing, does streamlining really make us more agile? The difficult part of all of this is that while the definition of the program can change, our duty towards the program remains the same: ensuring that it works. This does require at least some level of preparation and planning.
Personally, I shoot for more of a loose plan based on the milestone deliverables and weekly goals, and that tends to work quite well, but it still relies heavily on work done during previous weeks adding up for the end. This doesn't let me turn on a dime, but I can course correct fairly quickly. It's not like the pure MS methodology, where redirecting test is like turning a luxury liner so that it misses an iceberg...
Since I have a fairly QA-oriented audience, I pose the question to you...How do you keep your test department agile in the face of Extreme Programming methodologies?