2007-08-19

Textures, Colors, and Bits

There's a lot of talk these days about how much space is needed to store texture data for the latest generation of game consoles. While the Wii and the Xbox 360 have stuck with the tried and true DVD format (4.7GB single layer, 8.5GB dual layer), the PS3 has gone with BluRay, which provides 25GB single layer and 50GB dual layer. Some developers are claiming to be maxing out the potential of the DVD (not to be confused with HD-DVD), but I'm willing to bet that about 98% of the time this is simply due to poor compression and color palette use (note: I'm certainly no expert on all of this... just thinking out loud here).

Even though we now have a small but growing number of TVs and monitors that can support beyond 24bit color, this does not mean every single texture needs to be stored with such a high color palette. Beyond the red, green, and blue channels, you'll often also want to store an "alpha" (or transparency) channel, which bumps some textures up to 32bits. However, giving up 8 bits per pixel for transparency can eat up a lot of space and be quite wasteful, so some will use only a single bit for transparency, putting each pixel in an either-or situation: fully opaque or fully transparent. This second method is much more efficient in terms of storage and the processing required to render it in the scene, but it looks absolutely awful most of the time. You find 1bit alpha used most often in the textures for leaves, blades of grass, or a chain-link fence, and it causes so much aliasing that areas where these textures are used heavily together take on a shimmering, sparkling effect that can be quite irritating to the eye. And since most modern anti-aliasing techniques strictly affect the polygons, not the textures in the scene, this becomes even more apparent, as it makes these items stand out even more from the rest of the scene.

With the ultra powerful CPUs and GPUs found in the 360 and PS3, the processing cost of rendering an 8bit alpha vs. a 1bit alpha is almost negligible at this point. However, as I started off saying, it can make quite a difference in the amount of space required to store your textures, whether in memory or on the physical storage media. With today's higher and higher texture resolutions, multiplied by a growing number of textures (modern graphics engines sometimes layer 8 or even 16 different textures across just one area of a model), those 7 little bits add up pretty quick.
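To put some rough numbers on that, here's a tiny back-of-the-envelope sketch. The 1024x1024 texture size is just a hypothetical example, and these are raw, uncompressed figures counting only the alpha bits:

#include <stdio.h>

/* Raw storage taken by the alpha channel alone for one 1024x1024 texture
 * at various alpha bit depths. Uncompressed and purely illustrative. */
int main(void)
{
    const long pixels = 1024L * 1024L;
    const int alpha_bits[] = {1, 2, 3, 4, 8};

    for (int i = 0; i < 5; i++) {
        long kilobytes = pixels * alpha_bits[i] / 8 / 1024;
        printf("%d-bit alpha: %ld KB just for transparency\n",
               alpha_bits[i], kilobytes);
    }
    return 0;
}

The gap between 1bit and 8bit alpha is under a megabyte for a single texture, but multiply that by hundreds of textures and all those layered materials and it turns into real space.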

The thing that has always puzzled me is why nobody uses anything in between. The difference between a 1bit mask and an 8bit mask is very apparent, but the difference between a 2 or 3bit mask and an 8bit one is not nearly as great.

side-by-side comparison


Looking at them close up makes the differences more apparent, but it also makes it clear that we don't really need a full 8bit mask to get decent alpha when it comes to edge aliasing.

First up is a 4x magnified close-up of the 8bit mask:

doodle-8bit


And then look at the 1bit's very visible difference.

doodle-1bit


However, now look at the 2bit example. It's not quite as nice as the 8bit, but a huge improvement over the 1bit...

doodle-2bit


And if we bump that up to 3bit there's even less of a difference.

doodle-3bit


I had originally made a 4bit example too, but there was practically no difference between it and the 8bit example at all.

Now, I want to point out that for all of this I'm focusing on textures with obvious aliasing problems, like a chain-link fence or leaves on a tree. For other effects like smoke, fire, or maybe even hair, this may not work as well and you may still need 8bit alpha to look good. But then again... maybe not? ;)

So, a texture map is usually going to be stored with either 8, 16, 24, or 32 bits per pixel (mainly due to byte-addressable memory, I would assume). An 8bit texture only gives you an indexed palette of 256 colors, or 255 if you use one of those colors as your transparent color. If we take 2 of those bits and set them aside for transparency (giving 4 levels of transparency), we now get 64 colors that can each be displayed at 4 levels, or even 5 if you also reserve one palette entry for fully transparent, as in the usual scheme. Generally that's still plenty of colors for a mostly monochromatic thing like a leaf or a blade of grass.
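As a minimal sketch of what that 6+2 split might look like (the layout here, palette index in the low 6 bits and alpha in the top 2, is just an assumption for illustration, not any standard format):

#include <stdint.h>

/* Hypothetical 8-bit texel: 6-bit palette index plus 2-bit alpha.
 * The bit positions are arbitrary -- just one way to carve up the byte. */
static uint8_t pack_texel(uint8_t palette_index, uint8_t alpha2)
{
    return (uint8_t)(((alpha2 & 0x3) << 6) | (palette_index & 0x3F));
}

static uint8_t texel_index(uint8_t texel) { return texel & 0x3F; } /* 0..63 */
static uint8_t texel_alpha(uint8_t texel) { return texel >> 6; }   /* 0..3  */

The renderer would then look the index up in a 64-entry palette and scale the result by the 2-bit alpha value.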

However, let's say you need more colors. Let's look at a 16bit texture, which in most instances is perfectly fine even for high quality graphics. We could use an indexed palette like with the 8bit image, and that would follow almost the same principles, but let's look at using an RGB layout instead. Once again, people will often use 5 bits for each color channel and the one extra bit for transparency [RGBA5551], which gives us the same aliasing problems as with the 255+1 color image. What if we bump each color channel down to 4 bits? Now we have 4 bits for the transparency channel as well (16 levels of transparency) [RGBA4444]. As we've already determined, that's more than enough for the issue we're looking at, but it does cut into your number of possible colors by quite a bit. Perhaps, with a 16bit texture, you'd still be best off using a static palette with 2 or 3 bits set aside for transparency? It would probably depend on the type of texture you're working with.
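For reference, here's a quick sketch of both 16bit packings. The channel ordering (red in the high bits) is just an assumption for illustration; real hardware formats differ in which channel goes where:

#include <stdint.h>

/* RGBA5551: 5 bits each for red, green, and blue, plus a 1-bit alpha. */
static uint16_t pack_rgba5551(uint8_t r5, uint8_t g5, uint8_t b5, uint8_t a1)
{
    return (uint16_t)(((r5 & 0x1F) << 11) | ((g5 & 0x1F) << 6) |
                      ((b5 & 0x1F) << 1)  |  (a1 & 0x01));
}

/* RGBA4444: 4 bits per channel, giving 16 levels of transparency. */
static uint16_t pack_rgba4444(uint8_t r4, uint8_t g4, uint8_t b4, uint8_t a4)
{
    return (uint16_t)(((r4 & 0x0F) << 12) | ((g4 & 0x0F) << 8) |
                      ((b4 & 0x0F) << 4)  |  (a4 & 0x0F));
}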

So okay, let's look at 24bit textures, which rarely carry an alpha channel at all. Here the solution seems very obvious: cut each color channel down to 7 bits rather than 8, and use the remaining 3 bits for alpha [RGBA7773].
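Again purely as a sketch (this 7-7-7-3 layout isn't a standard hardware format; the bit positions are my own assumption), such a texel could be packed into exactly three bytes like so:

#include <stdint.h>

/* Hypothetical RGBA7773 texel: 7 bits each for R, G, B plus 3 bits of alpha,
 * packed into 3 bytes (24 bits). The layout is arbitrary. */
static void pack_rgba7773(uint8_t r7, uint8_t g7, uint8_t b7, uint8_t a3,
                          uint8_t out[3])
{
    uint32_t v = ((uint32_t)(r7 & 0x7F) << 17) |
                 ((uint32_t)(g7 & 0x7F) << 10) |
                 ((uint32_t)(b7 & 0x7F) << 3)  |
                  (uint32_t)(a3 & 0x07);
    out[0] = (uint8_t)(v >> 16);
    out[1] = (uint8_t)(v >> 8);
    out[2] = (uint8_t)(v);
}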

Generally, however, if you want 24bit color textures and transparency, you just bump on up to 32bits... which, once again, may or may not be a waste depending on the type of texture you're working with. Most TVs and monitors still cannot display anything higher than 24bit color (don't let your Windows display settings fool you, there is no such thing as a 32bit monitor), though some of the newer displays will actually go up to 30bit (10 bits per channel) or 36bit (12 bits per channel) color, which is only available to you over HDMI 1.3+ or DisplayPort 1.1+. In that case (which I doubt anyone will even consider until the next generation of systems), we could have a 32bit texture using 10 bits for each color channel and 2 bits for transparency, or maybe a 40bit texture (the next byte-addressable size up) with 12 bits per color channel and 4 bits for transparency. However, most developers will probably opt to start using floating point color channels instead, ending up with 48 and 64 bit textures, which will take even more space...

Another point I'd like to make while I have your attention: rather than using a 32bit texture for super high quality color representation, why not use a 16bit texture at twice the resolution (that is, twice the pixels), which takes about the same space? For example, a 1024x1024 texture at 32 bits per pixel is 4MB uncompressed, while a 1024x2048 texture at 16 bits per pixel, double the pixels, is also 4MB. Once the texture is run through mipmapping and various other filters, you probably won't even be able to tell the difference at a distance, yet up close you've got even more detail than before ;)

I guess the point of all this is that today's developers don't seem to care to be as creative about such problems as they were a decade ago. If you think squeezing all those textures onto a DVD is hard, try putting them on a 16 megabit SNES cartridge... If you spend just a little more time thinking about these kinds of things, I'm sure you'll be able to fit just about as much detail onto that 8.5 gig DVD as you had planned to plop onto that BluRay disc. The same goes for audio (10 gigs, my ass; that must be uncompressed or something... have these guys never heard of Ogg Vorbis?).

2007-07-22

The GPU's days are numbered

I just saw an article with a quote from John Carmack about how he doesn't think there's a real need for a dedicated physics processor (PPU) like the Ageia PhysX. He says that between the advancement of multi-core CPUs and GPUs, they should be able to handle physics just as well in the near future. This is not the first time I've heard this sentiment, but it has brought back an old thought of mine: how much longer will it be until the GPU suffers the same fate?

When dedicated graphics processors (predominantly used for 3D graphics) were first introduced in the mid-to-late 1990s, there was certainly a need for them. They allowed game developers to create a new level of graphical quality that would not have been possible using the general purpose CPU alone. That didn't stop Intel from developing MMX (multimedia extensions) for its Pentium line, though. The idea was that for those who wanted decent 3D graphics but didn't need the best of the best, the new instructions built into the chip would allow for mainstream use of 3D. Some tried, but in the end it just wasn't enough to compete with even a low-end dedicated 3D processor. When NVidia released the GeForce 256, billed as the world's first GPU, with its ability to perform realtime transform and lighting in hardware, it was all over. Later cards would introduce fully programmable shaders, moving the GPU and CPU even further apart.

Today the progression of GPUs seems to have started to plateau, just as CPUs did a few years ago prior to the multi-core revolution. It is already predicted that 3D graphics chip makers will begin to follow suit with multi-core GPUs in the next couple of years. However, I'm much more interested in another route AMD is planning to take. They have announced a future CPU called the "hybrid": a multi-core CPU that will also feature an on-die GPU. Details are sketchy at best beyond that... so people like me are left to let our imaginations run wild with the idea.

Now, AMD has tried to make it clear that the graphical quality of such a setup will be comparable to current on-board IGP solutions from their ATI division. But that certainly doesn't mean things won't progress beyond that in future iterations. Imagine an AMD hybrid chip featuring four of their next generation Phenom CPU cores with an additional two ATI 3D GPU cores, all on one chip. Now imagine they take it one step further and start integrating some of the same features of their GPUs directly into the CPU cores themselves. Then, instead of the 6-core hybrid setup I described before, you could have a 4 or 8 core CPU with 2D and 3D graphics capabilities built right into a general purpose processor.

Now, I'm not a hardware/processor expert by any means, but from what I understand, one of the biggest differences between standard general purpose CPUs and today's 3D GPUs is the ability to operate on whole vectors of values rather than just single integers and floating point numbers. AMD has already built more of this ability into their upcoming K10 chips, so we'll already be partway there come this fall. Having strong hardware vector processing in the CPU would also help confirm what Mr. Carmack thinks about PPUs.
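For a concrete picture of what "vector processing" means here, a minimal sketch using the SSE intrinsics that x86 CPUs have offered since the Pentium III era; this is just illustrative and isn't tied to any specific K10 feature:

#include <xmmintrin.h> /* SSE intrinsics */

/* Add two float arrays four elements at a time using 128-bit SSE registers.
 * For brevity, n is assumed to be a multiple of 4. */
void vec_add(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i); /* load 4 floats */
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb)); /* 4 additions at once */
    }
}

That "do the same math on several values per instruction" model is the bread and butter of GPUs, which is why pulling more of it into the CPU blurs the line between the two.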

To sum it up, I think Intel's idea of adding MMX to the Pentium MMX and Pentium II chips over a decade ago was just a little too far ahead of its time. If things continue to progress the way they have over the last few years, we might see the dedicated graphics processor go the way of the sound card. Sure, there will always be a few people who just have to have a higher end experience, but for the vast majority of people, a future generation of CPU may be able to handle all their processing needs.

I could end this article right here, but there's something else to think about if you follow my line of thinking. There are hundreds of processor and chip producing companies; however, only four of them make the bulk of the chips used in desktops, laptops, and non-portable gaming systems: Intel, AMD, IBM, and NVidia. Until recently that list would have featured five companies, but as you should already know, AMD bought out ATI last year. With ATI and NVidia being the only two companies that mattered when it came to GPUs, that leaves NVidia as the only strictly GPU and chipset producer in the bunch. Intel already has its own line of GPUs too, but most people still find them highly inadequate compared to NVidia's and AMD/ATI's offerings. Also, now that AMD and Intel have bumped IBM out of the desktop/laptop world, thanks to Apple, IBM is in an interesting situation of its own. If I remember correctly, IBM even sells some of its servers with AMD chips in them now, so IBM's business model isn't so focused on chip production these days. On the flip side, IBM is the sole manufacturer of CPUs for all three of the newest game consoles, with AMD/ATI supplying the GPU for two of them and NVidia supporting just the PS3, which is already shaping up to be a disappointing failure.

My prediction is that NVidia will either be bought by Intel or IBM, or they will have to start making their own x86-style general purpose CPUs to stay in the game. If I'm right about where AMD may lead the industry by bringing the functionality of the GPU directly into their CPUs, thus killing the need for add-on cards and dedicated GPUs for most people, NVidia will quickly find themselves in trouble with their current market focus. Intel may decide to just keep evolving its own graphics technology and beef it up to compete with AMD's hybrid platform, and so won't need to buy NVidia, although I personally think they'd both be better off if it did. I also think it's a real long shot that IBM would want to buy NVidia, given that they don't even compete in the mainstream x86 market where NVidia's graphics cards are most commonly used. I suppose the other real long shot is that AMD ends up buying NVidia too... but as cool as that thought may be, I highly doubt it.

Ever since I installed my first GeForce card I have been a fan of NVidia's products, so I hope they don't end up finding themselves all alone and closing up shop 10 years from now when the CPU/GPU hybrid becomes the norm. As per usual, only time will tell.

2007-07-10

This simple guide is intended for Ubuntu 7.04 (Feisty) users, but may work for other releases as well.

To get moto4lin to work right, you'll also need the p2kmoto package, but for some reason the Ubuntu guys put moto4lin in their repositories and not p2kmoto. I noticed there's a source package in Gutsy (7.10), but no .deb. So here's how to get it working quickly on your system if you don't want to bother compiling it.

$ sudo apt-get install moto4lin

After installing this package you will need to download the .deb for p2kmoto from somewhere. A quick Google search found the following sources for me:

http://members.chello.cz/gliding/p2kmoto_0.1
http://www.timothytuck.com/component/option,com_remository/Itemid,0/func,fileinfo/id,4/
(this last one seemed to stall out for me, but might work for you)

Now of course, I can not vouch for either one of these sources, so download at your own risk!

Once you have installed both moto4lin and p2kmoto, you (in theory) should be able to just type in:

$ sudo moto4lin

However, this never worked right for me... Instead I had to run the p2ktest program first, and then moto4lin worked after that. Also note that there are ways to change your udev rules so you don't have to run the app as root, but as long as you're careful you should be fine running it with sudo, as listed above.
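If you do want to skip the sudo, something along these lines in a udev rules file should loosen the device permissions. This is only a sketch: the filename is arbitrary, and the vendor ID should be whatever lsusb reports for your phone (Motorola devices commonly show up as 22b8).

# /etc/udev/rules.d/80-moto4lin.rules (name is arbitrary)
# Match the phone by its USB vendor ID -- check yours with lsusb first.
SUBSYSTEM=="usb", ATTRS{idVendor}=="22b8", MODE="0666"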

It's still not perfect, and it seems a little slow to me, but it gets the job done... and for free! Those bastards at "the new AT&T" wanted to charge me $50 for a cable and some crappy software CD (most likely Windows-only anyway). A handy little $15 multi-tip USB cable set and some good ol' open source software just seemed like the better option to me ;)

2007-05-15

Release Namings

This is part one of two posts I plan on making...although who knows when I'll get around to the massive second part :P

I've noticed something in the software world that annoys me... No one seems able to agree on a clear definition of what release types mean. What exactly is an "alpha", a "beta", a "release candidate", etc.?

To me these names have always had a clear-cut meaning, so why don't they to everyone else? I'm not going to be ridiculous enough to propose that mine are the be-all and end-all and everyone should conform to these standards, but it certainly needs to be discussed and standardized at some point.

First off we have the elusive "alpha" release. I've always taken the alpha stage of development to mean a work in progress: new features are still being developed and added. The code base is at a state where you can start testing it to some extent and it's somewhat usable, but the currently implemented features may very well be completely rewritten depending on how tests and such go. So let's just say alpha means: the software in question is usable but still under active development, not all features have been implemented yet, and the code is still subject to radical change. If a user is feeling really adventurous they can go ahead and give it a shot, but stability is most certainly not guaranteed.

With that definition, let's take a step back to the rarely used "pre-alpha." To me this means the same as alpha except that it's not even usable yet; there's no point in trying to test the software as a whole, although specific classes and functions may be complete. By this definition there's also no reason to ever offer a public release dubbed a pre-alpha. The only way an end user should ever get their hands on pre-alpha code is by compiling from CVS/Subversion/etc.

Next, on to another commonly used term that rarely means the same thing from project to project: the "beta" release. What defines a beta seems to have almost no consensus among developers. To me, a beta release means that all features have been implemented and that from this point on all subsequent releases will be about fixing bugs and tightening up the code. It always drives me crazy when developers put out a so-called beta release yet all the features aren't there; that just isn't a beta, it's still an alpha! Not only should all planned features be implemented by the first beta release, there should also have been a reasonable amount of testing to make sure there are no major, commonly found bugs in it. A beta doesn't have to be completely stable, but it should certainly be more stable than an alpha. That's hard to pin down, of course, since the beta phase is there strictly to find new bugs, but like I said, it should be at least as stable as the alpha release(s) were, if not more so. Perhaps another way to look at it is that alphas are almost exclusively for developers to test, while betas are for end users to start testing. The beta cycle should probably be the longest phase of development as well; sometimes it takes a while to find all the major bugs in your code, and there's no sense in releasing a release candidate until you've had an adequate amount of time and testers to find any show stoppers. So, to sum up my definition of beta: all features have been implemented, some testing has been done, and there are no known critical bugs in the code at the time of release.

Next we have the "release candidate" or sometimes simply referred to as the "rc" for those who just can't handle typing that much. Not all projects even release a release candidate, but I think it's a good idea. Your release candidate comes after you've been in beta for a decent amount of time and all the known bugs have been found. The main purpose for the release candidate is to grab a larger group of testers than you had for your beta. As some users are too paranoid to run beta software, and many times for good reason, a release candidate is more acceptable to those who need absolute stability. Your release candidate phase doesn't have to last very long, but it should probably be more than a week (I'm looking at you Ubuntu!). Once again you should have zero bug reports open upon release of your first release candidate, as this release should..in theory...be just as stable as your final release. Of course, hopefully a few will be found before you let loose that final release, since that's the whole point of this phase. And if you're saying to yourself "what if my code doesn't have any bugs in it?" you're just kidding yourself, everyones' code has bugs. If no new bugs have been reported since your first release candidate you probably still don't have enough users testing your code.

Once the RC has been out there for a while and you have absolutely no bugs left, you release your final version. Every development team ships this version, but not all of them call it the same thing. Some call it "gamma," since that's the next letter after "beta" in the Greek alphabet, and many game developers call it the "gold" release because of the color of the old CD-R masters they used to send off to their replicators. Today most of them use the same silver-ish CD-Rs as everyone else, but the name remains, and for many developers, especially in the open source community, the code never touches a physical disc at all. So whether you call it gold or just "final," it's the last release you should ever make for this version number. The only complaint/suggestion I have here is don't rush to get to final; there's no shame in a lengthy beta phase if it means your final is rock solid when it finally hits. And just as with the rest of this, we really should agree on one name that we all use; I prefer the simple "final" myself.

So to recap my definitions:

Pre-Alpha: code almost unusable, in heavy development, definitely not for end users

Alpha: code still in heavy development but somewhat usable; not all features have been implemented yet, and end users shouldn't expect stability

Beta: all features have been implemented, code is fairly stable, okay for users to test at their own risk

Release Candidate: all known bugs have been eradicated, safe for all to test

Final: super stable, well tested, safe for mission critical use


In my next post I'll discuss an even sloppier area: version numbering.

2007-04-05

A quick thought

Like many other poor souls, I'm still forced to deal with Windows every day at work, so I end up using it much more than I would like, obviously. Anyway, I had a thought today: why hasn't anyone created a package manager for Windows? If I could just do an apt-get install firefox on these Windows machines, with all updates handled automagically by the package manager, life would be so much simpler. Well, obviously I'm not the first to have this idea, as there already seem to be a couple of projects under way to do just this. The first one I found is simply called WinPackMan (or the Windows Package Manager), although I've yet to have a chance to try it. It appears to still be in an alpha state at the moment though...

If there were a reliable, open source package manager for Windows, it could also make the transition to the Linux world much easier for people down the road. Package installation/management is one of the first things new Linux converts find to complain about when they attempt to make the switch. And it's not really their fault, nor is it the Linux community's fault; we simply do things differently. So, once again, if Windows users started using package managers akin to Apt, Pacman, Yum, Portage (yes, Portage can be used for binaries in addition to compiling from source), etc., then things would probably feel a lot more natural to them when they make the switch.

Perhaps a more modular/extensible package manager like SmartPM needs to be ported over to Windows? I think it would very much be worth it.

2007-03-18

Thinking Out Loud: Episodic Gaming

If you spend much time reading the game sites about new concepts and ideas, probably one of the biggest buzz terms you'll hear is "digital distribution." The ability to provide your content directly to the end user and skip all the middle men is an interesting prospect for many developers, especially the smaller independent developers whose chances of ever getting their games carried in the Walmarts and Best Buys of the world are slim to none. The concept of digital distribution is appealing to many, although most of the publishers and retail distributors are probably scared to death of it. With the cost of development rising, it could be a major benefit to the industry as a whole, even though it could kill some of the juggernauts that currently run the show. Not only does DD make the thought of self-publishing your own games much more realistic than ever before, it also offers gamers the possibility of getting their fix at cheaper prices.

Another concept that builds on top of DD is the idea of "episodic content." The idea is that rather than buying a big epic game all at once, the gamer buys it in smaller segments, making their gaming experience more akin to a TV show than a movie. Very few companies have actually tried realizing the concept so far, but it seems fairly inevitable that it will become a normal occurrence in the future.

Now I'll try to offer some of my own ideas on how to successfully pull off this concept, which I have yet to see fully realized in our industry. Of course, being a fledgling game designer/developer myself, some might ask why I'd give away these ideas if they could be my own big break. Well, I see sharing them as a much greater benefit to the industry as a whole than keeping them to myself. The method of delivery isn't nearly as important as the real meat of the game, AKA: the content.

So, first off, episodic gaming requires your users to have a broadband internet connection. This is a bit of a problem, as not everyone has one yet. According to recent surveys and statistics, only about half of Americans here in the United States (where I reside) have broadband at home, and from my own personal experience it's even less than that if you're in the rural south (also where I currently reside). Since America is so spread out, the further you live from a major city, the lower your chances of having anything better than dial-up. So this presents a major bottleneck for the concept... but not as huge a one as it may sound at first. Just as with regular television shows, you can offer offline versions of your episodic content if you're popular enough and have the distribution channels to support you. Once you finish a "season" of your game, you can bundle it all up on a DVD or two and sell it through traditional retail means (once again, if your game is popular enough to have retail distributors behind it). This means some smaller companies might find themselves online-only for a couple of years, but after their audience hits a certain point, they might find the big boys come begging to publish the offline version. Sound like delusions of grandeur? Perhaps, but I'm merely trying to get an idea across... I'm not saying it will be easy, nor that it will happen to many.

So, where do we begin... First off, let me say that if you want to make any money off your game, you have to give it away for free. Sound crazy? It's not. In the television industry all new shows start off with a "pilot" episode, and in the game industry many games offer a "demo." What I'm proposing here is simply to combine the two. It's not that new a concept either; it's what made shareware titles like Doom take off to where they are today. So here's what you do: offer up the entire first episode of your game 100% free to everyone. You build a digital distribution/update service into that initial release, so that when you release episode two, the gamer simply starts up the same game they've already downloaded for free and does all their purchasing and downloading from within it. Of course, there might be a third party mechanism you want to use, like Steam for instance. That's fine, but once again, I say give away your first episode for free. Give the gamers something to play. Let them see just how good your series will be.

Now of course, this puts a lot of weight on your first episode. If you're giving it out for free and no one likes it, you're going to have a hell of a time selling them your next one. Sorry, that's the price of such a service. And speaking of price, if you're a small company trying to do this all by yourself, the bandwidth bill for hosting that first free episode is going to be monstrous. You could always look for investors or partners to help carry that load, but that means giving them a cut of your profits once you start making a return on your investment, and you may not want to get into that situation if you can help it. There are also things like BitTorrent to help lighten the load, but unfortunately a lot of ISPs (especially those on college campuses) are trying to block it out of existence over its potential for piracy.

Then of course there's always advertising, whether it's simply on your site, displayed during the download process, or actually in your game. All I have to say about that last one is be careful: gamers will put up with it to a certain extent, but if you overdo it or do it wrong (like putting a big Mountain Dew billboard in the middle of an ancient medieval world), there will be a backlash. It's also important to remember that since you intend on selling subsequent episodes after the first one, people will probably feel pretty angry if they're paying for content and seeing ads at the same time. Once again, there's a small window where you can get away with it, but eventually your users are going to catch on. Be careful...

Another good idea may be to go ahead and prepare the first two or three episodes before you ever release that first one for free. That way, if people like it they can go ahead and download the next episode or two right away, and you can start paying off that bandwidth debt right away too. Some may even feel compelled to finish the entire season before "airing" the first episode, but once again, this is somewhere you need to be careful. If your audience realizes that you've completed the entire game and are simply doling it out bit by bit so you can charge them more in the long run than you would have selling it all at once, they're not gonna be happy.

And now we come to the next aspect you need to consider: price. Luckily, digital distribution offers you a beautiful thing called scalability. The bigger your audience, the more money you make, and thus the more bandwidth you can afford. I would suggest selling your episodes as cheaply as you can afford to. The cheaper they are, the more likely someone will be willing to pay for them, and thus you can potentially make a lot more money by selling them at a lower cost. Some may choose to sell new episodes at a higher price point for the first week or two after release and then lower the price gradually over time. For example, you might charge $20 for your episode at first, then a few weeks later drop it to $10, then later on down the road drop it to $5. Of course, that could just as easily be 5, then 2, then 1, depending on your business model.

When deciding how much to charge for each episode, you need to really look at how big each one will be and how many you plan on releasing over the course of your season. If each episode only consists of a single level that averages out to an hour or two of gameplay, but you plan on releasing 20 of them, I'd suggest selling them for about $5-10, if not less. If you plan on releasing 5 or 6 episodes with about 6-8 hours of gameplay each, you'd probably want to charge around $10-20. These are the kinds of factors you need to weigh when making these decisions.

Lastly, there are a few more small things to take into account. Do you even want to have seasons? Perhaps you just want to start with episode one and never stop until you're ready to end the series. If you do go with a season model, perhaps you should give away the first episode of each subsequent season for free, just as you did with the very first one, in case someone wants to start from there. Do you want to let your audience pick and choose which episodes they buy, or do you require them to own all the prior episodes first? (i.e., you can't buy episode 4 unless you've already played through and completed 1, 2, and 3) These are all tough, and very important, questions you need to consider. Since you're giving away the first episode for free and merely selling the content of the subsequent episodes, perhaps it would be more beneficial to use an open source model for your actual game engine and digital distribution service? Maybe you'd even want to build a general purpose engine that multiple "shows" can be purchased through, rather than just your own? There are all kinds of options out there for you, and even more factors than I have covered here. Episodic gaming offers a potentially very compelling experience for gamers and developers alike... but when will anyone be ready to really pull it off? Are you?

2007-03-11

Killer Games

Throughout recent history, every time a new line of electronic products is released, it doesn't really take off and become mainstream until it finds its "killer app": usually some form of content that you can only experience through the new medium, although not always a type of entertainment. Back in the 1980s the spreadsheet was supposedly the killer app for the PC, and it was for many businesses. However, the PC did not find its true killer app for home use until the world first experienced the world wide web. In the late 1990s the DVD format was taking off very slowly until The Matrix came out. After that, anyone and everyone had a DVD player, and you almost always found a copy of The Matrix on DVD in their collection. Of course, once that killer app has been found and has had time to thoroughly saturate the market, the technology becomes commonplace and the killer app is no longer essential, even though it once was. It is said that Nirvana's "Nevermind" was the killer app for the CD player. Apple's computers have been mildly popular for decades, but it doesn't seem they really started taking off till they found their killer app in the iPod. And if we want to take the concept way back, there was the Christian Bible for books made on a printing press. Yet as important as all these were, I'm more interested in games for this little rambling session.

I'm not really sure what the killer app for the Atari was; I guess it was a little before my time, and I've yet to hear a definitive answer. Some would argue Pong or the awful port of PacMan that was released for it, but I'm still not so sure from the mixed reports I've heard. When the Nintendo Entertainment System dropped in on the USA in 1985, it came preloaded with its killer app: Super Mario Bros. Sure, there were many other important titles in that generation, but Mario made the NES and the game industry what it is today. Mario 1 (as some like to call it, even though that's not the most accurate title or number) was a must-have game. When someone talked about wanting to get a NES, it was a safe assumption that they wanted this game. Even though many other classic NES games weren't in the same genre, Mario set the tone for that generation. And not only that, Nintendo included it with your system by default, which was pure genius. They did the same thing with the GameBoy a few years later: Tetris did not always come with a new GB when you bought it, but it often did, and it was certainly the killer app that got that handheld system rolling.

As we go into the 16-bit generation, things aren't quite as clear. I would say that Sonic the Hedgehog, along with its sequels, was the killer app for the Sega Genesis. However, it could also be argued that Mortal Kombat was the killer app, since the Genesis version arrived with all its gore and fatalities fully intact while the SNES got a bloodless, neutered version for its home users. The Super NES's killer app this time around was pretty clearly Super Mario World (once again included by default when you purchased a new system).

As the game industry began to transition into 3D games with the so-called 32-bit era, the Sega Saturn never seemed to find its killer app, at least not with American audiences (which of course I'm more familiar with, being that I live here). The new player in the console biz at the time, Sony, with its PlayStation, would not really take off until the release of Final Fantasy 7. Sure, in the scope of things the Madden football series would easily outsell the FFs, but FF7 was the one that made most people say, "I've got to have one of these." The Nintendo 64 would come out a year later than the other two, with Super Mario 64 included by default, as Nintendo tried to make it 3 for 3. Unfortunately, as innovative as Mario 64 was, it just didn't have the same fun factor as its 2D predecessors. The N64 would not find its killer app until a couple of years later, when The Legend of Zelda: Ocarina of Time was released for it. Zelda had always been an extremely popular series on prior Nintendo systems, but this time it got to take the spotlight away from the long-running front runner Mario.

As the next generation came about, Sega would release their very last console, the Dreamcast. Sadly, just as with the Saturn in the prior generation, the DC would never truly find its own killer app, or at least not in time. Sonic's transition into three dimensions was even more coldly received than Mario's had been in the previous gen. Innovative and quirky games like Jet Set Radio would make small dents as well, but not enough to really matter. Sega's greatest attempt would be the epic release of the Shenmue series. At the time it was the most expensive game ever made, with an unheard-of five years in development. Once again the series would surprisingly fall flat. Its sequel, Shenmue 2, would not even receive an American release, as the first one had sold so poorly here. Twice in a row Sega had failed to find their killer app, and it was too late to keep trying.

Oddly enough, when the PlayStation 2 was released, it became an instant hit even with no killer app. In this generation, the PlayStation brand name would be all the killer app Sony needed to crush its competition a second time around. However, even with such a strong fan base, the PS2 would eventually need a real game to hold the crown, and almost two years later it would find it with Grand Theft Auto 3. GTA3 would be the system seller, even though, oddly, the PS2's sales numbers had already marked it as a success with no really worthwhile games to show for it. In Japan, many of its initial purchasers bought it as a cheap DVD player and would not buy an actual game for it for some time.

Microsoft would now make its first attempt at the console gaming realm with the Xbox, and quickly found its killer app in Halo. Halo and its sequel, Halo 2, would become the best selling games up to that time, but still would not be enough to take the PS2's crown. In fact, even with record breaking sales numbers for the Halo series, the Xbox would barely sell any systems at all in Japan.

Lastly, Nintendo would try to regain the supremacy it once had with the GameCube. However, this would mark the first time that Nintendo did not include a Mario game with its system at launch. Sure, there was Mario Sunshine available at launch, but it was not actually bundled with the system as its predecessors had been. Not only that, but the sales of Mario Sunshine would be almost as abysmal as the Xbox's sales in Japan. Mario had lost his magic touch and was no longer even a remote contender for killer app. Next, Nintendo would attempt to bring in their last-gen champion, but The Legend of Zelda: The Wind Waker would see fairly mediocre sales as well. In the end, Super Smash Bros: Melee would be the closest thing to a killer app the GameCube had, but it would be hard to call it a system seller.

Now, with the history behind us, let's look at the new generation of systems. First up is the Xbox 360. Released in the fall of 2005, almost a year and a half later there is still no killer app. Yes, Gears of War was immensely successful and has already outsold both Halo and Halo 2, numbers-wise. Yet it still does not seem to be a strong enough title to be the system-selling killer app MS needs. In Japan, MS was hoping that the recently released Blue Dragon would become the killer app there. However, even though the title drove Xbox 360 sales well over the combined sales of the first system, it still has a pretty weak market share there.

The PlayStation 3, although just released about four months ago, does not seem to be doing so well. Sony had hoped its built-in BluRay player would be its killer app, but unlike DVD in the prior generation, BluRay has yet to become a proven format. As it appears the brand name will not be enough to drive the system's sales this go 'round, Sony needs a killer app, and soon, if it doesn't want to go the way of Sega. Sadly for them, there don't seem to be any upcoming games that look like they'll be able to do the trick any time soon.

And finally there is Nintendo's Wii. It seems that after two generations of failure, Nintendo may be poised to take back their old spot at the top. However, the Wii is in an odd situation itself. It's already immensely popular even with those who do not traditionally play videogames, yet, like the PS2 before it, it does not seem to have any one game ready to become its killer app. Nintendo's latest iteration of the Zelda series seems popular enough, but it's still no killer app, and its tacked-on Wii-mote functionality is not enough to push it to system-seller status, since you can have almost the same gaming experience with the GameCube version. No, what's selling the systems is the revolutionary Wii-mote itself. There are lots of games that show off its potential, but it still appears it might be a while before any developers actually realize that potential in full. Yet that potential, along with the simple but fun Wii Sports package that comes with every Wii sold, seems to be enough to keep gamers snapping up the machines just as quickly as Nintendo can produce them. Still, just like the PS2, it will eventually need an actual game to take over as its killer app. If and when that happens is still unknown, and some fear that if it doesn't happen before this Christmas, Nintendo may find themselves with a few million disappointed and angry Wii owners.

Now, of course it's no secret I have my own console aspirations for this generation and potentially the next, but I realize that no matter how novel the open console format may be, it'll take an exclusive killer app to make it really happen.

Interestingly enough, I'd like to finish up by discussing HD-DVD, BluRay, and HDTV in general. HDTV and the HD disc formats are inevitable and have slowly been filtering into US homes. However, they don't seem to be exploding, nor is there any clear victor in the HD disc wars. This, once again, is because there has yet to be an HD killer app. There is no movie, TV show, or videogame that's making consumers say "I've got to have one." What will it be? Who knows... but it's bound to be only a matter of time, and I don't know about you, but I can't wait to experience it ;)