The GPU's days are numbered

I just saw an article quoting John Carmack saying he doesn't think there's a real need for a dedicated physics processor (PPU) like the Ageia PhysX. He says that between the advancement of multi-core CPUs and GPUs, they should be able to handle physics just as well in the near future. This isn't the first time I've heard this sentiment, but it has brought back an old thought of mine: how much longer will it be until the GPU suffers the same fate?

When dedicated graphics processors (predominantly used for 3D graphics) were first introduced in the mid-to-late 1990s, there was certainly a need for them. They allowed game developers to reach a new level of graphical quality that would not have been possible using the general-purpose CPU alone. That didn't stop Intel from developing MMX (multi-media extensions) for its Pentium chip line, though. The idea was that for those who wanted decent 3D graphics but didn't have to have the best of the best, the new instructions built into the chips would allow for mainstream use of 3D. Some tried, but in the end it just wasn't enough to compete with even a low-end dedicated 3D processor. When NVidia released the GeForce 256, billed as the world's first GPU and the first consumer card to perform hardware transform and lighting, it was all over. Later cards introduced fully programmable shaders, moving the GPU and CPU even further apart.

Today the progression of GPUs seems to have started to plateau, just as CPUs did a few years ago prior to the multi-core revolution. It is already predicted that 3D graphics chip makers will follow suit with multi-core GPUs in the next couple of years. However, I'm much more interested in another route AMD is planning to take. They have announced a future CPU called the "hybrid": a multi-core CPU that will also feature an on-die GPU. Details are sketchy at best beyond that...so people like me are left to let our imaginations run wild with the idea.

Now, AMD has tried to make it clear that the graphical quality of such a setup will be comparable to current on-board IGP solutions from their ATI division. But that certainly doesn't mean things won't progress beyond that in future iterations. Imagine an AMD hybrid chip featuring four of their next-generation Phenom CPU cores with an additional two ATI GPU cores, all on one chip. Now imagine they take it one step further and start integrating some of the same features of their GPUs directly into the CPU cores.... Then, instead of the six-core hybrid setup I described before, you could have a four- or eight-core general-purpose CPU with 3D and 2D graphics capabilities built right in.

Now, I'm not a hardware/processor expert by any means, but from what I understand, one of the biggest differences between standard, general-purpose CPUs and today's GPUs is the ability to operate on whole vectors of values at once instead of just one integer or floating-point number at a time. AMD has already beefed up this ability in their upcoming K10 chips with widened 128-bit SSE units, so we'll already be partially there come this fall. Having serious hardware vector processing in the CPU would also lend weight to what Mr. Carmack thinks about PPUs.

To sum it up, I think Intel's idea behind adding MMX to its Pentium MMX and Pentium II chips a decade ago was just a little too far ahead of its time. If things continue to progress the way they have over the last few years, we might see the dedicated graphics processor go the way of the sound card. Sure, there will always be a few people who just have to have a higher-end experience, but for the vast majority, a future generation of CPU may be able to handle all of their processing needs.

I could end this article right here, but there's something else to think about if you follow my line of thinking. There are hundreds of processor and chip producing companies; however, only four of them make the bulk of the chips used in desktops, laptops, and non-portable gaming systems: Intel, AMD, IBM, and NVidia. Until recently, that list would have featured five companies, but as you should already know, AMD bought out ATI last year. With ATI and NVidia being the only two companies that mattered when it came to GPUs, that leaves NVidia as the only strictly GPU-and-chipset producer in the bunch. Intel already has its own line of GPUs too, but most people still find them highly inadequate compared to NVidia and AMD/ATI's offerings.

Also, now that AMD and Intel have bumped IBM out of the desktop/laptop world, thanks to Apple, IBM is in an interesting situation of its own. If I remember correctly, IBM even sells some of its servers featuring AMD chips now, so IBM's business model isn't quite so focused on chip production these days. On the flip side, IBM is the sole manufacturer of CPUs for all three of the newest game consoles, with AMD/ATI supplying the GPU for two of them, and NVidia supporting just the PS3, which is already shaping up to be a disappointing failure.

My prediction is that NVidia will either be bought by Intel or IBM, or they will have to start making their own x86-style general-purpose CPUs to stay in the game. If I'm right about where AMD may lead the industry by bringing the functionality of the GPU directly into their CPUs, killing the need for add-on cards and dedicated GPUs for most people, NVidia will quickly find themselves in trouble with their current market focus. Intel may decide to just keep evolving its own graphics technology and beef it up to compete with AMD's hybrid platform, and won't need to buy NVidia, though I personally think they'd both be better off if it did. I also think it's a real long shot that IBM would want to buy NVidia, given that they don't even compete in the mainstream x86 market where NVidia's graphics cards are most commonly used. I suppose the other real long shot is that AMD ends up buying NVidia too...but as cool as that thought may be, I highly doubt it.

Ever since I installed my first GeForce card I have been a fan of NVidia's products, so I hope they don't end up finding themselves all alone and closing up shop 10 years from now when the CPU/GPU hybrid becomes the norm. As usual, only time will tell.


Getting moto4lin and p2kmoto working on Ubuntu

This simple guide is intended for Ubuntu 7.04 (Feisty) users, but may work for other releases as well.

To get moto4lin to work right, you'll also need the p2kmoto package, but for some reason the Ubuntu folks put moto4lin in their repositories and not p2kmoto. I noticed there's a source package in Gutsy (7.10), but no .deb. So, here's how to get it working quickly on your system if you don't want to bother compiling it.

$ sudo apt-get install moto4lin

After installing this package you will need to download the .deb for p2kmoto from somewhere. A quick Google search turned up a couple of sources for me (the last one seemed to stall out, but might work for you).

Now of course, I cannot vouch for either of these sources, so download at your own risk!
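Once you have the .deb downloaded, installing a local package is just a dpkg call. A quick sketch (the filename below is hypothetical — yours will vary by version and architecture):

```shell
# Install the locally downloaded package (adjust the filename to match yours)
sudo dpkg -i p2kmoto_0.9-1_i386.deb

# If dpkg complains about unmet dependencies, let apt pull them in
sudo apt-get -f install
```

The `apt-get -f install` step is only needed if dpkg reports dependency errors; it tells apt to fix the broken install it just left behind.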

Once you have installed both moto4lin and p2kmoto, you should (in theory) be able to just type in:

$ sudo moto4lin

However, this never worked quite right for me... Instead I had to run the p2ktest program first, and then moto4lin worked after that. Also note there are ways to change your udev rules so you don't have to run the app as root, but as long as you're careful you should be fine running it with sudo, as shown above.
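If you do want to go the udev route instead of sudo, a minimal rule along these lines should do it. This is a sketch, not tested on every release: it assumes Motorola's USB vendor ID is 22b8 (verify yours with `lsusb`), and the rule file name is arbitrary:

```shell
# Create a udev rule giving all users access to Motorola USB devices
# (vendor ID 22b8 — check with `lsusb` that this matches your phone)
echo 'SUBSYSTEM=="usb", ATTRS{idVendor}=="22b8", MODE="0666"' | \
  sudo tee /etc/udev/rules.d/99-motorola.rules

# Restart udev so the new rule takes effect, then replug the phone
sudo /etc/init.d/udev restart
```

A MODE of "0666" opens the device to every local user; on a single-user desktop that's fine, but you could scope it with GROUP= instead if you care.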

It's still not perfect, and a little slow-seeming to me, but it gets the job done...and for free! Those bastards at "the new AT&T" wanted to charge me $50 for a cable and some crappy software CD (most likely Windows-only anyway). A handy little $15 multi-tip USB cable set and some good ol' open source software just seemed like the better option to me ;)