Except.. well.. the reason this "generation" has lasted for about ten years is that the single-thread performance paradigm is ending. We can't push processors past 4 GHz without a small power plant to drive the cooling. So the next step is parallelism.
This isn't exactly something new - the problem has always been that two things are required before parallel computers become economically viable:
1. transistor size becomes small enough.
2. integrated buses are constructed.
The strange thing is that both of those are already fulfilled, and have been for several years. So why don't we have smaller tabletop machines, netbooks and mobile phones with integrated buses available in mass production right now? After all, it is actually cheaper to produce generic chips with programmable instruction sets than separate peripherals.
There is also a well-known and very real bandwidth problem with the Industry Standard Architecture (ISA) bus, since it severely limits the speed at which the different devices can communicate. Attempts have been made to circumvent this - creating GPGPU methods that don't keep data in system RAM, but in the graphics card's RAM; for specialised operations this is demonstrably the fastest method, precisely because it sidesteps the bottleneck of the ISA (or a separate interconnect bus).
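To make the contrast concrete, here is a minimal CUDA sketch of the idea (the kernels, sizes and pass count are invented purely for illustration): the data is uploaded into the card's RAM once, any number of processing passes then run entirely on the card, and only the final result crosses the bus back to system RAM.

```
#include <cuda_runtime.h>
#include <cstdio>

// Two toy processing stages; the point is only that they read and
// write device memory, so nothing crosses the bus between them.
__global__ void scale(float *data, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

__global__ void offset(float *data, int n, float c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += c;
}

int main() {
    const int n = 1 << 20;
    float *host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, n * sizeof(float));
    // One upload over the bus...
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // ...then any number of passes stay entirely in the card's RAM.
    int threads = 256, blocks = (n + threads - 1) / threads;
    for (int pass = 0; pass < 100; ++pass) {
        scale<<<blocks, threads>>>(dev, n, 1.01f);
        offset<<<blocks, threads>>>(dev, n, 0.5f);
    }

    // ...and a single download at the end.
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first element: %f\n", host[0]);

    cudaFree(dev);
    delete[] host;
    return 0;
}
```

The moment a result has to be read back mid-computation - or shared with something that only lives in system RAM - the bus becomes the limit again, which is exactly the point.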
Meanwhile, Intel threatens to sue Nvidia for making an integrated system that combines the GPU and CPU on a single chip, which slows the adoption of that alternative for a number of years. Another factor limiting adoption of this scheme is that it means rewriting software solutions and - for example - creating abstraction layers that are compatible with the previous paradigm, allowing an integrated system to run the same code as an ISA construction. That is time-consuming in its own right, and it also limits performance that could otherwise have been leveraged.
The truth is that it's not consoles that hold development back this time. It's the industry's lack of interest in fully moving on to a parallel paradigm - even though that is necessary to increase the processing power of general applications (rather than specialised code that can, obviously, run very fast in GPGPU-accelerated contexts).
Of course, we see attempts, such as ARM branching out to netbooks, and Nvidia successfully releasing their ION chipsets - though these are specifically tailored for the slimmer markets, safely staying away from desktops and workstations, just like ARM. Even though that isn't actually necessary: in fact, we have several examples of integrated systems that perform as well as, and often far better than, ISA-based implementations.
But as long as the big players refuse to move on to the parallel paradigm - even while software developers acknowledge that processor speed and peripheral cards are nearing their limit - this is a problem we will keep having.
I'll be specific - an integrated system with fast RAM and multiple CPUs running programmatically controlled, tailored instruction sets is interesting for:
1. Animation technology. We want to see more interactive animation, which means graphics contexts need to be updated more frequently. This is severely limited when all calculations either have to be performed via GPGPU acceleration or else have to involve CPU time and context switches.
2. Physics calculations. Node generation and traversal can be performed tremendously fast nowadays, thanks to multi-core CPUs becoming common in desktops. But again we have the problem that while the work can be done quickly, the results cannot be as readily pushed into the graphical context. This means limited real-time application of calculations like these. Typically, a thread is opened, and the context is updated with new static arrays when time allows (see the rough sketch after this list). This isn't very effective, and it insists on a particular type of design that won't allow the dynamic contexts artists and designers might imagine. The workarounds also take tremendous time to create, and the end result becomes more static than it could have been.
3. The size of graphical contexts. As we hit a wall on producing graphical contexts with high resolution and many objects, we see graphics card manufacturers start to push intermediate scene construction more explicitly than before. Essentially, this is about creating the scene references, then reducing the scene and only drawing the updated areas (which of course speeds up scene generation a lot; a toy version of this also follows the list). Similar methods are used to accelerate the drawing of primitives - common operations are collapsed into simpler instructions that require fewer clock cycles to complete. And this is silently adopted by developers at the code level.
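To make item 2 concrete, here is a rough host-side sketch (plain C++, every name invented) of the typical workaround described above: a physics thread keeps producing snapshots into a shared array, and the render loop grabs the most recent complete one whenever it gets around to it - always at least one step behind.

```
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

// Latest complete physics snapshot, published under a lock.
static std::vector<float> shared_positions;
static std::mutex         snapshot_mutex;
static std::atomic<bool>  running{true};

void physics_thread() {
    std::vector<float> work(10000, 0.0f);
    while (running) {
        // Expensive node generation / traversal happens off to the side...
        for (float &p : work) p += 0.016f;          // stand-in for real work
        // ...and a finished static array is published "when time allows".
        {
            std::lock_guard<std::mutex> lock(snapshot_mutex);
            shared_positions = work;                // copy the whole snapshot
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

int main() {
    std::thread physics(physics_thread);

    for (int frame = 0; frame < 60; ++frame) {
        std::vector<float> positions;
        {
            std::lock_guard<std::mutex> lock(snapshot_mutex);
            positions = shared_positions;           // grab the latest snapshot
        }
        (void)positions;  // a real renderer would rebuild its graphics
                          // context from this - always a step or more behind
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }

    running = false;
    physics.join();
    return 0;
}
```

The lag and the copying are exactly the cost being complained about: the physics results exist, but they only reach the graphical context when the render loop polls for them.

And here is a toy version of the "only draw the updated areas" approach from item 3 (again, all names and numbers are made up for illustration): the screen is split into tiles, anything that moves marks its old and new tiles dirty, and only dirty tiles get redrawn.

```
#include <cstdio>
#include <vector>

// A toy scene: axis-aligned boxes on a tiled screen. When an object
// moves, its old and new tiles are marked dirty; only dirty tiles are
// redrawn, the rest of the frame is reused untouched.
const int W = 640, H = 480, TILE = 32;
const int TX = W / TILE, TY = H / TILE;

struct Box { int x, y, w, h; };

void mark_dirty(std::vector<bool> &dirty, const Box &b) {
    for (int ty = b.y / TILE; ty <= (b.y + b.h - 1) / TILE; ++ty)
        for (int tx = b.x / TILE; tx <= (b.x + b.w - 1) / TILE; ++tx)
            if (tx >= 0 && tx < TX && ty >= 0 && ty < TY)
                dirty[ty * TX + tx] = true;
}

int main() {
    std::vector<bool> dirty(TX * TY, false);

    Box player{100, 100, 40, 40};
    Box old_pos = player;
    player.x += 10;                     // the only thing that moved this frame

    mark_dirty(dirty, old_pos);         // erase the old position...
    mark_dirty(dirty, player);          // ...and draw the new one

    int redrawn = 0;
    for (bool d : dirty) if (d) ++redrawn;
    printf("tiles redrawn: %d of %d\n", redrawn, TX * TY);
    return 0;
}
```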
But in reality, what we really want is to generate larger graphical contexts, simply let the graphics processing unit find out what is supposed to be rendered, and never worry about scene or object complexity. To do so, we require large amounts of processing power running on:
1. multiple processor units that have
2. programmable instruction sets and "advanced" logic.
This is because reducing scene complexity is done most efficiently with the collapsed, simpler functions, rather than with linear iterations of the same algorithm (which is what we're really doing on graphics cards today).
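For what it's worth, here is a small CUDA sketch of what "letting the graphics processor find out what is supposed to be rendered" could look like - one thread per object testing its bounding sphere against the view frustum, so the CPU never walks the scene at all. The structures and the crude box-shaped "frustum" are invented purely for illustration.

```
#include <cuda_runtime.h>
#include <cstdio>

struct Sphere { float x, y, z, r; };
struct Plane  { float a, b, c, d; };   // ax + by + cz + d = 0, normal pointing inward

// One thread per object: record whether it needs to be drawn at all.
__global__ void cull(const Sphere *objs, int n, const Plane *frustum,
                     int *visible) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    Sphere s = objs[i];
    int vis = 1;
    for (int p = 0; p < 6; ++p) {
        float dist = frustum[p].a * s.x + frustum[p].b * s.y +
                     frustum[p].c * s.z + frustum[p].d;
        if (dist < -s.r) { vis = 0; break; }   // entirely outside this plane
    }
    visible[i] = vis;
}

int main() {
    const int n = 100000;
    Sphere *objs;   cudaMallocManaged(&objs, n * sizeof(Sphere));
    Plane  *frus;   cudaMallocManaged(&frus, 6 * sizeof(Plane));
    int    *vis;    cudaMallocManaged(&vis,  n * sizeof(int));

    for (int i = 0; i < n; ++i)
        objs[i] = Sphere{float(i % 200), 0.0f, float(i % 300), 1.0f};
    // A crude axis-aligned "frustum" just for the sketch: a 100-unit box.
    frus[0] = { 1, 0, 0,   0}; frus[1] = {-1, 0, 0, 100};
    frus[2] = { 0, 1, 0,  50}; frus[3] = { 0,-1, 0,  50};
    frus[4] = { 0, 0, 1,   0}; frus[5] = { 0, 0,-1, 100};

    cull<<<(n + 255) / 256, 256>>>(objs, n, frus, vis);
    cudaDeviceSynchronize();

    int drawn = 0;
    for (int i = 0; i < n; ++i) drawn += vis[i];
    printf("objects to draw: %d of %d\n", drawn, n);

    cudaFree(objs); cudaFree(frus); cudaFree(vis);
    return 0;
}
```

In a real pipeline the visibility flags would feed GPU-side draw submission rather than being read back, but the point stands: the scene's complexity is handled where the parallelism is, not on the CPU.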
This is an area where we will probably not see any progress on PCs in the foreseeable future, because it means the industry will have to move away from producing separate components with separate licensing on the hardware itself, and over to licensing instruction sets for use in different programmable systems.
So for the first time since computers really took off, where we will see progress now is not in peripheral cards and CPU speed (as mentioned, we know we're hitting a wall there, and generating generic parallel code for multiple processor elements has its limitations too - there is no such thing as code that can be mechanically parallelised endlessly while still gaining performance) - but in embedded systems with specifically created platforms (i.e., software and hardware designed together).
This is what the industry will allow for, and this is what will force its way ahead. In other words - PCs are actually dead. The next PC - thanks to the way the industry works - will, as a matter of definition, be a console.
One where software and hardware are specifically built to run optimised, purpose-written software. We might wish that the next PCs to turn up were actually generic systems with integrated buses over multiple programmable CPUs. But in practice, there is no software giant willing to simply bear the cost of the initial research and software production, and no hardware giant that can justify creating a new platform right now that is free/open.
So any "PC elitism" can please go and have a break. If you buy a PC, and continue to upgrade your cards and components in infinitesimal increments performance wise, you are part of the problem. :/