OK. Well. I can't say for sure WTF this EA goon is getting at. However: if the rumored shared-memory architecture is what we're talking about, then yes, they could actually be right. Don't get me wrong, I hate consoles. They're DRM in a box. I loathe the glacial hardware cycle that completely ignores Moore's Law and tries to sell us the cheapest crap for the most money. They sell "at a loss," but that's just an illusion; they wouldn't do it if you didn't fall for the rent-seeking and the inflated hardware prices down the road (which never drop as fast as the actual manufacturing costs do).
However, I do write code, sometimes for games, and when I want to do crazy stuff, like very fine structural voxels smoothed with marching cubes and DOF, where each "atomic" unit of the world can have different physical properties, corrosiveness, and interactions (don't put out the chem fire with water!), the wall I usually run into is memory bandwidth. Specifically, shuffling all the data back and forth between main memory and GPU RAM. Sure, I can run the physics on the GPU, but the GPU has no direct way to get control input, GPU RAM is often too valuable to waste on extra collision geometry, and readback buffers are slow. Triggering sound effects by looking up which pixels in an offscreen texture were updated is uuuuugly, and slow. Doing that with enemy AI and physics to update network state is even worse.

Typically a game keeps two copies of the geometry in memory: low-res hitboxes and collision geometry in main RAM, for doing physics on the CPU (where the input and sound systems also live), and a second, high-quality set of mesh data in GPU RAM for rendering, positioned to match the CPU-side physics. There's a bit of crossover into GPU physics, but it's mostly particle systems and effects, not stuff that directly affects gameplay, because, well, you'd have to sync that fine detail over the network for multiplayer, trigger sound effects, react to input, etc., and GPU-to-CPU memory bandwidth is the pinch point. It's doable, but man, it sucks, and if the bottleneck were removed, a lot more cool gameplay stuff could happen.
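To make the two-copy pattern concrete, here's a toy Python sketch of the data flow described above. Everything here is illustrative (the `SimulatedBus` class and its byte counters are hypothetical stand-ins, not a real graphics API); the point is just that the CPU-side collision copy and the GPU-side render copy must be kept in sync by paying a bus transfer every time the physics moves something.

```python
# Toy model of the split-memory pattern: two copies of the world geometry,
# kept in sync across a simulated CPU<->GPU bus. All names are illustrative;
# a real engine would go through OpenGL/D3D for the GPU side.

class SimulatedBus:
    """Counts bytes shuffled across the bus -- the pinch point."""
    def __init__(self):
        self.bytes_uploaded = 0    # CPU -> GPU traffic
        self.bytes_read_back = 0   # GPU -> CPU traffic (the slow, ugly path)

    def upload(self, data):
        self.bytes_uploaded += len(data) * 8   # pretend 8-byte floats
        return list(data)                      # the GPU gets its own copy

    def read_back(self, data):
        self.bytes_read_back += len(data) * 8
        return list(data)

bus = SimulatedBus()

# CPU side: low-res collision proxies, where input/sound/network live.
collision_positions = [0.0, 1.0, 2.0]

# GPU side: a second, high-quality copy of the same objects for rendering.
render_positions = bus.upload(collision_positions)

# Each frame: physics runs on the CPU copy...
collision_positions = [p + 0.1 for p in collision_positions]
# ...then the GPU copy must be re-uploaded to match the physics results.
render_positions = bus.upload(collision_positions)
```

Two uploads of three "floats" already cost 48 simulated bytes, and a real game pays the equivalent every frame, for every dynamic object.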
With a shared-memory architecture you might still use lower-res collision geometry as a faster approximation for the physics, but the physics code can just update anything in RAM, and there's no readback latency. You avoid transferring data in and out entirely, which makes everything easier, even the networking code. It also means you can stream-load much more easily, and with FAR more data, because you have all of main memory to play in. I can load new models as I have spare cycles and get them ready for rendering, then BLAM, no slow transfer to the GPU to actually start rendering them. That's one less stage of streaming to perform, and one less copy of the data that must exist in RAM at once... The physics code can then be parallelized on the GPU, and the CPU code can read the results directly without copying anything across the bus.
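The unified-memory version of the same sketch collapses to almost nothing. Again, this is a minimal conceptual sketch with hypothetical names (real shared-memory programming models like hUMA differ in the details): one buffer, which both the "physics" and "rendering" stages touch directly, with no upload or readback step anywhere.

```python
# Minimal sketch of the shared-memory version: one buffer visible to
# both the physics stage and the renderer. No bus, no second copy.

world_positions = [0.0, 1.0, 2.0]   # the single, shared copy

def physics_step(positions, dt=0.1):
    # Updates happen in place -- no upload, no readback.
    for i in range(len(positions)):
        positions[i] += dt

def render(positions):
    # The renderer reads the very same buffer the physics just wrote.
    return [f"draw@{p:.1f}" for p in positions]

physics_step(world_positions)
frame = render(world_positions)
```

Compare this with the two-copy pattern: the sync/upload step is simply gone, and anything else (sound triggers, network state, AI) can read `world_positions` directly too.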
If that's what they're hinting at, then yeah, most PC hardware today is not as "next gen" as this. However, I don't know whether that's actually what these consoles will have. I DO KNOW that by the time the consoles come out, my new development PC will have a shared-memory architecture, and far more RAM to play in. 'Heterogeneous computing' is the future. Right now, typical CPU code pushes floats through the FPU one at a time, and data is heavily segmented across cores to prevent race conditions; what would be nice is single threads that can run lots of data through the same instructions: SIMD. The speedup you get from SIMD techniques on GPU / APU hardware is awesome for a huge range of software. It'll be better for everything as the line between GPU and CPU blurs and they combine. Shared memory is a key component of this, and so is the "integrated" GPU -- WHOA, calm down, discrete-GPU fans: it just means you upgrade a motherboard or a chip instead of a discrete card. No big deal, just progress. Prices will adjust; it'll end up about the same.
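The scalar-vs-SIMD distinction can be sketched in plain Python (a conceptual stand-in only: Python itself won't emit SIMD instructions here; real SIMD is SSE/AVX lanes or GPU warps, and numpy-style vectorized code is the usual high-level route to it). Both functions below are hypothetical illustrations; the difference is how the operation is expressed: one element at a time, versus one operation over a whole lane of data.

```python
# Scalar style: one float through the FPU per loop iteration.
def scalar_scale(xs, k):
    out = []
    for x in xs:
        out.append(x * k)
    return out

# SIMD style, conceptually: the same instruction ("multiply by k")
# applied across the whole lane at once -- "single instruction,
# multiple data". Hardware SIMD does this in one step per lane.
def simd_style_scale(xs, k):
    return [x * k for x in xs]

lane = [1.0, 2.0, 3.0, 4.0]   # a 4-wide lane, like one 128-bit register
assert scalar_scale(lane, 2.0) == simd_style_scale(lane, 2.0)
```

Same results either way; the win on real hardware is that the lane-wide version costs roughly one instruction instead of four, and GPUs push that width much further.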
Being able to rely on this type of stuff as standard will be HUGE. It'll be standard on mid-to-high-end PCs soon enough, and eventually even on mobiles; it's just that much more efficient. The fragmentation argument against the PC isn't really a big issue; that's why we have optional detail settings and minimum system requirements. Fragmentation is much less of an issue across PC platforms than across consoles.
What irks me is that these consoles are just neutered "media center" PCs. What's best for gamers and game devs is if all the games run everywhere, forever. What's best for console sales is if games run on one device for the shortest period of time tolerable. I have a single cross-platform dev chain that lets the same engine code compile across all the major PC OSes, even BSD. I have an abstraction layer so my Android games can run on desktop Java too (why not?). I port games between Linux, Mac, Windows, and mobile with one command: "git pull && make release". There's really nothing about the hardware preventing a common API across the consoles too. No, there's not. The difficulty is due only to vendor lock-in and the console makers' hopes for more exclusivity: make porting harder, gain an advantage over the competition, when under the hood it's the same thing as the competition, or a PC. Console makers are, by their very nature, opposed to the progress of the game industry. It's dumb. They need to die. They're all just general-purpose computing devices -- yes, even mobiles are.
The 1980s, Mk. II, feels like it's right around the corner. It gets closer the slower the console hardware cycle turns.