RebellionXXI said:
Well that answers my question.
I was wondering why computers didn't have graphics that were (in general) any better than the PS3 when a high-end gaming PC costs twice as much.
I guess it's an optimization issue. You introduce an API layer between your hardware and software to ensure compatibility, and it becomes a huge bottleneck.
This, I think, lends credence to Bob Chipman's opinion as to why PC gaming is going out the door. It's probably better from a performance perspective to develop programs and games for a specific platform that always uses the same hardware. I mean, if you just stuck a mouse and keyboard into your X360 and allowed people to start installing 3rd party programs on it, it'd be no different than a PC.
But, as we've seen with Sony and Apple, having a dedicated platform gives the hardware manufacturer the ability to restrict what kinds of programs can run on their systems and which can't. Compatibility isn't an issue when they can enforce these things by simple fiat. So if we do go towards dedicated platforms, we might get better performance, but (as consumers) we'll lose a lot of control over our hardware and how we use it.
No. The problem with this article is that it makes things out to be much more dire than they actually are. There is not a HUGE decrease in performance as he says; the gap is significant when compared to low-level programming (there are similar trade-offs in every field when you compare low- and high-level languages), but it's nowhere near a major difference.
Furthermore, the man being interviewed does not take all the factors into account. We'll use the GTX 280 and 480 as an example. The 280 has 240 stream processors while the 480 has 480. At first glance you'd expect that to equal a 2x increase in performance, but it doesn't. Stream processors AREN'T the only parts of a GPU; ROPs and texture mapping units (TMUs) also play a big part in performance. The GTX 280 has 80 TMUs and 32 ROPs, while the 480 has 20 fewer TMUs (60) and 16 more ROPs (48). For a true 2x performance increase the GPU would need 160 TMUs and 64 ROPs, not 25% fewer TMUs and 50% more ROPs. On top of that, the GTX 280 has a 33% wider memory bus than the 480, and while a 512-bit bus was overkill for that card, it's been theorized that the 384-bit bus on the 480 can bottleneck it at times. In real-world tests the 480 typically performs 70-85% faster than the 280, not 100%.
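To make the scaling argument concrete, here's a quick sketch that computes the per-unit ratios from the spec numbers quoted above (just the arithmetic from the post; real performance also depends on clocks and architecture, not unit counts alone):

```python
# Unit counts for the GTX 280 vs GTX 480, as quoted in the post.
gtx280 = {"shaders": 240, "tmus": 80, "rops": 32, "bus_bits": 512}
gtx480 = {"shaders": 480, "tmus": 60, "rops": 48, "bus_bits": 384}

for unit in ("shaders", "tmus", "rops", "bus_bits"):
    ratio = gtx480[unit] / gtx280[unit]
    print(f"{unit}: {gtx280[unit]} -> {gtx480[unit]} ({ratio:.2f}x)")

# Only the shader count doubles (2.00x); TMUs drop to 0.75x, ROPs rise
# to 1.50x, and the memory bus narrows to 0.75x -- so a flat 2x speedup
# was never on the table from the spec sheet alone.
```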
Also, a previous poster made a good point: few games actually support DX10 or 11 (strictly speaking it's Direct3D; DirectX is a set of APIs that covers more than graphics). Most games only use DX9 because they are direct console ports. DirectX 10 and 11 support many features that allow better visual fidelity at similar or, in some cases, better performance than if the same features were enabled under DX9. In that regard, developers also have to ask whether it's worth the time and resources to create higher-polygon models and higher-resolution textures for a port of a console game.
Next, if you want a good example, look at a little-mentioned game: Metro 2033. In screenshot comparisons the 360 version is about equal to the PC version on medium settings, and at very high settings the PC version offers significantly higher visual fidelity than the 360 version.
Another point: very few console games run at 1080p; most run at only 720p. For comparison, 1280x1024 is already a higher resolution than 720p (1280x720). The average resolution PC games run at these days is 1600x1200, in some cases 1920x1200 or even 2560x1600, all significantly higher than 720p. And most console games are capped at 30 fps, whereas PC gamers usually don't like to run at anything lower than 60.
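The resolution gap is easy to quantify as raw pixel counts. A small sketch comparing the resolutions mentioned above against the 720p baseline:

```python
# Pixel counts for the resolutions discussed, relative to 720p.
resolutions = {
    "720p (1280x720)": 1280 * 720,
    "1280x1024": 1280 * 1024,
    "1600x1200": 1600 * 1200,
    "1920x1200": 1920 * 1200,
    "2560x1600": 2560 * 1600,
}
base = resolutions["720p (1280x720)"]

for name, pixels in sorted(resolutions.items(), key=lambda kv: kv[1]):
    print(f"{name}: {pixels:,} pixels ({pixels / base:.2f}x of 720p)")

# 1280x1024 already has more pixels than 720p, 1600x1200 has more than
# double, and 2560x1600 is over 4x -- the GPU is filling far more pixels
# per frame at typical PC resolutions.
```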
All of these factors lead to one conclusion: Richard Huddy brings up many good points in his interview, but because he is talking about one specific issue, he fails to show the whole picture, and the situation comes off as much worse than it actually is.