On what basis can you make this claim? The architectural differences between the two consoles make a like-for-like comparison very difficult, so how do you know the 360 has better graphical capabilities? (In theory, at least - implementation problems can always arise, and the PS3 seems to have an architecture game programmers don't like.)

Zer_ said:
Actually, the 360's GPU is more capable than that of the PS3 (by a small margin). The PS3's GPU is definitely based off of the GeForce 7800 series. It's been boosted slightly, though. The 360's GPU is based off of a VERY heavily modified 2000 series ATI card. It implements a Unified Shader Architecture (something that was only introduced in the 8000 series Nvidia GPUs and 3000~4000 series for ATI respectively). To compensate for this discrepancy, many games offload some of the GPU workload on the PS3 to the Cell processor.

CrystalShadow said:
Hmm. That is odd.
I have a fair grasp of the estimated hardware power of consoles owing to their relation to PC hardware. (Architectural overheads and optimisation issues on PC mean a console with a given spec is more powerful than an equivalently specced PC, especially towards the end of a console's life cycle.)
For this to be true though, it'd have to be really low-end components.
(Some articles implied it's about 3 times the power of an Xbox 360... But who knows? Especially when comparing a new console to one where games have been heavily optimised.)
I feel reasonably confident in the rumours that it's based around ATI 4000 series components.
To put that in perspective, the PS3 uses a close relation of an Nvidia 7800, while the 360 uses something closely related to the ATI 2000 series, but slightly less advanced.
The Xbox 360's Xenos graphics chip has just 48 shaders, while the weakest 4000 series GPU in the entire range already has 40.
Figures like that alone don't really tell you the whole story, though.
Benchmarks give a better picture.
3DMark06 scores imply a midrange part from that range should be around 20% faster than the PS3's GPU.
So... any implication that it's slower would suggest the use of very cheap, low-end parts.
I guess it's possible, but... It's a little disturbing.
(Especially since the use of a controller with a built-in display would, in some cases, imply needing to render a combined resolution somewhat higher than HD resolutions alone.)
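To put rough numbers on that - a back-of-the-envelope sketch, where the 854x480 controller resolution is purely an assumption for illustration, not a confirmed spec:

```python
# Pixel counts for driving a 720p TV image plus a hypothetical
# 854x480 controller screen at the same time. The controller
# resolution is an assumed figure, not a confirmed spec.
tv = 1280 * 720        # 921,600 pixels for the TV image
pad = 854 * 480        # 409,920 pixels for the controller screen
combined = tv + pad    # total pixels filled per frame

print(combined)                 # 1331520 - roughly 44% more than 720p alone
print(combined < 1920 * 1080)   # True - still fewer than a single 1080p frame
```

So even in that scenario the total workload stays below a single 1080p frame, but well above plain 720p.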
Who knows. Though Nintendo's usual cryptic comments on the matter aren't encouraging either.
Your facts are also completely off with regard to ATI cards. Unified shader architecture was first introduced to PCs with the ATI Radeon HD 2000 series. The 360's GPU is a simpler version of this, having essentially been the prototype for the unified shader concept.
On a like-for-like basis it is highly unlikely that the 360's GPU is more powerful than a high-end ATI 2000 series card, and based on its specifications it's actually a relatively low-end part by PC standards from that generation. (Though it compares favourably to older-generation stuff such as an ATI X1900 or the like.)
Having just 48 shader units, on paper the Xbox GPU is only marginally faster than an ATI HD 2400, the low-end PC card from that generation, which has 40 shader units.
The 2600 already has 120, and the high-end 2900 had 320, as well as a 512-bit memory controller and dedicated high-speed RAM.
All 2000 series GPUs supported DirectX 10 - they were the first cards that did - had unified shaders supporting Shader Model 4, and supported hardware tessellation. Since tessellation wasn't part of the DirectX spec until DirectX 11, and no Nvidia card at the time supported the feature, it was never really used on PC. It probably was used on the 360, though: the 360's GPU could also do hardware tessellation, and with fixed hardware specifications there was no reason not to implement it.
The Cell processor is an unknown quantity, being a general-purpose vector processing unit. Vector processors are very useful for graphics code (MMX and SSE are great if you're writing a software renderer, though nobody bothers with that because of dedicated 3D hardware).
Whether or not this can be used to help the PS3 is difficult to say. But again, what are you using to compare such vastly different consoles?
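As an illustration of why vector hardware suits graphics - a plain-Python sketch, not actual SPE or SSE code - graphics workloads apply the same arithmetic uniformly to huge batches of data, which is exactly the pattern SIMD instructions accelerate:

```python
# Illustrative sketch of data-parallel graphics work: the same
# scale-and-offset transform applied to a whole batch of vertex
# coordinates. A vector unit would process several of these floats
# per instruction instead of one at a time.
def transform(vertices, scale, offset):
    # One uniform operation over every element - the SIMD-friendly shape.
    return [v * scale + offset for v in vertices]

coords = [0.0, 1.0, 2.0, 3.0]
print(transform(coords, 2.0, 0.5))   # [0.5, 2.5, 4.5, 6.5]
```

Nothing about this loop is special in Python; the point is that every iteration is independent, so a vector processor like Cell's SPEs can chew through it several elements at a time.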
A direct port from one to the other will disadvantage at least one of those consoles, because they work so differently.
And since most game programmers have much more experience with DirectX than with whatever the PS3 is built around, chances are game performance will be biased towards the 360 - not because of better hardware, but because programmers have a harder time figuring out what's going on with the PS3.
"You're also mentioning 3DMark, which means absolutely nothing when it comes to console GPUs. PC games have to run through several layers of interpreters when being run in real time. Console games have something much closer to direct hardware access. Basically, that means console games are inherently more efficient on the platform they are programmed for."

You are completely missing the point. The 3DMark scores give a point of reference; they are a consistent and simple way of measuring PC hardware performance, even if not especially reliable as a real-world figure.
However much more efficiently a console uses its hardware compared to a PC is beside the point, because the comparison wasn't between PCs and consoles.
It was about having a frame of reference for comparing one console to another.
And that goes something like this:
Look at the actual hardware a console contains; find a PC with similar specifications. Do the same with the second console.
Now look up PC benchmark figures for the relevant hardware.
Looking at which is faster in a PC won't tell you how fast the consoles actually are, but it lets you estimate how fast two consoles with different hardware would be in relation to each other.
If one console is built around say, an Nvidia 7800, and another around an Ati X1600 (that's never happened, but this is just an example...)
Then yes, PC benchmarks won't tell you if the first console is faster than a PC that also has an Nvidia 7800. (It probably will be, by a significant margin.)
But the hardware is the same, or pretty similar, so even with different overheads and optimisation, you can still tell that the console built around the 7800 is going to be faster than the one built around the X1600.
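That reasoning can be sketched in a few lines of code - note the benchmark scores below are invented placeholders standing in for published PC results, not real 3DMark figures:

```python
# Hypothetical relative-performance estimate between two consoles,
# using PC benchmark scores of their closest desktop cousins as a
# proxy. The numbers are made-up placeholders; only their ratio
# matters for the argument.
pc_benchmark = {
    "nvidia_7800": 5500,   # placeholder score
    "ati_x1600":   3000,   # placeholder score
}

def relative_speed(gpu_a, gpu_b):
    """Assumes each console's GPU performs roughly in proportion
    to its PC relative, so console-vs-PC overheads cancel out."""
    return pc_benchmark[gpu_a] / pc_benchmark[gpu_b]

ratio = relative_speed("nvidia_7800", "ati_x1600")
print(f"Console A is roughly {ratio:.1f}x console B")
```

The absolute scores mean nothing for the consoles themselves; the claim is only that the ratio between similar chips carries over.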
Do you follow the logic being used here, or are you still going to claim PC benchmarks can't possibly tell us anything about console performance?
"However, all this information is useless when looking at the Wii U. Speculation has gone from the Wii U using a 4000-based ATI card to a 6000-based AMD card. One thing I do know is that many of the alpha hardware samples given out to developers are running at lower clock speeds than the final product will have."

AMD and ATI are the same thing, just so you know. You do have a point about the lack of clear information, though. (6000 series AMD/ATI cards are mostly relabelled 5000 series cards, but the difference would still matter: I have an ATI 5770, which is upper-midrange, and its specifications and performance beat the fastest 4000 series card ever made.)
My point, however, was in trying to work out the worst-case scenario. At a rough estimate, the worst case could lead to a console with about half the performance of a 360 or PS3.
The best case could easily lead to something around 10 times as powerful (though that's highly unlikely).
The spec I'm most interested in is how much VRAM the Wii U will have. In many of the alpha hardware samples, even the VRAM and DRAM are far from final. Assuming the Wii U has at least three cores running two threads each (similar to the 360's) and a GPU on par with or better than the PS3's or 360's, it's reasonable to assume the console could easily handle CryEngine 3. Some questionable sources stated that the Wii U would have a total of 1 GB of RAM split 50/50 between GPU and CPU, which means it could handle a lot more texture data... which is a good thing for the console.
In the end we can only speculate. I believe we'll find out at E3.
Very true that it's all speculation. I don't know about VRAM, though; that depends closely on which chip ends up being used. Most ATI graphics cards made in the last decade can use main system memory regardless of whether they have any dedicated graphics memory.
The 360 has a unified memory architecture, which is a concept it ironically shares with low-end notebook graphics. The PS3 has a more traditional split memory system.
Unified memory is usually thought of as a bad thing in a PC, but only because main system memory tends to be a lot slower than the dedicated graphics memory most cards have.
Graphics memory isn't as important as you might think, though; the amount of main system memory will probably have much more effect. On PC, video memory requirements are directly proportional to screen resolution and texture sizes.
1 GB of graphics memory sounds better than 512 MB, but on PC that's really only true if you like to use huge resolutions like 2560x1600.
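As a rough illustration of how resolution drives memory use - a back-of-the-envelope calculation assuming 32-bit colour, and ignoring textures, extra buffers, and driver overhead:

```python
# Bytes needed for a single 32-bit (4 bytes per pixel) framebuffer
# at different resolutions. Textures and auxiliary buffers come on
# top of this, but they scale with resolution in a similar way.
BYTES_PER_PIXEL = 4

def framebuffer_mb(width, height):
    return width * height * BYTES_PER_PIXEL / 1024**2

print(f"1280x720:  {framebuffer_mb(1280, 720):.1f} MB")   # ~3.5 MB
print(f"1920x1080: {framebuffer_mb(1920, 1080):.1f} MB")  # ~7.9 MB
print(f"2560x1600: {framebuffer_mb(2560, 1600):.1f} MB")  # ~15.6 MB
```

The framebuffer itself is tiny either way; it's the high-resolution textures you load to match those screens that actually eat the memory, which is why the jump from 512 MB to 1 GB only pays off at very high resolutions.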
The Wii doesn't have a lot of graphics memory, but given its SD resolution it wouldn't gain much from having a huge amount more. (What really kills the Wii is the GPU performance itself.)
But yeah, in the end all we can really do is wait and see.