Well, PCs do use APUs (in the low and upper-lower ranges) and have for a couple of years, so it's not ahead in that sense. The only generational advantage is that it's using Jaguar CPU cores instead of Bobcat, which is the next evolution of AMD's answer to ARM and Intel Atom. It's a good answer for that market segment; it seems like AMD has got a real chance of succeeding in the large-tablet space if they keep it up.

saxxon.de said:
Actually, this guy said the architecture is a generation ahead of current PCs. He didn't say anything about performance being better. And since the CPU & GPU are integrated in one chip and share the exact same RAM, they don't need to transfer data between CPU, GPU and VRAM over the comparatively slow PCIe bus, so he's technically right. Even if the PC Gaming Master Race doesn't want to hear it. If you want to make a comparison between the architecture of the PS4/XBone & a PC, you'd have to compare their performance with something along the lines of the Intel integrated graphics from a Core i7. I suggest you play a current-gen game on that setup and report back to this thread on how well your PC fared. Mine sucks if I use the integrated graphics, I can tell you that.
As for the CPU-to-GPU communication, as far as I'm aware it doesn't actually produce any meaningful bottleneck at this time. PCIe 3.0 at x16 runs at roughly 16 GB/s per direction, which is more than enough for CPU-to-GPU communication; even the best GPUs in the world rarely push more than 8 GB/s over the bus. Plus, the time it takes for CPU instructions to reach the GPU apparently isn't a problem even for highly demanding scientific computing, so I doubt it's a worry for rendering some sparks on a screen. The main need for high bandwidth is GPU-to-VRAM communication, and an APU uses the same kind of memory interface for that (although most discrete GPUs have higher memory bandwidth than the PS4).
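To put those numbers in context, here's a quick back-of-envelope sketch in plain Python. The link rate is the theoretical PCIe 3.0 figure (8 GT/s per lane, 16 lanes, 128b/130b encoding), and the 256 MB buffer is just an illustrative assumption, not a real workload:

```python
# Back-of-envelope numbers for the PCIe bandwidth claim above.
# These are theoretical link rates, not measured throughput.

# PCIe 3.0 per-direction bandwidth for an x16 slot:
# 8 GT/s per lane * 16 lanes * 128b/130b line encoding, 8 bits per byte.
GT_PER_S = 8e9
LANES = 16
ENCODING = 128 / 130   # PCIe 3.0 moved from 8b/10b to 128b/130b encoding

pcie3_x16_gbps = GT_PER_S * LANES * ENCODING / 8 / 1e9
print(f"PCIe 3.0 x16: {pcie3_x16_gbps:.2f} GB/s")   # 15.75 GB/s

def copy_time_ms(buffer_mb, link_gbps=pcie3_x16_gbps):
    """Milliseconds to push buffer_mb megabytes over a link_gbps GB/s link."""
    return buffer_mb / 1024 / link_gbps * 1000

# Uploading a hypothetical 256 MB texture over the bus: roughly 16 ms,
# about one frame at 60 fps - which is why engines upload assets ahead
# of time rather than streaming them across the bus every frame.
print(f"256 MB copy: {copy_time_ms(256):.1f} ms")
```

So the bus is slow compared to VRAM, but per-frame CPU-to-GPU traffic (draw calls, a few MB of uniforms and vertex updates) is far below that 8 GB/s figure, which is the point above.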