Quirkymeister said:
But then the question becomes: If the new consoles are not significantly more powerful than the generation that came before and can't do what mid-range PCs have been doing for ages, then what's the point of getting a next-gen console?
Particularly given the wholesale lack of backwards compatibility, there was sort of an implication that these consoles were this Great Leap Forward in games presentation. But if they're still limited in this way, and without the benefit of hundreds of last-gen games to pad out the library, then why did they release them at all?
I'm aware that frame rate doesn't matter to everyone, and that a lot of current-gen games do run at 60 fps. Even so, these consoles aren't doing a great job of justifying themselves over even a similarly-priced PC.
I think last generation, there was a real benefit to the standardization of the console market. Even if the Xbox 360 and PS3 ran on fairly different hardware, occasionally resulting in different performance on cross-platform games, there was still a certain sense of a baseline level of performance that both systems would be able to achieve. This was both bane and boon to the PC market: while it discouraged developers who wanted to reach the widest possible audience from creating games well beyond the capabilities of the PS3 and Xbox 360, that "reining in" also meant that any half-way decent PC ought to be able to capably play the same games coming out for the consoles (barring exclusivity and shoddy ports, of course, of which there was certainly a fair amount).
But it feels like we've reached a place where display technology is outpacing processing technology. I suspect it's true that someone could build a PC today that rivals the PS4 or Xbox One in power at the same price, but maybe only just... and a $399 computer would also struggle with 4K resolutions. Even with economies of scale working in their favor, what I've heard suggests that Sony and Microsoft are still taking a small loss on their console hardware. Hardcore PC gaming "rigs" blow them out of the water, certainly, but many of those builds spend as much or more on the video card alone as the consoles spend on the entire box.
So, why did they release them at all...? I don't know that I have a single, definitive answer.
It was, arguably, time; the generation had gone on far longer than the previous one, customers were clamoring for something new and something that would look good on HD televisions, and at a certain point they either had to release something or cede the market. Yet I haven't gotten any real sense that either company thought of this generation's hardware as some sort of "stopgap" measure that would just have to hold until the real advance came out.
Part of me wonders if there is or was a hope that, for lack of a better word, "gimmick" hardware would make up for a lack of sheer processing power. Microsoft may have thought the Kinect would do for them what the Wiimote did for Nintendo in the previous generation; of course, we've seen how that played out. Both companies may still be banking on the resurgence of interest in VR, a technology that in some cases targets lower resolutions and whose own hardware may be able to take up some of the processing work.
But I think the bottom line is that another year of hemming and hawing wasn't necessarily going to produce a monumentally better console. For all the various snafus, major and minor, that both Sony and Microsoft have endured in birthing the current console generation, I suspect nothing would have been as painful as trying to convince Christmas shoppers that $600 or more was a price worth paying for a new console. Maybe in the long run that would have been the smart move, but I think a fair number of executives might have lost their jobs in the process.
Instead, we have a "Meh, just about good enough" generation. And I think the next few years may only diminish its already lackluster splendor.