Daemonate said:
Well, you should understand one thing: your example is a local magazine. They do what I like to call "grouped reviews". Their reviews are based on the premise of "if you're going to buy something this month, this will tell you what's the best buy". That's a fine method, but it gives you no useful info if you try to compare reviews from different months against each other. In the graphics card department it's rather easy, you will almost always get better results going for a higher number, but in games the standard is something less tangible, and it doesn't simply tick up once the hardware does. A normalized curve has to take into account the horrible games that came before, and really, if you play a modern game and compare it to the old stuff, you realize that modern games have gotten rid of an incredible number of big flaws. Of course, that alone doesn't make them better (I still haven't seen anything rival Banjo-Kazooie).
My point is that when you rank things on a scale that you plan to keep using for mostly the same criteria over the next 4 years, you can't just re-adjust the curve to stay normalized all the time, because a ton of good games might simply arrive at once. Take right now, for instance: Skyrim, Skyward Sword, BF3 and MW3 have all come out at the same time. The first is said to be the best of its series in every aspect, and one of the best games ever, so it isn't hard to see why it should get a top rating. Skyward Sword, however, is a damn good game, and on its own it deserves a hell of a good rating, but it's also a Zelda game, so the reviewer has to take into account the expectations and quality of the previous installments of the series, and runs into a conflict. He can review it relative to the current gaming landscape, where it's practically one of a kind, or he can review it based on how it measures up to, say, Ocarina of Time.
You see that a conflict arises. There are many solutions.
1: get reviewers to stop basing half the score on technical merit. IGN is perhaps the worst at this; they'll give a game more than a 5 if it has a working engine with some sort of shading. Removing the tech from the game review, and only factoring it in for the worst and best cases, would remove a lot of the guilt that comes from not giving a rather polished game a decent score, and would let the rating system go back to 8/10 being a good score.
2: accept that the rating system is skewed, and use decimals. This lets game ratings differ more within the skewed curve. You still have the dreaded x/10, but it's more understandable when the curve starts at 6 for proper games, and it lets the reviewer more easily express that a game is in the same league of awesome as most other games, yet has minor differences that distinguish it. This helps a lot with sequels, for instance: giving Skyward Sword a 9.3 says it's one of the best damn games made, but still can't reach OoT, which got a 10.0, if that's the reviewer's opinion. And yes, I'm making up these numbers, they're not founded in reality.
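To make the decimal idea concrete, here's a minimal sketch (the mapping, names and numbers are all invented, not from any real outlet): if "proper" games effectively occupy the 6.0-10.0 band, one decimal place restores the granularity that the compressed band takes away.

```python
# Hypothetical illustration of option 2: internal quality on 0-1 is mapped
# onto the skewed 6.0-10.0 band, with one decimal of precision so that
# games in "the same league of awesome" can still be told apart.

def display_score(quality: float) -> float:
    """Map an internal 0-1 quality estimate onto the skewed 6.0-10.0 band."""
    return round(6.0 + 4.0 * quality, 1)

# Made-up internal quality values for illustration only.
for name, q in [("Average shooter", 0.25),
                ("Skyward Sword", 0.825),
                ("Ocarina of Time", 1.0)]:
    print(f"{name}: {display_score(q)}/10")
```

With integer scores, the last two games would both collapse to a 9 or a 10; the decimal is what lets the reviewer say "almost, but not quite OoT".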
3: stop using a numerical value. Really, instead of tackling the issue, you can just say it: "MW3 is pretty much MW2 with new polish. You will be playing the same game in essence; you decide if you're fine with that". No numbers, no letting gamers be lazy in their choice of games.