The Grim Ace said:
It seems more and more like every anti-gaming study isn't even slightly rooted in fact, just more weird anti-game bias masquerading as hard fact. I had more than enough teachers back in high school cite study after study that has now been proven to be absolute crap. Then again, all of these bullshit biased studies have led me to no longer trust any study, really.
How were those studies proven to be absolute crap, hmm? By OTHER studies? But you don't trust them either, WHAT TO BELIEVE?
Some studies are good, some studies are bad. You need to be able to distinguish the two if you're going to believe what they say. Sometimes it's readily apparent that a study is bad: if the sample size is less than 10, for instance, or if the study was performed by a group with clear ulterior motives (studies debunking global warming that were financed entirely by oil corporations, say).
Sometimes you can tell that there was no proper control group. It may be apparent that the study was too short to support the kind of conclusions it drew. And even statistics and figures can lie, if you know their tricks.
Tricks like the offset graph. Say you have three data points you want to present on a graph: 35, 34, 32. Plotted as full bars from zero, they all look pretty much the same height. But you know it's a downward trend, and you know there's got to be a good story in that downward trend, and you want it to look more dramatic. So you start the graph's bottom at 30. The bars now have heights 5, 4, 2, and it looks like there was a 60% drop over time, from 5 to 2. In reality, the drop was only about 8.6%. But the truncated axis paints a much better narrative for your story, even if it's deceptive.
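The arithmetic behind that trick is easy to check yourself. Here's a minimal sketch in Python, using the numbers from the example above (the variable names are mine, just for illustration):

```python
# The three data points from the example.
values = [35, 34, 32]

# Honest drop, measured against the full bar height (axis starting at 0):
honest_drop = (values[0] - values[-1]) / values[0] * 100
print(f"Actual drop: {honest_drop:.1f}%")  # about 8.6%

# Same data, but with the axis bottom moved up to 30:
baseline = 30
offset = [v - baseline for v in values]  # bar heights become 5, 4, 2
apparent_drop = (offset[0] - offset[-1]) / offset[0] * 100
print(f"Apparent drop: {apparent_drop:.1f}%")  # 60.0%
```

Same data, same trend, but the truncated axis makes the decline look roughly seven times steeper than it really is.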
Of course, I never mentioned the units of this graph, another big issue. Be suspicious of unmarked axes; without units, a graph is meaningless.
But the trick can also run in reverse. Say you're looking at cancer rates in mice. The control group has a 1.2% rate of some kind of cancer, and the test group has a 1.4% rate. You don't want to report a paltry 0.2 percentage-point increase; it sounds trivial. So instead of the absolute difference, you report the relative difference: 1.4% is about 17% bigger than 1.2%. Now, instead of a 0.2-point rise in overall cancer rates, you have:
X Chemical results in 17% greater risk of Cancer!
A much better headline, don't you think?
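The same two numbers, run both ways, show how the headline is manufactured. A quick sketch (the rates come from the example above; the names are mine):

```python
# Cancer rates from the example, in percent.
control_rate = 1.2  # % of control-group mice with the cancer
test_rate = 1.4     # % of test-group mice with the cancer

# Absolute difference: measured in percentage points.
absolute_diff = test_rate - control_rate
print(f"Absolute increase: {absolute_diff:.1f} percentage points")  # 0.2

# Relative difference: the increase as a fraction of the control rate.
relative_diff = (test_rate - control_rate) / control_rate * 100
print(f"Relative increase: {relative_diff:.0f}%")  # about 17 -- the headline number
```

Both figures are technically true; the headline just picks whichever one sounds scarier. Whenever you see a "X% greater risk" claim, ask what the baseline risk actually was.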
A healthy degree of skepticism is good practice when encountering studies. Some people decide to stop trusting them altogether, but if you can be bothered to learn how to judge the quality of a study, you can get a much better idea of which ones are clearly bullshit. Not all of them are; there are plenty of valid, well-done studies too.

The best way to find those studies usually involves reading a science journal, never the popular media. Peer review essentially forces other scientists to do all that hard work and critical thinking so you don't have to. Not that you should ever stop thinking critically about what you read, but it's one of the most effective BS filters you can have. Unfortunately, a journal article will be much harder to read than the watered-down version you'll find in popular media, but at least the only confusion over the source material you'll be subject to is your own.