That study is very likely worthless, for a number of reasons. His sample of students isn't random, and that alone sinks the entire experiment.
For a start, psychology as a whole is a 'soft science'. Even its rock-solid conclusions are prone to change and revision because they turn out to be inaccurate.
And to sink the study:
What about students who have experience with actual guns? It was done in the US, after all. He asked about firearms training, but what about other sources of experience, like military service, knowledge passed on by relatives, or self-practice? Because I'd 'speculate' that shooting guns makes you better at shooting guns.
And he didn't control for other activities that train hand-eye coordination. Archery, for instance: different weapon, basically the same activity.
And what about people who are accustomed to other forms of violence? Did he check for ring experience in combat sports, for instance? No.
So he used an extremely unreliable sample: one socio-economic group, the same education level, a single country, and likely heavily skewed in ethnic background as well. No matter what you do after that, from a sample that biased no conclusions can possibly be drawn.
His method is also flawed. He used a gun-shaped controller. Wait, hold it a moment there: so he didn't use the normal input device for a game, but a gun analogue instead? That means the increase in accuracy could be caused by handling something shaped like a firearm, and not by it being a game.
It also says he let students fire actual guns. Couldn't the increased willingness to aim at human-shaped targets be caused by handling actual firearms? It's pretty common knowledge that weapons in such a context incite violence by themselves, so again he's saddled himself with a confounding variable that sends his research down the drain.
Unless he's done some serious maths to rule out those confounding variables (which he pretty much can't, given how many other variables there are and how weak his test is), all his conclusions have already sunk.
To make it even worse, he put a life-sized human target at 6 metres' distance. Let me tell you: even if you had Parkinson's disease, you could score a headshot on a stationary target at only 6 metres. That's basically point-blank range, at which nobody can miss, so the accuracy results count for nothing at all.
Then he made yet another mistake in the number of shots: six. But wait, he's counting hit or miss, which means he's effectively running a binomial chance experiment. Hit or miss. With only 6 shots, it's impossible to draw any conclusions: the usual absolute minimum for drawing conclusions from a binomial chance experiment is around 30 trials, so the study's results are invalidated because the outcome can be explained by randomness alone.
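To make that concrete, here's a minimal simulation sketch, not anything from the study itself: it assumes two shooters with the exact same true hit probability (an arbitrary 50%, purely for illustration) and asks how often chance alone produces the kind of gap the study would call an effect.

```python
import random

# Two shooters with the SAME true hit probability, 6 shots each.
# How often does pure chance produce a gap of 2+ hits
# (i.e. a 33-percentage-point "difference")?
TRIALS = 100_000
SHOTS = 6
P_HIT = 0.5  # assumed true hit rate, identical for both shooters

def hits_in_session(p: float) -> int:
    """Count hits in one 6-shot session."""
    return sum(random.random() < p for _ in range(SHOTS))

big_gaps = sum(
    abs(hits_in_session(P_HIT) - hits_in_session(P_HIT)) >= 2
    for _ in range(TRIALS)
)
print(f"Gap of 2+ hits from randomness alone: {big_gaps / TRIALS:.1%}")
```

That prints roughly 39%: about two out of five pairs show a 'difference' of 33 percentage points or more purely by luck, with zero real difference between the shooters.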
Basically, this 'professor' wrote a setup so shoddy that if you handed it in for a bachelor's thesis, your tutor would come down on you like a ton of bricks and fail it outright.
So okay, professor does test, proves that randomness exists. Good for him. When is his university going to sack him for disgracing them?
Lancer873 said:
The statistics sound made up (really? 99 percent and 33 percent? That's way too rounded off and way too extreme a difference
He only let them take 6 shots, meaning a two-shot difference is already a 33-percentage-point gap. If one non-gamer hits twice and a gamer doesn't miss (and missing is quite bloody hard at such a tiny range), he can already write, in sensationalist style, that 'gamers make three times as many headshots'. His conclusions reek of exactly that kind of unsound assumption; the arithmetic is spelled out below.
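Spelled out (the 2-vs-6 split is my hypothetical from the paragraph above, not a figure from the study):

```python
# The arithmetic behind a "three times as many headshots" headline.
SHOTS = 6
print(f"one hit is worth {1 / SHOTS:.0%} of the score")   # 17%
print(f"a two-shot difference is a {2 / SHOTS:.0%} gap")  # 33%

non_gamer_hits, gamer_hits = 2, 6  # hypothetical 2/6 vs a perfect 6/6
print(f"{gamer_hits // non_gamer_hits}x 'as many headshots'")  # 3x
```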
But you're right. Whether it's 3 shots or 6, a hit percentage of 99% means he must have been making up the data: it's an impossible fraction, because people can only fire whole bullets, not 0.05 of a bullet. The quick enumeration below shows every percentage a session can actually produce.
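For both shot counts mentioned (pure arithmetic, nothing assumed beyond the shot counts themselves):

```python
# Every hit percentage a single session can actually produce.
# 99% is not among them for any small number of shots.
for shots in (3, 6):
    pcts = ", ".join(f"{hits / shots:.0%}" for hits in range(shots + 1))
    print(f"{shots} shots -> {pcts}")
# 3 shots -> 0%, 33%, 67%, 100%
# 6 shots -> 0%, 17%, 33%, 50%, 67%, 83%, 100%
```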
Unless there's a different explanation, that professor has committed fraud.