Study Finds Similarities Between Videogame Addiction, Asperger's

xMacx

New member
Nov 24, 2007
230
0
0
Maybe.

Seriously, there are responses to everything you've posted so far, but I'd sum it up like this:

Research, much like most other technical branches of science, requires a good deal of training to really "get". But because research often communicates in words instead of just equations (like some of its technical cousins), people who are not trained in research make the mistake of thinking they understand all of the constraints that go into interpreting research results.

Look at the past few pages and see how many posts are me posting or explaining things that are basic research methods or statistics. Most of your objections are based on not understanding the basic assumptions of research methodology.

Note also how many posts I've spent trying to describe processes that are much more complicated than your descriptions, and attempting to explain the how and why.

Then note that you generally respond by re-simplifying the issue in a different way, followed by responding that you're confused by the original conclusions. That's because it's not that simple.

Not to be mean at all - I think about 10 posts ago we hit the point where you really do need further training to grasp the intricacies of the arguments you're making. For example, we can't discuss problems with experiments unless you have a grasp of the basic tenets of experimental validity. We can't really discuss the array of sampling issues unless you understand basic problems like selection bias. We can't discuss problems with correlational research unless you already understand why correlation doesn't equal causation.

Validity, also, has a pretty specific meaning in research, with a number of different facets; it's not a true-or-false question. One of the difficulties we're having in this conversation is that there's a language being used that requires some technical knowledge - like understanding that experimental validity means much more than "true".
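Since selection bias keeps coming up, here's a toy sketch of the idea in a few lines of Python. All the numbers are invented for illustration (nothing here is from the study): if willingness to volunteer correlates with the trait being measured, the volunteer sample's mean drifts away from the true population mean.

```python
import random

random.seed(0)

# Toy illustration of selection bias (all numbers invented, not from
# the study): willingness to volunteer rises with the trait score, so
# the volunteer sample's mean overstates the true population mean.
population = [random.gauss(50, 10) for _ in range(100_000)]

# Probability of volunteering grows with the trait score.
volunteers = [score for score in population if random.random() < score / 100]

pop_mean = sum(population) / len(population)
vol_mean = sum(volunteers) / len(volunteers)

print(f"population mean: {pop_mean:.1f}")  # ~50
print(f"volunteer mean:  {vol_mean:.1f}")  # ~52 - biased upward
```

The gap never goes away no matter how many volunteers you collect - that's why "willingness to be involved in a study" is a sampling issue, not a sample-size issue.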

I'm seriously not saying this to be a jerk - I was having a conversation with one of my colleagues about the same thing. There is a certain level that's just not fruitful without some kind of training, and I think we may have reached it. Further talk is just going to go around in circles.

For the record, I completely agree with you on numbers one and two, think 3 is kind of stating the obvious (anything can lead to bad research, even good previous research), and already talked about the issues with "proving" anything in a previous post.

My position is, and always has been, that research operates on a continuum, and that "bad" or "good" are defined only by the results' validity and generalizability to a greater population.





And for the last time, there is no control group in this study!!!
 

xMacx

The_root_of_all_evil said:
The third still stands as false because I specified 'Addiction' rather than 'Game addiction'.

The real problem is the way it has been stated. Aspergers is the control group which the Addiction is being compared to. Subset A is Game Addicts and subset B is Aspergers sufferers. This forms part of set A (Gamers) BUT the five other subsets that I talked about earlier ALSO show this link.

Therefore we're not talking about Gaming Addiction, but Aspergers.
Just as quick examples of my previous post (and to prevent you from thinking the above is all flaming):


1. Aspergers is not the control group. There is no control group. You only use control groups when you have an experimental manipulation to ensure that your treatment causes a difference from baseline (the control group).

2. This is an issue of construct validity. Each of those measures scores similarly, but the measures are only ways to score some underlying construct. The underlying construct drives the scores, not the other way around. If odd things happen in the real world that produce results not tied to the construct (like a participant having a cold), that's called a research confound, and it's expected to be controlled in the experimental design by the researcher. If it's not, that's a problem, but it should be assumed that any clear confounds are controlled for.

3. Also related to construct validity - you're making an invalid assumption that because you can generate 5 confounds, the study is about one aspect of the correlation. Note the construct validity approach above - what's being measured here is gamers' personality indices. They only correlated the results. So the focus is not on Aspergers; it's on the relationship between the measures for both groups.
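To put a number on the confound point, here's a toy simulation (made-up data, not the study's): when a hidden third factor drives both scores, the two measures correlate strongly even though neither causes the other.

```python
import random

random.seed(42)

# Toy confound simulation (made-up data, not the study's): a hidden
# factor drives both score A and score B, so A and B correlate
# strongly even though neither causes the other.
n = 10_000
hidden = [random.gauss(0, 1) for _ in range(n)]
score_a = [c + random.gauss(0, 0.5) for c in hidden]  # e.g. one group's measure
score_b = [c + random.gauss(0, 0.5) for c in hidden]  # e.g. the other group's

def pearson_r(x, y):
    """Pearson correlation coefficient, stdlib only."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

print(f"r(A, B) = {pearson_r(score_a, score_b):.2f}")  # ~0.80, with zero causation
```

That's why confounds have to be controlled in the design: once the data are collected, a correlation alone can't tell you whether A drives B, B drives A, or something hidden drives both.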


The_root_of_all_evil said:
So the study is useful to Aspergers Suffers, as we are able to combat addiction and thus work on the flaws that Suffers are prone to; but does not show that Gaming Addicts suffer in the same way as Aspergers (Given that Aspergers is a genetic deficiency and Addiction is a mental deficency).
This is a correlational study - I don't think it helps anyone at all, really; it just shows that two unrelated groups score similarly on a personality measure. And I don't think anyone would suggest the results imply that the two groups are experientially similar.



This is my point - your objections are based on misunderstandings of how science and research work, and I'm not going to be able to ramp you up on all of it via a forum in order to have a detailed conversation about it in this thread. At a high level we can talk about it all day, but not at this level of detail.
 
The_root_of_all_evil
Hrrrm. Given that I've taken Psychology to GCSE level and Statistics to Degree level, I find a few of your statements vaguely insulting; but you weren't to know that.

I still see it as you having set a task - which you yourself deem impossible - in order to prove something.
xMacx said:
My position is, and always has been, that research operates on a continuum, and that "bad" or "good" are defined only by the results' validity and generalizability to a greater population.
But using any study as a headline, rather than a footnote, amounts to generalising it; and a simple delve into the study can point out huge flaws in that, not the least of which is 'willingness to be involved in a study'.

This is my point: you are not detaching yourself from the study, and you are defending it with the same fervour that you accuse me of attacking it with, or the media of presenting it with.

xMacx said:
1. Aspergers is not the control group. There is no control group. You only use control groups when you have an experimental manipulation to ensure that your treatment causes a difference from baseline (the control group).
See, I disagree here. The specific traits envisioned are part of Aspergers and have been detailed as such. That means that the test was on the Gamers, with the Aspergers traits being the control they were measured against. Perhaps "control" is not the exact word - the "conclusion group", if that's any clearer.

Please don't think of me as lacking in intellect just because I might occasionally use the wrong word. Jargon is notoriously difficult and alters meaning often.

As for the 5 confounds, each will have similar personality indices(?) to the Aspergers group and the Gaming group; now if this is proven, then the correlation would prove not similarity between the groups but similarity between the effects observed.

xMacx said:
This is a correlational study - I don't think it helps anyone at all, really; it just shows that two unrelated groups score similarly on a personality measure. And I don't think anyone would suggest the results imply that the two groups are experientially similar.
So, we have all agreed it's not news, just research. Despite our different ways of coming at it.
 

xMacx

Again, not trying to insult, but you've got to understand - stats and psych up until graduate school really don't get you anywhere. This isn't a jab at you personally; it's just that schools don't present many of the concepts of research design or statistics until upper-level university classes, and the things they do teach you in secondary school are centered on mathematical probability rather than theory generation and testing. High school won't get you there. Seriously, usually even the first year of graduate school doesn't get you there. It's a training thing - 4-5 years of undergrad, followed by a few years of banging your head against the wall independently, and then you start to get it.

Specific points:

Good headlines are emphasis, not generalization - see Science magazine for an excellent example of headlines that do not generalize.

Seriously - you can't "disagree" that it's a control group. It's a research study, not a debate. Control groups have a definition, and the definition you're using isn't it. Look up "control group" on a wiki - I'm not linking anything anymore, as I don't think it's actually being read. A study that uses a control is a different type of study with a different purpose.

Which again gets at the problem - I'm not attacking your intellect. In fact, intellect has nothing to do with it. It's a matter of training. If you haven't had the training, you're not going to have the technical understanding to really make cogent arguments about statistical design. You could be the reigning genius of the world; if you don't have the background, it's not going to happen. It's not so much jargon as not understanding the underlying statistical and design assumptions of the arguments you're making.

For example, your five-confounds point - you're missing the point (confounds are separated from the design because of the challenges of understanding correlations) and again restating the correlation-doesn't-equal-causation argument. Which is fine, but it's a separate argument that I (and several others) pointed out on pages 1 and 2. Much like the media argument from before, you're confusing several issues (validity, statistical assumptions, and media reporting) by attempting to explain one with the others.

Again, this requires specialized knowledge. So you're not going to be able to "smart" your way through it (you seem very bright, so not a jab at you).


As for us agreeing that it's a meh kind of finding - see my post, #14. I always thought the study was blah, but for different reasons than what we've been talking about here.
 

tthor

New member
Apr 9, 2008
2,931
0
0
i have aspergers myself, and can honestly say that i am ADDICTED to videogames.
i don't think videogames are necessarily responsible for aspergers, but merely that they agree greatly with asperger-istic personalities on many levels.

(NOTE: if you do not know what aspergers truly is, then i suggest you learn a little about it before coming to this thread)
 
The_root_of_all_evil
xMacx said:
Again, not trying to insult, but you've got to understand - stats and psych up until graduate school really doesn't get you anywhere.
I'm English, dear boy.

I still think that your argument is deeply flawed, though. Asking us to get something that you say we can't understand - even when we do get it - in order to prove you wrong seems a trifle... well... humiliating to bring to a discussion.

All I'm saying is that Gaming Research Studies are used by the Media for a bad cause.

xMacx said:
I'm not linking anything anymore, as I don't think it's actually being read
I've read everything you've linked. That's just politeness.

tthor said:
(NOTE: if you do not know what aspergers truly is, then i suggest you learn a little about it before coming to this thread)
Well, I do subscribe to National Autism Monthly, so perhaps.
 

xMacx

The_root_of_all_evil said:
I still think that your argument is deeply flawed, though. Asking us to get something that you say we can't understand - even when we do get it - in order to prove you wrong seems a trifle... well... humiliating to bring to a discussion.

All I'm saying is that Gaming Research Studies are used by the Media for a bad cause.
Not "we" - you.

If that was all you meant, you could have read the first page and moved on. And you have yet to prove anything wrong methodologically - except your own ability to understand the assumptions of your arguments. To be sure, when I said you didn't understand, I was referring to you specifically - because no one else tried to take it to that level of detail. You were fine talking about media misrepresentation, and then you took it to a level you didn't have the skills for. The majority of the comments you made are seriously ridiculous from a methodological standpoint - but you don't even understand how badly you failed to understand it. And somehow, you think you really got it right?

I would have the sense not to go into a chemist's lab and criticize his measurements and methodology; I might know something about research methodology generally, but I don't know much about chemistry. I wouldn't be so arrogant as to presume I got something "right" when I don't know enough about the topic to understand what's appropriate for that branch of science.

Or in blunter terms - you've been wrong, you're still wrong, and worse, you still don't get why you're wrong. Which is the only reason I've been pressing on for the past couple of pages. No one else cares about the study at this point; I haven't focused on the gaming addiction/autism research specifically since my second or third comment or so. I'm just trying to explain methodology to illustrate where your thinking doesn't fit with general research assumptions.

But you continue to fruitlessly attempt to prove yourself "right" - I don't know if it's arrogance or ignorance or what, but you've danced around the points without ever getting any of what you're missing. You get slammed on three points, then pull out a fourth and make a comment against it that ignores the context and meaning of the previous points, as if that somehow means something. It's a hallmark of science novices - lashing out at any and every point because they don't really understand the import of what they're talking about.

You've been getting owned for the past ten posts or so; you just don't seem to get it. Instead you dance between 2-3 points as if not having a specific focus somehow makes your argument better. But you don't have to take my word for it - give the thread to someone with training in research methodology (above a master's in any science that does empirical research), ask them to read it, and see what they agree or disagree with.

Or congratulate yourself on proving someone "wrong" in a field you don't even begin to understand - which is glaringly obvious from your posts. Even when I stop trying to provide information and just lay out the methodological flaws in your argument, you ignore the information that's there. You just don't know enough to go to the depth you're diving to, and trying to be clever just makes it look worse.


And that's frustrating from a research point of view; I think a lot of us want to believe that if the public could be trained - if you could provide people with the information, if you could go one-on-one with each person who distrusts research - you could elucidate the flaws in their thinking and help them come to a greater understanding of research.

But in real discussions, people (like yourself) seem to always be much more concerned with some arbitrary definition of "winning" than with learning what it is they're talking about. Just because research reports or newspaper headlines make high-level results accessible doesn't mean the field is immediately accessible to anyone doing a Google search.

And that's the gap between researchers, the general public, and the media - people read and make blanket statements based on their lack of understanding of the field. They don't discern between the research and the media that reports on it. The majority of people who say "the media reports bad research" usually mean only "the media reports research that I don't agree with." It's not about good or bad research for much of the general populace (who don't seem to grasp the basic tenets of research needed to make that judgment call anyway).

But hey - I'm sure none of that relates to you. You got it right, and somehow proved scientific principles wrong. Perhaps you should go get a job where you can tell people about "bad science."