Maybe.
Seriously, there are responses to everything you've posted so far, but I'd sum it up like this:
Research, much like most other technical branches of science, requires a good deal of training to really "get". But because research often communicates in words instead of just equations (like some of its technical cousins), people who are not trained in research make the mistake of thinking they understand all of the constraints that go into interpreting research results.
Look at the past few pages and see how many of my posts consist of explaining basic research methods or statistics. Most of your objections come from not understanding the basic assumptions of research methodology.
Note also how many posts I've spent trying to explain that the processes involved are much more complicated than your descriptions, and trying to lay out the how and why.
Then note that you generally respond by re-simplifying the issue in a different way, and then saying you're confused by the original conclusions. That's because it's not that simple.
Not to be mean at all - I think about 10 posts ago we hit the point where you really do need further training to grasp the intricacies of the arguments you're making. For example, we can't discuss problems with experiments unless you have a grasp of the basic tenets of experimental validity. We can't really discuss the array of sampling issues unless you understand basic problems like selection bias. We can't discuss problems with correlational research unless you already understand why correlation doesn't equal causation. Validity, too, has a pretty specific meaning in research, with a number of different facets - it's not a true-or-false question. One of the difficulties we're having in this conversation is that the language being used requires some technical knowledge, like understanding that experimental validity means much more than "true".
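To make the correlation/causation point concrete, here's a minimal sketch in Python (every variable name and number below is invented for illustration; none of it comes from the study we're discussing): a hidden confounder drives two variables that have no causal link at all, yet they end up strongly correlated.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# z is a confounder (say, "temperature") that drives both x and y.
z = rng.normal(size=n)

# Neither x nor y causes the other; both just depend on z plus noise.
x = 2.0 * z + rng.normal(size=n)   # say, "ice cream sales"
y = 1.5 * z + rng.normal(size=n)   # say, "drowning deaths"

# x and y are strongly correlated anyway (about 0.74 here)...
print(np.corrcoef(x, y)[0, 1])

# ...but the correlation vanishes once we control for z, e.g. by
# correlating the residuals after regressing each variable on z.
x_res = x - z * (x @ z) / (z @ z)
y_res = y - z * (y @ z) / (z @ z)
print(np.corrcoef(x_res, y_res)[0, 1])  # near zero

The point being: a strong correlation tells you nothing, by itself, about which variable, if either, is doing the causing.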
I'm seriously not saying this to be a jerk - I was having a conversation with one of my colleagues about this same thing. There's a certain level beyond which the discussion just isn't fruitful without some kind of training, and I think we may have reached it. Further talk is just going to go around in circles.
For the record, I completely agree with you on points one and two, think point three is kind of stating the obvious (anything can lead to bad research, even good previous research), and have already talked about the issues with "proving" anything in a previous post.
My position is, and always has been, that research operates on a continuum, and that "bad" and "good" are defined only by the results' validity and generalizability to a broader population.
And for the last time, there is no control group in this study!!!