You've put me in the awkward position of defending a company whose failure rate was more than double its competitors'. Almost 25% is awful; it just isn't 50%. I had one fail on me as well, so I don't want to defend them, I'm just correcting an error.
slash2x said:
See I looked at the other study and realized that is some rather hard to confirm data, as it said it was random. If they are directly linked to the replacement process where are the exact numbers?
Exact numbers? You have the total sample size (2,500 360s) and the failure rate (23.7%). That's all you need for exact numbers, and their figures are broken down by failure type (I've linked their study below). For example, excluding the RROD the failure rate was around 11.7%, and the RROD itself caused 12% of all 360 consoles to fail. If you want a count, you just figure out what X% of 2,500 is. For example, 23.7% of 2,500 machines is 0.237 × 2,500 = 592.5 failures. The 23.7% was almost certainly rounded, which accounts for the half machine; the actual count would have been 592 or 593 units, not half a broken console.
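For anyone who wants to check the arithmetic, here's a quick sketch in Python. The 2,500 sample size and the 23.7%/12%/11.7% rates are the study's; the code itself is just my illustration:

```python
# Failure counts implied by the SquareTrade study's reported rates.
sample_size = 2500
overall_rate = 0.237   # 23.7% of all sampled 360s failed
rrod_rate = 0.120      # ~12% failed with the RROD specifically
other_rate = 0.117     # ~11.7% failed for other reasons

failed = overall_rate * sample_size   # 592.5 "machines" on paper
# A half machine can't fail, so the published rate was rounded;
# the true count was 592 or 593 units.
print(failed)
print(rrod_rate * sample_size, other_rate * sample_size)
```

Note that the two failure-type rates (12% + 11.7%) add back up to the 23.7% total, which is a good internal consistency check on the study's numbers.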
If you want more detailed data, read their study: http://www.squaretrade.com/htm/pdf/SquareTrade_Xbox360_PS3_Wii_Reliability_0809.pdf
Randomization is what makes studies accurate. If you randomize the samples correctly and include enough of them, you get legitimate results that generalize to the overall set the samples are drawn from. Please understand that I loved statistics in college and took as many courses as I could, so don't feel bad if you don't know why randomization is so important or exactly what it is, as your response seems to indicate. Most people don't. Major parts of my coursework were devoted to why well-known studies weren't set up properly, and lack of randomization, or failing to establish a sample set that covers the entire group, are the most common errors to make.
Example: if you want to find out how common it is for smokers to get lung cancer, you don't draw your sample from a hospital list of smokers who were admitted for smoking-related illnesses. You have to sample from smokers overall, not just the ones admitted to hospitals. Selecting only smokers who have visited a hospital because of smoking ruins the sample set: it only tells you how likely a smoker is to have lung cancer IF they're admitted for a smoking-related illness. Next, if you select from that set by some actual criterion instead of randomly, you'd be skewing the results in a different way. For example, if you only select women for the study because you (a hypothetical/ambiguous "you", not you as in Slash2x) are a female college student afraid of big burly male smokers, the results would not be random and would no longer necessarily say anything about smokers in general. A study that set out to establish the rate of smokers getting lung cancer would now only tell you the odds that a female smoker who visited a hospital for a smoking-related illness has lung cancer.
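You can see the hospital-list bias in a tiny simulation. Every rate below is invented purely to illustrate the effect; nothing here is real medical data:

```python
import random
random.seed(42)

# All rates are made up for illustration only.
TRUE_RATE = 0.15             # pretend lung-cancer rate among all smokers
P_HOSPITAL_IF_CANCER = 0.90  # cancer cases usually end up admitted
P_HOSPITAL_OTHERWISE = 0.10  # most other smokers never visit

population = []
for _ in range(100_000):
    cancer = random.random() < TRUE_RATE
    p_admit = P_HOSPITAL_IF_CANCER if cancer else P_HOSPITAL_OTHERWISE
    population.append((cancer, random.random() < p_admit))

def cancer_rate(group):
    return sum(cancer for cancer, _ in group) / len(group)

random_sample_rate = cancer_rate(random.sample(population, 2500))
hospital_only_rate = cancer_rate([p for p in population if p[1]])

print(f"random sample of smokers: {random_sample_rate:.2f}")  # near 0.15
print(f"hospitalized smokers only: {hospital_only_rate:.2f}")  # inflated
```

The random sample lands close to the true rate, while conditioning on hospital admission inflates the estimate several-fold, exactly the trap described above.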
That's all random means. Of all the units they processed, they randomly selected 2,500 instead of picking and choosing which machines to evaluate. That is the only honest way to find the average failure rate if you aren't going to include the entire group, which is in the millions. Thanks to the Central Limit Theorem and good ol' Gaussian distribution (the normal distribution, informally known as a bell curve), you don't have to do the math on every single member of the whole set; you just need a sufficiently large random sample. Some people feel 30 is enough to generally get a normal distribution, but larger samples are certainly more likely to be accurate. 2,500 randomly selected samples from the overall distribution? That's firmly in the realm of a valid sample size.
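To put a number on "sufficiently large": here's the standard normal-approximation margin of error for a proportion, applied to the study's figures. This is my own back-of-envelope calculation, not something SquareTrade published:

```python
import math

n, p = 2500, 0.237                 # SquareTrade's sample size and failure rate
se = math.sqrt(p * (1 - p) / n)    # standard error of a sample proportion
moe = 1.96 * se                    # 95% confidence margin of error
print(f"23.7% +/- {moe * 100:.1f} points")
```

That works out to roughly plus or minus 1.7 percentage points at 95% confidence, which is why 2,500 random units is plenty: the true rate is pinned down to a narrow band, nowhere near 50%.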
Now then, the Game Informer Study is not strictly valid for a number of reasons:
Overall Sample Set sampled does not match all 360 console owners:
1. Respondents to the survey were not necessarily console owners. Surely you're aware of the console wars we've been in, and you may recognize the likelihood that a PS3 or Wii owner would want to submit a negative result for a system that is not their own.
2. Respondents to the survey were readers of Game Informer and not necessarily an average sampling of the population.
3. Multiple respondents may have reported on the same console (for example, a dorm room with 12 people sharing one system), which duplicates samples.
Randomization of samples:
1. The respondents who had broken 360s may have been more likely to respond than users who had not had an issue.
2. Had someone been looking for 360 RROD information at the time because their system had just broken, they would have found this survey, further increasing the likelihood that the samples were not random.
3. The internet being what it is, people with the problem may have rallied: other message boards and friends reaching out to people they knew (or did not know) were having the problem and encouraging them to enter it. People to whom the information was relevant would have paid attention, while people without the problem may have ignored it.
For studies like this to be random, the testers need to control who they get answers from; letting respondents select themselves raises a myriad of issues like the ones above.
Criticisms I can levy at Square Trade:
Overall Sample Set sampled does not match all 360 consoles:
1. This only accounted for system failure within the first two years of ownership, and by 2009 the console had been out for 3-4 years. However, since the Jasper chipset released in 2008 largely abated the RROD, including additional years should only bring the average down, not up.
2. This only includes malfunctions reported to them. This is sidestepped by random sampling of all consoles they cover.
3. The numbers appear to start in 2007. Fortunately, 2007 was still on the original chipset with its failure rate, though I'd have liked the numbers to go back a year earlier. Still, we have enough of a baseline before the Falcon chipset to know that that chipset only made the problem worse.
Randomization:
1. Perfect. They simply used a basic calculator function to randomize the numbers reviewed. Can't do better than that.
These guys knew what they were doing. They even broke out results for the Falcon chipset, which spiked tremendously in 2008 because that chipset did even worse than the original.
I do not know if I trust data from a company that has a direct business link to the data line they are working with, anymore than an internet poll.... If you look at the last link I dumped in, it brings up a interesting point. If the number was wrong why did Microsoft not refute this count? Maybe it was 37% and that is just not that stellar either.
Two answers here:
1. Square Trade is not Microsoft. They are directly impacted when more people get a system repaired under their warranty. They did this study because failure rates directly affect the profitability of maintaining warranties for the 360, and they do the same for other products they cover. Think about it: if they covered the PS3, 9 out of 10 warranties never got cashed in. The price paid for all ten warranties (all ten paid for the warranty) covers the 10% that failed, and the rest is profit. With the 360, however, 1 out of every 4 took a hit. That makes it a lot harder to profit unless the warranty costs more than a quarter of the average cost of repairing one. If any company has both the data and the motivation to report this correctly, it's Square Trade. There's no one else. They're the only one that knows which machines they cover with a warranty AND which of those machines get sent in. No other source has this information: a repair center would be skewed toward broken, since it only sees broken units, and a retail store would only know how many units it sold.
Additionally, and this is important, Square Trade has investors, and these results directly impact the company's performance. Had they lied, they would have been breaking the law by misleading investors. We're talking huge fines and jail time just to report 23.7% in a way that does not benefit them at all.
http://en.wikipedia.org/wiki/SquareTrade
It would be different if the company were some small, privately owned outfit. But this is a HUGE multinational firm that has been releasing this kind of data on other products for some time now: http://www.squaretrade.com/warranty-buyer-knowledge-base
Keep in mind, Square Trade would benefit if users bought PS3s or Wiis instead of 360s, because the profit margins on those warranties were higher. So it would even have been advantageous for them to report 360 numbers of 50% or higher, to discourage shoppers in the market for a console from buying a 360.
2. Microsoft's PR department likely thought it best to say nothing in this situation. It was a good move, since their numbers were already more than double those of their direct competitor.
Also, you don't know that they didn't dispute the number privately. Square Trade releasing the numbers they found may itself have been something Microsoft allowed them to make public that Square Trade would otherwise not have been permitted to publish.
Your study is from August 2009; my study is from September 2009, at least going by when the articles were posted. It's HIGHLY unlikely the two are unrelated. Microsoft's PR department likely decided the information was better revealed by a legitimate company that would not benefit from fudging the numbers, that already had the numbers on hand, and that would in fact be significantly harmed by lying. That company is Square Trade.
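The warranty economics in point 1 above can be made concrete with a toy calculation. The repair cost below is invented; only the failure rates come from the studies being discussed:

```python
# Break-even warranty price: expected payout = failure_rate * repair_cost.
REPAIR_COST = 100.0   # hypothetical average cost to repair one console

def break_even_price(failure_rate, repair_cost=REPAIR_COST):
    """Minimum warranty price at which expected payouts are covered."""
    return failure_rate * repair_cost

ps3_price = break_even_price(0.10)    # ~10% failure rate
x360_price = break_even_price(0.237)  # 23.7% failure rate
print(ps3_price, x360_price)
```

Under these made-up numbers a PS3 warranty breaks even at $10 while a 360 warranty needs $23.70, well over double, which is exactly why a warranty company would track these rates obsessively.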
I find 24% VERY hard to swallow because I bought 3 xbox 360 systems.... 2 have failed, one has not, the latest one is a S. I purchased one in the Midwest and one in the Deep South(usa), so different shipments and years apart. I know at least a 9 people who have had at least 2 systems as well,and if you ask on Xbox Live or any other gamer I have met they have similar stories. Again from different areas of the USA. So either I only know people with the worst luck or 1 out of every 2 (approximately) systems failed....... Like you said that is just way to damn many.
Your personal experience does not an average make. I could treat your statement like a study and point out significant flaws beyond the very small sample size. For example, since you've been frustrated with your 360s' failures, are you more likely to discuss the topic with people who have had the same issue? And would people without the problem necessarily volunteer that theirs hasn't had any trouble? All you're establishing is that you know people who have had the problem, not that you have any accurate gauge of how many people don't have it, which you'd need to reach any conclusion. I also don't know whether your own 360 use is significantly above the average user's; seeing as you bought three 360s and are on a gaming website, that's not an unlikely conclusion. I will say the study acknowledges that it doesn't account for sales by region when considering its results, so we don't know if one region fared worse than others.
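Even taking the anecdote at face value, a dozen or so consoles tells you almost nothing. Here's the standard normal-approximation margin of error applied to those rough counts (a sketch of mine, not real data; small samples strain the approximation, which only reinforces the point):

```python
import math

n, p_hat = 12, 0.5   # roughly a dozen acquaintances' consoles, half failed
moe = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
low, high = p_hat - moe, p_hat + moe
print(f"95% CI: {low:.0%} to {high:.0%}")
# The study's 23.7% sits comfortably inside that interval.
```

The interval runs from about 22% to 78%: an observed 50% failure rate among a dozen machines is statistically consistent with a true rate of 23.7%, so the anecdote doesn't actually contradict the study at all.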
But only a handful of companies know both how many units are covered by a warranty and how many of those units cash the warranty in.