Motoring Alliance supports this effort and has partnered with TrueDelta. It becomes useful for all of us only if you enter your data on the TrueDelta site.
That said, some of us with highly modded cars (myself included) probably see the shop a bit more often, so our data points are not at all typical.
-
Our current stats for the Hatchback, Convertible, and Clubman cover owner experiences through June 30, 2012.
Repair frequencies, in terms of repair trips per 100 cars per year:
2012: 16, very small sample size
2011: 65
2010: 61
2009: 69
2008: 79
2007 (hatch and Clubman): 103
2006: 47
2005: 109
2003: 74, very small sample size
We have reworked the scale, which has tipped the 2005, 2007, 2008, and 2011 into "worse than average," with the 2009 and 2010 close to it.
The 2012 is looking good so far, but we need more data on more cars for a definitive stat. The Countryman isn't yet in the survey, but the 2012 is just half a dozen cars short of the minimum to include it.
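For those who want to see the arithmetic behind these numbers: the stat is simply total repair trips divided by total car-years of observation, scaled to 100 cars. A minimal sketch with made-up numbers (a simplification for illustration, not our actual code):

```python
from collections import defaultdict

# Hypothetical illustration of the "repair trips per 100 cars per year" stat.
# Each record: (model_year, car_years_observed, repair_trips_reported) for one car.
reports = [
    (2011, 1.0, 1),
    (2011, 0.5, 0),  # car observed for only half a year
    (2011, 1.0, 1),
    (2010, 1.0, 0),
    (2010, 1.0, 1),
]

trips = defaultdict(float)
car_years = defaultdict(float)
for year, years_observed, n_trips in reports:
    trips[year] += n_trips
    car_years[year] += years_observed

for year in sorted(trips, reverse=True):
    freq = 100 * trips[year] / car_years[year]
    print(f"{year}: {freq:.0f} repair trips per 100 cars per year")
```

The same grouping, keyed on trim or engine instead of model year, is all it would take to split the stats further once the sample sizes allow it.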
To see how competitors compare:
MINI Cooper reliability ratings and comparisons -
This is all rather interesting, but I have to ask:
What use will be made of my data?
What is the goal of the linked site?
Can you tell us more about the site? How is it funded? I see no advertising on it at this time.
Many of us have modified our cars to some extent; how is this figured into the reliability ratings?
I'm sure others will have questions as well. -
All reasonable questions, thanks.
The results are intended primarily for our members, but also for other people looking for information on cars.
The goal is as stated in the OP: better, more up-to-date information.
The site is self-funded. Revenue comes primarily from ads; I'm not sure why you didn't notice them, though I don't put them in obnoxious locations. I think most are clicked by people who enter the site through organic search and don't find what they were searching for. I don't think we get a substantial amount of revenue from members, but that's okay.
We carefully review all submitted repairs and exclude those that are clearly due to mods. This isn't often a problem. In a very small number of cases it isn't clear whether the problem would still have happened without the mod, and then a judgment call must be made. -
I don't see the reliability ratings being binned by the type of power plant (e.g., N12 vs. N14 vs. N18 for the second-generation MINIs), which would seem even more important than binning by model, given that the majority of reported repairs in the later years appear related to the drivetrain.
Of the 10 reports visible for 2011, 9 problems are for the N18 and 1 is for the N12. If those percentages even remotely translate to your entire database, it would seem unjustified to slap a "sad face" on MINIs with the naturally aspirated engine.
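For what it's worth, a sample of 10 leaves a wide margin of error. A quick back-of-the-envelope check using the standard Wilson score interval (my own illustration, nothing from the site):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 9 of the 10 visible 2011 reports involve the N18.
lo, hi = wilson_ci(9, 10)
print(f"N18 share of reported problems: 90% (95% CI: {lo:.0%} to {hi:.0%})")
```

Even with that uncertainty, the interval stays well above 50%, so the N18 does look overrepresented; whether that reflects the engines themselves or simply the mix of cars in the database is the open question. -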
Ahh... I didn't dig deep enough to see the ads. Yep, they're there, but not obtrusive.
-
The problem is that, without more participants, splitting the results by powertrain could result in no stats for either. With more participants, the split would be easy. I'll see what we can do with the November update. -
jcauseyfd New Member
I fail to see how TrueDelta addresses many of the concerns other entities suffer when trying to "rate" a vehicle's reliability. The only plus I can see is the access to detailed reports, which leads me to conclude the data may help in assessing cost of ownership, but nothing about reliability. Even that conclusion is somewhat questionable.
-
Which concerns aren't addressed?
Compared to other reliability surveys we:
--post the actual repair frequencies, not just dots
--promptly update four times a year, not just once a year after a long delay
--ask a clear, relatively objective question on the survey
--post all reported repair descriptions (noted in your post)
I don't think we're yet a good way to measure cost of ownership--this would require especially large sample sizes, and we don't have these for most models yet. Trying to fix this! -
jcauseyfd New Member
Probably the two biggest problems, imo, are the extremely low response rates and self-selection. Other problems include the inability to distinguish between differing expectations of owners and (as already noted in this thread) the failure to distinguish between different trim levels, engines, etc.
With regard to dots, it appears you just use a smiley face icon instead. I see no real difference.
I haven't seen your survey, but I have seen some of the repair histories that were reported. One of the items for a MINI was a report that the individual had the vehicle in for an oil change and the dealer went ahead and installed new pads and rotors per the included maintenance plan. I fail to see how that qualifies as a reflection on reliability - it is just routine maintenance. Similarly, I see reports of things like the stuck sunroof or melting hood scoops - problems yes, but not sure they really impact reliability. Another example - carbon build-up. Pretty much inevitable with a DI engine, so that again seems to be more of a maintenance issue than a reliability issue.
The presentation of "repair frequency" also seems misleading. If I look at a 2008 BMW 1-Series or a 2007 MINI Cooper, or a 2005 MINI Cooper, all of them indicate repair frequencies of 100+ per 100 cars. The implication is that 100% of those model years were repaired during the year. I'm pretty sure that is not the case. Like the other survey firms, it is difficult if not impossible to tell whether 100 cars each had a single problem (and a few had multiple problems) or if 1 single car had 100+ problems. The presentation leans toward the former though. The combination of the big number, the red, frowny face and the scale going off the big, red end sends the message "you WILL have a problem" with this vehicle. I see no difference between that and CU or JD Powers giving a car an empty circle or one dot or whatever and thus implying "you WILL have a problem" with this vehicle.
On the "Repair Histories" landing page, we see that "This survey won't tell you the rate of repairs you might face, but it can tell you the kinds of problems you might have." It won't tell me the rate of repairs I might face? Then why do the results include "repair frequencies"? Doesn't "repair frequency" = "rate of repairs"? -
Owners generally expect the car to start, the power windows to work, and so forth. By asking an objective question--did the car require a repair?--rather than "did the car have a problem you considered serious?", we manage this variable much better than other surveys do.
Similarly, because ours is a continuing survey that starts covering the car when a member joins, rather than a retrospective survey, the analysis is less subject to self-selection than others.
It's very easy to split the results by engine--if we have enough participants. We're doing this for other models where reliability differs by engine.
Yes, a smiley face is much like a dot. It's intended to be. The difference is we have the actual repair frequency next to it. You won't find such numbers elsewhere.
Not all repairs shown on the repair histories pages are included in the analysis. For instance, brake pads are excluded after 24,000 miles. I can assure you that most people spending hundreds of dollars to decarbonize an engine don't see it as simple maintenance. Nor are manufacturers telling people to expect this service the way they expect to replace brake pads. If it truly becomes common with all DI engines, the general public is going to be outraged. While a stuck sunroof doesn't impair the basic operation of the car, it does reflect the reliability of that part of the car: the sunroof. We clearly define "reliability" as the number of repair trips required. It's a closer fit than other possible terms, such as "quality" and "durability."
A repair frequency over 100 per 100 cars does generally mean that most (but rarely more than 80 percent) of the cars required at least one repair during the past year. For many models in the survey the score is under 30, even under 20. A score over 100 DOES imply that at least one repair per year is very likely. (In comparison, a black dot in CR can represent a reported problem frequency as low as 25 per 100 cars, but very few people realize this because they don't post the numbers.) For cars with enough responses, we publish additional "repair odds" stats that display the percentage of cars with zero repairs and the percentage with 3+ repair trips. The analysis includes an outlier control, so a single car with a very large number of repairs will not distort the result.
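To make that concrete, here's a toy illustration with made-up numbers. A cap is just one way to implement an outlier control, and the threshold of 4 here is arbitrary, chosen for illustration rather than being our actual parameter:

```python
import math

# Hypothetical per-car repair-trip counts for one model year over one year.
trips_per_car = [0, 0, 0, 1, 0, 2, 0, 1, 0, 0, 3, 0, 1, 0, 0, 12, 0, 1, 0, 0]
n = len(trips_per_car)

# Illustrative outlier control: cap any single car's contribution so one
# problem car (here, the one with 12 trips) cannot dominate the frequency.
CAP = 4
capped = [min(t, CAP) for t in trips_per_car]

print(f"raw frequency:    {100 * sum(trips_per_car) / n:.0f} trips per 100 cars per year")
print(f"capped frequency: {100 * sum(capped) / n:.0f} trips per 100 cars per year")

# "Repair odds" stats: share of cars with zero repairs and with 3+ repair trips.
print(f"zero repairs: {100 * sum(t == 0 for t in trips_per_car) / n:.0f}% of cars")
print(f"3+ trips:     {100 * sum(t >= 3 for t in trips_per_car) / n:.0f}% of cars")

# Why a frequency over 100 rarely means 100% of cars were repaired: under a
# simple Poisson model averaging 1.0 trips per car per year, only about
# 1 - e^(-1) = 63% of cars see at least one repair.
print(f"Poisson(1.0): {100 * (1 - math.exp(-1)):.0f}% of cars need at least one repair")
```

The principle is the same whatever the exact control: repairs cluster on a minority of cars, so a frequency over 100 does not mean every car was in the shop.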
You misquote the disclaimer on the repair histories page, though that may be partly because we need to relocate it. It says that the "repair histories"--the repair descriptions--should not be used to infer repair frequency. You've substituted "survey" for "repair histories." On these pages we only display cars for which repairs have been reported. Someone merely glancing at these pages might infer that the car is highly unreliable, since every car displayed has required repairs. But there could be ten times as many cars for which no repairs have been reported.
In sum, of your critiques, the one about sample size is by far the most valid. It's also the most easily fixable, and in part by posting here, I'm doing what I can to fix it. -
We'll advertise this initiative to our club members in an attempt to encourage additional participants and more data.
-
jcauseyfd New Member
I'm not sure your solution is really managing the variable any better than other surveys. I'll use myself as an example. I have occasionally experienced the stuck sunroof problem. I have a solution that gets the sunroof open, yet I have not taken the vehicle in to have it repaired.
So your question "Did the car require a repair?" results in a "No" answer.
If the question is "Did the part operate as intended?", the answer to that is also "No".
I still have no idea whether my vehicle is considered "reliable," nor does anyone else.
Where other surveys are explicitly asking whether the vehicle experienced a "serious" problem, your survey achieves the same result implicitly by only recording data points where the owner thought the problem was serious enough to seek and actually obtain a repair.
While you are treating the variable in a different way, I'm not sure it is necessarily better.
I'm not sure there is really any difference between other surveys that ask whether a vehicle experienced a "serious" problem and your survey, which asks whether the vehicle had a repair performed.
-
You seem to have some knowledge of statistics, so I'm surprised you think the size of the population is relevant in determining the required sample size. It's not even part of the equation.
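For anyone following along, the textbook margin-of-error formula shows why: the population size enters only through a correction factor that is essentially 1 for any real car population. A quick sketch (standard statistics, nothing TrueDelta-specific):

```python
import math

def margin_of_error(n, p=0.5, z=1.96, N=None):
    """95% margin of error for a proportion estimated from n responses.
    The finite-population correction is the ONLY place the population
    size N enters, and for any realistically large N it is ~1."""
    moe = z * math.sqrt(p * (1 - p) / n)
    if N is not None:
        moe *= math.sqrt((N - n) / (N - 1))  # finite-population correction
    return moe

# The margin of error depends on the sample size n, not the population size N:
print(f"n=100, N=10,000:    ±{margin_of_error(100, N=10_000):.3f}")
print(f"n=100, N=1,000,000: ±{margin_of_error(100, N=1_000_000):.3f}")
print(f"n=400, N=1,000,000: ±{margin_of_error(400, N=1_000_000):.3f}")
```

Going from 10,000 MINIs on the road to a million changes essentially nothing; quadrupling the number of responses is what halves the error.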
We've always posted the number of responses. Neither CR nor JD Power does.
We used to post the survey, but recently switched to a dynamic form that varies based on the questions answered. I'd like to post this one as well, but no one has asked for it, so other things have been higher priorities.
Essentially, the form asks people to report all repairs to the car, even minor ones, but not to report scheduled maintenance or problems due to accidents or mods.
On your sunroof issue, it's always possible to split hairs further. But even with sample sizes like ours, this sorts itself out. You get a lot more variance if you ask people to report problems "they considered serious." With that approach, some people won't report transmission failures because they were covered under warranty, while others report rattles because they feel a new car shouldn't have them.
Also, DIY repairs can and should be reported. We don't require that a shop perform the repair.
Self-selection doesn't distort the results to the same degree in every case. What we're worried about here is people self-selecting because they 1) have had a lot of problems and want to report them, or 2) are a fanboi for the car and want to boost its score.
For the first, a prospective survey will greatly reduce the distortion, as people usually cannot report the problems that led them to join.
For the second, asking a relatively objective question limits its impact. Asking people to report problems "they considered serious" lets people honestly under-report problems. They really like the car, so the problems they've had with it don't seem serious. This can be a huge source of bias in other surveys. With our survey, people have to be dishonest to under-report, and this is less likely.
The additional metrics are available even to people who don't sign up, just with a higher minimum sample size.
Thanks for the tip on the text on the landing page. I hadn't realized it read that way, and thought you were referring to the more accurate text on the destination pages. Fixed.
It's absolutely impossible to gather or report perfect survey information, just like it's impossible to create a car that does everything the best. There are inherent tradeoffs with cost and people's willingness and ability to objectively respond. We work within these constraints as best we can, and I feel we do a better job of it than people who've been doing it for far longer with much larger budgets.
In the end, our #1 weakness, by far, is the size of our samples. I did look into splitting the stats for the Cooper and Cooper S yesterday, and we don't have enough responses to do this yet. -
We have updated our reliability stats for the MINI based on owner experiences through September 30, 2012.
Repair frequencies, in terms of repair trips per 100 cars per year:
2012: 16, better than average, small sample size
2011: 66, worse than average
2010: 53, about average
2009: 60, about average
2008 (hatch and Clubman): 74, worse than average
2007 (hatch and Clubman): 81, worse than average
2006: 51, about average
2005: 104, worse than average
2003: 72, about average, very small sample size
We'll have further updates in February and May. We'd love to provide more precise stats, cover all model years, and provide separate stats for the S--just a matter of getting more owners involved!
To see how competitors compare, and to sign up to help:
MINI Cooper reliability ratings and comparisons -
I think surveys such as this can only serve as a guideline. It is difficult to draw any conclusion because we don't really know what all this information means. There are external factors we don't know much about: Where do you park your MINI at night? Does where you live affect how certain parts wear?
Is there some correlation between sample size and reliability? I think it was discussed in this thread already.
What are the things reported as breaking? I noted that upper engine mount failures on the 2005s appear to be fairly common. That would be a concern for 2005 MINI owners.
A rattling convertible roof is not something that will happen on most of our MINIs. If most of the problems reported have to do with optional equipment, that would go a long way toward explaining why MINIs have a worse than average reliability rating. Most Toyota Corollas come in one of three variants, nothing more, nothing less. If everybody here had only base model MINIs and hadn't modded them at all, I'm pretty sure our MINIs would be ranked above average.
It is difficult, then, to say whether a particular car model is better or worse than average.