Funny Events of the "Woke" world

Asita

Answer Hazy, Ask Again Later
Legacy
Jun 15, 2011
3,231
1,084
118
Country
USA
Gender
Male
Nope, you'll just grab onto anything that says I'm wrong without looking at it objectively. Look at the Dunning-Kruger data: that's literally what it shows; the line of what people thought they scored was basically flat, sitting in average territory. Dunning-Kruger was debunked a long-ass time ago. Same thing with masks not working (see the latest Cochrane review) or vaccine mandates being completely pointless (there was literally never any data at any point validating them). You'll just believe what you want instead of actually looking at the data.
Phoenix, once again, I'm the one who quoted the damn study. I'm the one who quoted the author, the psychological journals, the dictionary, and the encyclopedia on the definition of the term. And you might even recall that I led this exchange by explaining how you were misrepresenting the results of the study. You're the one who's trying to reason away why all that doesn't count and claim that your definition is superior because you found an editorial that uses it. And you might further recall that this entire conversation started with you trying to score points against the people who were telling you that you were speaking from a point of ignorance by claiming that it was in fact your critics who were ignorant because [you claimed] they were using the wrong term.

Of the two of us, the one who is brazenly trying to "grab onto anything that says [the other is] wrong without looking at it objectively", is you. And the only person who you might have fooled into thinking otherwise is also you.
 
Last edited:

Trunkage

Nascent Orca
Legacy
Jun 21, 2012
9,094
3,062
118
Brisbane
Gender
Cyborg
Children's books and children's media, certainly from around that age, tended to go for very easy targets: fat people, ugly people, people with exaggerated features, like their noses. We've mostly moved past that, so I can understand that popular works like Dahl's - which are full of the "ugly, fat, EVIL" trope - might have some people decide to add a bit of nuance. Though I don't know if simply changing a word will stop the fat character from still being ridiculed for being fat. In that case it's not so much the word itself as the character (in this case Augustus Gloop), and whether he's defined by being "enormous" and made a fool of.

Roald Dahl's books are kinda problematic by nature by today's standards, and changing the words won't make much of a difference. He was mean-spirited in his writing, probably because he was a piece of shit in real life, and the targets in his books won't change because the descriptors changed. The best you can hope for is parents who might be reading these books to their kids providing some nuance themselves.

Honestly, J.K. Rowling's books have the same kind of schoolyard nastiness, and they're FAR more popular than Roald Dahl's. It's a nice sentiment, though, from the Roald Dahl Story Company to acknowledge that this type of writing in children's books is severely outdated.
My daughter is obsessed with the 13-Storey Treehouse books. The current one we are reading is a wheely-bin time-travel one that has them meet the pond scum that started life, then visit the dinosaur version of their publishing boss, called Bignoseasaurus. Because that publisher is called Big Nose... because he has a nose bigger than his face. Then they do stereotypical things in Egypt and Rome, like mummies and pyramids and a chariot race around the Colosseum. It's ridiculous.

The only thing that stands out is the rich big-nose guy, and I wonder if they'll have to change that in the future... but also, the book comes with pictures, and he certainly doesn't look Jewish.

Anyway, changing someone's description from "enormously fat" to just "enormous" is such a little change that I couldn't care less. Adding a line might be different. But they just remade The Witches movie, and they went from the 80s stereotypical disgusting-looking monster to beautiful Anne Hathaway with some really fucked-up teeth and nails. I'm all for the latter.
 

Gordon_4

The Big Engine
Legacy
Apr 3, 2020
6,504
5,761
118
Australia
My daughter is obsessed with the 13-Storey Treehouse books. The current one we are reading is a wheely-bin time-travel one that has them meet the pond scum that started life, then visit the dinosaur version of their publishing boss, called Bignoseasaurus. Because that publisher is called Big Nose... because he has a nose bigger than his face. Then they do stereotypical things in Egypt and Rome, like mummies and pyramids and a chariot race around the Colosseum. It's ridiculous.

The only thing that stands out is the rich big-nose guy, and I wonder if they'll have to change that in the future... but also, the book comes with pictures, and he certainly doesn't look Jewish.

Anyway, changing someone's description from "enormously fat" to just "enormous" is such a little change that I couldn't care less. Adding a line might be different. But they just remade The Witches movie, and they went from the 80s stereotypical disgusting-looking monster to beautiful Anne Hathaway with some really fucked-up teeth and nails. I'm all for the latter.
At the risk of being that guy: the Jewish nose stereotype, in the few outlines of it I've come across, was a long, hooked one. Fagin from Oliver Twist usually crops up as an example. The Roman or Egyptian nose tends to just be a really big schnozz. Fuck, the Duke of Wellington had a nose like that; it was a defining feature of his appearance.

Though I’d hardly argue there’s any kind of artistic science behind anti-Semitic caricatures.
 

Silvanus

Elite Member
Legacy
Jan 15, 2013
12,245
6,459
118
Country
United Kingdom
Random numbers produce the same results... It's not a human bias.
This is possibly the biggest facepalm of the thread so far. There are so many extremely basic things wrong with it.

1) Random numbers do what, exactly? They cannot exhibit Dunning-Kruger, because Dunning-Kruger is a description of how people think of their own capabilities. Numbers cannot do that.

2) Random numbers do not trend towards an average. An average is something you calculate once you have the numbers. It cannot exist before the numbers have been generated.

3) For a range of numbers, if they're randomly distributed, the average is likely to be somewhere around the middle. But the numbers will not be any more likely to bunch around that midpoint than anywhere else in the range.
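[Point 3) is easy to check empirically. A throwaway Python sketch, illustrative only; none of these numbers come from any study discussed in the thread:]

```python
import random

# Draw 100,000 uniform random scores on a 0-100 scale.
random.seed(0)
scores = [random.uniform(0, 100) for _ in range(100_000)]

# Count how many land in each of ten equal-width bins.
counts = [0] * 10
for s in scores:
    counts[min(int(s // 10), 9)] += 1

# Each bin holds roughly 10% of the draws: uniform random numbers
# do not bunch around the midpoint; the 40-60 band is no fuller
# than any other band of the same width.
print(counts)
```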

That bias doesn't make sense when you look at the study as a whole.
So you just don't believe it exists, and you fully disagree with Dunning and Kruger's own description of what happened, denying that people overestimated their own ability.

Even though it's undeniably, demonstrably true that they did.

IT DOESN'T EXIST...
What doesn't exist!? God, it's hard to figure out what you're talking about when you don't respond to specific parts of a post; you just splurge a bunch of non-sequiturs and we have to match them up with what you're trying to reply to.
 

Phoenixmgs

The Muse of Fate
Legacy
Apr 3, 2020
9,733
833
118
w/ M'Kraan Crystal
Gender
Male
Phoenix, once again, I'm the one who quoted the damn study. I'm the one who quoted the author, the psychological journals, the dictionary, and the encyclopedia on the definition of the term. And you might even recall that I led this exchange by explaining how you were misrepresenting the results of the study. You're the one who's trying to reason away why all that doesn't count and claim that your definition is superior because you found an editorial that uses it. And you might further recall that this entire conversation started with you trying to score points against the people who were telling you that you were speaking from a point of ignorance by claiming that it was in fact your critics who were ignorant because [you claimed] they were using the wrong term.

Of the two of us, the one who is brazenly trying to "grab onto anything that says [the other is] wrong without looking at it objectively", is you. And the only person who you might have fooled into thinking otherwise is also you.
Again, you can just look at the actual study yourself. I'm not at all misrepresenting the results of the study.


This is possibly the biggest facepalm of the thread so far. There's so many extremely basic things wrong with it.

1) Random numbers do what, exactly? They cannot exhibit Dunning-Kruger, because Dunning-Kruger is a description of how people think of their own capabilities. Numbers cannot do that.

2) Random numbers do not trend towards an average. An average is something you calculate once you have the numbers. It cannot exist before the numbers have been generated.

3) For a range of numbers, if they're randomly distributed, the average is likely to be somewhere around the middle. But the numbers will not be any more likely to bunch around that midpoint than anywhere else in the range.



So you just don't believe it exists, and you fully disagree with Dunning and Kruger's own description of what happened, denying that people overestimated their own ability.

Even though it's undeniably, demonstrably true that they did.



What doesn't exist!? God, it's hard to figure out what you're talking about when you don't respond to specific parts of a post; you just splurge a bunch of non-sequiturs and we have to match them up with what you're trying to reply to.
When you use random numbers to run their exact study, you get the same results; it's literally been done. Thus, it's not a human bias.

People know, for the most part, what the average on a test is.

It was debunked 20 years ago...

The bias does not exist.
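[For context on the "random numbers" claim: it traces to critiques of the original analysis (e.g. the random-number simulations published by Nuhfer and colleagues in Numeracy), which showed that the classic quartile plot can be produced from data containing no self-assessment bias at all. The following is an illustrative sketch of that argument, not either side's actual analysis:]

```python
import random

random.seed(1)
n = 10_000

# Two independent uniform percentile draws per "participant":
# an actual score and a self-estimate with NO relationship at all.
actual = [random.uniform(0, 100) for _ in range(n)]
estimate = [random.uniform(0, 100) for _ in range(n)]

# Sort participants into quartiles by ACTUAL score, then average
# the self-estimates within each quartile.
pairs = sorted(zip(actual, estimate))
quarter = n // 4
for q in range(4):
    chunk = pairs[q * quarter:(q + 1) * quarter]
    mean_actual = sum(a for a, _ in chunk) / quarter
    mean_est = sum(e for _, e in chunk) / quarter
    print(f"Q{q + 1}: actual ~{mean_actual:.0f}, estimate ~{mean_est:.0f}")

# The bottom quartile's mean estimate (~50) sits far above its mean
# actual score (~12), and the top quartile's (~50) sits below its
# actual (~88) -- the shape of the classic Dunning-Kruger plot,
# from pure noise.
```

Whether that artifact undermines the effect itself or only one way of graphing it is exactly what the rest of the thread disputes.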
 

Asita

Answer Hazy, Ask Again Later
Legacy
Jun 15, 2011
3,231
1,084
118
Country
USA
Gender
Male
Again, you can just look at the actual study yourself. I'm not at all misrepresenting the results of the study.
Bruh, all you have done is misrepresent the study. Do I need to start quoting the damn thing again?

Abstract:
People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.

...


Perhaps more controversial is the third point, the one that is the focus of this article. We argue that when people are incompetent in the strategies they adopt to achieve success and satisfaction, they suffer a dual burden: Not only do they reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the ability to realize it. Instead, like Mr. Wheeler, they are left with the mistaken impression that they are doing just fine. As Miller (1993) perceptively observed in the quote that opens this article, and as Charles Darwin (1871) sagely noted over a century ago, “ignorance more frequently begets confidence than does knowledge” (p. 3).

In essence, we argue that the skills that engender competence in a particular domain are often the very same skills necessary to evaluate competence in that domain-one's own or anyone else's. Because of this, incompetent individuals lack what cognitive psychologists variously term metacognition (Everson & Tobias, 1998), metamemory (Klin, Guizman, & Levine, 1997), metacomprehension (Maki, Jonas, & Kallod, 1994), or self-monitoring skills (Chi, Glaser, & Rees, 1982). These terms refer to the ability to know how well one is performing, when one is likely to be accurate in judgment, and when one is likely to be in error. For example, consider the ability to write grammatical English. The skills that enable one to construct a grammatical sentence are the same skills necessary to recognize a grammatical sentence, and thus are the same skills necessary to determine if a grammatical mistake has been made. In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter.
4.1.3. Summary


In short, Study 1 revealed two effects of interest. First, although perceptions of ability were modestly correlated with actual ability, people tended to overestimate their ability relative to their peers. Second, and most important, those who performed particularly poorly relative to their peers were utterly unaware of this fact. Participants scoring in the bottom quartile on our humor test not only overestimated their percentile ranking, but they overestimated it by 46 percentile points. To be sure, they had an inkling that they were not as talented in this domain as were participants in the top quartile, as evidenced by the significant correlation between perceived and actual ability. However, that suspicion failed to anticipate the magnitude of their shortcomings.

4.2.3. Summary

In sum, Study 2 replicated the primary results of Study 1 in a different domain. Participants in general overestimated their logical reasoning ability, and it was once again those in the bottom quartile who showed the greatest miscalibration. It is important to note that these same effects were observed when participants considered their percentile score, ruling out the criterion problem discussed earlier. Lest one think these results reflect erroneous peer assessment rather than erroneous self-assessment, participants in the bottom quartile also overestimated the number of test items they had gotten right by nearly 50%.

4.3.2. Results and Discussion
As in Studies 1 and 2, participants overestimated their ability and performance relative to objective criteria. On average, participants' estimates of their grammar ability (M percentile = 71) and performance on the test (M percentile = 68) exceeded the actual mean of 50, one-sample ts(83) = 5.90 and 5.13, respectively, ps < .0001. Participants also overestimated the number of items they answered correctly, M = 15.2 (perceived) versus 13.3 (actual), t(83) = 6.63, p < .0001. Although participants' perceptions of their general grammar ability were uncorrelated with their actual test scores, r(82) = .14, ns, their perceptions of how their test performance would rank among their peers were correlated with their actual score, albeit to a marginal degree, r(82) = .19, p < .09, as was their direct estimate of their raw test score, r(82) = .54, p < .0001.

As Figure 3 illustrates, participants scoring in the bottom quartile grossly overestimated their ability relative to their peers. Whereas bottom-quartile participants (n = 17) scored in the 10th percentile on average, they estimated their grammar ability and performance on the test to be in the 67th and 61st percentiles, respectively, ts(16) = 13.68 and 15.75, ps < .0001. Bottom-quartile participants also overestimated their raw score on the test by 3.7 points, M = 12.9 (perceived) versus 9.2 (actual), t(16) = 5.79, p < .0001.

As in previous studies, participants falling in other quartiles overestimated their ability and performance much less than did those in the bottom quartile. However, as Figure 3 shows, those in the top quartile once again underestimated themselves. Whereas their test performance fell in the 89th percentile among their peers, they rated their ability to be in the 72nd percentile and their test performance in the 70th percentile, ts(18) = -4.73 and -5.08, respectively, ps < .0001. Top-quartile participants did not, however, underestimate their raw score on the test, M = 16.9 (perceived) versus 16.4 (actual), t(18) = 1.37, ns.

4.4.2. Results and Discussion
Ability to assess competence in others. As predicted, participants who scored in the bottom quartile were less able to gauge the competence of others than were their top-quartile counterparts. For each participant, we correlated the grade he or she gave each test with the actual score the five test-takers had attained. Bottom-quartile participants achieved lower correlations (mean r = .37) than did top-quartile participants (mean r = .66), t(34) = 2.09, p < .05. For an alternative measure, we summed the absolute miscalibration in the grades participants gave the five test-takers and found similar results, M = 17.4 (bottom quartile) vs. 9.2 (top quartile), t(34) = 2.49, p < .02.

Revising self-assessments. Table 1 displays the self-assessments of bottom- and top-quartile performers before and after reviewing the answers of the test-takers shown during the grading task. As can be seen, bottom-quartile participants failed to gain insight into their own performance after seeing the more competent choices of their peers. If anything, bottom-quartile participants tended to raise their already inflated self-estimates, although not to a significant degree, all ts(16) < 1.7.


With top-quartile participants, a completely different picture emerged. As predicted, after grading the test performance of five of their peers, top-quartile participants raised their estimates of their own general grammar ability, t(18) = 2.07, p = .05, and their percentile ranking on the test, t(18) = 3.61, p < .005. These results are consistent with the false-consensus effect account we have offered. Armed with the ability to assess competence and incompetence in others, participants in the top quartile realized that the performances of the five individuals they evaluated (and thus their peers in general) were inferior to their own. As a consequence, top-quartile participants became better calibrated with respect to their percentile ranking. Note that a false-consensus interpretation does not predict any revision for estimates of one's raw score, as learning of the poor performance of one's peers conveys no information about how well one has performed in absolute terms.

Indeed, as Table 1 shows, no revision occurred, t(18) < 1.

Summary. In sum, Phase 2 of Study 3 revealed several effects of interest. First, consistent with Prediction 2, participants in the bottom quartile demonstrated deficient metacognitive skills. Compared with top-quartile performers, incompetent individuals were less able to recognize competence in others. We are reminded of what Richard Nisbett said of the late, great giant of psychology, Amos Tversky: "The quicker you realize that Amos is smarter than you, the smarter you yourself must be" (R. E. Nisbett, personal communication, July 28, 1998). This study also supported Prediction 3, that incompetent individuals fail to gain insight into their own incompetence by observing the behavior of other people. Despite seeing the superior performances of their peers, bottom-quartile participants continued to hold the mistaken impression that they had performed just fine. The story for high-performing participants, however, was quite different. The accuracy of their self-appraisals did improve. We attribute this finding to a false-consensus effect.

4.4.3. Study 4: Competence Begets Calibration

The central proposition in our argument is that incompetent individuals lack the metacognitive skills that enable them to tell how poorly they are performing, and as a result, they come to hold inflated views of their performance and ability. Consistent with this notion, we have shown that incompetent individuals (compared with their more competent peers) are unaware of their deficient abilities (Studies 1 through 3) and show deficient metacognitive skills (Study 3).
6. Concluding Remarks

In sum, we present this article as an exploration into why people tend to hold overly optimistic and miscalibrated views about themselves. We propose that those with limited knowledge in a domain suffer a dual burden: Not only do they reach mistaken conclusions and make regrettable errors, but their incompetence robs them of the ability to realize it. Although we feel we have done a competent job in making a strong case for this analysis, studying it empirically, and drawing out relevant implications, our thesis leaves us with one haunting worry that we cannot vanquish. That worry is that this article may contain faulty logic, methodological errors, or poor communication. Let us assure our readers that to the extent this article is imperfect, it is not a sin we have committed knowingly.
And since you insist on quibbling over the definition of the term:
Per Dunning himself in Advances in Experimental Psychology, Chapter 5 - The Dunning-Kruger Effect: On Being Ignorant of One's Own Ignorance:

In this chapter, I provide argument and evidence that the scope of people's ignorance is often invisible to them. This meta-ignorance (or ignorance of ignorance) arises because lack of expertise and knowledge often hides in the realm of the “unknown unknowns” or is disguised by erroneous beliefs and background knowledge that only appear to be sufficient to conclude a right answer.

As empirical evidence of meta-ignorance, I describe the Dunning–Kruger effect, in which poor performers in many social and intellectual domains seem largely unaware of just how deficient their expertise is. Their deficits leave them with a double burden—not only does their incomplete and misguided knowledge lead them to make mistakes but those exact same deficits also prevent them from recognizing when they are making mistakes and other people choosing more wisely. I discuss theoretical controversies over the interpretation of this effect and describe how the self-evaluation errors of poor and top performers differ. I also address a vexing question: If self-perceptions of competence so often vary from the truth, what cues are people using to determine whether their conclusions are sound or faulty?
Shall I go on?
 
Last edited:

Buyetyen

Elite Member
May 11, 2020
3,129
2,362
118
Country
USA
It was debunked 20 years ago...

The bias does not exist.
If you could prove that, you would have presented empirical data already. But all you have is sophistry and we know why. Just because you can make shit up doesn't mean you should.
 

Silvanus

Elite Member
Legacy
Jan 15, 2013
12,245
6,459
118
Country
United Kingdom
When you use random numbers to run their exact study, you get the same results; it's literally been done. Thus, it's not a human bias.
What on Bast's green earth are you talking about? Care to explain how the same study-- which involved asking people how they saw their own ability, and then comparing it to performance-- could be done with "random numbers"? You gonna ask the numbers how they view themselves?

This is just utter nonsense.

People know, for the most part, what the average on a test is.
In case you've forgotten: you're arguing that random numbers trend to this "average". Not people.

It was debunked 20 years ago...

The bias does not exist.
So just blind reiteration of positions that have already been comprehensively refuted, with nothing new.
 

The Rogue Wolf

Stealthy Carnivore
Legacy
Nov 25, 2007
16,950
9,651
118
Stalking the Digital Tundra
Gender
✅
Bruh, all you have done is misrepresent the study. Do I need to start quoting the damn thing again?

And since you insist on quibbling over the definition of the term:
Per Dunning himself in Advances in Experimental Psychology, Chapter 5 - The Dunning-Kruger Effect: On Being Ignorant of One's Own Ignorance:



Shall I go on?
Why are any of you bothering with this? Phoenixmgs isn't arguing out of any desire to compare knowledge; he's arguing because his ego can't handle a world that doesn't hold him as its highest priority. It doesn't matter how well you prove him wrong, because he'll never, ever admit it. Everyone should do what I did- put him on ignore and let him rant into the void. It's all the attention he deserves.
 

Ag3ma

Elite Member
Jan 4, 2023
2,574
2,209
118
Random numbers produce the same results... It's not a human bias. If you had a bias for picking, say, "tails" more than "heads" in a coin flip, random numbers wouldn't give you the same results.
Okay, so, we're now what, three years on in discussing matters of scientific literature? And you STILL have not learnt that you shouldn't just believe something because one scientific paper appears to say so according to a brief, unskilled skim?
 