Does AI create cognitive surrender?
How LLMs are turning our brains to mush
There are many flippant responses to the question of whether AIs are reducing our cognitive abilities, most of which challenge the underlying assumptions of the assertion. I will not engage in that discussion. Instead, I’ll point to this paper, which asserts that they are: Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender.
“When [the AI] was accurate, participants’ accuracy improved substantially; when faulty, accuracy dropped well below the Brain-Only baseline, illustrating cognitive surrender.”
I question the judgment of calling back to Kahneman’s book when titling one’s own paper, given the fraud that riddles his field (and I say this as someone sympathetic to behavioral economics, who once studied under Richard Thaler). Nevertheless, the technology is important and young people need to learn it; but young people also need to learn, and those two goals seem to be at odds.
tl;dr
In a narrow test measuring whether people copy AI answers on math puzzles, participants did so around 80% of the time, even when the AI was wrong. The authors label this “cognitive surrender,” but “sensible delegation” is a less loaded, and more accurate, label.
I imagine the results would have been similar if there had been a confident human giving wrong answers, ideally in a white coat.
Labeling this as “system 3” is pompous.
The participants who were given false answers became more confident over time, especially compared to those who had to figure things out for themselves.
The effect was worse when the outcome was low stakes.
I think the hidden observation here, which the paper does not tackle, is that confidence inflation is a consequence of repeating authoritative sources. People generally delegate what they say to whatever source they believe is a reasonable choice, and the more they repeat that source, the more confident they become in what they are repeating. A useful technology for a social species. Less useful for any kind of truth seeking.
For parents worried about their kids cheating at homework, you are right to be worried. Good uses for AI at home with kids should include:
Never give children “the answer”; they will just take it and not think.
Require the thinking to be visible at each step.
Use self-tests to keep them honest.
Instill a belief that thinking is important in and of itself.
I also think that using it as a creative tool, where there is a real-world outcome the child is trying to achieve, makes it more likely that the child will not “cheat” and will instead seek the correct answer.
