
When it comes to large language model-driven tools, there are generally two broad categories of users. On one side are those who view AI as a powerful but sometimes flawed service that requires careful human oversight and review to detect logic or factual flaws in responses. On the other side are those who routinely outsource their critical thinking to what they see as an omniscient machine.
Recent research goes a long way toward creating a new psychological framework for that second group, which regularly engages in "cognitive surrender" to the seemingly authoritative answers of AI. The research also provides experimental tests of when and why people are willing to outsource their critical thinking to AI, and how factors like time pressure and external incentives can influence that decision.
Just ask the answer machine
In "Thinking-Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender," researchers at the University of Pennsylvania sought to build on existing scholarship outlining two broad categories of decision making: one shaped by "fast, intuitive, and affective processing" (System 1), and one shaped by "slow, deliberative, and analytical reasoning" (System 2). The researchers argue that the introduction of AI systems has created a new, third category of "artificial cognition," in which decisions are driven by "external, automated, data-driven reasoning originating from algorithmic systems rather than the human brain."