A paper all EX practitioners should be paying attention to...

Emma Bridger

25th February 2026
Employee Experience
Employee Engagement
AI

AI, human thinking and the rise of “cognitive surrender”

A new academic paper from Wharton raises an important and slightly uncomfortable question for anyone working in employee experience, organisation design or internal communication:

What happens to human thinking when AI becomes part of everyday work?

The paper, Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender, doesn’t focus on whether AI is impressive or accurate. Instead, it looks at how people actually reason when AI is present and what that means for judgment, confidence and agency at work.

Its conclusions deserve serious attention.


From two systems of thinking to three

Much of what we know about human decision-making is based on the idea that we think in two main ways:

  • Fast, intuitive thinking — automatic, emotional, pattern-based
  • Slow, deliberate thinking — reflective, analytical, effortful

The authors argue that this model is no longer sufficient.

When people consult AI, they introduce a third system of thinking, one that sits outside the human brain altogether. The paper calls this System 3: artificial cognition.

AI doesn’t just support thinking. In many situations, it actively participates in it and sometimes replaces it.

What is “cognitive surrender”?

One of the paper’s most important contributions is the concept of cognitive surrender, which can happen whenever we use AI.

Cognitive surrender occurs when people:

  • Consult an AI system
  • Receive a confident, fluent response
  • Accept that response with little or no critical evaluation

This is different from using tools well. The authors distinguish clearly between:

  • Cognitive offloading — using AI to extend or support human thinking
  • Cognitive surrender — deferring judgment to AI and adopting its answer as one’s own

Across multiple experiments, the researchers found that people frequently followed AI advice, even when it was wrong. More strikingly, people often became more confident after consulting AI, regardless of whether the answer was correct.

In effect, people’s accuracy began to track the AI’s accuracy, not their own reasoning.

Not a people problem - a design problem

Crucially, the paper does not frame this as a failure of intelligence, motivation or professionalism.

Instead, cognitive surrender is shown to be a predictable response to common organisational conditions:

  • Time pressure
  • Cognitive overload
  • High task complexity
  • Systems that reward speed, certainty and output
  • Tools that present answers as authoritative and final

In these environments, deep reflection is costly and deferring to AI can feel like the most rational choice.

That has significant implications for how work is designed.

Can incentives and feedback help?

The researchers also tested whether people would challenge AI more when:

  • Accuracy mattered
  • Feedback was immediate
  • Consequences were visible

They found that incentives and feedback do reduce cognitive surrender. People were more likely to override faulty AI when they had reason to care and could see the results of their decisions.

However, cognitive surrender did not disappear entirely. It was reduced, not eliminated.

This suggests that nudges and incentives alone are not enough. The deeper issue lies in how organisational systems shape thinking.

Why this matters for employee experience

For EX practitioners, this paper raises some fundamental questions:

  • What kinds of thinking do our organisations make possible or impossible?
  • Where does reflection live in systems optimised for speed?
  • How visible are the consequences of AI-assisted decisions?
  • Do people have both the permission and the capability to challenge what systems produce?

Because when organisations don’t make space for human judgment, judgment doesn’t vanish.

It gets outsourced.

A shift worth noticing

This paper is not anti-AI. It shows clearly that AI can be highly effective, particularly under pressure.

But it also surfaces a quiet shift with long-term consequences:
organisations may unintentionally be designing work in ways that erode human judgment, not through intent but through structure.

For anyone concerned with the quality of employee experience, that feels like something worth paying close attention to.

You can read the full paper here (or, for pro members, you can find it saved there with other research and thinking).