September 28, 2025
4 min read
People Are More Likely to Cheat When They Use AI
Participants in a new study were more likely to cheat when delegating to AI—especially if they could encourage machines to break rules without explicitly asking for it
Despite what watching the news might suggest, most people are averse to dishonest behavior. Yet studies have shown that when people delegate a task to others, the diffusion of responsibility can make the delegator feel less guilty about any resulting unethical behavior.
New research involving thousands of participants now suggests that when artificial intelligence is added to the mix, people’s morals may loosen even more. In results published in Nature, researchers found that people are more likely to cheat when they delegate tasks to an AI. “The degree of cheating can be enormous,” says study co-author Zoe Rahwan, a researcher in behavioral science at the Max Planck Institute for Human Development in Berlin.
Participants were especially likely to cheat when they were able to issue instructions that did not explicitly ask the AI to engage in dishonest behavior but rather suggested it do so through the goals they set, Rahwan adds—similar to how people issue instructions to AI in the real world.
“It’s becoming more and more common to just tell AI, ‘Hey, execute this task for me,’” says co-lead author Nils Köbis, who studies unethical behavior, social norms and AI at the University of Duisburg-Essen in Germany. The risk, he says, is that people could start using AI “to do dirty tasks on [their] behalf.”
Köbis, Rahwan and their colleagues recruited thousands of participants to take part in 13 experiments using several AI algorithms: simple models the researchers created and four commercially available large language models (LLMs), including GPT-4o and Claude. Some experiments involved a classic exercise in which participants were instructed to roll a die and report the results. Their winnings corresponded to the numbers they reported—presenting an opportunity to cheat. The other experiments used a tax evasion game that incentivized participants to misreport their earnings for a higher payout.