Is AI Nudging Us Back?
How one study suggests human-AI feedback loops may be shaping our judgments.
Jose Villalta
As AI becomes increasingly embedded in how we work, learn, and interact, it's easy to see the benefits: faster workflows, streamlined tasks, and knowledge on demand. Processes that once took days or weeks can now be completed in a matter of hours, thanks to powerful models. But beyond the convenience lies a complex question: Are our interactions with AI shaping the way we think and perceive?
Bound by constraints
It’s a fair concern. As humans, we’re wired with cognitive limitations — a concept Herbert Simon called bounded rationality. When making decisions, we’re working with limited time, knowledge, and mental bandwidth, so we often settle for choices that are “good enough.” As a result, our brains rely on shortcuts and familiar patterns. While these strategies are efficient, they make us prone to errors in judgment. We may misremember events based on how emotionally charged they were, or lean too heavily on whatever information is most recent.
AI systems aren’t bound by the same biological constraints, but that doesn’t make them immune to error. After all, they’re trained on human behavior. And as we continue to rely on these tools every day, it becomes increasingly important to ask whether they’re not just inheriting our mistakes, but amplifying them.
Learning from each other
A study by Glickman and Sharot offers an example of how AI may be impacting human judgment. In a series of experiments, the researchers showed participants 100 arrays of faces. Each array contained 12 faces morphed to show varying levels of sadness or happiness. Participants then had to judge whether the group of faces in each array looked overall more “sad” or more “happy.” Researchers balanced the experiment so that the actual average of the arrays shown was evenly split (50% were more “sad” and 50% were more “happy”). Despite the true even split, participants showed a slight, but statistically significant, negativity bias, labeling 53.08% of the arrays as more sad.
Researchers then trained an AI model on the actual emotional values of the faces (not the human opinions). The model performed with 96% accuracy, meaning it could identify quite reliably which emotions were being shown. But when the same model was trained on the human labels instead (i.e., the data in which people had labeled 53.08% of the arrays as more sad), it didn’t just copy that bias; it amplified it. Trained on the human data, the model ended up labeling 65.33% of the face arrays as sad, even though only 50% actually were. The researchers believe this is because the model optimized for prediction accuracy. It picked up on patterns in the training data even when those patterns reflected human error, and instead of correcting for the bias, the model leaned into it.
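To make that mechanism easier to picture, here is a minimal toy simulation in Python (my own sketch, not the researchers' model or data): a noisy labeler with a small tilt toward "sad" labels ambiguous stimuli, and a simple classifier trained on those labels converts that mild tendency into a systematic shift. The bias size, noise level, and logistic-regression setup are all illustrative assumptions.

```python
# Toy simulation (not the study's model): a slightly biased, noisy labeler
# trains a classifier whose hard decisions end up amplifying the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# "Mean emotion" of each face array: negative = sad, positive = happy.
# Arrays are ambiguous (clustered near neutral) and split 50/50 by construction.
v = rng.normal(loc=0.0, scale=0.25, size=n)

# Hypothetical human labeler: noisy, with a small shift toward "sad".
bias, noise = 0.1, 0.5
p_sad = 1 / (1 + np.exp(-(bias - v) / noise))
human_label = rng.random(n) < p_sad            # True = labeled "more sad"

print(f"true share sad:  {np.mean(v < 0):.1%}")      # ~50%
print(f"human share sad: {human_label.mean():.1%}")  # ~53-55%

# Train a classifier on the human labels and look at its hard decisions.
model = LogisticRegression().fit(v.reshape(-1, 1), human_label)
model_label = model.predict(v.reshape(-1, 1))
print(f"model share sad: {model_label.mean():.1%}")  # ~65% -- bias amplified
```

The parameters here were chosen so the numbers land in the same ballpark as the study's, but the point is only the direction of the effect: a model that makes hard decisions strips out the labeler's randomness while keeping, and effectively magnifying, the tilt.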
Next, they tested a human-AI interaction. A new group of participants completed the task to see whether interacting with a biased AI would influence their own judgments. First, the participants labeled each array on their own, deciding whether the faces looked more sad or more happy. Then, on each trial, they were shown the AI’s response to the same array and asked whether they wanted to change their answer. When the AI disagreed with their choice, participants changed their minds about 32.7% of the time (P < 0.001; d = 1.97). But a subtler effect emerged over time: participants seemed to internalize the AI’s bias. Even before seeing the AI’s response on a given trial, their judgments started to shift. What began as a nearly balanced response rate (49.9% of arrays labeled as more “sad”) grew to 61.44% by the end. In other words, repeated exposure to a biased AI didn’t just sway moment-to-moment decisions; it gradually made people more biased, even on judgments they made before the AI weighed in.
Experiments like this offer a glimpse of a cycle: humans show a bias, an AI model trained on that bias amplifies it, and people then shift their own judgments toward the amplified version. In other words, we may be looking at a feedback loop in which human bias trains the AI, and the AI, in turn, may be training us back.
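And here is a rough sketch of the other half of that loop, again purely illustrative rather than anything from the paper: a simulated participant whose internal "sad" threshold drifts a little toward the AI's amplified threshold each time the two disagree. The thresholds, nudge size, and update rule are assumptions I chose for the sketch.

```python
# Toy sketch of the feedback loop: a human threshold drifting toward a biased AI's.
import numpy as np

rng = np.random.default_rng(1)
human_threshold = 0.0        # starts roughly unbiased
ai_threshold = 0.1           # the amplified "sad" boundary from the sketch above
learning_rate = 0.02         # how much one disagreement nudges the human

sad_share_by_block = []
for block in range(10):
    labels = []
    for _ in range(100):
        v = rng.normal(0.0, 0.25)            # one ambiguous face array
        human_says_sad = v < human_threshold
        ai_says_sad = v < ai_threshold
        labels.append(human_says_sad)
        # After answering, the human sees the AI's call; a disagreement pulls
        # the human's threshold slightly toward the AI's.
        if human_says_sad != ai_says_sad:
            human_threshold += learning_rate * (ai_threshold - human_threshold)
    sad_share_by_block.append(np.mean(labels))

print([f"{share:.0%}" for share in sad_share_by_block])  # share labeled sad creeps upward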
Pause and reflect
In their paper, Glickman and Sharot go on to explore other important effects in the human-AI relationship, including the impact of AI-generated images. They even point to an interesting statistic: In 2024, people were creating an average of 34 million AI-generated images per day!
Of course, a single study won’t tell us everything we need to know about the human-AI relationship. Still, as with most tools, it’s important to use these systems intentionally and reflectively. How? A few habits may help:
Pause before reading an answer. Jot down your own response or mental model first, then compare.
Ask “Why?” Have the model explain its reasoning or logic. Don’t accept polished answers at face value.
Check its sources, especially for factual claims or citations. Hallucinated references are still common.
Recognize that AI “learns” from flawed data. Models trained on human output will likely mirror human bias and may do so in exaggerated ways.
Final Thoughts
Ultimately, AI isn’t just speeding up our workflows. It’s impacting our lives in ways that we may not be fully aware of yet. That’s not inherently good or bad, but it does mean we should be intentional, reflective, and sometimes even skeptical in the best sense. As research continues to uncover how these systems influence human judgment, it’s worth taking a step back. The real risk isn’t that AI will think for us, but that we’ll forget to think for ourselves.