The Mirror Series: What AI REVEALS About Being Human
- Brian Acord
- Sep 15
- 4 min read
Last week, a new study landed with an uncomfortable finding: large language models can be manipulated by common peer pressure. That’s right, telling ChatGPT that “other LLMs are doing it” will make it do things it was designed not to do. In several tests, protective guardrails gave way completely when the model was nudged the same way people are manipulated. The headline is that safeguards can fail. The deeper headline is how much AI reveals about us.
“If control was the promise of traditional linear coding, unpredictability is the reality of AI.”
For decades, software behaved exactly as it was programmed. Programmers wrote code, computers executed it, and identical inputs produced identical outputs. Bugs existed, but they were always traceable to the code itself. That transparent model shaped how leaders bought systems, governed risk, and built compliance. It also built our confidence: software felt controllable because it was transparent and reliable.
Modern AI broke that contract. “Programmers” no longer specify every step. We train. We expose models to massive data, ask them to identify patterns, and then ask them to generate responses. That generalization gives us breakthroughs, from surprising product ideas to AlphaGo’s Move 37. But the behavior happens inside a “black box,” meaning the output cannot be traced back to a line of code or a line of reasoning. When AI succeeds, it can feel creative. When it fails, it feels personal.

Here is the paradox. We wanted to create machines that thought more like us. We succeeded. Now we have further evidence that they stumble like us too. They are susceptible to framing, to priming, to constrained instructions that narrow the world. Like humans, they have bias and make stuff up. The danger is not only that AI shares human vulnerabilities. The real danger is that leaders still treat AI results as if they were produced by traditional, transparent, rules-based software. That lack of understanding isn’t just risky; it’s reckless.
“Traditional software followed instructions; AI invents them.”
From Reckless Assumptions to Revealing Reflections
What makes AI most fascinating is that its flaws are our flaws, scaled and sped up. That mirror reveals a double standard and gives us a unique chance to confront what we’ve long ignored in ourselves. When a chatbot folds under peer pressure, we are alarmed. When a meeting folds to authority, we barely blink. When a model shows bias, we label it unacceptable. The difference is not in the flaw. The difference is the mirror. AI is reflecting patterns that already live in our people and our processes. The reflection is sharper because it is fast, scalable, and logged.
“When AI stumbles, it’s not a malfunction—it’s a mirror.”
What should leaders do with a mirror that does not flatter?
AI is not traditional software with predictable, repeatable behavior. Viewing AI with the same confidence as traditional software is naïve and reckless. You cannot govern what you refuse to see. Name the ambiguity for what it is, then plan to account for it.
Always anchor judgment in human accountability, not AI output. Automate drafting, summarizing, retrieval, and orchestration, but keep a human in the loop. Final decisions, approvals, and ethics belong with accountable people.
Build systems that anticipate persuasion and bias. We already design around human slippage. Extend those same cultural and procedural guardrails to machine slippage. Use review gates, diverse perspectives, red teaming, and clear escalation paths.
Think of AI as infrastructure, not gadgets. A single app is a lamp. Integrated adoption is a power grid. The grid increases reliability, capacity, and learning across the organization.
“Automate the busywork, own the responsibility.”
Upcoming articles in The Mirror Series will follow this arc.
Myths to discard. We will challenge the illusion of control, the expectation of perfect guardrails, and the idea that a chatbot strategy is an AI strategy.
Human patterns in the machine. We will examine bias, persuasion, and hallucination as mirrors of human cognition. We will ask what those reflections reveal about our data, our incentives, and our culture.
Culture and trust. We will look at how organizations absorb or reject AI based on values, norms, and stories. The right tool in the wrong culture is wasted automation.
Responsibility and leadership. We will explore why judgment cannot be outsourced, how to design accountable workflows, and how to govern AI as infrastructure rather than a stack of subscriptions.
Practice that lasts. We will translate these ideas into workable checklists, review gates, and adoption playbooks that raise reliability without pretending away uncertainty.
If you're a leader considering adopting AI, here is the clear truth. The biggest risks are not hiding in the model weights. The biggest risks have always been in the room. Bias. Persuasion. Overconfidence. Blind spots. AI did not invent these. It made them more visible.
“When AI reflects human shortcomings, we can discount them, or we can learn to correct them on both fronts.”
AI will not erase human flaws. In many ways it exacerbates them at speed and scale. The work now is to build organizations sturdy enough to handle fallible people and fallible machines, and wise enough to use the mirror and grow from it.
-----
SOURCE: Meincke, Lennart and Shapiro, Dan and Duckworth, Angela and Mollick, Ethan R. and Mollick, Lilach and Cialdini, Robert, Call Me A Jerk: Persuading AI to Comply with Objectionable Requests (July 18, 2025). Available at SSRN: https://ssrn.com/abstract=5357179