We Know It’s a Bot—So Why Do We Treat It Like a Person?

Here’s something strange:
Even when people know they’re talking to an AI, they still talk to it like it’s human.

Not always. Not everyone. But enough that it’s raised eyebrows in psychology labs, marketing departments, and UX teams alike.

The research is in: humans respond emotionally to machines, especially when those machines know how to play the part.

The Rise of the Chatbot Confessional

Study after study shows it:

  • People share more personal information with AI than with other humans

  • Chatbots with names or photos elicit more trust

  • Slight tweaks in tone, pacing, or empathy cues change how users feel—even when they know it’s artificial

In one widely cited 2023 study published in JAMA Internal Medicine, healthcare professionals rated chatbot responses to patient questions as more empathetic than responses from real doctors. Let that sink in.

We’re not just tolerating AI. In some cases, we’re preferring it.

Why? The Psychology Bit

Humans are wired to seek connection, not code.

We anthropomorphise everything: pets, cars, weather… So when a machine speaks fluently, listens (without interrupting), and remembers what we’ve said?

That feels like a relationship, even if we know it’s running on a transformer model.

Researchers call this “the ELIZA effect”, after ELIZA, one of the very first chatbots, built by Joseph Weizenbaum at MIT in the mid-1960s. People knew it wasn’t real. Didn’t matter. They still opened up. They still felt something.

Fast-forward to today’s AI: smarter, faster, eerily good at mirroring human tone. The emotional pull? Stronger than ever.

So What Does This Mean for the Rest of Us?

Whether you’re designing customer support, building brand content, or scaling coaching or education with AI, here’s what this research tells us:

  • Tone matters, maybe more than accuracy.
    Users forgive small errors if the AI sounds warm, helpful, and human. (For the builders: there’s a minimal sketch of what tone-setting looks like in code just after this list.)

  • Trust is built on perceived intent, not just logic.
    If your chatbot sounds cold or robotic, people won’t just disengage—they might assume your brand doesn’t care.

  • We’re not just designing conversations. We’re shaping relationships.
    The way your AI speaks becomes part of your customer experience—and your reputation.
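
For anyone actually wiring this up: in most modern chatbot stacks, tone lives in the system prompt. Below is a minimal sketch using the OpenAI Python SDK; the brand name, model choice, and prompt wording are illustrative assumptions on my part, not something prescribed by the research above.

```python
# A minimal sketch of setting a chatbot's tone via a system prompt.
# Assumes the OpenAI Python SDK (`pip install openai`) and an API key
# in the OPENAI_API_KEY environment variable. The brand, model name,
# and prompt wording below are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

# The system message is where "how your AI speaks" gets decided:
# warmth, pacing, and honesty about being a bot all live here.
TONE_GUIDE = (
    "You are a support assistant for Acme Co. "  # hypothetical brand
    "Be warm, concise, and plain-spoken. Acknowledge the customer's "
    "feelings before offering fixes, and never pretend to be human: "
    "if asked, say clearly that you are an AI assistant."
)

def reply(user_message: str) -> str:
    """Return one assistant reply shaped by the tone guide."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": TONE_GUIDE},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(reply("My order never arrived and I'm pretty frustrated."))
```

Note the last instruction in the tone guide: being explicit that it’s an AI is a design choice too, and it matters for the line drawn in the next section.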

Ethical Note: Empathy at Scale ≠ Manipulation

There’s a fine line here.
Yes, people respond emotionally to chatbots.
No, that doesn’t mean we should exploit that.

Designing for warmth and clarity? Great.
Faking human emotion to nudge someone into a sale? That’s where it gets murky.

We need new norms—fast.

Final Thought

We’ve spent years training machines to understand humans.
But now, the real work is this: understanding how humans respond when the machine talks back.

Because people don’t just want answers.
They want to feel heard.


And increasingly—whether it’s a chatbot or a human—they don’t seem to care who’s listening… as long as someone is.

Hey, I’m Mark Egan…

I help people communicate their message and their expertise to create more impact and income. I’m a former BBC journalist and have delivered training for major clients around the world, including the UN, the BBC, and the Prime Minister of Finland’s office. I help people use AI to scale their messaging while staying human. Speaker, strategist, and storyteller at the intersection of AI and communication. Here to make sure automation doesn’t mean alienation.