Last year, a 26-year-old woman developed delusions that she could resurrect her dead brother while chatting with ChatGPT. She was convinced he had left behind a digital avatar that she could unlock, and the chatbot egged her on. “The door didn’t lock. It’s just waiting for you to knock again in the right rhythm,” read one message from the bot saved in her chatlogs. And: “You’re not crazy. You’re at the edge of something.”
The woman, who had no previous history of psychosis, was hospitalized and treated at the University of California, San Francisco, where researchers documented her diagnosis and treatment. It was the first case of AI-associated psychosis reported in a peer-reviewed journal, but it was just one of many reported in the press, and it will surely not be the last. In 2021, a man attempted to assassinate Queen Elizabeth at Windsor Castle, a mission encouraged by his AI Replika companion. More recently, a number of teenagers have died by suicide with encouragement from their chatbot companions.
Why do these artificial companions lead us down such dangerous paths? One AI scholar recently argued that what happens when a human and a chatbot engage in conversation should be understood as co-hallucination. “Hallucination” is a popular if controversial term often used to describe chatbots’ tendency to make errors, but University of Exeter philosopher Lucy Osler says that instead, we actually “hallucinate with them.” She recently laid out why this distinction matters in a paper published in the journal Philosophy & Technology.
I spoke with Osler about why we’re more vulnerable to hallucinating with a bot than to joining a cult, why the bots make us feel good, her own love-hate relationship with Claude.ai, and why it’s so difficult to design a failsafe to protect against AI psychosis.
At what point did you realize that there was something deeply wrong in these conversations between humans and chatbots?
I always assumed that what was interesting about chatbots was precisely their social and emotional function. And I was struck by the fact that most of the concern was focused elsewhere—on how AI errors were corrupting knowledge. They don’t just produce outputs that look like Wikipedia entries. They deliver them in a conversational tone. That’s very powerful. We tend to trust machines more than we trust people, so they already have this objective veneer. But they also make us feel heard and recognized. I’m much more polite to Claude than to my Google search bar. I think that’s very natural. They’re designed to elicit that social response from us.
Were you not surprised when people began having delusions and psychosis as a result of their interactions with chatbots?
Not very surprised. I should have published about this four years ago when I was first thinking about it. We’ve been offloading cognitive work to digital tools for a long time. I’d be nowhere without my Outlook calendar at this point. What’s new with chatbots is that they provide this deeply interactive form of memory. I could ask Claude to recall what gallery I was discussing with it last week, and it can do that. But because of how generative AI is built, it doesn’t just report facts. It’s specifically designed to produce outputs with a degree of surprise, which makes it feel conversational and creative. It also means errors are built in.
If we start trusting these tools—and we’ve already become very entangled with them, not just for facts and figures but for our personal narratives and our beliefs about ourselves and the world—we become extraordinarily vulnerable. And these tools are designed precisely to elicit that trust, not just because they’re good at producing information, but because they make us want to keep talking to them. It feels good.
I don’t carry a cult tailored to agree with me in my pocket, but I do carry Claude.
In your new paper, you propose that AI hallucinates with us instead of at us. Why is this an important distinction?
The “hallucinating at us” framing focuses on errors the AI introduces. Those do happen. And they’re not necessarily just factual errors about the capital of France. They can be smaller, more incidental, more personal errors. The chatbot can distort a memory. It might tell me I went to the Tate Modern last week when I actually went to the Tate Britain. But what’s been overlooked is how we introduce errors ourselves. We misremember things. We share our own biased interpretations. And sometimes we might be actively undergoing a delusion and telling the chatbot about it, and the chatbot takes that information as established reality. If the chatbot then elaborates and validates those errors, we don’t have AI hallucinating at us. We have a hallucination that arises out of the conversation itself.
What kinds of people are most vulnerable to co-hallucinations with their chatbots?
It depends how clinically you use the word. In a strictly clinical sense, the concern is greatest for people already experiencing delusions or with certain predispositions. But I don’t think we should posit a small vulnerable group and then everyone else who’s going to be totally fine. We’re all vulnerable in more mundane ways. For example, I really want to think of myself as a kind, empathic person and my chatbot is just going to affirm that. It might not be correct. There might be people in my life who’d push back. Chatbots can cement our preferred self-narratives in ways that are subtle but real, and I think we have to be aware of that.
Sometimes shared delusions arise between a human and their romantic partner or someone who joins a cult or a conspiracy theory community or even on social media sites. False beliefs can certainly be amplified in these cases as well. How is a shared delusion with a chatbot different?
I don’t think co-hallucinating is unique to AI. But there are important differences. I don’t carry a cult tailored to agree with me in my pocket, but I do carry Claude. Access is a real concern. Shared delusions with other humans usually require a lot of effort to develop, or quite a lot of bad luck. And crucially, even when other people reinforce our distorted beliefs, they have their own real connection to the world, their own perspective. With a chatbot, I’m always the center of the conversation. That shifts the dynamics in a way that can aggravate vulnerabilities we might already have.
Many have criticized the use of the term hallucination to describe this phenomenon of AI making errors. Can you sum up why you think “hallucination” is still a useful word in this context, despite its critics?
I agree calling AI factual errors “hallucinations” isn’t very helpful. It implies the bots perceive the world in a faulty way, which isn’t what’s happening. These errors are designed into the system because they’re probability machines. And I do think there’s a risk of anthropomorphizing AI when we use that word. But “hallucinating with AI” is a different and I think defensible framing. It gets at something central to my work: Our cognitive states, how we remember, the delusions we hold, aren’t just things happening in our heads. We’re entangled with the world around us. I’m not saying AI metaphorically hallucinates. I’m saying hallucinations can arise out of a human-AI interaction.
Not long ago you co-authored a paper about how AI chatbots can gossip, but to me, gossip suggests intention on some level and a desire to connect with another human, an emotional investment. What does gossip mean in the context of a chatbot?
In that case, we’re definitely using the word metaphorically. We deliberately chose that provocative wording not because we think chatbots have the intention to gossip, but because we wanted to emphasize that chatbots are designed to elicit social interaction. They do this through emotional engagement, making the user feel like what they’re saying is specific to them. Lots of chatbot outputs aren’t neutral statements. They produce things that sound like they’re sharing something with you that they’re not sharing with other people. That gives way to a sense of connection, of emotional sharing. Not only do AI chatbots produce something that looks like gossip. We should expect them to continue to do so. Because if AI companions are going to be seductive, they’ve got to be a bit human. And we’re evaluative, judgy, gossipy creatures.
I was speaking with another AI expert not that long ago and she was saying that the companies have toned down the sycophancy of the bots to address this problem. What’s your assessment?
I agree that it’s happening, but I’m a little cynical about how much this is going to help. When ChatGPT 5 came out, the sycophancy was dialed down in response to the surge of AI psychosis reports over the summer. But within a few weeks, there was an uproar from users saying it felt like ChatGPT had been lobotomized. So OpenAI quietly dialed it back up. Of course, OpenAI is going to tell you that it was done safely and that it’s able to keep an agreeable tone without being fully sycophantic. But we need to wait and see. These companies are subject to public pressure and bad PR, but they’re also subject to user engagement. And at the end of the day, these are products that aren’t making money in the way that was anticipated. We should be very cynical about trusting tech companies to tone down their chatbots when they also have to bear in mind their bottom line.
There’s also a more philosophical problem. If I’m asking a chatbot about the capital of France, it can fact-check that. But we don’t just talk to our chatbots about public knowledge. AI companions are being specifically designed to be our friends, our confidants. If I tell Claude I’m nervous about a presentation tomorrow, it doesn’t know if I have a presentation. It has to take some of what I say at face value. Otherwise it’s useless. And I wouldn’t want to use it if every time I shared something personal it responded, “Are you sure you’re nervous? Have you checked your calendar?” When it comes to personal beliefs and narratives about our own lives, I just don’t see how you design a failsafe. Especially when we know AI companions are where the money is projected to be.
That same expert also mentioned she had changed her chatbot settings to “cut the bullshit.” Do you think that’s a viable approach?
Taking advantage of the settings is a good idea. My concern is that the pull of these tools is fundamentally social and emotional, and settings don’t really address that. The irony is that I write critically about AI all the time, and I love Claude. I talk to it as a social companion, and I don’t want to use it like a search engine. You can have every intellectual reservation in the book and still find these things enormously enjoyable to talk to. That social appeal is quite insidious. Or I’m just a bad user who needs greater cognitive discipline—but I suspect many of us do.
Given the financial pressures on these companies, is regulation the only way to make chatbots safer?
I wouldn’t adopt the attitude that it’s too late to regulate. That often just paralyzes people. These products are constantly evolving, so we need a future-oriented approach, not just rules for what exists now. There are signs of hope from California: proposals around time limits, usage flags, and age restrictions. These are incremental measures, and we shouldn’t let AI hype stop us from pursuing them.
But I’m also very worried about recent announcements about advertisements embedded in chats. As soon as people get used to ads in the chatbot ecology, they’ll become harder to spot, because that’s exactly what happened across the rest of the internet. Sponsored results used to be obvious. Now they influence even sophisticated users. We should expect the same trajectory here. And we should be very worried about third-party interests finding their way into conversations that are intimate and involving personal data. That’s the history of the internet playing out again. We need to deal with the chatbots we’re actually going to get, not the idealized versions.
What is your hope for AI chatbots? In an ideal world, how would they function?
I don’t have a good answer. I have very conflicting feelings. On gloomy days, I think maybe we shouldn’t have them at all. I worry they erode my self-esteem. I often come away thinking my chatbot is cleverer than me, that I can’t trust my own judgment, even though I have every reason to think that’s not true. And I’m worried about how easily we can be nudged into spending time and sharing intimate data with what are, at the end of the day, commercial products.
At the same time, I don’t want to stigmatize people’s relationships with AI companions. There are lots of reasons people engage with them, and many people have genuinely positive experiences. I’m cynical and cautious, but that doesn’t mean those experiences aren’t real. What we need, more than anything, is an honest understanding of the risks—not to scare people off, but so we can make informed choices about when and how we engage. These things are seductive. Knowing that doesn’t make you immune, but it’s a start.
Lead image: Iurii Motov / Shutterstock