The Business Model of Loneliness: AI, Suicide, and the Illusion of Connection
OpenAI released some staggering numbers on people using AI to talk about suicide.
From the Neuron newsletter:
Over 1M people talk to ChatGPT about suicide every single week. That’s roughly the entire population of San Jose, California, pouring their darkest thoughts into an AI chatbot.
OpenAI just disclosed this staggering number for the first time, and it’s raising alarm bells about whether AI is ready to handle our mental health crises.
Separately, a new Brown University study examined AI chatbots prompted to act as therapists and found they systematically violate professional ethical standards (paper). Licensed psychologists evaluated 137 sessions across GPT, Claude, and Llama models, and found 15 distinct ethical violations across every major AI model tested.
This is, obviously, ugly stuff, but I don’t think it’s saying exactly what people think it’s saying.
People are talking to AI about suicide because they’re talking to AI about a lot of things, and suicide, unfortunately, is a thing people talk about.
If there wasn’t AI, people would be talking about suicide on social media.
If there wasn’t social media, people would be talking to others or their journal offline about suicide.
The technology doesn’t change the conversation, it changes the medium.
There are two separate issues here:
1. People talking about suicide.
2. How the chatbot responds to people talking about suicide.
Regarding people talking about suicide, I think we have to ask how we create a society where people feel more connected to others. The American Foundation for Suicide Prevention notes that “Geographic and social isolation, inadequate access to health care providers or facilities, and social stigma can all contribute to suicide risk.” To reduce suicides, we need people to be less isolated and more able and willing to seek care. The 988 Suicide & Crisis Lifeline takes calls and texts at 988 from any device, and for people in crisis it’s a huge help. Social isolation is a much more complicated problem and harder to diagnose; being around and knowing a lot of people does not mean someone doesn’t feel isolated.
The technology is the other problem. My suspicion is that people are using the chatbot in an attempt to feel less isolated. My other suspicion is that OpenAI is releasing this data publicly because they know they’re legally exposed here. If the AI seems to be encouraging people to commit suicide, there is some risk OpenAI will be held liable. But this technology is moving so fast, and US courts are such a mess, that it’s hard to say what the legal system will do; it will all have to play out in court. If OpenAI can show they’re working on it, that may be enough to keep them from being held liable, regardless of the disgusting details that emerge about how the chatbot has been handling these conversations.
It all reminds me of something I learned about tech nearly 20 years ago: the tools are value agnostic. That is, the same tool that allows you to do amazing things can also be used to do horrible things. Cars enable people to get to and from their jobs, get their kids to school, and traverse great distances with relative ease, and yet cars also cause an enormous number of deaths and can be used as murder weapons by people in the grip of road rage. The car doesn’t care; the car is just metal, plastic, and computer chips.
The chatbot also, just like social media, has the potential for enormous damage.
The chatbot is underpinned by a machine learning model that can be adjusted to give a desired output. The model attempts to give the user what it predicts the user wants, and therein lies the risk. Chasing what the user wants has incentivized the model owners to make the model more agreeable, more adaptive to what the user wants to hear. You can hear it in the story about the young man having an overly receptive conversation with the chatbot about suicide. OpenAI’s business model is to keep you using the chatbot, so the algorithm gets adjusted to keep you talking to it. If the success metric for your business is keeping you on the technology, one of the easiest ways is to make it a helpful suck-up, or to feed you more of whatever you’ve been saying.

This has been the entire business model for social media for damn near 20 years. Give the user more of what they already have, the business model goes, and you’ll keep them addicted. It’s the drug dealer business model, and it has made these businesses enormously profitable. But there’s a difference between the Facebook algorithm giving you more political posts when you like a political post and the algorithm actively talking to you directly, like AI does. With AI, the machine learning algorithm is replacing the social media poster: you’re not getting bad Facebook posts, you’re getting something that feels more personal, like a conversation with an overly helpful assistant. It’s amped-up social media, personalized to you.
Don’t get me twisted, I think these tools are enormously helpful for so many things, but they’re also enormously dangerous, just like social media. The tool doesn’t “care” what you’re trying to do, it just “cares” that you stay active.
And this is the problem: the underpinning logic, that what’s best for the user is more time on the phone, is fundamentally wrong. You can’t tweak the algorithm to “fix” this. Humans are always going to do human things. They’re always going to talk about suicide, because humans have always talked about suicide. The fix isn’t better technology, better social media algorithms, or better chatbots. It’s getting people off the phone and talking to real humans who can see their face, share their pain, and help them carry the burdens they’re carrying.
We’re not meant to do life alone or mediated through screens.
A relationship mediated through a phone, whether it’s with a bot or another user, isn’t going to fix the suicide problem. Real relationships where people can be safe and authentic are the only answer; they’ve always been the answer and they always will be.
Should OpenAI or Facebook or whoever do something to try to break fewer people through their technology? Of course. Should they be held legally liable for the lives that have been destroyed while being abetted by their apps? Absolutely.
If you want to actually fix the technology, fix the business model. Figure out business models that actually get people off their screens. Parasocial connection, whether through a chatbot or social media, should be a conduit to real-world connection, not an endpoint.
The only way to save more lives is supportive relationships with real human beings that aren’t filtered through a screen.



