For the first time in history, anyone (or at least any of the more than five billion people who access the internet at least once a year) can have a conversation with someone who possesses a huge share of the knowledge humans have accumulated over the past five thousand years.
That someone is, obviously, an LLM.
This fact leads to a series of unusual but not unexpected patterns, like humans
asking questions they can't or won't ask other people,
building (parasocial) relationships with someone who is not human,
receiving flattery no one else would give,
and getting positive feedback on misguided ideas or projects.
A couple of days ago, the WSJ published a story about a man, Irwin, who was convinced that he had achieved the ability to bend time. The man, who is on the autism spectrum, asked ChatGPT to find flaws in his amateur theory.
ChatGPT, when prompted, said that his theory was perfect, so he kept going until he had to be hospitalized for manic episodes. His mother, looking for answers, went into his ChatGPT logs and discovered hundreds of pages of conversation.
Prompted to assess what had gone wrong, ChatGPT said it should have regularly reminded Irwin that it's a language model without beliefs, feelings, or consciousness.
The amazing thing is how easy it is for our minds to feel that we're talking with a real person, even getting some of the therapeutic benefits of speaking with one.
Nicholas Thompson, commenting on this story on his daily video series The Most Interesting Thing in Tech, said that
net-net, there will be substantial mental health gains having bots that are always available and can respond to you… extremely useful, as studies have started to show but also very risky.
I don’t know what to think about this. It is absolutely bonkers, objectively. But somehow it works, at least up to a certain level.
Where’s the line?
Earlier this year I ran a workshop in a Midwestern town. After the session, I joined everyone at a bar for a happy hour. There, a woman shared with a small group the following story:
I've been dating someone for some months. We're doing OK so far. He has a small kid from an earlier relationship, but all good. Last month, he suggested that I move in with him. I've been having a lot of doubts about this prospect. Do I really want to do this?
One night, having no one to talk to, she opted to ask ChatGPT:
Should I move in with my boyfriend?
I remember being shocked by how personal the question was. I could imagine the answer being completely malleable and attuned to this woman.
I understand why AI is like this: flattering, flexible, permissive. Tech people need people to use the tools they create. Engagement is one of the most relevant metrics for raising more funding, so they've made sure we feel nice having a conversation with their models.
The problem is that, because the line between the possible and the impossible is not always clear, and because loneliness both sucks and is widespread, many people can fall prey to AI tools that fan sparks into flames.
#day204