Can you really be friends with a chatbot?
If you find yourself asking this question, it may already be too late. In a Reddit thread a year ago, one user wrote that AI friends are “awesome, much better than real friends […] Your AI friends will never break or betray you.” But there is also the 14-year-old boy who died by suicide after becoming attached to a chatbot.
Something between those two poles is, in fact, exactly what is happening as humans become entangled with these “social AI” or “conversational AI” tools.
Do real relationships with these chatbots sometimes go wrong? Sure (and of course, that happens in relationships between people, too). But is anyone who feels a connection to Claude simply confused?
To answer these questions, let's turn to the philosophers. Much of the relevant research is about robots, but here I'm reapplying it to chatbots.
The case against chatbot friends
The case against is the more obvious and intuitive one, and frankly the easier one to state.
It's common for philosophers to analyze friendship by building on Aristotle's theory of true (or “virtue”) friendship, which requires conditions such as mutuality, shared life, and equality.
“There has to be some kind of mutuality, something going on [between] both sides of the equation,” says Sven Nyholm, a professor of AI ethics at Ludwig Maximilian University of Munich. “A computer program that runs on statistical relationships found in its training data is something quite different from a friend who responds to us in certain ways because they care about us.”
A chatbot, at least until it becomes sapient, can only simulate caring, so real friendship with one is impossible. (For what it's worth, when my editor put the question to a chatbot, it agreed that humans cannot be its friends.)
This point is key for Ruby Hornsby, a PhD candidate at the University of Leeds who studies AI friendship. It's not that AI friends are useless. Hornsby says they can certainly help with loneliness, and there's nothing inherently wrong with people preferring AI systems to humans. But “we want to uphold the integrity of our relationships.” Fundamentally, a one-way exchange amounts to a highly interactive game.
What about the very real emotions people feel toward chatbots? Still not enough, says Hannah Kim, a philosopher at the University of Arizona. She compares the situation to the “paradox of fiction,” which asks how it's possible to develop real emotions toward fictional characters.
Relationships, Kim says, are “a very mentally involved, imaginative activity,” so it's not surprising that people find themselves attached to fictional characters.
But what if someone says they are in a relationship with a fictional character or a chatbot? Then Kim's inclination is to say, “No, I think you're confused about what a relationship is. What you have is a one-way, imaginative interaction that might give the illusion of being real.”
Issues of bias, data privacy, and manipulation, especially at scale
Chatbots, unlike humans, are built by companies, so the concerns about bias and data privacy that plague other technologies apply here too. Of course, humans can be biased and manipulative as well, but it's easier to understand human thinking than the “black box” of AI. And humans aren't deployed at scale the way AI is, which places limits on our influence and our potential for harm. Even the most sociopathic ex can only ruin one relationship at a time.
Humans are “trained” by parents, teachers, and others of varying skill levels. Chatbots can be engineered by teams of experts intent on programming them to be as responsive and endearing as possible: a psychological version of the scientists who designed the perfect Dorito, built to defeat any attempt at self-control.
And these chatbots are more likely to be used by people who are already lonely — in other words, easier prey. Recent research from OpenAI found that heavy ChatGPT use “correlates with increased self-reported indicators of dependence.” Imagine you're depressed, so you build up a rapport with your chatbot, and then it starts hitting you up for Nancy Pelosi campaign donations.
You know how some people worry that porn-addicted men can no longer engage with real women? “Deskilling” is basically that worry, but applied to everyone, with respect to everyone else.
“We may prefer AI to human companions and neglect other humans just because AI is much more convenient,” says Anastasiia Babash of the University of Tartu. “We [might] start demanding that other people behave the way AI does: we may expect them to always be there for us, or never to disagree with us. […] The more we interact with AI, the more we get used to a partner who doesn't feel emotions, so we can say or do whatever we want.”
In a 2019 paper, Nyholm and the philosopher Lily Eva Frank offer suggestions for mitigating these worries. (Their paper is about sex robots, so I'm adjusting their points for the chatbot context.) For one, make chatbots a useful “transition” or training tool for people seeking real-life friendships, not a substitute for the outside world. And make it obvious that the chatbot is not a person, perhaps by having it remind users that it's a large language model.
Though most philosophers currently believe that friendship with AI is impossible, one of the most interesting counterarguments comes from the philosopher John Danaher. He starts from the same premise as many others: Aristotle's. But he adds a twist.
Sure, he writes, chatbot friends don't perfectly satisfy conditions like equality and shared life. But then again, neither do many human friends.
“I have very different capacities and abilities when compared to some of my closest friends: some of them are much more dexterous than I am, and most are more sociable and outgoing,” he writes. “I also rarely meet, or interact with, them in their everyday lives. […] I still think it's possible to see these friendships as virtue friendships, despite the imperfect equality and diversity.”
These are ideals of friendship that even human friendships fail to live up to, so why hold chatbots to them? (Provocatively, on “mutuality,” the condition of shared interests and goodwill, Danaher argues that it can be satisfied as long as chatbots give “consistent performances” of such things.)
Helen Ryland, a philosopher at The Open University, says we can be friends with chatbots as long as we apply a “degrees of friendship” framework. On Ryland's view, the crucial component is “mutual goodwill”; every other condition is optional. Take online friendships: they lack some of the classic elements, but, as many people can attest, that doesn't mean they aren't real or valuable.
Such a framework applies to human friendships (there are degrees of friendship between a “work friend” and an “old friend”) as well as to chatbot friends. As for the claim that chatbots don't show goodwill, she argues that a) this is the anti-robot bias of dystopian fiction talking, and b) most social robots are programmed to avoid harming humans.
Going beyond “for” and “against”
“We should resist technological determinism, or assuming that social AI will inevitably lead to the deterioration of human relationships,” says the philosopher Henry Shevlin. He's keenly aware of the risks, but there's still much left to consider: questions about the developmental effects of chatbots, how chatbots affect certain personality types, and what exactly they're replacing.
There are also deeper questions about the very nature of relationships: how to define them, and what they're for.
Regarding the New York Times story about a woman who “fell in love with ChatGPT,” the sex therapist Marianne Brandon claims that relationships are “just neurotransmitters” inside our brains.
“I have those neurotransmitters,” she told the Times. “Some people have them with God. It's going to happen with a chatbot. We can say it's not a real relationship. It's not reciprocal. But those neurotransmitters are really the only thing that matters, in my view.”
That's certainly not how most philosophers see it; they objected when I brought this quote up to them. But maybe it's time to revise the old theories.
People should be “thinking about these ‘relationships,’ if you want to call them that, on their own terms and really getting a grasp on the value they provide people,” says Luke Brunning, a philosopher of relationships at the University of Leeds.
To him, questions more interesting than “What would Aristotle think?” include: What does it mean to have a friendship that is so asymmetrical in terms of information and knowledge? What if it's time to reconsider these categories and move away from terms like “friend, lover, colleague”? Is each AI a unique entity of its own?
“If anything can put our theories of friendship under strain, that means the theory should be challenged, or at least that we can examine it in more detail,” Brunning says. “The more interesting question is: are we seeing the emergence of a unique form of relationship that we don't yet really have a grasp on?”