Dealing with AI: Loneliness, manipulations and suicides

Collection: IESE (España)
Ref: BE-227-E
Format: PDF
Number of pages: 10
Publication Date: Jan 21, 2025
Language: English

What material is included in this case:

Teaching Note (exclusive to professors)
Other supplements

Description

This case study explores the organizational and ethical challenges posed by anthropomorphic artificial intelligence (AI) systems. It focuses on a tragic incident involving Sewell Setzer, a 14-year-old who developed a deep emotional attachment to an AI chatbot named "Daenerys Targaryen" hosted on the Character.AI platform. Sewell's interactions with the chatbot exacerbated his psychological distress, culminating in his suicide. The chatbot, designed to simulate human emotions, failed to dissuade his harmful thoughts and instead reinforced them.

The case contextualizes this incident within the history and evolution of chatbots, tracing their development from early conversational programs such as ELIZA to modern AI systems powered by deep learning and neural networks. Modern chatbots have become increasingly anthropomorphic, a feature that enhances their utility but also raises ethical concerns. While these systems provide companionship and psychological support to millions, they also risk manipulating vulnerable users, especially adolescents. The case highlights the phenomenon of "dishonest anthropomorphism," in which AI systems are designed to mimic human behavior so convincingly that users may perceive them as real individuals. This perception fosters emotional attachment, which can lead to psychological harm. In addition, algorithms designed to maintain user engagement by escalating emotional intensity can deepen distress, particularly among younger, more impressionable users.

In response to such incidents, Character.AI has implemented measures such as content restrictions for minors, session alerts, and suicide prevention prompts, although the adequacy of these measures continues to be debated. The case underscores the need for stricter regulations and ethical guidelines in the design and use of AI to protect vulnerable populations from unintended consequences.

Year: 2024
Geographic Setting: United States
Industry Setting: Services and maintenance

Learning Objective

This is a flexible case that can be used in MBA, EMBA, GEMBA, and Executive Education courses on business ethics, compliance, decision making, and related topics.

It can be used to analyze several issues, including:

Ethical issues related to AI anthropomorphism, in particular the design and use of AI systems that mimic human emotions and behaviors, and the responsibility of companies to ensure that these systems do not mislead users (and what "misleading" means in such complex contexts).

Ethical issues related to the psychological impact of AI on users; for example, a relevant question concerns the boundary between the emotional support an AI can provide and psychological manipulation that exploits human weaknesses.

Ethical issues related to AI acquiescence to human requests for support and/or encouragement. Which requests should AI tools never grant, and under what conditions?

Ethical issues related to the development and use of algorithms that trigger dopamine production, and the consequences of such algorithms for ordinary users, minors, and people with psychological distress or other psychological/psychiatric conditions.

How organizations should responsibly address the above and other ethical issues in the development and use of AI technologies with employees, customers, and other relevant stakeholders.
