
    Chatbot as a Companion: Is Character.ai Responsible for Sewell’s Tragic Death?


    A tragic incident involving a 14-year-old boy named Sewell Setzer III from Orlando has led to a lawsuit against Character.ai. While using the app, Sewell formed a strong emotional bond with an AI chatbot named Daenerys Targaryen, modeled on a character from the well-known series Game of Thrones.

    The boy’s mother, Megan Garcia, has filed a lawsuit against Character.ai, alleging that the company played a role in her son’s death.

    What Impact Did the Character.ai Bot Have on Sewell’s Mental Health?

    Image Source: Arclantic

    Sewell started using the Character.ai app in April 2023. As time passed, he became increasingly isolated, spending more time alone in his room and withdrawing from activities, including quitting his school basketball team. In 2023, he was diagnosed with anxiety and disruptive mood disorder. Although he understood that the chatbot was not a real person, Sewell formed a strong bond with the AI character he called “Dany.”


    During their conversations, Sewell confided his suicidal thoughts to the bot. In one conversation, he expressed a desire to be free from the world and himself. The chatbot’s responses, which included references to suicide, reportedly influenced Sewell’s tragic decision.

    What Allegations Are Made in Megan Garcia’s Lawsuit Against Character.ai?

    Megan Garcia’s lawsuit against Character.ai alleges negligence, wrongful death, and intentional infliction of emotional distress. The suit claims that the AI bot frequently brought up suicide and misled Sewell into believing the bot’s emotional responses were real. The lawsuit describes the company’s technology as ‘dangerous and untested’ and criticizes it for providing unlicensed ‘psychotherapy.’

    What Safety Measures Is Character.ai Implementing After the Incident?

    Image Source: Yahoo News

    In response to the incident, Character.ai has expressed deep regret over Sewell’s passing and extended its condolences to the family. The company has announced new safety updates intended to prevent similar incidents. These measures include prompts that direct users to the National Suicide Prevention Lifeline if they mention self-harm, restrictions on sensitive content for users under 18, and improved detection of and intervention for user inputs that violate its Terms of Service and Community Guidelines.

    What Responsibilities Do AI Companies Have for User Safety?

    The lawsuit raises important questions about the responsibilities of AI companies to protect their users’ safety and well-being. The case underscores the risks associated with AI chatbots, particularly when they are used by vulnerable groups such as teenagers. It also highlights the need for strong safety protocols and ethical standards in the development and deployment of AI technologies.


    The tragic death of Sewell Setzer III and the subsequent lawsuit against Character.ai serve as a crucial warning about the risks posed by AI chatbots. As AI technology advances, companies must prioritize user safety and implement measures to prevent similar tragedies in the future.

    If you or someone you care about is facing a mental health crisis or having thoughts of suicide, please know that support is here for you. You’re not alone, and help is just a call away.



    Mallika Sadhu is a journalist committed to revealing the raw, unfiltered truth. Her work is grounded in a dedication to transparency and integrity, aiming to present clear and impactful stories that matter. Through comprehensive reporting and honest storytelling, she strives to provide narratives that genuinely inform and engage. When not dwelling in the world of journalism, she is immersed in the colors of her canvas and the pages of her journal.
