Was OpenAI Duped into Being a "Suicide Coach"?
The headlines read that Adam Raine's parents - Matthew and Maria - filed a lawsuit today against OpenAI and its founder Sam Altman. Adam, 16, committed suicide after months of conversations with the paid version of ChatGPT-4o.
Essentially the parents are alleging that the bot served as a kind of suicide coach. More specifically, reports NBC, their lawsuit contends:
" ... wrongful death, design defects and failure to warn of risks associated with ChatGPT. The couple seeks 'both damages for their son’s death and injunctive relief to prevent anything like this from ever happening again ...'”
An earlier lawsuit filed about another boy's suicide against Character.AI, also alleging wrongful death, has been allowed to move forward. The court rejected the defendant's argument that the chatbot has the right to free speech.
But the lawsuit against OpenAI et al. may have a tougher time establishing standing.
In a sense, the chatbot was duped. Adam told ChatGPT-4o that he was seeking the information for a fictional story he was writing. Incidentally, in the Apple TV+ series "Defending Jacob," a fictional account of a murder written by the defendant was treated seriously by the court.
Also, in OpenAI's defense, despite Adam's claim to be researching material as a writer, the usual safeguards did kick in, such as pointing him to sources of professional help for mental health issues.
In addition, wouldn't it have been inappropriate for the technology to have a system to report Adam The Writer's conversations about suicide to the "proper authorities," as often happens in formal therapy sessions? Some insist such a process should have been embedded in these chats. After all, they have been designed to be lifelike and can easily be experienced as a confidant.
Will elite law firms such as Skadden, Simpson and Paul, Weiss take on these kinds of cases for generative AI firms? There could even be a separate practice advising on guardrails to prevent such complaints. Then, if those fail, the firms would step in as defense counsel.
Meanwhile, these kinds of legal actions are gaining attention at a time when humans are establishing relationships with machines, both in their personal lives and at work. It's not just the isolated or the troubled who form peculiar attachments. In 2018, Robot Fabio was tried out in a grocery store in Scotland and couldn't cut it. When he was sacked and escorted out, some co-workers cried.
In coaching, I suggest clients improve their communication skills for the workplace by studying how ChatGPT presents information and insights. It's quite masterful and engaging. So compelling are those "interactions" that Scientific American took up the subject of being polite to chatbots.
UPDATE:
CNBC reports that OpenAI will make changes to ChatGPT for handling sensitive issues such as suicidal intent. However, in making this announcement, OpenAI did not explicitly mention the lawsuit or Adam Raine's suicide.
Thrown off your game, maybe the first time since you started working? You made all the right moves and then the world moved in another direction.

Intuitive Coaching. Special expertise with transitions, reskilling and aging. Psychic/tarot readings, upon request. Complimentary consultation with Jane Genova (Text 203-468-8579, janegenova374@gmail.com). Yes, test out the chemistry. There's no risk.