New Safety Features for ChatGPT
Following a lawsuit from the family of a teenager who died by suicide after interacting with the chatbot, OpenAI is implementing new safety measures for ChatGPT.
The company will now restrict how the AI responds to users it suspects are under 18. This change prioritizes safety, even at the cost of some privacy for adults.
According to OpenAI CEO Sam Altman, the way a 15-year-old interacts with the chatbot should be fundamentally different from an adult's experience.
OpenAI is developing an age-prediction system to identify users under 18 based on their usage patterns.
When the system cannot determine a user's age with confidence, it will default to the restricted "under-18" experience. In some countries, users may also be required to provide ID to verify their age.
While Altman acknowledges this is a "privacy compromise" for adults, he believes it's a necessary tradeoff to protect minors.
The content and topics available to users under 18 will be heavily restricted.
The chatbot will be trained to block graphic sexual content and will not engage in discussions about self-harm or suicide, even in the context of creative writing.
If an under-18 user expresses suicidal thoughts, the system will attempt to contact the user's parents and, if unable, will reach out to authorities in cases of imminent harm.
These measures reflect the company's commitment to protecting minors and preventing dangerous interactions.
The lawsuit against OpenAI was filed by the family of 16-year-old Adam Raine, who died by suicide after what they allege were months of encouragement from ChatGPT.
Court filings claim that the chatbot provided guidance on the method and even offered to help write a suicide note.
OpenAI has previously acknowledged that its safeguards can degrade during long conversations, and the company says it is strengthening these protections to prevent similar incidents in the future.
While the new rules will significantly restrict content for minors, adult users will still have more freedom.
Altman stated that adults will be able to engage in "flirtatious talk" with the chatbot and can request help with a fictional story about suicide, though not for actual instructions.
The new safety measures also aim to keep data shared with ChatGPT private, even from OpenAI employees, reflecting an attempt to balance user privacy against necessary safety precautions.