ChatGPT can be useful, but it has also shown a darker side. A 13-year-old boy from DeLand, Florida, was arrested after a school resource officer reported a concerning question he had asked ChatGPT about murder.
According to reports, authorities were called to Southwestern Middle School after the resource officer learned the boy had allegedly asked the AI tool, "How to kill my friend in the middle of class."
The question came to light when it was flagged by Gaggle, a school safety platform that monitors school-issued accounts. When authorities questioned the teen, who has not been publicly named, he allegedly said he had asked it to "troll" a friend who had been bothering him.
"Another 'joke' that created an emergency on campus," said the sheriff’s office, per WFLA.
In the wake of the incident, local authorities urged parents to talk to their kids about the AI tool and warn them against making a similar mistake, which could carry serious consequences.
For now, details of the 13-year-old's arrest have not been made public, nor have the charges brought against him.
ChatGPT has been at the center of more than a few incidents, from helping someone win the lottery to aiding in the identification of a suspect in the Los Angeles fires. In August, the parents of a 16-year-old sued OpenAI and its CEO, alleging that ChatGPT helped their son plan and carry out his death by suicide.
Parents have voiced growing concern over the AI's lack of safeguards and boundaries for minors and what they can ask ChatGPT. In response, OpenAI has rolled out new parental controls for the tool. The Federal Trade Commission has also opened an inquiry into AI companies, including OpenAI and Meta, over the risks chatbots pose to teens and children.