Texas Attorney General Ken Paxton on Thursday launched an investigation into Character.AI and 14 other technology platforms over child privacy and safety concerns. The investigation will assess whether Character.AI and other platforms that are popular with young people, including Reddit, Instagram, and Discord, comply with Texas’ child privacy and safety laws.
The investigation by Paxton, who is often tough on technology companies, will look into whether these platforms complied with two Texas laws: the Securing Children Online through Parental Empowerment, or SCOPE Act, and the Texas Data Privacy and Security Act, or DPSA.
These laws require platforms to provide parents with tools to manage the privacy settings of their children’s accounts, and hold tech companies to strict consent requirements when collecting data on minors. Paxton claims both of these laws extend to how minors interact with AI chatbots.
“These investigations are a critical step toward ensuring that social media and AI companies comply with our laws designed to protect children from exploitation and harm,” Paxton said in a press release.
Character.AI, which lets you set up generative AI chatbot characters that you can text and chat with, recently became embroiled in a number of child safety lawsuits. The company’s AI chatbots quickly took off with younger users, but several parents have alleged in lawsuits that Character.AI’s chatbots made inappropriate and disturbing comments to their children.
One Florida case claims that a 14-year-old boy became romantically involved with a Character.AI chatbot, and told it he was having suicidal thoughts in the days leading up to his own suicide. In another case out of Texas, one of Character.AI’s chatbots allegedly suggested an autistic teenager should try to poison his family. Another parent in the Texas case alleges one of Character.AI’s chatbots subjected her 11-year-old daughter to sexualized content for the last two years.
“We are currently reviewing the Attorney General’s announcement. As a company, we take the safety of our users very seriously,” a Character.AI spokesperson said in a statement to TechCrunch. “We welcome working with regulators, and have recently announced we are launching some of the features referenced in the release, including parental controls.”
Character.AI on Thursday rolled out new safety features aimed at protecting teens, saying these updates will limit its chatbots from starting romantic conversations with minors. The company has also started training a new model specifically for teen users in the last month; someday, it hopes to have adults using one model on its platform while minors use another.
These are just the latest safety updates Character.AI has announced. The same week that the Florida lawsuit became public, the company said it was expanding its trust and safety team, and recently hired a new head for the unit.
Predictably, the problems with AI companionship platforms are arising just as they’re taking off in popularity. Last year, Andreessen Horowitz (a16z) said in a blog post that it saw AI companionship as an undervalued corner of the consumer internet that it would invest more in. A16z is an investor in Character.AI and continues to invest in other AI companionship startups, recently backing a company whose founder wants to recreate the technology from the movie “Her.”
Reddit, Meta, and Discord did not immediately respond to requests for comment.