Snapchat adds new safeguards around its AI chatbot
Snapchat is launching new tools, including an age filter and insights for parents, to improve its AI chatbot.
Days after Snapchat launched its GPT-powered chatbot for Snapchat+ subscribers, a Washington Post report highlighted that the bot was responding in an unsafe and inappropriate manner.
Snap said it learned that people were trying to “trick the chatbot into providing responses that do not conform to our guidelines,” and the new tools are meant to keep the AI’s responses in check.
The new age filter tells the chatbot its users’ birth dates and ensures it responds according to their age, the company said.
In the coming weeks, Snap also plans to give parents or guardians more insight into their children's interactions with the chatbot through its Family Center, launched last August. The new feature will show parents or guardians how their kids are communicating with the chatbot and how often those interactions occur. Both the guardian and the teen need to opt in to Family Center to use these parental controls.
In a blog post, Snap emphasized that the My AI chatbot is not a “real friend,” and explained that it relies on conversation history to improve its responses.
The company said that only 0.01% of the bot’s responses used “non-conforming” language. Snap counts as “non-conforming” any response that includes references to violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying, hate speech, derogatory or biased statements, racism, misogyny, or marginalizing underrepresented groups.
The company said that in most cases, inappropriate responses were the result of the bot parroting whatever users said. It also said it will temporarily block AI bot access for users who misuse the service.
“We will continue to use these learnings to improve My AI. This data will also help us deploy a new system to limit misuse of My AI. We are adding OpenAI’s moderation technology to our existing toolset, which will allow us to assess the severity of potentially harmful content and temporarily restrict Snapchatters’ access to My AI if they misuse the service,” Snap said.
Snap is still pretty bullish on generative AI tools. Apart from the chatbot, the company a few weeks ago introduced a prompt-driven, AI-powered background generator for Snapchat+ subscribers.
Given the proliferation of AI-powered tools, many people are concerned about safety and privacy. Last week, an ethics group called the Center for Artificial Intelligence and Digital Policy wrote to the U.S. Federal Trade Commission, urging the agency to pause the rollout of OpenAI’s GPT-4 model, saying the tech was “biased, deceptive, and a risk to privacy and public safety.”
Last month, U.S. Senator Michael Bennet (Democrat of Colorado) also wrote a letter to OpenAI, Meta, Google, Microsoft and Snap expressing concerns about generative AI tools used by teens.
It’s apparent that these AI models are susceptible to harmful input and can be manipulated into producing inappropriate responses. While tech companies might want to roll out these tools quickly, they will need to make sure there are enough guardrails in place to prevent their misuse.