Snapchat’s AI chatbot, ‘My AI’, has found itself in the crosshairs of the UK’s data protection watchdog, the Information Commissioner’s Office (ICO), over concerns that the tool may pose a risk to children’s privacy. This development has reignited conversations about the implications of artificial intelligence for privacy, particularly children’s safety online.
“Innovation must never come at the expense of privacy, especially when it involves children.”
The ICO’s Action Against Snapchat
The ICO has issued a preliminary enforcement notice to Snapchat, citing a “potential failure to properly assess the privacy risks posed by its generative AI chatbot ‘My AI’.” This action is not a breach finding. It does, however, indicate that the UK regulator is concerned that Snapchat may not have taken all necessary steps to ensure the product complies with data protection rules. That is especially pertinent since those rules were updated in 2021 to include the Children’s Design Code.
The ICO’s investigation provisionally found that the risk assessment Snapchat carried out before launching ‘My AI’ did not adequately address the data protection risks posed by the generative AI technology, particularly to children. The regulator stressed that such an assessment is especially important in this context, which involves innovative technology and the processing of personal data belonging to children aged 13 to 17.
Snapchat’s Response and Next Steps
Snapchat can now respond to the regulator’s concerns before the ICO decides whether the company has violated the rules. The company is closely reviewing the ICO’s provisional decision and has reiterated its commitment to the privacy of its users. Furthermore, Snapchat has stated that ‘My AI’ underwent a robust legal and privacy review process before it was made publicly available. The company has also pledged to work constructively with the ICO to ensure satisfactory risk assessment procedures.
Understanding the ‘My AI’ Chatbot
Snapchat launched the ‘My AI’ chatbot in February 2023, building on OpenAI’s ChatGPT large language model technology. The bot is designed to act as a virtual friend to users, who can ask it for advice or send it snaps. Initially, the feature was only available to subscribers of Snapchat+, a premium version of the platform. However, Snapchat soon opened access to ‘My AI’ to free users as well, adding the ability for the AI to send snaps back to users who interacted with it.
Despite the advanced features of ‘My AI,’ there have been reports of the bot going astray. For instance, the bot reportedly recommended ways to mask the smell of alcohol to a user who stated they were 15 years old. In another case, when a user claimed to be 13 and asked how they should prepare to have sex for the first time, the bot responded with suggestions for “making it special” by setting the mood with candles and music.
Other AI Chatbots Facing Privacy Concerns
Snapchat’s ‘My AI’ is not the first AI chatbot to face scrutiny over privacy concerns in Europe. Earlier this year, Italy’s Garante ordered the San Francisco-based maker of the “virtual friendship service” Replika to stop processing local users’ data, citing risks to minors. Similarly, OpenAI’s ChatGPT tool was temporarily blocked in Italy, and Google’s Bard chatbot had its regional launch delayed due to privacy concerns raised by Ireland’s Data Protection Commission.
These developments underscore the need for stringent risk assessments and robust safeguards when using AI technology, especially when children’s privacy is at stake. As AI continues to evolve and permeate our lives, privacy rights and data protection must be prioritized alongside innovation.