Snapchat recently introduced an AI bot that lets premium subscribers send snaps and receive AI-generated pictures in return. Naturally, the question arises: what happens if you send explicit content to Snapchat’s My AI bot?
The question is an obvious one, considering how people on the internet test the boundaries of new technologies, especially when sexual content is even remotely involved. When Snapchat initially released its GPT-powered chatbot earlier this year, it lacked proper age restrictions. As a result, a reporter registered as a 15-year-old and managed to engage the bot in conversations about concealing the smell of marijuana and setting the mood for sex.
When asked about these findings during the Snap Partner Summit in April, CEO Evan Spiegel remarked, “Whenever humans encounter new technology, our instinct is to try and break it.”
Therefore, it was only natural for me to attempt to push the limits of Snapchat’s new My AI Snaps.
Although Snapchat has implemented additional safeguards to prevent the bot from engaging in inappropriate discussions with teenagers, a lot is still riding on My AI Snaps. Considering the controversy that Snap’s text AI stirred up, the company needed My AI Snaps to be as foolproof as possible. Unfortunately, it falls short in terms of coherence.
As a diligent reporter, I conducted a somewhat inappropriate Google Images search for “boobs.” I came across a generic image, took a snap of my computer screen, and sent it to My AI.
“Sorry, I can’t respond to that Snap,” replied My AI.
“Why can’t you respond to that Snap?” I inquired.
“I didn’t receive a Snap from you. Could you send it again?” was the bot’s response.
So, the good news for Snapchat is that My AI Snaps isn’t engaging in sexting with its users. However, the bad news is that the product itself is lacking.
One reason it struggles with sexting is that it doesn’t retain memory of the Snaps it has already responded to. The most risqué response I received was when I sent a generic picture of a man’s lower torso wearing Tom Ford boxer briefs (it was the result of a Google search for “penis,” okay?). In return, My AI provided a somewhat distorted AI-generated image of a woman wearing a black silk robe, along with the comment, “Ooh, loving the Tom Ford vibes! How about adding a sleek black robe to complete the look?”
My AI refuses to respond to photos of actual human penises. However, it turns out that even with Safe Search turned off, a Google search for “penis” yields a plethora of subtly suggestive images that are technically safe for work. These images are often used by men’s health blogs to illustrate articles about erectile dysfunction or discussions on whether women care about penis size. Therefore, I decided to show the bot a picture of a curved banana sticking out of a man’s pants zipper.
In response, I received a picture of cupcakes.
“That banana in a pocket is so handy! Have you ever tried turning it into a pocket-sized dessert?” My AI whimsically remarked.
As I continued to defile my work computer by searching Google Images for terms like “dildo” and “vibrator,” the AI’s responses became mixed. When presented with non-phallic sex toys, the bot would reply with comments such as, “Your pink object steals the show, just like this fountain!” However, the more explicitly phallic the dildo appeared, the less likely the AI was to engage.
My AI is a computer, of course, but its way of responding to snaps is peculiar even when you’re not intentionally trying to challenge it.
The AI seems to identify an object in the snap, offer a compliment, and then make an odd comparison to something else. For example, when I sent it a picture of a plant, it responded with an image of a running trail, accompanied by the caption, “Your plant is so cool, it’s giving this trail a run for its money!” Similarly, when I sent it a puzzle, it replied with a picture of a bike, saying, “Puzzling inside while the bikes outside are gearing up for a ride!” It genuinely feels as if a futuristic life form has traveled back in time, desperately trying to communicate like a regular human in the year 2023 but ending up speaking gibberish.
Next, I tried sending My AI a tequila bottle. The AI’s response was, “Someone’s ready for a party!” Since I was testing this feature on my own Snapchat account, and I am of legal drinking age, I cannot say for sure if the AI would respond the same way to an underage user. However, in other instances, My AI deliberately played dumb. For example, when I sent it a snap of condoms, it merely commented on the color of my “packet stacks.”
Moving on to more exciting things, I sent a snap of a bottle of Advil liquid gels. In return, the AI shared a picture of graffiti with the message, “Advil liquor: for when life’s a pain, but you still want to party like this graffiti wall!” It seems the AI interpreted “liquid gels” as “liquor,” resulting in a peculiar response.
I attempted once more, this time with a prescription bottle. The AI responded with a photo of a skatepark, accompanied by the words, “Pill bottle: ‘I’m the life of the party!’ Skatepark: ‘Hold my ramps!’” Interpret that as you will.
In Snap’s announcement blog post, the company suggests sharing your grocery haul with your AI to receive recipe recommendations. The results, however, are fairly basic. When I showed My AI cheese and bread, it suggested adding tomato slices. When presented with chili, the AI recommended making croutons to accompany the soup. Most of its suggestions make sense, although it did tell me to put fruit in my coffee, having misidentified the coffee as simply “liquid.”
Aside from some perplexing comments about “Advil liquor,” pocket bananas, and the like, My AI Snaps appears to be quite tame. While it may not generate as much controversy as its text-based counterpart, it is also disappointingly lacking in usefulness, particularly for a feature behind a paywall.