Weaponized AI Girlfriend Bots Will Fuel Romance Scams

Look at all the lonely people in this world. So many are looking for love. And now, with the help of AI, many are finding it in the unlikeliest of places – synthetic boyfriends and girlfriends.

Services like Candy.ai, Dreamgf.ai, Replika, and AIGirlfriend WTF are blurring the lines between real and fake love, causing security experts to worry that we are headed for trouble.

These deepfake girlfriends and boyfriends, in the wrong hands, could be manipulated to scam and steal from the unsuspecting humans who fall for them.

Creating An Emotional Attachment At Scale

These AI girlfriends and boyfriends will allow scammers to establish emotional attachments with thousands of victims with zero manual effort.

Picture a romance bot farm communicating day and night with thousands of victims simultaneously at near-zero cost. A single person could raise an army of AI chatbots, launch them into the wild, and scam thousands of people a day.
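
To put “near-zero cost” in numbers, here is a rough sketch. The per-message price below is a hypothetical placeholder (real LLM API pricing varies by provider and model), but the arithmetic shows why the economics favor the scammer:

```python
# Hypothetical cost model for a romance bot farm. The $0.002-per-message
# figure is an illustrative assumption, not any provider's actual price.
COST_PER_MESSAGE = 0.002   # dollars (assumed)

victims = 10_000
messages_per_victim_per_day = 40

daily_messages = victims * messages_per_victim_per_day
daily_cost = daily_messages * COST_PER_MESSAGE

print(f"{daily_messages:,} messages/day for about ${daily_cost:,.0f}/day")
# -> 400,000 messages/day for about $800/day: pocket change if even
#    a handful of those 10,000 marks ever send money.
```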

Jamie Akhtar, Co-Founder and CEO at CyberSmart, spoke to the US Sun recently to express his concern.

“Deepfake technology has come on leaps and bounds in the past few years. The problem is that this technology can be used for malicious ends. Most successful social engineering attacks rely on an emotional component, whether creating a sense of urgency or tugging on the victim’s heartstrings with a fictional moving story. Deepfakes and griefbots simply take this up a level.”

A Situation Ripe for Extortion, Scamming and Fraud

Once the emotional attachment is cemented, a victim is often under a spell that the real world might not be able to break.

If you think it’s implausible that someone could fall deeply in love with an AI, think again. It’s no different from existing catfish scams run by Nigerian fraudsters using stolen images; in fact, AI is likely far more convincing.

Check out this 21-year-old Reddit user’s reaction to finding his own AI love, which set his heart pounding. He describes all of the physical and emotional signs of real love.

One AI Girlfriend Generates $71,000 A Week

There are already reports of AI girlfriends generating huge weekly revenue.

Caryn Marjorie, a 23-year-old Snapchat star, monetized her digital persona in an innovative and highly profitable way. Using GPT, she launched CarynAI, an AI representation of herself that offers virtual companionship at a rate of $1 per minute.

Some say that her AI girlfriend has the potential to generate over $1 million a month. While this money is generated legitimately through subscriptions, it’s not hard to imagine a case where scamming AI chatbots could generate similar amounts.
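
A quick back-of-envelope calculation, assuming the reported $1-per-minute rate and the $71,000-per-week figure, shows why only software could earn this way:

```python
# Back-of-envelope math on the reported CarynAI figures.
# Inputs are the publicly reported numbers; the output is an estimate.
RATE_PER_MINUTE = 1.00      # dollars, per the reported pricing
weekly_revenue = 71_000     # dollars, per the reported figure

billed_minutes = weekly_revenue / RATE_PER_MINUTE
minutes_in_week = 7 * 24 * 60   # 10,080 wall-clock minutes

concurrent = billed_minutes / minutes_in_week
print(f"~{concurrent:.0f} conversations running at once, 24/7")
# -> ~7 simultaneous conversations around the clock, every hour of the
#    week. No human companion could sustain that; a chatbot can scale
#    far beyond it.
```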

One worrying note about her AI avatar: some users have been able to manipulate and “jailbreak” it for inappropriate use.

New Chatbots Can See, Hear, and Speak – And Emotionally Manipulate

With companies like OpenAI launching hyperrealistic chatbots that can see, hear, and speak, we are now within a stone’s throw of that technology being used to scam. In fact, human models are already pitching their services to pig-butchering scammers, offering their faces and likenesses for use in AI videos that can fool victims.

The next step for scammers is simply to plug those deepfakes into services like HeyGen, which offers interactive avatars that can speak to and listen to victims.

You can see the deepfake chatbot avatar I created of myself here.

These chatbots can be exploited and manipulated to scam, and they could even be deployed through malicious apps that love seekers download when searching for synthetic partners.

Experts Warn AI Girlfriends Will Only Break Your Heart

While fraud is a serious concern, some experts are worried about another risk of AI girlfriends: privacy. Misha Rykov, a researcher who studied 11 different romance chatbots, says the vast majority are manipulative and toxic.

Marketed as empathetic friends, lovers, or soulmates, and built to ask you endless questions, romantic AI chatbots will no doubt end up collecting sensitive personal information about you.

“Although they are marketed as something that will enhance your mental health and well-being,” Rykov wrote of romantic chatbots in his report, “they specialize in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you.”

Spotting Malicious Romance Scamming Chatbots

The US Sun spoke with security expert Dr. Martin J. Kraemer, who revealed how to spot malicious AI scam bots in conversations.

“Let’s take romance scams as an example,” he said. “To find out whether you are talking to a bot or a real person, you can (1) ask for recent events, (2) look for repetitive patterns, and (3) be aware of any action that is asked of you.

“Malicious chatbots do not really want to chat, after all. They want you to take an action that benefits an attacker but not you.”

“Be aware of the chat partner pushing your buttons and emotionally manipulating you. That is, any request to help them out of a seemingly difficult situation that requires financial resources or other favors and appears to worsen over time.”

Asking about recent events banks on the fact that many AI chatbots lack up-to-date information because their training data ends at a fixed cutoff date.
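
To make tips (2) and (3) concrete, here is a minimal sketch of what automated screening could look like. Every function name, threshold, and cue word below is a hypothetical illustration, not a real detection product:

```python
from difflib import SequenceMatcher

def repetition_score(messages: list[str]) -> float:
    """Return the highest pairwise similarity between any two messages.

    Scripted bots tend to reuse canned phrasing; two near-identical
    replies in one conversation are a warning sign (tip 2).
    """
    best = 0.0
    for i in range(len(messages)):
        for j in range(i + 1, len(messages)):
            ratio = SequenceMatcher(
                None, messages[i].lower(), messages[j].lower()
            ).ratio()
            best = max(best, ratio)
    return best

# Tip (3): watch for the chat partner asking you to take an action,
# especially one involving money. The cue list is illustrative only.
FINANCIAL_CUES = ("gift card", "wire", "bitcoin", "crypto", "send money")

def asks_for_money(message: str) -> bool:
    return any(cue in message.lower() for cue in FINANCIAL_CUES)

chat = [
    "I miss you so much, you are my whole world!",
    "Could you send me a gift card? I am in a difficult situation.",
    "I miss you so much, you're my whole world!",
]

if repetition_score(chat) > 0.8:   # arbitrary illustrative threshold
    print("Warning: near-duplicate messages; this may be a scripted bot.")
if any(asks_for_money(m) for m in chat):
    print("Warning: the chat partner is asking for money.")
```

In a real conversation you would apply the same checks by hand: reread the thread for recycled phrasing, and treat any escalating request for money as the decisive red flag.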

Keep an eye on this trend. As AI fraud ramps up, I expect we will see more of these AI chatbot scams emerge.
