Artificial intelligence, like any other tool, isn’t inherently good or bad. But AI automates and emulates human behavior in ways that can make it very difficult for a potential victim to identify a scam. AI can be trained not only to emulate a practice, good or bad, but to do so at a scale vastly exceeding the volume any human could generate. AIs don’t get mad, they don’t get frustrated, and if properly trained, they don’t make mistakes. And as we move to conversational AI and decision engines, their potential to scale fraud is unprecedented.
Let’s talk about how AI could significantly increase the success of criminals, and why we should start spinning up AI as a defense.
Phishing at Scale
Scam calls have been increasing steadily of late, often using the cover of the Covid-19 pandemic, invoking the names of scary agencies like the IRS or FBI, and relying on difficult-to-trace, difficult-to-reverse cash collection methods like gift cards. They are primarily effective against older people, some of whom are in cognitive decline, don’t get many calls after retirement, and, because of the news, are in a near-constant state of fear. Add the financial anxiety of living on a fixed income, and you have a population ripe for abuse.
This set of issues can make some people easy targets even for human scammers who aren’t good at what they do. But a conversational AI program with access to the web could do massive amounts of research on a possible victim in seconds before the call and then dynamically craft its approach, possibly even emulating the voice of a loved one, to execute the same scam against a younger group of targets.
In addition, this same process could convince an executive that their boss is on the line, in a crunch, and immediately needs confidential information that shouldn’t be shared. The AI could emulate not only the voice but the emotional state of that top executive. Over the next few years, these systems will become experts on what makes us choose products, how to manipulate us into saying “yes,” which approaches work best, and whom we trust the most. With that kind of information, backed by the scale of a modern top-level criminal organization or government, phishing could grow to levels we haven’t yet imagined.
Blackmail and Deepfakes
I’m sure most of you have received the standard email, or even snail mail, that says something like, “I know what you did, and if you don’t pay me in bitcoin, I’m going to tell your wife/boss/parents/world of the crime you committed against them.” Or, “I’ve been tracking your internet usage and turned on your camera, and I’m going to post the video if you don’t, etc.”
But with the power of Digital Twins and animation, someone could create a video of you doing pretty much anything, recreate your voice, and put you virtually in an embarrassing position with someone else who’s in on the scam. And, once again, a properly trained AI with access to mostly public information off social media could craft a story that would seem believable, at a time when you are alone, thus making an alibi unverifiable, and suddenly you are in the “prove yourself innocent” nightmare. This unfortunate position is where you discover it is almost impossible to prove you didn’t do something.
This is possible now, but the cost is prohibitive. Those costs are dropping like a rock, though, and with an AI connected to a metaverse tool that allows the digital re-creation of any place, anyone could prepare these deepfakes at scale. And even if people realized that most, if not all, of the videos going out were fake, anyone who saw those videos would likely view you in a different and far less favorable light, even though you had nothing to do with them.
Identity Theft Could Soar
A few years back, someone who wanted my gamer tag called into Microsoft support, convinced them they were me, and got hold of my gamer tag and account. I had to call Steve Ballmer’s office to resolve the problem, and it still took months. Imagine if they could emulate my voice and appearance (scammers are already using masks) and use information about me pulled from social media to do the same thing. And if they used an AI program to do this, rather than a handful of attacks against a few people, they could ramp the effort up to international levels while making the scam more realistic and far harder to detect.
You’d wake up to loans you didn’t authorize, strangers owning a house you never sold, empty bank accounts, and, if the attackers wanted to shut you down, a host of criminal charges for things you didn’t do. The last time I looked at identity theft, the official cost to the victim was around $250K and nine months of their life. With this technology, not only would the theft be far harder to unravel, but the massive number of successful attacks could cripple the courts, increasing both the cost and the time for each victim, potentially to a level that could cripple a state or country.
Using AI to Fight AI
Much as BlackBerry/Cylance uses AI to spot unusual behavior and identify an attack inside a company, we need a national defense capable of doing the same thing: rapidly identifying and blocking these attacks before they reach state- or nation-damaging levels. That AI defense would need access to much of what we do, so it could learn which patterns are normal and which are not; otherwise we’d be dealing with so many false positives that we’d want to turn the protection off. And we may have to use tools similar to the attackers’ own to recover any funds or property illicitly obtained.
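To make the idea of spotting “unusual behavior” concrete, here is a minimal sketch of per-account behavioral anomaly detection. Everything in it is hypothetical: the function name, the threshold, and the idea of counting daily account events are illustrative stand-ins for the far richer signals a real defense system (like the Cylance-style products mentioned above) would use.

```python
import statistics

def is_anomalous(history, today, threshold=3.0):
    """Flag today's activity count as anomalous if it sits more than
    `threshold` standard deviations above the account's own baseline.

    history   -- list of past daily event counts for this account
    today     -- today's event count
    threshold -- z-score cutoff (3.0 is a common illustrative default)
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)  # population std dev of the baseline
    if stdev == 0:
        # A perfectly flat baseline: any deviation at all is suspicious.
        return today != mean
    return (today - mean) / stdev > threshold

# Hypothetical account averaging ~20 logins per day.
baseline = [18, 22, 19, 21, 20, 23, 17, 20, 22, 19]

print(is_anomalous(baseline, 95))  # sudden burst -> True
print(is_anomalous(baseline, 24))  # within normal variation -> False
```

The design point matches the prose: the baseline must come from each account’s own history, because a fixed global threshold would drown defenders in exactly the false positives that make people turn protection off.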
We are at the front end of what could be a massive AI attack, and we need an equally massive AI defense ramped up first. I don’t see that happening yet, and historically, getting something like this funded doesn’t happen until the attack has taken place. Given the potential scale of this risk, I don’t think we can wait that long.
Further reading: Ransomware Demands, Payments Rising Quickly