AI-powered scams and what you can do about them

AI is there to help you, whether you’re writing an email, creating some concept art, or tricking a person in distress into thinking you’re their friend or relative. AI is so versatile! But since some people don’t want to be tricked, let’s talk a little about what to look out for.

The last few years have seen a huge increase not only in the quality of generated media, from text to audio to images and video, but also in how cheaply and easily that media can be created. The same tool that helps a concept artist dream up an imaginary monster or spaceship, or helps a non-native speaker improve their business English, can also be used maliciously.

Don’t expect the Terminator to knock on your door and sell you a Ponzi scheme – these are the same old scams we’ve been facing for years, but with a generative AI twist that makes them easier, cheaper, or more convincing.

This is by no means a complete list, just some of the most obvious tricks that AI can supercharge. We’ll be sure to add new tricks as they appear in the wild, or any additional steps you can take to protect yourself.

Cloning the voices of family and friends

Synthetic voices have existed for decades, but it is only in the last year or two that advances in technology have made it possible to create new voices from a few seconds of audio. This means that anyone whose voice has ever been broadcast publicly – for example, in a news report, YouTube video or on social media – is at risk of having their voice copied.

Scammers can and do use this technique to create believable fake versions of our loved ones or friends. The clones can be made to say anything, of course, but for a scam they will most likely be used in a voice clip asking for help.

For example, parents might get a voicemail from an unknown number that sounds like their son, saying that his luggage was stolen while traveling, that a stranger lent him a phone, and could mom or dad please send some money to this address, Venmo recipient, business, and so on. One can easily imagine variants involving car trouble (“They won’t release my car until someone pays them”), medical problems (“This treatment isn’t covered by insurance”), and the like.

This type of scam has already been carried out using President Biden’s voice. The culprits behind that one were caught, but future scammers will be more careful.

How can you fight against voice cloning?

First of all, don’t rely on spotting fake voices by ear. They’re getting better every day, and there are plenty of ways to disguise any quality issues. Even experts get fooled.

Anything coming from an unknown number, email address, or account should automatically be treated as suspicious. If someone says they’re your friend or loved one, go ahead and contact the person as you normally would. They’ll probably tell you they’re fine and that it’s (as you’ve guessed) a scam.

Scammers often won’t follow up if they’re ignored – whereas a family member probably will. It’s fine to leave a suspicious message on read while you consider what to do.

Personalized phishing and spam via email and messaging

We all get spam now and then, but text-generating AI is making it possible to send mass emails customized to each individual. And with data breaches happening regularly, a lot of your personal data is already out there.

It’s one thing to get a scam email like “Click here to see your invoice!” with an obviously suspicious attachment and clearly minimal effort behind it. But with a little context, these messages suddenly become quite believable, using recent locations, purchases, and habits to make them look like a real person or a real problem. Armed with a few personal facts, a language model can customize a generic template of this kind for thousands of recipients in a matter of seconds.

So what was previously “Dear customer, please see your invoice attached” becomes something like “Hi Doris! I’m from the Etsy promotions team. The item you were looking at recently is now 50% off! And if you use this link to claim the discount, shipping to your address in Bellingham is free.” A simple example, but still. With a real name, shopping habits (easy to find out), general location (same), and so on, the message suddenly becomes a lot harder to recognize as spam.
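For a sense of how little effort this takes, here is a minimal mail-merge sketch in Python. Everything in it is invented for illustration – the template, the names, and the breached-data fields – and a real campaign would also use an LLM to vary the wording per recipient, but the economics are the same: personalization costs effectively nothing.

    # A minimal sketch of template-based spam personalization.
    # The records below stand in for fields harvested from data breaches;
    # every name and value here is invented for illustration.
    leaked_records = [
        {"name": "Doris", "city": "Bellingham", "item": "ceramic vase"},
        {"name": "Alex", "city": "Omaha", "item": "hiking boots"},
    ]

    template = (
        "Hi {name}! The {item} you were looking at recently is now 50% off! "
        "Use this link to claim the discount - shipping to {city} is free."
    )

    # Thousands of "personal" messages take milliseconds to produce.
    for record in leaked_records:
        print(template.format(**record))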

In the end, it’s still just spam. But this kind of customized spam was previously done by low-paid people at content farms overseas. Now it can be done at scale by LLMs with better prose skills than many professional writers.

How can you fight email spam?

As with traditional spam, vigilance is your best weapon. But don’t expect to be able to reliably tell generated text from human-written text. Few people can, and certainly no AI model can do it dependably.

No matter how much the prose improves, this type of scam still faces the fundamental challenge of getting you to open a suspicious attachment or link. As always, don’t click or open anything unless you’re 100% sure of the sender’s authenticity and identity. If you’re even slightly unsure – and that’s a good instinct to cultivate – don’t click, and if you have a knowledgeable person you can forward it to for a second pair of eyes, do so.
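If you’re comfortable poking at a raw message, one concrete authenticity check is the Authentication-Results header that receiving mail servers add. Below is a minimal Python sketch; message.eml is a hypothetical filename for an email exported from your mail client, and note that a “pass” only proves the mail came from the domain it claims – a scammer’s own look-alike domain can pass too.

    # A minimal sketch: inspect the Authentication-Results header of a
    # saved message. "message.eml" is a hypothetical filename; export the
    # raw email from your mail client first.
    from email import policy
    from email.parser import BytesParser

    with open("message.eml", "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    # Receiving servers record SPF/DKIM checks in this header.
    results = " ".join(msg.get_all("Authentication-Results") or []).lower()

    print("From:", msg["From"])
    print("SPF passed: ", "spf=pass" in results)
    print("DKIM passed:", "dkim=pass" in results)
    # A pass only authenticates the sending domain; always check whether
    # that domain is really the one you expect.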

‘Fake you’ identity and verification fraud

Due to the number of data breaches in the last few years (thanks, Equifax), it is safe to say that almost all of us have a considerable amount of personal data on the dark web. If you are following good online security practices, a lot of the threats are mitigated because you have changed your passwords, enabled multi-factor authentication and so on. But generative AI can introduce a new and serious threat in this area.

With so much data about a person available online – and, for many people, a clip or two of their voice as well – it is increasingly easy to create an AI persona that sounds like the target individual and has access to most of the facts used to verify their identity.

Think about it this way. What do you do if you’re having trouble logging in, can’t configure your authentication app properly, or you lose your phone? Maybe call customer service – and they’ll “verify” your identity using some trivial fact like your birth date, phone number or social security number. Even more advanced methods like “taking a selfie” are becoming easier to game.

The customer service agent – for all we know, also an AI – may well oblige this fake you and grant it all the privileges you would have if you had actually called. What a scammer can do from that position varies widely, but none of it is good.

As with the other scams on this list, the danger isn’t how convincing the fake you is, but that it’s now cheap for scammers to carry out this kind of attack widely and repeatedly. Not long ago, impersonation attacks were expensive and time-consuming, and consequently limited to high-value targets like wealthy people and CEOs. Nowadays, one can build a workflow that spins up thousands of impersonation agents with minimal oversight, and those agents can autonomously phone the customer service numbers on all of a person’s known accounts – or even create new ones. Only a handful need to succeed to justify the cost of the attack.

How can you fight against identity fraud?

Just as before AI arrived to boost scammers’ efforts, “Cybersecurity 101” is your best bet. Your data is already out there; you can’t put the toothpaste back in the tube. But you can make sure your accounts are adequately protected against the most obvious attacks.

Multi-factor authentication is easily the most important single step anyone can take here. With it, any kind of serious account activity goes straight to your phone, and suspicious logins or password-change attempts will show up in your email. Don’t ignore these warnings or mark them as spam, even (especially) if you’re getting a lot of them.
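If you’ve ever wondered what an authenticator app is actually doing when it shows you a six-digit code, the sketch below implements the standard TOTP algorithm (RFC 6238) that most such apps use. The secret here is a made-up example; real secrets come from the QR code a service shows you when you enable MFA.

    # A minimal sketch of TOTP (RFC 6238), the algorithm behind most
    # authenticator-app codes.
    import base64, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval    # current 30-second window
        msg = struct.pack(">Q", counter)          # counter as 8-byte big-endian
        digest = hmac.new(key, msg, "sha1").digest()
        offset = digest[-1] & 0x0F                # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # The secret below is a made-up example for demonstration only.
    print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code

Because the code is derived from a shared secret and the current time, a scammer who has your password still can’t log in without it – which is exactly why those prompts landing on your phone are worth paying attention to.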

AI-generated deepfakes and blackmail

Perhaps the most frightening emerging AI scam is the possibility of blackmail using deepfake images of you or someone you love. You can thank the rapidly advancing world of open image models for this unsettling prospect. People interested in certain aspects of cutting-edge image generation have created workflows for not only rendering nude bodies but attaching them to any face they can get a photo of. We don’t need to go into detail about how this is already being used.

But one unintended consequence is an extension of the scam commonly called “revenge porn,” more accurately described as non-consensual distribution of intimate images (though, like “deepfake,” the original term may be hard to displace). When someone’s private images are released through hacking or a vengeful ex, a third party can use them for blackmail, threatening to publish them widely unless a sum of money is paid.

AI changes this scam by removing the need for any real intimate images to exist in the first place. Anyone’s face can be added to an AI-generated body, and while the results aren’t always convincing, they are probably good enough to fool you or others if the image is pixelated, low-resolution, or otherwise partially obscured. And that’s all it takes to scare someone into paying to keep them secret – though, like most blackmail scams, the first payment is unlikely to be the last.

How can you fight against AI-generated deepfakes?

Unfortunately, the world we’re heading toward is one where fake nude photos of almost anyone will be available on demand. It’s scary, weird, and disgusting, but sadly the cat is already out of the bag.

Nobody is happy with this state of affairs except the bad guys. But there are a couple of things working in potential victims’ favor. These image models can produce realistic bodies in some respects, but like other generative AI, they only know what they have been trained on. So the fake images will lack distinguishing marks, for example, and are likely to be obviously wrong in other ways.

And while the threat will probably never be eliminated entirely, victims have a growing number of remedies: they can legally compel image hosts to take photos down, or ban scammers from the sites where they post. As the problem grows, so will the legal and private means of fighting it.

TechCrunch is not a lawyer. But if you are a victim of this, tell the police. This isn’t just a scam but harassment, and while you can’t expect the police to do the kind of deep internet detective work needed to track someone down, these cases do sometimes get resolved, or the scammers are spooked by requests sent to their ISP or forum host.
