How fraudsters are using AI.

  • 30 April 2024
Fraudsters don’t just pickpocket the odd phone or forge a couple of signatures nowadays. Modern day crooks are getting crafty with artificial intelligence. And they’re using it to pull off their schemes in ways that wouldn’t have been possible a few years ago.

It’s big business for them too. A 2024 study by twenty-four IT found that UK cybercrime costs the economy an estimated £27 billion per year, while 25% of UK consumers think they’ll be a victim of cybercrime in the future. And with scammers adding AI to their toolbox of tricks, we’re all going to have to be extra careful.

So, to help you know what to look out for, here are a few ways they’re using (or could use) AI to hoodwink people…

Voice cloning

Voice cloning is exactly what it sounds like. Fraudsters feed an AI system recordings of you or someone you know and build an audio version of that person from scratch. Where do they get the sound from, though? Well, most of us have plenty of videos up on our social accounts. And they only need about 45 seconds of audio to work with.

Once it’s ready, they can recreate the voice of almost anyone – whether that’s your boss, or a family member – and get them to say just about anything. In tons of different languages too.

The scammers might use your mum’s voice and call you with a fake story. They’ll pretend she’s on holiday, but there’s been an emergency, and she needs you to send some money. If you get a call like that, put the phone down and text your mum instead to check. It’s better to be safe than sorry.

Some have gone even further. Certain groups have started using cloned voices to crack into bank accounts that use voice recognition to log in. It’s scary stuff.

Deepfakes

This is quite similar to voice cloning, but with fake video instead. Scammers create a clip of someone saying and/or doing something that never actually happened, then use it to convince you to do something for them.

Imagine you work in accounts for a company and get a video from the CEO telling you to transfer money to a specific account. You might not think twice about it. You do it, and later find out you’ve just transferred huge sums of cash to a trickster.

This isn’t just any old scenario either. A similar situation actually happened in Hong Kong. The New Yorker reported that “a finance worker had been tricked into paying out twenty-five million dollars after taking part in a video conference with who he thought were members of his firm’s senior staff”. That’s one expensive mistake.

Phishing

Phishing is a pretty old-school scam. We’ve all had loads of those emails that look legit but are actually fraudsters (posing as brands and people) trying to snatch your personal info. Well, with AI, they’re about to get a whole lot more sophisticated and trickier to spot.

The scammers use AI algorithms to analyse stacks of online data about you and other potential targets. They’ll then use all that research to craft even more personalised emails or text messages that seem tailor-made just for you. Which’ll make them far more convincing.

They might mimic the writing style of your best mate, one of your colleagues, or even your bank. So, when you get that message from HSBC asking for some sensitive account details, you probably won’t even pause before clicking on that link. Sneaky, right?

Cracking security

AI can give the bad guys an edge against pretty sophisticated security systems too. Using machine learning, they can study how those systems work (and change) over time. Then, once they’ve found a chink in the armour, they’ll exploit the loophole to break in and snaffle up all the info they can get their hands on.

Whether they’re cracking passwords, bypassing firewalls, or slipping past antivirus software, AI can help them stay one step ahead of any security updates.

Generative AI

You’ve probably heard all about the Willy Wonka experience up in Glasgow by now. If you haven’t though, it was anything but sweet.

Pictures were all over the event’s socials and website showing off Willy’s magical factory. And because it looked amazing, it helped them shift a boatload of tickets. The only issue is, none of it was real. The organisers used generative AI to make their adverts, with software pumping out images of a place that didn’t exist.

When people got there, the reality was a couple of underwhelming sets and a few balloons. The actors even had terribly written scripts to follow that had been put together with AI. All in all, the organisers thought they could make a quick buck, with AI doing the heavy lifting to get people through the door.


Run into any AI-based scam tactics that we haven’t mentioned? Give us a heads up and tell us about them down below.

