Phishing works so well crims won't bother with deepfakes, says Sophos chap

People reveal passwords if you ask nicely, so AI panic is overblown


Panic over the risk of deepfake scams is completely overblown, according to a senior security adviser for UK-based infosec company Sophos.

"The thing with deepfakes is that we aren't seeing a lot of it," Sophos researcher John Shier told El Reg last week.

Shier said current deepfakes – AI-generated videos that mimic humans – aren't the most efficient tool for scammers to use, because simpler and cheaper attacks like phishing and other forms of social engineering work very well.

"People will give up info if you just ask nicely," said Shier.

One area in which the researcher does see deepfakes becoming prevalent is romance scams. It takes a hefty amount of devotion, time and energy to craft believable fake personas, and the additional effort to add a deepfake is not huge. Shier worries that deepfaked romance scams could become problematic if AI can enable the scammer to work at scale.

Shier was not comfortable setting a date on industrialized deepfake lovebots, but said the necessary tech improves by orders of magnitude each year.

"AI experts make it sound like it is still a few years away from massive impact," the researcher lamented. "In between, we will see well-resourced crime groups executing the next level of compromise to trick people into writing funds into accounts."

Up until now, deepfakes have most commonly been used to create sexualized images and videos – mostly depicting women.

However, a Binance PR exec recently revealed that criminals had created a deepfaked clone of him, which participated in Zoom calls and tried to pull off cryptocurrency scams.

Security researchers at Trend Micro warned last month that deepfakes may not always be a scammer's main tool, but are often used to enhance other techniques. The lifelike digital images have lately shown up in job seeker scams, bogus business meetings and web ads.

In June, the FBI issued a warning that it was receiving an increasing number of complaints regarding deepfakes deployed in job interviews for roles that provide access to sensitive information. ®
