MIT Technology Review Insights, in association with AI cybersecurity firm Darktrace, surveyed more than 300 C-level executives, directors, and managers around the world to understand how they deal with the cyberthreats they face and how they are using AI to fight against them.
As it stands, 60% of respondents report that human responses to cyberattacks are failing to keep pace with automated attacks, and as organizations prepare for a bigger challenge, more sophisticated technologies are becoming essential. In fact, an overwhelming majority of respondents – 96% – say they have already started guarding against AI-powered attacks, with some activating AI defenses of their own.
Offensive AI cyberattacks are daunting, and the technology behind them is fast and smart. Consider deepfakes, one type of weaponized AI tool: fabricated images or videos depicting scenes or people that never existed.
In January 2020, the FBI warned that deepfake technology had already reached the point where artificial personas could be created that pass biometric tests. At the rate AI neural networks are evolving, an FBI official said at the time, national security could be undermined by high-definition fake videos created to mimic public figures, making them appear to speak whatever words the videos’ creators have put in their manipulated mouths.
This is just one example of technology being used for nefarious purposes. AI could, at some point, conduct cyber attacks autonomously, disguise its operations, and blend in with regular activities. The technology is accessible to everyone, including threat actors.
The offensive risks of AI and developments in the cyberthreat landscape are redefining enterprise security, as humans already struggle to keep pace with advanced attacks. In particular, survey respondents said email and phishing attacks cause them the most anxiety, with nearly three-quarters reporting that email threats are the most worrying: 40% of those surveyed found email and phishing attacks “very concerning,” while 34% rated them “somewhat concerning.” This is not surprising, given that 94% of detected malware is still delivered by email. Traditional methods of stopping email threats rely on historical indicators – that is, attacks already seen – as well as on the recipient’s ability to spot warning signs, both of which sophisticated phishing campaigns can circumvent.
When offensive AI is added to the mix, the “fake email” will be nearly indistinguishable from genuine communications from trusted contacts.
How attackers exploit the headlines
The coronavirus pandemic has presented a lucrative opportunity for cybercriminals. Email attackers, in particular, have followed a long-established pattern of exploiting the headlines of the day – along with the fear, uncertainty, greed, and curiosity they spark – to lure victims in what are now called “fearware” attacks. With employees working remotely, outside office security protocols, businesses have seen successful phishing attempts skyrocket. Max Heinemeyer, director of threat hunting for Darktrace, notes that when the pandemic hit, his team saw an immediate evolution of phishing emails. “We’ve seen a lot of emails saying things like, ‘Click here to see which people in your area are infected,’” he says. When offices and universities began to reopen last year, new scams emerged just as quickly, with emails offering “cheap or free covid-19 cleaning programs and tests,” says Heinemeyer.
There has also been an increase in ransomware, which has coincided with the boom in remote and hybrid work environments. “The bad guys know now that everyone relies on remote work. If you are hit now and can no longer provide remote access to your employees, that’s it,” he says. “Whereas maybe a year ago, people could still work more offline, it hurts a lot more now. And we see that the criminals have started to exploit this.”
What is the common theme? Change – rapid change and, in the case of the wholesale shift to working from home, complexity. And this illustrates the problem with traditional cybersecurity, which relies on static, signature-based approaches: such defenses are not very good at adapting to change. These approaches extrapolate yesterday’s attacks to determine what tomorrow’s will look like. “How can you anticipate the phishing wave of tomorrow? It just doesn’t work,” says Heinemeyer.
Download the full report.
This content was produced by Insights, the personalized content arm of MIT Technology Review. It was not written by the editorial staff of MIT Technology Review.