Artificial intelligence in IT security

When artificial intelligence meets human intelligence

Cyberattacks are becoming increasingly sophisticated – and so are the systems that defend against them. Artificial intelligence and machine learning are also finding their way into IT security – on both sides. So what is the current situation?

Text: Andreas Heer, Images: Adobe Stock

Artificial intelligence is becoming increasingly important in IT security – for attackers and defenders alike. Security experts agree on that point. The current Swisscom Threat Radar reflects this, classifying AI-supported cyberattacks as a growing threat to companies.

But when, how and where such attacks take place is currently anyone’s guess. In a research paper, the security solutions provider Trend Micro rates threats of this nature as being “in the developmental stage”. And security researchers from IBM demonstrated the feasibility of such malware at the Black Hat conference in 2018 when they presented “DeepLocker”, which uses artificial intelligence to conceal its malicious payload from detection until it reaches its intended target. Beyond proofs of concept like this, however, attacks of this kind have yet to be observed in the wild.

AI is part of the business model

Simpler forms of machine learning are already being used in attacks today, as described in the Trend Micro report. For example, security researchers have come across a password cracking tool that uses machine learning to increase the efficiency of brute-force attacks on user accounts. The tool learns how people adapt passwords to make them more secure. Instead of simply testing all character combinations, it tries the most likely variations of known combinations, as sketched in the example below. For example, it starts with “password123”, followed by “p@ssword123”. The aim is to guess the correct password with fewer attempts.
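
A minimal Python sketch of this guessing strategy – not the actual tool, whose internals the report doesn’t disclose; the substitution table and suffix list here are hand-written stand-ins for rules that such a tool would learn from leaked password data:

    from itertools import product

    # Common human password mutations – illustrative stand-ins for what the
    # reported tool learns from leaked passwords, not its real rule set.
    SUBSTITUTIONS = {"a": ["a", "@", "4"], "e": ["e", "3"], "o": ["o", "0"], "s": ["s", "$"]}
    SUFFIXES = ["", "1", "123", "!"]

    def likely_variants(base: str):
        """Yield plausible variations of a known password instead of all combinations."""
        pools = [SUBSTITUTIONS.get(ch, [ch]) for ch in base.lower()]
        for combo in product(*pools):
            for suffix in SUFFIXES:
                yield "".join(combo) + suffix

    # Try the likeliest guesses first instead of brute-forcing everything.
    for guess in likely_variants("password"):
        print(guess)  # "password", "password1", "password123", ..., "p@ssword123", ...

Even this toy version shows the gain in efficiency: a few dozen targeted guesses such as “password123” or “p@ssword123” replace an astronomically large exhaustive search.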

However, developing malware that uses machine learning or even artificial intelligence is expensive. Just as in the legitimate economy, this expense must pay off for organised cybercrime. This means that as soon as these criminals find a business model for using artificial intelligence, they will develop the relevant technologies and invest in the necessary resources.

The WEF outlined one such possible scenario in a background article. Phishing attacks could use machine learning to send deceptively genuine-looking e-mails on a large scale. E-mails captured from the computers of existing victims would serve as training data – with the result that more people fall for the phishing mails or, from the attacker’s perspective, that efficiency increases.

In general terms, social engineering is likely to be an attractive area for the use of artificial intelligence because many of the required technologies already exist. With deepfake technologies such as voice cloning, for example, it would be possible to imitate the voice of a caller known to the victim in order to prompt them to disclose confidential information.

Human intelligence is still important

Of course, machine learning in particular can also be used on the defence side – in behaviour-based detection & response systems and in SIEM/SOAR platforms, for example. Here, though, the challenge is reversed: it’s not about recognising known patterns, but about spotting anomalies, as in the simplified sketch below. Yet not every anomaly equates to an attack. Perhaps Mrs Mayer really did send e-mails at 11 pm just this once, because working from home has changed her routine?
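
A deliberately simplified Python sketch of such behaviour-based anomaly detection – not a real SIEM rule; the per-user baseline and the threshold are invented for illustration:

    from statistics import mean, stdev

    # Hours (0-23) at which one user has historically sent e-mails; a real
    # detection & response system would learn such a baseline per user and signal.
    baseline_hours = [8, 9, 9, 10, 11, 13, 14, 15, 16, 17, 17, 18]

    def is_anomalous(hour: int, history: list[int], threshold: float = 2.0) -> bool:
        """Flag a send time that deviates strongly from the user's baseline."""
        mu, sigma = mean(history), stdev(history)
        return abs(hour - mu) / sigma > threshold

    print(is_anomalous(23, baseline_hours))  # True  -> flagged for analyst review
    print(is_anomalous(14, baseline_hours))  # False -> within normal behaviour

The flagged event is a hint, not a verdict: deciding whether the 11 pm e-mail is an attack or just home-office hours is precisely the “false positive” check that still falls to humans.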

That’s why, even with “intelligent” systems, people are needed more than ever to check the “false positives”. Machine learning and artificial intelligence won’t upend current IT security concepts in the foreseeable future, but they will support the scarce security specialists in the battle. A Security Operations Centre (SOC) will remain at the heart of cyber defence going forward, possibly increasingly as a Managed Security Service – because human intelligence in the form of security experts is also expected to remain a scarce resource in the future.

