AI Attacker V/s. AI Defender


The transition from the cyber age to the machine age is complete with the advent of the malicious use of Artificial Intelligence in cybersecurity. We now face a new war: AI attackers versus AI defenders. Read on to know more…

All of us have been fascinated with Artificial Intelligence (AI). On one side, AI-based algorithms, specifically machine learning algorithms, enable news-curating apps such as Flipboard to deliver content that matches users’ individual tastes. On the other side, AI-based adversarial models in neural networks are already known to defeat the AI behind image classifiers and computer audio, and will eventually be just as good at defeating malware detection.

In theory, Artificial Intelligence can learn to write its own code, construct its own algorithms, correct its own mathematical proofs and write better programs than its human designers.

AI-based Malware
At the 2017 Black Hat conference, during a 25-minute briefing titled “Bot vs. Bot for Evading Machine Learning Malware Detection,” the presenter demonstrated how an AI agent can compete against a malware detector by proactively probing it for blind spots that can be exploited. This deceptively simple approach to building better malware detection engines is essentially game theory: a two-player game between machines.
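To make the idea concrete, below is a minimal Python sketch of such a bot-vs-bot probing loop, not the presenter’s actual method. It assumes a hypothetical black-box detector_score() function standing in for the defending model, and uses a single functionality-preserving mutation (appending benign-looking bytes); a real attacking agent would have a far richer set of file rewrites at its disposal.

```python
import random

def detector_score(pe_bytes: bytes) -> float:
    """Stand-in for a black-box malware detector returning P(malicious)."""
    raise NotImplementedError  # hypothetical: replace with a real model or scanning API

def append_benign_bytes(pe_bytes: bytes) -> bytes:
    """One functionality-preserving mutation: pad the file's overlay with random bytes."""
    return pe_bytes + bytes(random.randrange(256) for _ in range(1024))

def evade(pe_bytes: bytes, max_queries: int = 100, threshold: float = 0.5):
    """Greedy probing loop: keep any mutation that lowers the detector's confidence."""
    best, best_score = pe_bytes, detector_score(pe_bytes)
    for _ in range(max_queries):
        candidate = append_benign_bytes(best)
        score = detector_score(candidate)
        if score < best_score:          # detector is now less confident
            best, best_score = candidate, score
        if best_score < threshold:      # detector labels the sample benign: blind spot found
            return best
    return None                          # evasion failed within the query budget
```

Every blind spot such a loop uncovers is also a training example the defender can use to harden its model, which is what makes the game genuinely two-player.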

It is only a matter of time before threat actors of all calibers employ AI to break down defensive barriers faster than any security product or antivirus detection engine can stop them, much less a team of humans accustomed to being reactive with security.

Malicious Misuses of AI
In March this year, scientists from leading universities, including Stanford and Yale in the U.S. and Oxford and Cambridge in the UK, together with civil society organizations and representatives from the cybersecurity industry, published a research paper titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.

While the research paper looks at a range of potential malicious misuses of AI, with a focus on machine learning, the researchers state that their purpose is largely to exclude the military and concentrate on the cybersecurity aspects. It is, however, impossible to completely exclude potential political misuse, given the interaction between political surveillance and regulatory privacy issues.

The problem discussed in the paper is that the technology itself has no ethical bias. It can be used as easily for malicious purposes as it can for beneficial purposes. AI is largely dual-purpose, and the basic threat is that zero-day malware will appear more frequently and be targeted more precisely, while existing defenses are neutralized, all because of AI systems in the hands of malicious actors.

Potential abuse can come in two areas: manipulation, or even outright alteration, of the algorithm; and poisoning of the data set from which the machine learns.

The research report warns, “It has been shown time and again that ML algorithms also have vulnerabilities. These include ML-specific vulnerabilities, such as inducing misclassification via adversarial examples or via poisoning the training data… ML algorithms also remain open to traditional vulnerabilities, such as memory overflow. There is currently a great deal of interest among cyber-security researchers in understanding the security of ML systems, though at present there seem to be more questions than answers.”
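As a concrete illustration of the second ML-specific vulnerability the report mentions, poisoning the training data, here is a small, self-contained Python sketch using scikit-learn on a synthetic dataset (not a real malware corpus). An attacker who can flip the labels of a fraction of the training points degrades the resulting classifier without ever touching the model or the test data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a feature-based malware dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of 20% of the training points it can influence.
rng = np.random.default_rng(0)
flip = rng.choice(len(y_train), size=len(y_train) // 5, replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean model accuracy:   ", clean.score(X_test, y_test))
print("poisoned model accuracy:", poisoned.score(X_test, y_test))
```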

The danger is that while these threats to ML already exist, criminals and nation-state actors will begin to use their own ML capabilities to increase the speed and accuracy of attacks against ML defenses.

Neural Network Boffins
An article published by The Register, titled ‘Researchers create AI attacker to defeat AI malware defender’, reported on malicious AI versus an AI defender in a neural network.

Last year, researchers from NVIDIA, Booz Allen Hamilton, and the University of Maryland probably felt justifiably pleased with themselves when they trained a neural network to ingest EXEs and spot malware samples among them.

Their MalConv software ran a static analysis on executables (that is, it looked at the binaries but didn’t run them), and they claimed up to 98 per cent accuracy in malware classification once their neural network had a big enough learning set. Alas, like any neural network, it is subject to adversarial attacks.
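For readers unfamiliar with MalConv, the PyTorch sketch below shows roughly what a byte-level convolutional classifier of this kind looks like: raw byte values are embedded, passed through a gated convolution, max-pooled over the whole file, and mapped to a single malware probability. The layer sizes are illustrative, not the published MalConv hyperparameters.

```python
import torch
import torch.nn as nn

class ByteCNN(nn.Module):
    """Minimal byte-level classifier in the spirit of MalConv; sizes are illustrative."""
    def __init__(self, embed_dim: int = 8):
        super().__init__()
        self.embed = nn.Embedding(257, embed_dim)        # 256 byte values + a padding token
        self.conv = nn.Conv1d(embed_dim, 128, kernel_size=512, stride=512)
        self.gate = nn.Conv1d(embed_dim, 128, kernel_size=512, stride=512)
        self.fc = nn.Linear(128, 1)

    def forward(self, x):                                # x: (batch, file_len) byte ids
        e = self.embed(x).transpose(1, 2)                # (batch, embed_dim, file_len)
        h = torch.sigmoid(self.gate(e)) * self.conv(e)   # gated convolution over raw bytes
        h = torch.max(h, dim=2).values                   # global max pooling over positions
        return torch.sigmoid(self.fc(h))                 # P(malicious)
```

Because the whole file is reduced to a single max-pooled feature vector, bytes anywhere in the binary, including appended padding, can influence the verdict, which is exactly what the attack described next exploits.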

A security research paper titled ‘Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables’, published last month by boffins from the Technical University of Munich, the University of Cagliari in Italy, and Italian company Pluribus One, described one way of defeating MalConv.

The researchers took the by-now-standard approach to adversarial attacks: what’s the smallest amount of change needed to disrupt an AI?

They started with simple byte-padding, adding 10,000 bytes to the end of binaries, an approach that degraded MalConv’s accuracy by “over 50 per cent”.

Relative to the malware samples, 10 KB of padding was a tiny change: “less than one per cent of the bytes passed as input to the deep network”, the adversarial paper said.
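A naive version of the padding attack is almost trivially simple to express. The Python sketch below (with hypothetical file paths) appends 10,000 random bytes to the end of a binary; the appended overlay is never executed, so runtime behaviour is unchanged while the byte-level features the detector sees are shifted. The paper’s stronger variant chooses the padding values using gradient information, as sketched further below.

```python
import os

PAD_LEN = 10_000  # roughly the 10 KB of padding described above

def pad_binary(path_in: str, path_out: str, pad_len: int = PAD_LEN) -> None:
    """Append pad_len random bytes to a binary; the program still runs unchanged."""
    with open(path_in, "rb") as f:
        data = f.read()
    with open(path_out, "wb") as f:
        f.write(data + os.urandom(pad_len))  # appended bytes are never executed

# Hypothetical usage:
# pad_binary("sample.exe", "sample_padded.exe")
```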

Even that footprint can be reduced: instead of padding the end of the binary, the “attack bytes” could be put inside the binary to “drastically increase the success of the attack.”

Operating on bytes inside an executable is, however, more complex and brittle, making it difficult to automate, whereas byte padding is simple.

The European researchers also found that a gradient-based sequence of padding bytes worked better than random padding bytes: “adding random bytes is not really effective to evade the network,” the paper said, but “our gradient-based attack allows evading MalConv in 60 per cent of the cases when 10,000 padding bytes are modified”.

That’s because over sufficient training runs, the gradient-based approach created “an organized padding byte pattern specific to each sample” – that is, the malicious model learned what pattern worked best for each of the malware samples it tested.
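The sketch below illustrates that gradient-guided choice of padding bytes, reusing the illustrative ByteCNN model from earlier rather than MalConv itself, so it is a simplified reconstruction of the idea rather than the authors’ implementation. For each padding position it computes the gradient of the malware score with respect to that byte’s embedding, then picks the byte value whose embedding points most strongly in the descent direction.

```python
import torch

def optimize_padding(model, byte_ids, pad_start, n_iters=10):
    """Gradient-guided padding sketch; assumes the ByteCNN above (embed/conv/gate/fc)."""
    x = byte_ids.clone()                                    # (1, L) byte ids, padding at the end
    byte_table = model.embed.weight.detach()[:256]          # (256, D) embeddings of byte values
    for _ in range(n_iters):
        emb = model.embed(x).detach().requires_grad_(True)  # (1, L, D), treated as the input
        e = emb.transpose(1, 2)                             # replay the forward pass on embeddings
        h = torch.sigmoid(model.gate(e)) * model.conv(e)
        score = torch.sigmoid(model.fc(torch.max(h, dim=2).values))
        score.sum().backward()                              # d(score) / d(embedding)
        grad = emb.grad[0, pad_start:]                      # (n_pad, D) gradients at padding slots
        # Project every byte embedding onto the descent direction at each padding position
        # and keep the byte that most decreases the malware score.
        proj = byte_table @ (-grad).T                       # (256, n_pad)
        x[0, pad_start:] = proj.argmax(dim=0)
        model.zero_grad()
    return x
```

Over successive iterations, the chosen padding settles into the kind of organized, per-sample byte pattern the paper describes.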

Potential Targets
In future, AI-based malware could wreak havoc on an unprecedented scale across many different areas and sectors. Potential targets include national power grids and modernized industrial control systems; aerospace and defense; the financial sector, including commodities, foreign exchange and futures trading; sensitive nuclear programs, particularly those that utilize war-game scenarios; and satellite and telecommunications networks. AI-based autonomous vehicles and artificial neural networks, especially when used within large constructs such as Internet of Things (IoT) networks, can also prove vulnerable to AI-based cyber-attacks.

Malware Arms Race
It is becoming increasingly clear that we are at the start of a new AI-cyber revolution, and it has been years in the making. If organizations are interested in using Artificial Intelligence and Machine Learning to solve critical challenges, they must incorporate both offensive and defensive capabilities into the technology. It is also necessary to test any artificially intelligent product by pitting it against both humans and other machines.

Product development and solution design teams should consider moving away from a purely defensive security posture to one that is more offensive in nature. An artificially intelligent system will likely only get one chance to defend itself against an aggressive and more advanced adversary.
