News

How criminals use artificial intelligence to fuel cyber-attacks

Published 01 - September - 2021
Read the full article on Teiss

Our culture is rich with dystopian visions of human ruin at the feet of all-knowing machines, and as artificial intelligence (AI) breaks into the mainstream, there is a great deal of misinformation and confusion about what it’s capable of and the potential risks it poses. But its potential to deliver improvements and insights is huge.

Computer systems that can learn, reason, and act independently are still in their infancy. Machine learning requires huge data sets, and many real-world systems, such as driverless cars, demand a complex blend of computer vision sensors, real-time decision-making software and robotics. For businesses adopting AI, deployment is simpler, but giving it access to information and allowing it any measure of autonomy brings serious risks that must be considered.

What risks does AI pose?

Accidental bias is quite common in AI systems: it can be introduced by programmers or entrenched by the data sets used for training. Unfortunately, if this bias leads to poor decisions and potentially even discrimination, legal consequences and reputational damage may follow. Flawed AI design can also lead to overfitting or underfitting, whereby AI makes decisions that are too specific or too general.
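As a rough illustration of that last point (a minimal sketch using synthetic data, not any system discussed in the article), training decision trees of different depths on noisy data shows both failure modes: the shallow tree is too general to capture the rule, while the unrestricted tree memorises the noise and scores far better on its own training data than on unseen data.

```python
# Illustrative sketch with invented data: under- vs over-fitting.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # simple underlying rule
noise = rng.random(1000) < 0.1                   # plus 10% label noise
y[noise] = 1 - y[noise]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (1, 3, None):  # too general, balanced, too specific
    model = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(f"max_depth={depth}: train={model.score(X_train, y_train):.2f}, "
          f"test={model.score(X_test, y_test):.2f}")
```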

Both these risks can be mitigated by establishing human oversight, by stringently testing AI systems during the design phase, and by closely monitoring those systems once they are operational. Decision-making capabilities must be measured and assessed to ensure that any emerging bias or questionable decision-making is addressed swiftly.

These threats are based on unintentional errors and failures in design and implementation, but a different set of risks emerges when people deliberately try to subvert AI systems or wield them as weapons.

How attackers can manipulate AI and how organisations can defend against it

Poisoning an AI system can be alarmingly easy. Attackers can manipulate the data sets used to train AI, making subtle changes to parameters or crafting scenarios that are carefully designed to avoid raising suspicion, but gradually steer AI in the desired direction. Where attackers lack access to the data sets, they may employ evasion, tampering with inputs to force mistakes. By modifying input data to make proper identification difficult, AI systems can be manipulated into misclassification.
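As a loose illustration of poisoning (a sketch using synthetic data and a simple classifier, not any system from the article), quietly flipping a small fraction of training labels is enough to measurably degrade a model that otherwise performs well.

```python
# Illustrative sketch with invented data: label-flipping "poisoning"
# degrades a simple classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Attacker quietly flips 15% of the training labels
rng = np.random.default_rng(1)
flip = rng.choice(len(y_train), size=int(0.15 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean model accuracy:   ", round(clean.score(X_test, y_test), 3))
print("poisoned model accuracy:", round(poisoned.score(X_test, y_test), 3))
```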

Checking the accuracy of data and inputs may prove impossible, but every effort should be made to harvest data from reputable sources. Try to bake in identification of anomalies, provide adversarial examples to help AI recognise malicious inputs, and isolate AI systems with safeguard mechanisms that make them easy to shut down if things start to go wrong.
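One simple form such a safeguard could take (a hypothetical sketch, with invented data and thresholds) is an input guard that refuses to act automatically on inputs falling far outside the distribution of the training data, escalating them to a human instead.

```python
# Illustrative sketch: flag inputs that look nothing like the training data
# before the model is allowed to act on them.
import numpy as np

def fit_input_guard(X_train, z_threshold=4.0):
    """Record per-feature statistics so unusual inputs can be flagged later."""
    mean = X_train.mean(axis=0)
    std = X_train.std(axis=0) + 1e-9
    def is_suspicious(x):
        z = np.abs((x - mean) / std)
        return bool((z > z_threshold).any())
    return is_suspicious

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))
guard = fit_input_guard(X_train)

normal_input = rng.normal(size=4)
tampered_input = normal_input.copy()
tampered_input[2] = 25.0  # implausible value an attacker might inject

print("normal flagged:  ", guard(normal_input))
print("tampered flagged:", guard(tampered_input))
```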

A tougher issue to tackle is inference, whereby attackers try to reverse-engineer AI systems so they can work out what data was used to train them. This may give them access to sensitive data, it may pave the way for poisoning, or it could enable them to replicate an AI system for themselves.
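A toy sketch of that idea (hypothetical models and data, not taken from the article): an attacker with nothing but query access labels random probes with the victim model's answers and trains a surrogate that closely mimics its behaviour.

```python
# Illustrative sketch: "stealing" a model by querying it and training a copy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X, y)  # attacker never sees X, y

# Attacker sends probe queries and records the victim's answers
rng = np.random.default_rng(2)
probes = rng.normal(size=(5000, 8))
stolen_labels = victim.predict(probes)

surrogate = LogisticRegression(max_iter=1000).fit(probes, stolen_labels)

agreement = (surrogate.predict(X) == victim.predict(X)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of real inputs")
```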

How AI may be weaponised

Cyber-criminals can also employ AI to assist with the scale and effectiveness of their social engineering attacks. AI can learn to spot patterns in behaviour, understanding how to convince people that a video, phone call or email is legitimate, and then persuading them to compromise networks and hand over sensitive data. All the social techniques cyber-criminals currently employ could be improved immeasurably with the help of AI.

There’s also scope to use AI to identify fresh vulnerabilities in networks, devices and applications as they emerge. By rapidly identifying opportunities for human hackers, the job of keeping information secure is made much tougher. Real-time monitoring of all access and activity on networks, coupled with swift patching, is vital to combat these threats. The best policy in these cases may be to fight fire with fire.

Using AI to boost company security

AI can be highly effective in network monitoring and analytics, establishing a baseline of normal behaviour and immediately flagging discrepancies in areas such as server access and data traffic. Detecting intrusions early gives you the best chance of limiting the damage they can do. While it may initially be best to have AI systems flag abnormalities and alert IT departments so they can investigate, as AI learns and improves it may be given the authority to nullify threats itself and block intrusions in real time.
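As an illustration of the baseline idea (invented traffic features and values, not drawn from the article), an off-the-shelf anomaly detector can be fitted on a sample of normal activity and then used to flag connections that deviate from it.

```python
# Illustrative sketch: learn a baseline of "normal" connections, flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Invented features per connection: bytes sent, bytes received, duration, failed logins
normal_traffic = np.column_stack([
    rng.normal(2e4, 5e3, 5000),
    rng.normal(8e4, 2e4, 5000),
    rng.normal(30, 10, 5000),
    rng.poisson(0.1, 5000),
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A burst of outbound data with repeated failed logins should stand out
suspicious = np.array([[5e5, 1e3, 600, 12]])
typical = normal_traffic[:1]
print("typical connection:   ", detector.predict(typical))     # 1 = normal
print("suspicious connection:", detector.predict(suspicious))  # -1 = anomaly
```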

Just as AI can model normal behaviour and learn about how users interact with systems, learn to recognise vulnerabilities and malware, and begin to understand what constitutes an emerging threat, it can also learn when alerts are effective. As the data set grows and it receives more feedback on its decision-making, so it gains more experience and gets better at the task of defending your network.
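A minimal sketch of that feedback loop (hypothetical alert data and labels, not from the article): each review cycle, analyst verdicts on the alerts become new labelled examples that incrementally update the model, so its accuracy on fresh alerts improves as feedback accumulates.

```python
# Illustrative sketch: folding analyst verdicts back into an alerting model.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)

def alerts_this_week(n=200):
    """Stand-in for the week's alert features and analyst verdicts."""
    X = rng.normal(size=(n, 6))
    y = (X[:, 0] - X[:, 3] > 0).astype(int)  # 1 = genuine incident, 0 = false alarm
    return X, y

for week in range(4):
    X, y = alerts_this_week()
    if week > 0:  # how well last week's model handles this week's alerts
        print(f"week {week + 1}: accuracy before new feedback = {model.score(X, y):.2f}")
    model.partial_fit(X, y, classes=[0, 1])  # analyst verdicts become training data
```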

With a major skills shortage in information security, any AI system that can shoulder some of the burden and enable limited staff to focus on complex problems will be of benefit. As companies look to reduce costs, AI is fast becoming more attractive as a replacement for people. It will bring benefits and it will improve with experience, but forward-thinking companies must plan to mitigate the potential risks now.


Steve Durbin, ISF CEO
