AI systems are liable to make mistakes and bad decisions, as a series of high-profile cases has shown – from sexist bias in recruitment tools to a Twitter chatbot that learned to be racist in the space of 24 hours.
Success will require a combination of human and artificial intelligence
Until a system has demonstrated maturity and trustworthiness, organisations are rightly unwilling to give it a high level of autonomy and responsive capability – whether it is deployed for information security or any other business function. Because AI systems can and do make bad decisions, organisations are likely always to require a human who can take control and press the off switch when necessary.
However, the desire to keep humans in the loop creates its own challenges. Placing too much emphasis on human oversight can reduce the effectiveness of the AI system, producing a deluge of notifications and alerts for analysts to review rather than allowing the AI to take automated responsive measures.
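One common way to operationalise this trade-off is a confidence threshold: the system acts autonomously only on high-confidence detections, escalates mid-confidence cases to a human, and merely logs the rest. The sketch below illustrates the idea; the function name, threshold values, and action labels are illustrative assumptions, not a reference to any particular product.

```python
def triage(confidence: float,
           auto_threshold: float = 0.95,
           alert_threshold: float = 0.60) -> str:
    """Decide how to handle a detection based on model confidence.

    Thresholds here are illustrative; in practice they would be
    tuned as trust in the system grows, gradually widening the
    band in which the AI is allowed to act on its own.
    """
    if confidence >= auto_threshold:
        return "auto_respond"    # e.g. isolate a host automatically
    if confidence >= alert_threshold:
        return "notify_human"    # escalate for analyst review
    return "log_only"            # record for later investigation


# Example: three detections with different confidence scores.
for score in (0.97, 0.72, 0.30):
    print(score, "->", triage(score))
```

Raising `auto_threshold` keeps more decisions with the human but increases alert volume; lowering it grants the AI more autonomy. Tuning that single parameter is, in effect, how an organisation expresses its current level of trust in the system.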
Security practitioners need to balance the need for human oversight with the confidence to allow AI-supported controls to act autonomously and effectively. Such confidence will take time to develop, just as it will take time for practitioners to learn how best to work with intelligent systems. Given time to develop and learn together, the combination of human and artificial intelligence should become a valuable component of an organisation’s cyber defences.