Is Artificial Intelligence (AI) the new nuclear weapon?

Mike Yeomans
Published 25 November 2020

Data is the new oil, homeworking the new norm, and social media the new news platform. Does that mean AI will be the new nuclear weapon?

An excellent piece from the Financial Times – “Is AI finally closing in on human intelligence?” [subscription required] – highlights the significant steps made by GPT-3 in natural language processing, with the system creating text that reads as grammatically correct and actually makes sense. GPT-3 has been trained on 45 terabytes of data (45 billion times the number of words a human will ever read) and is an impressive advance on Botnik’s “Harry Potter and the portrait of what looked like a large pile of ash”, which itself was an impressive (if amusing) work of grammatically accurate gibberish: https://botnik.org/content/harry-potter.html
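
GPT-3 itself is only available through OpenAI’s private API, but its freely released predecessor GPT-2 gives a feel for this style of text generation. Here is a minimal sketch using the open-source Hugging Face transformers library (the prompt is invented for illustration, not taken from the FT piece or Botnik):

```python
# pip install transformers torch
from transformers import pipeline

# Load the publicly released GPT-2 model, a smaller predecessor of GPT-3.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt by repeatedly predicting a likely next token.
result = generator(
    "Harry Potter stared at the portrait of what looked like",
    max_length=40,           # total length in tokens, prompt included
    num_return_sequences=1,  # generate a single continuation
)
print(result[0]["generated_text"])
```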

General purpose AI is some way off, but work towards it is underway

GPT-3 is produced by OpenAI, which counts Elon Musk, Reid Hoffman (co-founder of LinkedIn), and now Microsoft among its backers. Work is underway to create a sophisticated general purpose AI, one that could credibly be described as sentient, as opposed to a system that is highly skilled at just one or two tasks, such as the chess-playing “AI” Deep Blue. It is not there yet, but this is what OpenAI hopes to work towards.

Challenges around creating AI remain, with most systems still being glorified auto-complete tools: they accurately predict what the author wants to say based on statistical analysis of historical data, but they cannot think laterally or provide answers to questions outside a pre-programmed set. To put it another way, AIs know a lot but cannot think at all. Whatever an AI produces is a prediction based on the past (or what the AI thinks is the past), meaning its ever-growing number of decisions is pre-determined by those who decide what data it is trained on and, implicitly, what it believes to be true. This can cause security problems: poisoned training data can cause a system to exacerbate existing biases and end up spouting racist, xenophobic abuse. However, a more sinister spectre lies on the horizon.
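
To make the “auto-complete” point concrete, here is a toy sketch of the underlying idea: a bigram model that predicts the next word purely from counts of which words followed it in its training text. (GPT-3 uses a large neural network rather than raw counts, and the training text below is invented, but the principle is the same: the model predicts the next token from past data and has no answer for anything outside it.)

```python
from collections import Counter, defaultdict

# A toy "glorified auto-complete": count which word follows which in the text.
training_text = "the cat sat on the mat the cat chased the mouse and the cat ran away"

follow_counts = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    if word not in follow_counts:
        # No pre-programmed answer: the model cannot generalise beyond its data.
        return "<unknown>"
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("the"))      # -> "cat": the most frequent follower
print(predict_next("quantum"))  # -> "<unknown>": it cannot think laterally
```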

We’re already off and running in the AI arms race – but there’s no referee

Ever more automation is occurring, and life is on track to be governed by algorithms. In itself that is no bad thing – it makes our lives easier and more efficient. Without proper management, however, there is real cause for concern: only the super-rich (companies or nation states) can afford AI, tech trade wars are raging, and industry leaders such as the FAANGs can at times seem ungovernable.

For perhaps 60 years after the founding of the United Nations (UN), global norms and rules (particularly for peace and security) were predominantly set by the five permanent members of the UN Security Council, thanks to their nuclear arsenals. Only they (and in time a handful of others) could create and maintain these ultimate weapons, so they set the rules for how the world was run. That order held for decades, but computing and cyber have begun to change the picture.

Until very recently the US has reigned supreme in cyber, dictating the legal norms and frameworks. The US holds the best offensive cyber capabilities, developed the vast majority of modern computing languages, and creates the operating systems that nearly all computers run on (Windows, iOS, Android, and Linux; the last two are open source, but most of their key developers reside in the US, subject to US norms, culture, and jurisdiction).

However, other nations are now developing their cyber capabilities and prowess, accelerated by the trade battles of the past four years. Huawei is rolling out HarmonyOS, which in time could mean around 3 billion people using devices not based on US code, and it is hard to deny that the European Union’s General Data Protection Regulation (GDPR) has changed computing behaviours the world over. And this is just the beginning.

Cyberspace has connected nations in ways never envisaged at the UN’s inception in 1945. While in theory countries could literally cut cables and add entire national IP ranges to allow or deny lists, this seems unlikely to happen, so cross-border activity seems likely to continue (from commerce and communications to election tampering and industrial espionage). Those possessing the best tools will be able to do the most to shape the world, and some organisations around the world know this and are working hard to make sure they do not become the digital equivalent of a non-nuclear state.

This is where AI can be seen as a very dangerous weapon, both within and between countries. Algorithms already decide what information humans are allowed to consume (spam filters), what they should buy (adverts) and even whether they are deemed socially acceptable enough to travel in business class. The scope of automated decision-making will only increase as tech pervades more of our lives.
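
The spam filter is the everyday face of this kind of gatekeeping. Below is a minimal sketch of one classic approach, a naive Bayes classifier over word counts; the training messages are invented for illustration and this is nothing like a production filter.

```python
import math
from collections import Counter

# Invented training data: a handful of known spam and legitimate messages.
spam_examples = ["win free money now", "free prize claim now", "cheap money win"]
ham_examples = ["meeting agenda attached", "lunch next week", "project status update"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts, ham_counts = word_counts(spam_examples), word_counts(ham_examples)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocabulary = set(spam_counts) | set(ham_counts)

def spam_score(message: str) -> float:
    """Log-odds that a message is spam, with add-one smoothing."""
    score = 0.0
    for word in message.split():
        p_spam = (spam_counts[word] + 1) / (spam_total + len(vocabulary))
        p_ham = (ham_counts[word] + 1) / (ham_total + len(vocabulary))
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("claim your free prize"))   # positive -> filtered as spam
print(spam_score("agenda for the meeting"))  # negative -> delivered
```

The filter never “reads” the message; it simply weighs each word by how often it appeared in past spam versus past legitimate mail, which is exactly the prediction-from-past-data pattern described above.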

Will AI create unchecked power?

The wielding of power in this way isn’t new. Power has always resided in the barrel of a gun, with those able to control the flow of information and compel others being the ones in charge. Westphalian states raised armies, modern nation states hold nuclear arsenals, and even private corporations have set the rules for those they governed (just look at the City of London Corporation or the East India Company). What is perhaps new is that an organisation no longer necessarily needs soldiers with swords and rifles to be in charge. Automated decision-making and the interconnected world mean what happens online can translate into real actions in the physical world. As humanity becomes increasingly dependent upon machine learning and automation, it becomes more dependent upon those governing and programming these systems.

Does that mean humanity faces extinction by some sentient, Terminator-style AI? Probably not. Humanity did not get to the Moon by improving upon existing high-altitude aircraft, and the reality is that AI’s advances have begun to stagnate: a complete paradigm shift in thinking and development will likely be required to achieve a general purpose AI. For added comfort, just remember Douglas Adams’ quote: “Computer, if you don’t open… …this moment I shall… …reprogram you with a very large axe”. However, the technological advances represented by GPT-3 do mean a shift is occurring in favour of those programming AI machines, with major implications for the global socio-political order.

A tech arms race is underway and, given the trend towards “smart” everything, AI is very possibly the new intercontinental ballistic missile (ICBM): AI-enabled attacks are capable of wreaking havoc on the far side of the globe. While governments or private enterprises wielding such ultimate power is not new, there can be no doubt that AI and digitisation generally represent major socioeconomic and political disruption, with distributions of power liable to shift from the status quo of the past 70 years. This disruption will create winners and losers, and while AI offers opportunities to benefit humanity, without careful regulation by governments, businesses and individuals it may prove more harmful than good, as wealth further stratifies and power concentrates into fewer, less accountable hands. This outlook, while perhaps bleak, is not guaranteed. Nuclear weapons risk global apocalypse but have helped to prevent horrific wars between countries; AI could benefit humanity and improve life as much as it could harm it. The question is, what will you and your organisation do to make sure the changes this disruption brings work in the world’s favour?

For further reading and help looking at AI in your organisation, download the ISF briefing paper Demystifying Artificial Intelligence in Information Security