What a cool, new, artificially intelligent world! There’s ChatGPT to write your love letter, CV or paper. Loudly or Udio to compose your own music. DALL-E to produce “exceptionally accurate images” of your own creation. And one day, much-improved AI might even allow you to rewatch your favourite movie with you starring in the main role. Stefan becomes Ethan, James or Obi-Wan! Copyright issues aside, the future has plenty of ideas in store. And there are also AI and LLMs (large language models) “to remarkably increase the efficiency, sensitivity and modelling of CERN experiments”. Or to help with the stock market and portfolio management. But with every good idea comes the dark side: AI used for military applications*. Or for cyberattacks.
So, while AI – which comes with its own computer security problems – might make daily life easier, it will also open up a whole new series of attack vectors.
The obvious first vector is that AI is already being misused to create “better” phishing emails that are harder for our automated filters – and for you – to spot: their content makes more sense, their language is more accurate and the number of mistakes and typos is minimal. The cat-and-mouse game between (now AI-driven) attacks and our (surely AI-driven) spam and anti-malware filtering is entering a new boss level. Good AI vs bad AI.
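To picture the defending side of that game, here is a deliberately toy sketch of statistical mail filtering – a naive Bayes classifier in Python. The training phrases are invented for illustration, and this is emphatically not how CERN’s actual mail filtering works:

```python
import math
from collections import Counter

# Invented, toy training data – real filters learn from millions of messages.
SPAM = ["urgent verify your account now", "you won a prize click here"]
HAM = ["meeting moved to building 40", "draft paper attached for review"]

def word_counts(messages):
    """Count how often each word occurs across a list of messages."""
    counts = Counter()
    for message in messages:
        counts.update(message.lower().split())
    return counts

spam_counts, ham_counts = word_counts(SPAM), word_counts(HAM)
vocabulary = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts):
    """Sum of log P(word | class), with add-one smoothing for unseen words."""
    total = sum(counts.values()) + len(vocabulary)
    return sum(math.log((counts[word] + 1) / total)
               for word in message.lower().split())

def looks_like_spam(message):
    # Equal priors for both classes, so we just compare the likelihoods.
    return log_likelihood(message, spam_counts) > log_likelihood(message, ham_counts)

print(looks_like_spam("verify your prize account"))  # True
print(looks_like_spam("review the draft paper"))     # False
```

The attackers’ LLMs, in turn, are optimised to produce text that such word statistics no longer flag – which is exactly why the filters have to keep learning too.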
Secondly, researchers and security companies have already started to use AI and LLMs to automatically identify vulnerabilities in software and online systems. Helping friendly hackers and malicious attackers alike, AI can probe deep into software applications, identify a flaw and either notify everyone of the need to fix it or exploit that zero-day vulnerability and start compromising any system running that software. With well-trained AI, we can all improve our codebase (by the way, do you use our GitLab security scanners?) and avoid the daily blunders: bugs and typing mistakes that lead to misconfigurations, buffer overflows or remotely exploitable vulnerabilities. We can scan CERN’s internet presence for existing vulnerabilities. But so can the attackers. And they do, scanning CERN’s webservers and webservices ever more efficiently. AI attacks, the next level.

And this is only the beginning. As in the past with “worms” – malware that spreads automatically – malicious AI can start to evolve while scouting for new systems. It learns (that is, after all, the point of AI) and strikes harder on the second or third pass. Unfortunately, attacking AI, like any other attacker, has the advantage of time and the element of surprise. (AI) defence is no mean feat.
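To make the kind of everyday blunder those scanners catch concrete, here is a minimal, hypothetical Python sketch (the function names are invented): a shell-injection flaw of the “remotely exploitable” kind that a static-analysis scanner would flag, next to its safe variant:

```python
import subprocess

def ping_unsafe(host: str) -> None:
    # BAD: attacker-controlled input is interpolated into a shell command.
    # host = "cern.ch; cat /etc/passwd" would execute the extra command too.
    subprocess.run(f"ping -c 1 {host}", shell=True)

def ping_safe(host: str) -> None:
    # GOOD: an argument list bypasses the shell entirely, so any
    # metacharacters in host are treated as data, never as code.
    subprocess.run(["ping", "-c", "1", host])
```

Static-analysis tools spot the shell=True pattern automatically – and, increasingly, so do the attackers’ AI tools.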
The spiral of offence versus defence has just taken another turn. All we can do is remain vigilant and careful, and keep enforcing our defence-in-depth strategy.
*Remember Skynet, anyone?
________
Do you want to learn more about computer security incidents and issues at CERN? Follow our Monthly Report. For further information, questions or help, check our website or contact us at Computer.Security@cern.ch.