main-article-of-news-banner.png

AI must have better security, says top cyber official

Source: bbc.com

 

Lindy Cameron from the National Cyber Security Centre said it was key to have robust systems in place in the early stages of AI development.

As companies rush to develop new AI products, there are fears that security it is being overlooked.

As a result, malicious attacks could have a "devastating" effect, a former intelligence chief added.

In the future, AI will play a part in many aspects of daily life in our homes and cities through to high-end national security and even fighting wars.

But for all the benefits, there are also risks.

"As we become dependent on AI for really good things like delivery of food, autonomous vehicles, utilities or all sorts of things that AI will help to control in the future, attacks on those systems could be devastating," says Robert Hannigan, who used to run the UK's communication intelligence agency GCHQ.

The concern is that companies - competing to secure their position in a growing market - will focus on getting their systems out for sale as fast as possible without thinking about the risks of misuse.

A head shot of Lindy Cameron with a city skyline background
Lindy Cameron is the National Cyber Security Centre chief (IMAGE SOURCE,NCSC / Credits image: bbc.com)

"The scale and complexity of these models is such that if we don't apply the right basic principles as they are being developed in the early stages it will be much more difficult to retrofit security," says Lindy Cameron, CEO of the NCSC, which supports UK organisations with cyber security and responds to incidents.

 

'A cat and mouse game'

AI systems can be used as tools by those seeking to do harm.

For instance, coming up with malicious code to hack into devices, or writing fake messages to be spread on social media.

What is particularly dangerous is the systems themselves can also be subverted by those seeking to do harm.

For many years, a small group of experts has specialised in a field called 'adversarial machine learning', which looks at how AI and machine learning systems can be tricked into giving bad results.

"The systems are very brittle unfortunately", explains Lorenzo Cavallaro, a professor of computer science at University College London, "it is always a cat and mouse game".

A shot inside a car with a screen showing the computer's view of the car, and a driver whose hands are off the steering wheel
Could how self-driving cars see signs be changed by AI? (IMAGE SOURCE,PA MEDIA / Credits image: bbc.com)

Take for example AI trained to recognise images. Researchers ran a test by placing stickers on a 'stop' road sign, which made the AI think it was a speed limit sign - something with potentially serious consequences for self-driving cars.

Another field involves 'poisoning' the data which the AI is learning from.

Results generated by AI can be biased because of data sets that are not representative of the real world. But poisoning means deliberately creating bias by injecting bad data into the learning process.

"It is hard to spot," says Professor Cavallaro. "You can only identify it in retrospect with forensic analysis."

A major problem for AI systems is they can be hard to understand. The risk is that even if someone simply fears their model might have been poisoned by bad data, then it becomes harder to trust it.

"It is a fundamental challenge for AI right across the board as to how far we can trust it," says former GCHQ head Robert Hannigan.

Robert Hannigan
As systems increasingly use AI, attacks could be devastating, says Robert Hannigan  (IMAGE SOURCE,GCHQ / Credits image: bbc.com)

 

'Is this a real thing?'

The dangers are not just of hackers seeking to cause disruption, but to wider national security.

If AI was used to analyse satellite imagery looking for a military build-up, then a malicious attacker could work out how to either miss the real tanks or see an array of fake tanks.

These concerns were previously theoretical, but signs are now emerging of real-world attacks on systems, according to Andrew Lohn, a senior fellow at Georgetown's Center for Security and Emerging Technology.

"For a while all the academics like me were asking the people in industry: 'is this a real thing?' and they would just give us winks and nods. But there's just now starting to be an acknowledgment that this is a real thing happening in industry."

It seems to be happening first where AI is used to improve cyber security by detecting attacks. Here adversaries are seeking ways to subvert those systems so their malicious software can move undetected.

A new article co-authored by GCHQ's chief data scientist looks at how large language models (LLMs) like ChatGPT could also give rise to new and unanticipated security risks.

It says there are 'serious concerns' around individuals providing sensitive information when they input questions into models, as well as over 'prompt hacking' in which models are tricked into providing bad results.

Officials believe it's vital to learn lessons from the early days of internet security. For example, by making sure those writing the software and building products take responsibility for security.

"I don't want consumers to have to worry," says Lindy Cameron of the NCSC. "But I do want the producers of these systems to be thinking about it."

LeackStat 2023