Singapore University of Technology and Design

08/07/2024 | News release | Archived content

‘Explainable AI’ could protect critical infrastructure from cyber-attacks

The UK is hit by the third-highest number of cyber-attacks in the world, after the US and Ukraine. Recent examples include huge disruption to NHS appointments after patient data was stolen in a ransomware attack, and campaigns against MPs and the Electoral Commission.

Artificial intelligence (AI) systems can make an important contribution to tackling those threats, flagging anomalies to system operators. But these systems are often opaque, leading to concerns about trust and accountability.

A team from the University of Bristol's School of Computer Science is tackling that challenge with a new prototype system designed to protect critical national infrastructure (CNI), which includes key engineering-related sectors such as nuclear, energy, health, space, transport and defence.

"All of these systems, over a period of time, are increasing automation - hence vulnerabilities and cyber-attacks are increasing, so our focus is to protect industrial control systems by detecting and mitigating anomalies caused by cyber-attacks," said research supervisor Dr Sridhar Adepu.

"We are working on securing these systems and trying to find what kind of attacks are possible, what the different attack vectors are. If there is an attacker exploiting any attack vulnerabilities, what can the attacker do?"

The prototype looks for anomalies caused by cyber-attacks using two detection algorithms, which the team said had "significantly" shorter training times and faster detection capabilities than other approaches, while maintaining comparable efficiency rates.
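
The article does not name the two algorithms, so the sketch below (in Python, with illustrative function names and thresholds that are assumptions rather than the team's method) only shows the general flavour of a lightweight detector: per-sensor statistics are learned from attack-free data, and any reading that strays too far from its learned band is flagged.

    import numpy as np

    def fit_normal_band(train_readings):
        """Learn a per-sensor mean and standard deviation from attack-free data.

        train_readings: array of shape (time_steps, n_sensors).
        """
        mean = train_readings.mean(axis=0)
        std = train_readings.std(axis=0) + 1e-9  # guard against zero variance
        return mean, std

    def flag_anomalies(readings, mean, std, k=3.0):
        """Flag any sensor whose reading sits more than k standard deviations
        from its learned mean. Returns a boolean array, one entry per sensor."""
        return np.abs(readings - mean) / std > k

    # Illustration: 1,000 normal time steps from five sensors, then one reading
    # in which an attacker has inflated sensor 2's value.
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=50.0, scale=2.0, size=(1000, 5))
    mean, std = fit_normal_band(normal)
    tampered = normal[-1].copy()
    tampered[2] += 20.0
    print(flag_anomalies(tampered, mean, std))  # sensor 2 should be flagged

Training in this kind of approach amounts to computing a handful of summary statistics per sensor, which is the sort of property that keeps training and detection fast.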

If an attacker compromises sensors or actuators and manipulates their values, the operators or engineers working on CNI might make decisions based on incorrect data. Targets could include building management systems, Dr Adepu suggested, or automated metro lines, where an attacker might tell a train to speed up instead of slowing down.

The algorithms were tested using a dataset from a water treatment testbed at the Singapore University of Technology and Design.

Previous systems often used rule-based detection mechanisms, but the researchers said this approach struggles with large installations and the vast range of readings a heavily automated system produces.

If you have 32 sensors, each with an on-off state, the total number of possible combined states is 2^32, around 4.3 billion, said Dr Sarad Venugopalan, co-author of the work. This grows exponentially, doubling with every sensor added.

"That's why rule-based and number detection is a bit hard - practically impossible," he said. "You might need to use AI-based probabilistic mechanisms to find anomalous systems in such cases."

Instead of an opaque system, the team built its prototype around explainable AI (XAI), so the operator running the system can understand exactly why there is an anomaly and which sensor or actuator caused it. This means they can focus on a specific location rather than an entire system, which might have thousands of sensors and actuators.

By having the AI system explain its recommendations - using textual messages, for example - the team hopes human operators can understand and verify them before making critical decisions. The system should work more like a decision-support tool than an "unquestioned oracle", they added.
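
The article does not give the exact wording of these messages, so the following is only an assumed illustration of the idea: instead of a bare alarm, the alert names the sensor that deviates most and quantifies the deviation, giving the operator something concrete to verify (the tag names and values below are hypothetical).

    def explain_anomaly(sensor_names, readings, expected, tolerance):
        """Build a human-readable alert naming the sensor that deviates most."""
        deviations = [abs(r - e) for r, e in zip(readings, expected)]
        worst = max(range(len(deviations)), key=lambda i: deviations[i])
        if deviations[worst] <= tolerance[worst]:
            return "No anomaly detected."
        return (f"Anomaly detected at {sensor_names[worst]}: reading "
                f"{readings[worst]} differs from the expected {expected[worst]} "
                f"by {deviations[worst]:.1f} (tolerance {tolerance[worst]}). "
                f"Inspect this component before acting on downstream readings.")

    # Hypothetical water-treatment style tags and values, for illustration only.
    print(explain_anomaly(
        sensor_names=["FIT101", "LIT101", "P102"],
        readings=[2.6, 950.0, 1.0],
        expected=[2.5, 820.0, 1.0],
        tolerance=[0.3, 50.0, 0.0],
    ))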

The effectiveness of various XAI models was evaluated, providing insights into which models best aid human understanding. A scoring system is being developed to measure the perceived correctness of the AI's explanations.

The research forms part of the MSc thesis of Mathuros Kornkamon. The work recently won the best paper award at the ACM Cyber-Physical System Security Workshop at the ACM ASIACCS 2024 conference.

The team plans to work with university innovation programmes, and hopes to deploy its system with customers in industry.

Content published by Professional Engineering does not necessarily represent the views of the Institution of Mechanical Engineers.