As part of our work to support the nuclear industry in embracing innovation, we ran an expert panel meeting on the use of artificial intelligence (AI) in the nuclear sector along with the Advanced Nuclear Skills and Innovation Campus (ANSIC). The aim of the expert panel was to establish a roadmap for effective and enabling regulation of AI in the nuclear sector.
The panel discussed the opportunities and challenges in the application of AI and recognised that these are likely to be shared with other industrial sectors.
The panel identified three broad opportunities for the deployment of AI:
The principles of good software development processes and configuration control apply equally to AI system development. The panel recognised the importance of good systems of work, of high-quality data (covering both success and failure cases) to train AI systems, and of robust cyber security arrangements.
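Configuration control for an AI system extends to the data it was trained on. A minimal sketch of the idea, with all names and record fields illustrative rather than drawn from any particular tool, is to fingerprint the training dataset and store that fingerprint alongside the model and hyperparameters, so that any silent change to the data is detectable:

```python
import hashlib
import json

def dataset_fingerprint(records):
    """Hash a training dataset so any later change to it is detectable."""
    h = hashlib.sha256()
    for rec in records:
        # Canonical JSON (sorted keys) so the hash depends only on content.
        h.update(json.dumps(rec, sort_keys=True).encode("utf-8"))
    return h.hexdigest()

def training_record(model_name, dataset, hyperparams):
    """Bundle what is needed to reproduce and audit a training run."""
    return {
        "model": model_name,
        "dataset_sha256": dataset_fingerprint(dataset),
        "hyperparams": hyperparams,
    }

# Illustrative data mixing success and failure cases, as the panel noted:
data = [
    {"sensor": "temp", "value": 301.2, "label": "normal"},
    {"sensor": "temp", "value": 355.9, "label": "anomalous"},
]
rec = training_record("anomaly-detector-v1", data, {"lr": 0.01, "epochs": 20})
```

Re-running the fingerprint on the archived dataset and comparing it with the stored record gives a cheap integrity check before the model is retrained or redeployed.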
Standards are being developed, but they tend to lag behind the technology. There is a need to understand how the performance of AI systems can be proven, and there are opportunities to transfer knowledge from other sectors into nuclear.
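One way to make "proving performance" concrete is statistical: a finite test campaign can only support a bounded claim about the true failure rate. A sketch of that reasoning, using the exact one-sided bound for a failure-free campaign and a conservative Hoeffding bound otherwise (the function name and interface are illustrative):

```python
import math

def upper_bound_failure_rate(n_tests, n_failures, confidence=0.95):
    """One-sided upper bound on the true failure probability,
    given n_failures observed in n_tests independent trials."""
    alpha = 1.0 - confidence
    if n_failures == 0:
        # Exact (Clopper-Pearson) bound: solve (1 - p)^n = alpha for p.
        return 1.0 - alpha ** (1.0 / n_tests)
    # Conservative Hoeffding bound: observed rate plus a margin
    # of sqrt(ln(1/alpha) / (2 n)).
    return n_failures / n_tests + math.sqrt(
        math.log(1.0 / alpha) / (2.0 * n_tests)
    )

# Even 1,000 failure-free tests only support a claim of roughly
# a 0.3% failure rate at 95% confidence:
bound = upper_bound_failure_rate(1000, 0)
```

This is why the panel's point matters: demonstrating the very low failure rates expected in nuclear applications by testing alone would require an impractically large number of trials, so evidence must come from elsewhere too.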
Testing of AI systems can only demonstrate so much, given the complexity of the systems and the challenge of devising a meaningful and complete set of tests. A phased approach to deployment is important to build confidence and experience. Understanding of the consequences of maloperation, and identification of effective controls, can be improved by:
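The first phase of such a phased deployment is often "shadow mode": the AI's advice is recorded and compared with the human decision, but the human decision is always the one acted on. A minimal sketch of that pattern (function and field names are illustrative):

```python
def shadow_mode_step(ai_advice, human_decision, log):
    """Phase 1 of deployment: record the AI's advice alongside the
    human decision, but always act on the human decision. Agreement
    statistics build confidence before any authority is transferred."""
    log.append({
        "ai": ai_advice,
        "human": human_decision,
        "agreed": ai_advice == human_decision,
    })
    return human_decision  # The AI output never controls the plant here.

log = []
acted = shadow_mode_step("reduce_power", "reduce_power", log)
acted = shadow_mode_step("hold", "reduce_power", log)

# Agreement rate over the logged decisions:
agreement = sum(entry["agreed"] for entry in log) / len(log)
```

Only once the logged agreement rate (and analysis of the disagreements) supports it would a later phase give the AI any advisory or operational role.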
Defining and recognising failure of AI systems is crucial. There is an important balance to strike between constraining the application of AI so tightly that its usefulness is limited, and recognising when failure can be tolerated and when it cannot. Concepts already used in nuclear are applicable to AI, such as:
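One nuclear concept that translates directly is the validated operating envelope: an AI output outside the envelope established during validation is treated as a defined failure rather than silently acted on. A minimal sketch, with the model, bounds, and return structure all illustrative:

```python
def guarded_prediction(predict, inputs, envelope):
    """Wrap an AI component so that any output outside its validated
    operating envelope is a defined, detectable failure."""
    value = predict(inputs)
    low, high = envelope
    if not (low <= value <= high):
        return {
            "status": "failed",
            "value": None,
            "reason": f"output {value} outside validated envelope",
        }
    return {"status": "ok", "value": value}

# A toy stand-in for a real AI component:
model = lambda x: 2.0 * x

ok = guarded_prediction(model, 10.0, (0.0, 50.0))     # within envelope
fail = guarded_prediction(model, 100.0, (0.0, 50.0))  # a defined failure
```

Guards like this make the trade-off explicit: a narrow envelope limits usefulness, while a wide one tolerates more of the model's behaviour, so the bounds themselves become a safety-case decision.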
The sector needs access to, or ways to develop, skills and experience in the application of AI in nuclear.
We asked the panel to identify potential applications of AI that could be challenging to regulate. These will then be put into the Regulatory Sandbox we are developing. Sandboxing enables innovators to test and trial new solutions in a safe environment without the pressures of the usual rules applying. This will allow the industry and regulators to develop appropriate confidence in AI, understand ways of regulating AI and gather industry views on what an appropriate safety case could look like.