A regulatory sandbox giving artificial intelligence (AI) developers a single, end-to-end space to build and test systems is needed to help scale the technology across the NHS, health bosses have said.
The heads of 12 healthcare regulators and organisations met on January 28 to discuss the role of AI in healthcare and how to proceed with its delivery.
NHSX chief executive, Matthew Gould, NHS Digital chief executive, Sarah Wilkinson, NHSX director of AI, Indra Joshi, National Data Guardian, Fiona Caldicott, and Care Quality Commission (CQC) chief digital officer, Mark Sutton, attended the meeting, alongside representatives from the Medicines and Healthcare products Regulatory Agency (MHRA), the National Institute for Health and Care Excellence (NICE), and the Centre for Data Ethics and Innovation.
They agreed greater clarity of each organisation’s role in regulating AI was needed, alongside a joined-up approach to regulation to create a single point of contact for developers.
Writing in a blog post, Gould said there are two risks to the UK's ambition to be a world leader in AI: first, that unsafe AI will be used; second, that the opportunity to use AI will be wasted or delayed "as both clinicians and innovators hold back until they know there is a regulatory framework that gives them cover".
“Smart regulation could really help make the UK the best place in the world to develop AI in health. The benefits will be huge if we can find the sweet spot, where we maintain the trust that AI is being used properly and safely, while creating a space in which compliant innovation can flourish,” Gould wrote.
“We aren’t there yet. There are multiple regulators involved, creating a bewildering array of bodies for innovators to navigate and creating confusion for organisations in the NHS and social care who want to make the most of these innovations.”
Agreements from the meeting:
- clarity of role: the MHRA is responsible for regulating the safety of AI systems; the Health Research Authority for overseeing the research that generates evidence; NICE for assessing their value to determine whether they should be deployed; and the CQC for ensuring that providers follow best practice in using AI; with other bodies playing important roles on particular angles
- a joined-up approach, in which innovators do not have to navigate between lots of different bodies and sets of rules. NHSX will aim to set up a single platform, bringing all the regulatory strands together to create a single point of contact, advice and engagement
- a joined-up regulatory sandbox for AI, which brings together all the sandbox initiatives in different regulators and gives innovators a single, end-to-end safe space to develop and test their AI systems
- sufficient capability to assess AI systems at the scale and pace required, either in-house in the relevant regulators, particularly the MHRA, or through designated organisations working to clear standards set by those regulators
- quick progress on working out how to handle machine learning, with NHSX leading on policy before passing it to regulators to implement
- better communication with clinicians, innovators and – crucially – the public, so people with views, expertise and concerns can feed them in rather than feeling that a secret process is being done to them
Gould said that regulation for machine learning still needed to be ironed out, and that a clear path for innovators to gain regulatory approval for their AI also needed to be established.
The £250m National AI Lab, announced by the government in August 2019, will “have regulation as one of its core streams of activity”, he added.
“It’s a huge agenda. But it really matters, and we need to move this all forward at pace,” Gould wrote.
“The prize – if we can get this right – is making the UK a world leader in AI for health, giving the NHS the benefits of this new technology safely, reducing the burden on its staff and improving outcomes for patients.”