The recent achievements of artificial intelligence (AI) open up opportunities for new tools to assist medical diagnosis and care delivery. However, AI is best developed through repeated cycles of learning and implementation, a process that may pose challenges to our existing system of regulating medical devices. Product developers face a tension between the benefits of continuously improving and redeploying their algorithms and the need to keep products unchanged while evidence is collected for safety assurance processes. The challenge is how to balance the potential benefits of continuous improvement against the need to assure safety. Governance and assurance requirements that can accommodate live or near-live machine learning (ML) will soon be needed, as this approach is likely to become highly important in healthcare and in other fields of application. We have entered a phase of regulatory experimentation, with various novel approaches emerging around the world. The process of social learning concerns not only the application of AI but also the institutional arrangements for its safe and dependable deployment, including regulatory experimentation, likely within sandboxes. This paper reflects on the discussions at two recent Chatham House workshops on regulating AI in software as a medical device (SaMD), hosted by the UKRI/EPSRC ‘Trustworthy Autonomous Systems: Regulation and Governance’ node, with a special focus on recent regulatory efforts in the UK and internationally.