Each partner institution of the Swiss Learning Health System (SLHS) is working on a specific topic that will lead to policy briefs and stakeholder dialogues.
Policy Brief on Quality & Risk Management of Ethical AI Use in Human Health Research
Evaluating and reporting on the ethical use of AI in health research is complex due to several factors:
- Multiple principles underlying ethical AI
- Interdisciplinary stakeholders
- Characteristics of the AI system being used
- A variety of ethical guidelines and frameworks
Options for Action:
- Embed ethical AI considerations in the research project life cycle. Integrate ethical assessments throughout the entire process, from initial idea generation to design, development, evaluation, and clinical translation. This involves forming a project advisory committee that includes clinicians, AI developers, patients, ethicists, and other stakeholders to support shared decision-making and accountability. Design a specific ethical AI assessment and reporting strategy tailored to the project's unique challenges, and select validated assessment and reporting frameworks whenever possible. Publish information about this strategy to benefit the wider scientific community.
- Professionalize the health-AI project portfolio in hosting organizations. Organizations involved in health-AI development should maintain a portfolio of their projects, using stage-based evaluations aligned with each project's life cycle. Building "ethics as a service" capacities within organizations can provide expertise, optimize resource use, and ensure compliance. Risk and quality management offices should expand their oversight to include ethical AI assessments and reporting.
- Fund responsible and transparent health-AI innovation. Funding bodies and scientific publishers should encourage and incentivize projects that demonstrate rigorous ethical assessments of AI from the outset, and enforce transparency and reproducibility requirements. Support for evaluation science, operationalization efforts, and data and code sharing is crucial. Private funders also have a responsibility to incentivize attention to societal responsibilities in the AI products they finance.
- Provide guidance on regulation applicability and agile regulatory processes. Establish clear oversight and accountability mechanisms for AI performance. Regulators need to increase agility in their processes by reinforcing regulatory science, building capacity, involving stakeholders, encouraging self-regulation, proactively screening new technologies, and initiating public-private partnerships for "AI assurance laboratories".
AI-based science should adhere to rigorous scientific principles despite its complex challenges of explainability, reproducibility, governance, and ethical implications. Research stakeholders have a responsibility to proactively address these challenges by establishing quality and applicability assessments for AI systems and by sharing responsibility for their validation.
Author: Eliane Maalouf
Institution: Université de Neuchâtel
Email: eliane.maalouf@unine.ch
If you are interested in the topic, please contact us.