Medical & Healthcare AI Assessment
ML in Health Science
Are all clinical decisions generated by your healthcare AI system confirmed by a qualified healthcare professional?
Every AI-generated clinical decision is reviewed and confirmed by a qualified healthcare professional before it is applied to a patient.
A qualified healthcare professional regularly reviews the AI application's clinical outputs (for example, weekly or monthly) as part of routine quality assurance.
Clinical AI decisions are applied without review by a qualified healthcare professional.
Are you and the team members responsible for developing your healthcare AI system clearly identified and easy to contact?
Core team members are clearly identifiable (names, roles, affiliations, professional profiles), and contact information is available and verifiable through real accounts (for example, institutional accounts, World ID, verified social networks, or verified scientific profiles).
Information about the team is missing, generic, or cannot be reliably attributed to real persons.
How transparent are your healthcare AI model’s architecture and training data?
Information about the model and its training data is treated as confidential and is not available to external reviewers.
The model and/or its training data are publicly available (for example, in an open repository or published dataset), and technical documentation is accessible to external reviewers.
The model and/or its training data are not public, but can be made available to qualified external reviewers (for example, regulators or auditors) on request under appropriate agreements.
Has your healthcare AI system been independently validated before its application in clinical settings?
The system has been validated only internally, by its own development team.
The model has undergone independent testing (for example, blinded expert comparison) using a clearly documented validation protocol.
The system was evaluated only on its own training data.
Are the patients or clients who receive your AI-supported services clearly informed that AI is used in their diagnosis or treatment?
Users are not informed that AI is used in diagnostics, communication, or decision support.
AI usage is mentioned in general terms and conditions.
Before AI-supported decisions are applied, users receive clear information and provide documented consent (for example, signed or logged confirmation).
Does your AI system have a public feedback page or active social media account that is not moderated solely by the developer team?
AI users can provide feedback via a general email address or contact form, but submissions are not visible to other users.
A dedicated, transparent feedback channel exists (for example, a public page or independent social account) that supports free-text comments and anonymous reports and is not moderated solely by the AI developer team.
No clear mechanism exists for users to submit feedback or complaints about the AI system.
Are healthcare professionals actively involved in the design, validation, and ongoing monitoring of your healthcare AI system?
Healthcare professionals are core team members with clearly documented roles in design, validation, and ongoing monitoring.
Relevant healthcare professionals were consulted or involved in training, validation, or testing.
There is no documented involvement of healthcare professionals in the AI project.
This is an indicative score only and does not replace a formal AI Act or regulatory compliance review.
v1.1, 28.11.2025