Policy Reviews

Examining policies around privacy, data governance and usage for whether they are explainable and specific about outcomes

Policies around data privacy and data governance often have a gap between declared intent and implementation. Sometimes this is because the teams drafting these texts are not included in the product/service development cycle. The unintended consequences and the end-consumer experience often emerge from this gap.

This project examines the texts of policies to identify data governance practices. As an outcome, it produces commentary, opinion and feedback to improve the texts themselves and to make them easier for readers to comprehend.

Sankarshan Mukhopadhyay (@sankarshan)

A short review of Ethical Guidelines for application of AI in healthcare published by ICMR

Submitted Jul 13, 2023

Earlier this year ICMR released the “Ethical guidelines for application of Artificial Intelligence in Biomedical Research and Healthcare 2023”. The document is available for download from this link.

The purpose of these guidelines is to ensure ethical conduct and address emerging ethical challenges in the field of Artificial Intelligence (AI) in biomedical research and healthcare. These guidelines are also meant to provide a framework for ethical decision-making in medical AI during the development, deployment, and adoption of AI-based solutions.

If you are looking for a TL;DR version, this post at Medianama is a good place to start.

For reasons which are easily understood, this document is a “guideline” - it is not a standard or a set of requirements that can be used to evaluate and assess the range of AI-based solutions, products and services becoming publicly available. The challenge posed by these consumer-grade products and services is that they often characterize the “AI” part as the key differentiator. In this context, the consumer, patient (or data subject) needs a good grasp of the implications of an AI-based approach.

In the first section the document enumerates a set of “Ethical Principles for AI in Healthcare”. Alluding to the need for responsible AI, it lists the ethical principles as Autonomy, Safety and Risk Minimization, Trustworthiness, Data Privacy, Accountability and Liability, Optimization of Data Quality, Accessibility, Equity and Inclusiveness, Collaboration, Non-Discrimination and Fairness Principles, and Validity. Structurally, these are important and necessary ideas to consider in any discussion of responsible AI, especially when AI co-mingles with healthcare. The shortcomings in the current set of guidelines are that (a) it chooses not to clearly define all of these terms, and (b) it mixes principles that protect the human and human rights with those that are likely to be part of the operational governance of a system.

The absence of definitions could be attributed to a perception that the terms are commonly understood and thus can be taken to mean exactly what they mean in everyday conversation. The mixing of scopes makes it somewhat challenging to grasp the impact of these principles in safeguarding the human (patient) from harm. As an example, narrowing the scope of privacy to “Data Privacy” leaves open the possibility that interpretations during the design phase will be less wide-ranging than intended.

If the introduction and integration of AI in healthcare is expected to be assistive rather than inferencing, then it is also critical to highlight the importance of identifying and mitigating risks. The management of risks through well-established governance mechanisms is key to elevated trust in such systems. The discourse around the sectoral integration of AI often begins with transparency - a capability that is needed for expert analysis of logical flows as well as of the datasets used to train the system. This brings up the most important aspect: should ethical guidelines for AI in healthcare also include guidelines for robust data governance across the entire data pipeline? Including them would be a pragmatic and holistic approach.

The guidelines are well-intentioned and timely - but they also carry a sense of “a camel is a horse designed by committee”. The document attempts to do too much, covers everything superficially, and in the end provides evaluation criteria that are left to implementors to comprehend and quantify. Today there is a need for strong and opinionated evaluation mechanisms that incubate innovation which is human-centric and built around patient-centricity. A loose structure that perhaps depends too much on self-regulation is not what ethical AI guidelines for healthcare and biomedical research should be.

