AI-driven health care is a bold idea, but caution must come first

India cannot jump into AI-driven health care without first addressing the foundational issues within its health system

Updated - September 13, 2024 12:34 pm IST

‘The stakes are serious in human health, where the consequences of a mistake can be life-threatening’


News about the possibility of a “free AI powered primary-care physician for every Indian, available 24/7” within the next five years is ambitious. It raises critical questions about feasibility, sustainability, and the readiness of India to tackle such enormous undertakings.

Primary health care (PHC) ensures the right to the highest attainable level of health by bringing integrated services closer to communities. It addresses health needs, tackles broader health determinants through multisectoral action, and empowers individuals to manage their health. Relying on Artificial Intelligence (AI) risks undermining this fundamental aspect of PHC: AI is impersonal, and it can reduce people to passive recipients of care rather than active participants in it.

AI excels in processing and automating repetitive tasks but lacks characteristics of human intelligence such as understanding the physical world, retrieving complex information, maintaining persistent memory, and engaging in reasoning and planning. These are all fundamental to medicine, where understanding the nuances of a patient’s condition goes beyond pattern recognition.

Delivering health care demands a human-centric approach of empathy and cultural understanding. Consciousness — the awareness and understanding of the real-world environment — underpins human decision-making, distinguishing human intelligence from AI. AI cannot replicate the moral and ethical reasoning that comes from conscious experience. And unlike data in many other domains, health-care data is scattered, incomplete, and often inaccessible for AI training, making reliable models difficult to build.

Data, models and issues

Naegele’s rule from obstetrics, in use for over 200 years, illustrates the challenges in health care. The rule predicts the expected date of delivery during pregnancy, relying solely on the date of the last menstrual period, and it is based on the 18th-century reproductive habits of European women, which may not be applicable today. Its accuracy is about 4% — only a small fraction of births occur on the predicted date — because it fails to account for critical factors such as maternal age, parity, nutrition, height, race, and uterus type, which are essential for accurate prediction. Developing a better predictive model than Naegele’s rule requires vast amounts of personal data, which rightfully belong to patients. This illustrates the inherent paradox in AI development in health care — the need for extensive data collection to improve accuracy is at odds with privacy and ethical concerns.
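The rule itself is simple enough to state as a single line of date arithmetic, which is part of why it has survived for two centuries. A minimal sketch, using the common "add 280 days (40 weeks) to the last menstrual period" formulation (the function name `naegele_edd` is ours, for illustration only):

```python
from datetime import date, timedelta

def naegele_edd(lmp: date) -> date:
    """Estimated date of delivery per Naegele's rule: LMP + 280 days (40 weeks).

    Ignores maternal age, parity, nutrition, and cycle-length variation --
    precisely the factors the article notes the rule fails to account for.
    """
    return lmp + timedelta(days=280)

# Example: a last menstrual period on 1 January 2024
print(naegele_edd(date(2024, 1, 1)))  # 2024-10-07
```

The contrast is the point: the rule needs no patient data beyond one date, whereas any model that improves on it must ingest exactly the personal and behavioural data that raise the privacy concerns discussed above.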

The costs involved in establishing infrastructure to capture, collect, and train this data are substantial. As reproductive health and fertility rates change over time, constant fine-tuning of AI models is necessary, leading to recurring expenses. Health-care data is complex and personal, making it difficult to standardise it across populations.

India’s diversity complicates the issue further. This diversity means that data for AI models must be extensive and deeply contextualised, but generating such data requires access to personal and behavioural information.

AI’s utility in health care

AI can play a crucial role in specific, well-defined tasks within health care, particularly through narrow intelligence, diffusion models and transformers. Narrow intelligence focuses on specialised tasks such as predicting hospital kitchen supply needs, managing biomedical waste, or optimising drug procurement. Diffusion models, which are adept at recognising patterns in complex datasets, can help screen histopathology slides or pre-screen medical images so that only a subset of the population needs further review.


Large Language Models (LLMs) and Large Multimodal Models (LMMs) are emerging as powerful tools in medical education and research writing. They can provide rapid access to medical knowledge, simulate patient interactions, and support the training of health-care professionals. By offering personalised learning experiences and simulating complex clinical scenarios, LLMs and LMMs can complement traditional medical education.

A significant issue with AI in health care is the “black box” problem, where the decision-making processes of AI algorithms are not transparent or easily understood. This poses risks in health care, where understanding the rationale behind a diagnosis or treatment plan is critical. Health-care providers are left in the dark about how certain conclusions are reached, leading to a lack of trust and potential harm if the AI makes an incorrect or inappropriate recommendation.

Google DeepMind’s opaque algorithm defeating world-class players at the board game Go can be celebrated. But while such inscrutability is acceptable in games, it raises concerns in real-life health-care decisions. The stakes are serious in human health, where the consequences of a mistake can be life-threatening.

India and the issue of AI governance

A recent petition in the Kenyan Parliament by content moderators against OpenAI’s ChatGPT has highlighted the ethical complexities in AI development, revealing the exploitation of underpaid workers in training AI models. This raises concerns about vulnerable populations being exploited in AI training, and it underscores the importance of safeguarding the interests of Indian patients, since the data required to train such models legally belong to patients.

While population-level data generated by health systems can be useful, it is prone to ecological fallacy. India lacks comprehensive regulation or legislation addressing AI, such as the European Union’s Artificial Intelligence Act, which makes oversight all the more critical. AI tools in health care must be developed and deployed in keeping with the core medical ethic of “Do No Harm”.

AI-powered health care in India promises increased efficiency and reduced error rates. Advanced AI technologies require significant investments in research, data infrastructure, and continuous updates — costs that someone must bear. India cannot leapfrog into AI-driven health care without first addressing the foundational issues in its health system. The complexities of patient care, the need for high-quality data, and the ethical implications of AI demand a more measured approach.

Dr. C. Aravinda is an academic and public health physician. The views expressed are personal

