Medicine and the Law: AI Transformation of the Healthcare Sector

Alex Cheng Wei Ray

With the meteoric rise of artificial intelligence (AI)-powered software like ChatGPT, which has been made easily accessible to everyone,1 and given that ChatGPT was able to pass both law school and medical school examinations,2,3 the topic of AI has reignited debates in the medico-legal community.

AI refers to "the ability of algorithms encoded in technology to learn from data so that they can perform automated tasks" with little or no human intervention.4 AI has grown more sophisticated over the years and now offers more possibilities to improve diagnosis and clinical care,5 enhance health research and drug development, and assist with the deployment of public health interventions and new forms of public health surveillance. AI could also facilitate the development of personalised medicine,6 "smart" prosthetics and telemedicine, and support the management and planning of health systems.7 However, despite these potential benefits, AI also presents a wide range of issues and challenges, which we will explore in this article.

Areas of concern

Data and privacy

AI systems rely on large datasets as inputs so that they can be trained to produce appropriate outputs in the form of decisions. These data may include information about disease risks, lifestyle, mental health, family situation, sexual orientation, images and scans, wearable health device readings,8 implants and laboratory results, among others. The use of such data may lead to violations of patients' privacy and cause discrimination. For example, insurance premiums may be set on the basis of a person's health status or future health risks.9 Furthermore, with the proliferation of data use, incidents of cyber-theft, accidental data disclosure, surveillance and sale of personal data are bound to happen.10

Control, reliability and allocation of responsibility

Many AI technologies operate as "black box" systems whose internal processes and conclusions remain hidden or are difficult to explain to users.11 This can lead to bias or a lack of transparency, and undermine the autonomy of healthcare operators, medical practitioners and patients. Furthermore, AI has the potential to transfer decision-making from humans to machines, which raises the question of who should be ultimately responsible when a decision goes wrong.12 As of now, no AI system in healthcare is truly autonomous and clinicians remain ultimately responsible, but this may not be the case in the future as AI technology matures and may eventually replace clinicians' input entirely. As an analogy, the Go-playing program AlphaGo has beaten the world's top players and mastered the game of Go without human knowledge.13

Inequality, justice and access to healthcare

Some experts have argued that AI may not necessarily improve access to healthcare in an equitable manner, and expressed concerns that the benefits of AI-based healthcare may preferentially accrue to the wealthy.14 There may be good reasons to question the over-optimistic representations of AI and to explore the conditions that may improve access to the potential advantages of AI-supported healthcare. This is especially important for low- and middle-income countries, because at a global level, the advancement of healthcare brought about by AI technology appears to be unequally distributed.15

Impact on labour and employment

One school of thought suggests that doctors and other healthcare staff will have more time available for patients once parts of their present workload are simplified by AI.16 However, another school of thought argues that if patients interact more frequently with AI (eg, in the context of telemedicine), the time that medical practitioners spend with patients will decrease.17 Furthermore, although AI may replace routine tasks and make certain administrative jobs redundant, physicians and nurses will be required to spend more time managing technology, analysing data and learning how to use new AI applications.

Updates on recent local AI developments

The rise of AI in healthcare and its widening influence have been monitored closely by the local medico-legal community. Prof Joseph Sung, Dean of Lee Kong Chian School of Medicine, Nanyang Technological University, delivered a lecture titled "Artificial Intelligence in Medicine: Ethical, Legal and Social Perspectives" as the 25th Gordon Arthur Ransome Orator during the annual induction ceremony of the Academy of Medicine, Singapore (AMS) on 23 July 2022. The lecture series was created in 1970 to honour Prof Gordon Arthur Ransome, the late founder and first master of the AMS, and Prof Sung's lecture gave a broad overview of what the medical community should expect and the impact that AI is likely to bring in the coming years.18

On 18 July 2023, Singapore's Personal Data Protection Commission (PDPC) issued a proposed set of advisory guidelines to clarify how Singapore's comprehensive data protection law – the Personal Data Protection Act – would apply to the processing of personal data in the design, development and deployment of AI systems.19 In issuing these proposed guidelines and engaging with the industry via a public consultation that ended on 31 August 2023, the PDPC has signalled its commitment to an inclusive, multi-stakeholder approach in shaping its regulatory stance on AI. It does not appear that Singapore will lay down any new standalone legislation, as other jurisdictions have done (eg, the European Union), but will instead apply current regulations to AI systems.20

During the Mass Call Address to newly admitted advocates and solicitors on 21 August 2023, the Honourable Chief Justice (CJ) Sundaresh Menon delivered a speech titled "Answering the Call in the Age of Artificial Intelligence", which touched on the challenges and complexities that the legal industry is expected to face,21 certain parts of which are highly relevant to the healthcare industry as well. The CJ touched on two emerging issues in intellectual property (IP) law that are affected by generative AI: (a) whether a generative AI tool may be the named inventor of a patent, and (b) whether using public data to train generative AI models may infringe on IP rights, including whether and to what extent AI-generated output amounts to a "derivative work" under IP law. In common law jurisdictions like Singapore, only a human may be named as the inventor, but the deeper issue remains whether the law should allow AI models to be recognised as inventors in the future.22 However, given that patents are territorial rights, the same issue may have a different outcome in countries like Saudi Arabia, where an AI robot named "Sophia" has been granted full citizenship.23

More recently, on 28 August 2023, Singapore's Chief Health Scientist Prof Tan Chorh Chuan gave an overview of how Singapore is pushing for new technology and AI to deliver good healthcare with lower manpower requirements and cost.24 In the interview, he gave a few examples of how AI has been implemented in Singapore healthcare. One example is the Institute of Mental Health using mobile phones and wearables to identify patients with schizophrenia who are more likely to suffer a relapse, so that intervention can be started early. Another example is the Singapore Eye Lesion Analyser (SELENA+), a deep-learning AI software developed by the National University of Singapore's School of Computing and the Singapore Eye Research Institute that reads eye images to detect early diabetic retinopathy; it was rolled out to 20 polyclinics in July 2021.

The way forward

Given that current legislation appears to be lagging behind the advancement of AI technologies, it is prudent for medical practitioners to follow the World Health Organization (WHO) guidance on Ethics and Governance of Artificial Intelligence for Health when designing, developing and deploying AI for health. The six core principles identified by the WHO are to:4 (1) protect autonomy; (2) promote human well-being, human safety and the public interest; (3) ensure transparency, explainability and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; and (6) promote AI that is responsive and sustainable.


References
  1. Hu K. ChatGPT sets record for fastest-growing user base - analyst note. Reuters [Internet]. 2 February 2023. Available at: https://bit.ly/3PFxOTe.
  2. Kelly SM. ChatGPT passes exams from law and business school. CNN Business [Internet]. 26 January 2023. Available at: https://bit.ly/3ZnJI7h.
  3. Hwang H. ChatGPT Passes Medical Board Exam. AI Business [Internet]. 27 February 2023. Available at: https://bit.ly/3EGOWl1.
  4. World Health Organization. Ethics and governance of artificial intelligence for health. Available at: https://bit.ly/44Y6KTF. Accessed 18 September 2023.
  5. Academy of Medical Royal Colleges. Artificial Intelligence in healthcare. Available at: https://bit.ly/3t3uKaI. Accessed 18 September 2023.
  6. Gomez-Gonzalez E, Gomez E. Artificial Intelligence in Medicine and Healthcare: applications, availability and societal impact. In: JRC Publications Repository. Available at: https://bit.ly/3Poc1xN. Accessed 18 September 2023.
  7. Biundo E, Pease A, Segers K, et al. The socioeconomic impact of AI in healthcare. In: MedTech Europe. Available at: https://bit.ly/48kwigv. Accessed 18 September 2023.
  8. Kapeller A, Felzmann H, Fosch-Villaronga E, Hughes AM. A Taxonomy of Ethical, Legal and Social Implications of Wearable Robots: An Expert Perspective. Sci Eng Ethics 2020; 26:3229-47.
  9. Bledsoe MJ. Ethical Legal and Social Issues of Biobanking: Past, Present, and Future. Biopreserv Biobank 2017; 15(2):142-7.
  10. Price II WN, McNair D. Health Care AI: Law, Regulation, and Policy. In: Matheny M, Israni ST, Ahmed M, Whicher D, eds. Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril. Washington, DC: National Academy of Medicine, 2022: 197-233.
  11. Castelvecchi D. Can we open the black box of AI? Nature 2016; 538(7623):20-3.
  12. Kasperbauer TJ. Conflicting roles for humans in learning health systems and AI-enabled healthcare. J Eval Clin Pract 2020; 27(3):537-42.
  13. Silver D, Schrittwieser J, Simonyan K, et al. Mastering the game of Go without human knowledge. Nature 2017; 550:354-9.
  14. Alami H, Rivard L, Lehoux P, et al. Artificial intelligence in health care: laying the foundation for responsible, sustainable, and inclusive innovation in low- and middle-income countries. Global Health 2020; 16:52.
  15. Joyce K, Smith-Doerr L, Alegria S, et al. Toward a Sociology of Artificial Intelligence: A Call for Research on Inequalities and Structural Change. Socius 2021; 7: 2378023121999581.
  16. Spatharou A, Hieronimus S, Jenkins J. Transforming healthcare with AI: The impact on the workforce and organizations. In: McKinsey & Company. Available at: https://bit.ly/3PeHdiY. Accessed 18 September 2023.
  17. Gawande A. Why Doctors Hate Their Computers. The New Yorker [Internet]. 5 November 2018. Available at: https://bit.ly/3EEzs1b.
  18. Academy of Medicine, Singapore. Induction Comitia - July 2022. Available at: https://bit.ly/3LpCPfW. Accessed 18 September 2023.
  19. Personal Data Protection Commission Singapore. Public Consultation for the Proposed Advisory Guidelines on Use of Personal Data in AI Recommendation and Decision Systems. Available at: https://bit.ly/3PjJ9Hb. Accessed 18 September 2023.
  20. Aw C. Singapore Consults on Personal Data Guidelines for AI. Available at: https://bit.ly/3PIMiSn. Accessed 18 September 2023.
  21. SG Courts. Chief Justice Sundaresh Menon: Speech delivered at Mass Call 2023. Available at: https://bit.ly/46jVVfP. Accessed 18 September 2023.
  22. Villasenor J. Patents and AI inventions: Recent court rulings and broader policy questions. In: Brookings. Available at: https://bit.ly/44VHVaH. Accessed 18 September 2023.
  23. Stone Z. Everything You Need To Know About Sophia, The World's First Robot Citizen. Forbes [Internet]. 7 November 2017. Available at: https://bit.ly/44R1UHK.
  24. Khalik S. Good healthcare with less manpower is possible with new technology: S'pore chief health scientist. The Straits Times [Internet]. 28 August 2023. Available at: https://bit.ly/3rhfYfW.

Alex Cheng Wei Ray is a family physician who works as a locum medical doctor and is concurrently a practising lawyer at Donaldson and Burkinshaw LLP. Aside from his medico-legal qualifications, he also holds the degrees of Master of Professional Accounting and Master of Business Administration. He can be contacted at alex.cheng@donburk.asia.
