Introduction

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last unless we learn how to avoid the risks.” — Stephen Hawking

In the twenty-first century, technology’s role in healthcare has grown by leaps and bounds. Previously, technology in this domain was used only for clerical work and digitizing health records. Today, however, Artificial Intelligence (“AI”) has recast the ways in which healthcare uses technology. In a vast country like India, where there is roughly one government doctor for every ten thousand people against the WHO’s recommended ratio of 1:1,000, AI can play a pivotal role in making healthcare cheap, accessible and efficient. It has the potential to overhaul disease tracking and management, particularly for infectious diseases.

AI systems such as Watson already perform tasks traditionally reserved for doctors: they can interpret X-ray scans and offer diagnoses based on symptoms. Today, AI is being used as a ‘consulting physician’ and may soon be able to perform intricate surgical operations. Undoubtedly, its role in healthcare will rise exponentially.

Since medicine is not an exact science, there remains a possibility of injury to patients on account of miscalculated or erroneous results produced by technology. This raises serious questions: who is liable for an injury arising out of negligence? Will the hospital deploying AI be vicariously liable for its acts? What if a patient’s personal data is leaked by AI? In this article, we attempt to address these questions and put forth a model for attributing liability to AI.

Traditional Liability

Liability for medical negligence generally falls under the law of torts but can also arise under the Indian Penal Code, 1860 (“IPC”) (when the negligence is found to be “gross” or “reckless”), the Indian Contract Act, 1872 (“ICA”) and the Consumer Protection Act, 2019 (“CPA”). Patients can seek compensation from doctors, hospitals, drug developers and manufacturers of medical instruments if they prove that the injury was caused by that party’s breach of its duty of care. These claims fall under the following heads.

Medical Negligence

Medical negligence can be defined as a “failure to act by the standards of reasonably competent medical men at the time of the negligence.” Courts often seek expert opinion to establish what this “reasonably expected standard of care” was at the time of treatment. Pertinently, the concept of a “reasonable standard of care” evolves with advances and new research in medical science. AI technology may therefore create further uncertainty for medical practitioners regarding this concept.

Vicarious Liability

Hospitals are vicariously liable for negligence by their employees and for deficiencies in the services they provide. A hospital can also be held responsible for independent contractors and visiting doctors or surgeons. These judicial precedents imply that, where injury is caused by AI’s fault, the hospital is also liable. The rationale is to widen the hospital’s responsibility and allow the complainant to recover compensation for the injury.

Product Liability

Product liability is defined in Section 2(34) of the CPA, but in the medical context it can be understood as ‘the liability of any party in the manufacturing and supply chain of a product for a defect in that product which causes damage or injury to the patient’. Under the CPA, the manufacturer and the seller are jointly liable. If the foreseeable risks of a product outweigh its foreseeable benefits, the medical practitioner should not prescribe it; if he does, then under the learned intermediary doctrine he will be solely liable.

Data Privacy

In this era, data is considered the new currency. In the medical domain, every patient’s information has gone digital, including fingerprints, medical reports, insurance details, travel history and much more. AI quite literally lives on this information.

Recently, hackers found loopholes in Aarogya Setu, a government application, which put the data of 90 million users at risk. This heightens anxieties about data leaks from private companies as well.

The Personal Data Protection Bill, introduced in 2019, is still pending before Parliament for approval; personal data is therefore protected under the Information Technology Act, 2000 and the Information Technology (Reasonable Security Practices and Procedures and Sensitive Personal Data or Information) Rules, 2011. Section 43A of the Information Technology Act makes a ‘body corporate’ responsible for the handling of sensitive personal data, which leaves AI itself entirely outside the scope of liability.

Bringing AI into the Equation

These legal doctrines were built with humans as their main subject; if applied to AI, they might not function the same way. One of the main reasons is foreseeability: it is practically impossible for AI’s creators to foresee what actions the AI will take once it is put to use. It would therefore be irrational to hold them tortiously liable.

Another factor is the increasing autonomy of AI. Fewer entities, whether physicians, hospitals or AI creators, have control over it; the legal doctrines based on principal-agent relationships, control over actions and foreseeability by AI creators therefore also fail. This directly impairs the injured patient’s right to compensation.

If we attempt to apply the product liability rule, the injury-causing ‘product’ is considered to be the hardware component of the medical device, not the AI software.

In cases where AI breaches a user’s data privacy, the current legal framework completely absolves AI from liability, since AI cannot be recognised as a ‘legal person’ or a ‘body corporate’. Further, Section 79 of the Information Technology Act exempts intermediary service providers (here, the AI) from liability, treating them as mere carriers of content.

Given this escalating use of AI in healthcare, the current legal framework is insufficient.

Fictive Circumstances for the Attribution of Liability

When AI is an innocent agent working on its principal’s instructions:

Since AI lacks the element of mens rea, it shall be presumed to be an innocent agent.

In the first scenario, assume that the medical malpractice arose because the developer intentionally programmed the AI to commit the offence. In this case, only the developer, and not the doctor, shall be criminally liable under Sections 300, 337 and 338 of the IPC.

Alternatively, if the doctor mistakenly alters the patient’s data in the AI, leading to an injury to the patient, it is the doctor who shall be held liable.

When AI is mistaken:

In this scenario, the developer neither intended to commit any offence nor foresaw the possibility of the act.

In the first scenario, assume that the doctor uses AI for a critical operation on the patient. The AI misdiagnoses the symptoms, which causes injury to the patient. This misjudgment shall be attributed to the AI’s developer.

Alternatively, suppose the doctor, with mala fide intention, alters the data of patient ‘X’, but due to a glitch the AI changes the data of patient ‘Y’, leading to injury to ‘Y’. In this case, the doctor shall be liable under Section 304A of the IPC.

When AI is autonomous:

This is a slightly futuristic scenario, in which AI is an independent entity relying not solely on its codes and algorithms but also on its own life experiences, with the autonomy to choose between alternative courses of action.

Assume that the AI is in the middle of cardiovascular surgery. Its algorithm indicates that the procedure it is following could lead to heart rhythm irregularities; still, the AI continues the procedure, resulting in injury to the patient.

In this case, AI shall be solely liable for this act.

Legislative Intervention

Presently, AI is completely immune from all responsibility. We believe it is time for legislators to make the necessary changes to prevailing laws before things go south. For instance, the definition of ‘person’ could be broadened to include AI. By granting legal personality to AI, it can be subjected to legal rights and duties, just like Sophia, a humanoid that was granted citizenship and legal status by Saudi Arabia. It would then be treated not as an agent but as a principal. Stringent penal provisions should be enacted for AI as a perpetrator, such as suspension of its licence to practise. Corporate criminal liability should also attach to AI developers so that the corporate veil can be lifted.

Alternatively, common enterprise liability can be imposed on all entities involved in the use and implementation of the AI system. One benefit of this approach is that every entity involved will be equally responsible and therefore extra cautious about safety. The complainant would also be spared the burden of locating the error in the AI, which would be very difficult in the case of black-box AI (a type of AI whose inputs and operations are not visible to the user or any other interested party).

Conclusion

India is uniquely positioned to adopt an AI-driven approach to overhaul its broken healthcare infrastructure. With its vast startup community, India has the opportunity to tackle health-related problems with the help of AI. In this quest, the government has already launched various initiatives to pave the way for AI in healthcare. Yet many hindrances remain regarding the attribution of liability to AI. These can only be resolved by an extensive regulatory framework that ensures transparency and accountability while not obstructing innovation.

This article was written by Ranjeet Soni and Rohit Shrivastava.

This article was first published on The Criminal Law Blog.