Daniel Gao

Does the Doctor Take the Blame?

Artificial intelligence holds significant potential for application in the medical field, especially in diagnosis. But who's liable when it's wrong?


Based on data gathered from past patients, AI can aid doctors in interpreting CT scan images and ECGs, facilitating disease diagnoses, offering treatment recommendations, and predicting patient outcomes. Furthermore, it has the capacity to assist healthcare professionals in streamlining their workloads and saving time by automatically generating patient charts, test requests, and prescriptions. However, as healthcare practitioners embrace AI, they must also be mindful of potential legal considerations:


1. Traditionally, medical diagnoses and treatment plans are formulated by physicians. When AI technology is employed, it becomes crucial to secure informed consent from patients.


2. AI relies on data from a multitude of patients, necessitating the utmost confidentiality and privacy protection for personal health data. Patients should be informed about how their medical records are used and retain the right to decide whether they can be utilized.


3. The question of liability for clinical decisions made with the aid of AI should be carefully defined and balanced. In the United States, there is currently limited legal precedent regarding liability for medical injuries resulting from reliance on AI-generated information. In the United Kingdom, under the Consumer Protection Act 1987 (CPA) and the European Product Liability Directive it implements, defendants who put 'defective' products into circulation can be held liable for the damage they cause. Even there, however, precedent for AI is scarce, because whether AI counts as a 'product' in court remains unsettled.

a. Physician Liability: Physicians are traditionally held accountable for adhering to the established standard of care in their respective fields. If AI-generated outputs are incorrect, the clinical decisions built on them may be wrong as well. Physicians may thus find themselves bearing liability for errors that originate in the AI, which could dampen their motivation to adopt AI tools. Many physicians may be hesitant to assume liability for AI decisions when the system's underlying neural network remains an opaque, uninterpretable black box.

b. Liability of the AI Developer: Inaccurate decisions may stem from poor training data and model frameworks. For instance, an AI model may fail to detect skin diseases in black patients if it was trained primarily on images of white patients (a short sketch after this list shows how such a gap can be measured).

i. However, costly lawsuits would diminish the incentives for technology companies to innovate medical AI solutions.

ii. In response, AI development companies have increasingly resorted to disclaimers and protective clauses to safeguard themselves. For example, ChatGPT's homepage warns users that it "may occasionally produce incorrect information."

c. Patient Liability: If the final clinical decision rests with patients, it can potentially jeopardize patient safety, particularly because most patients lack a background in medicine and AI technology. It is the responsibility of the physician to provide patients with clear, plain-language explanations of their diagnosis, the diagnostic methods employed (including any AI systems used), the available treatment options, and the benefits and risks associated with each option. The physician should also express their professional opinion on the best course of action.
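
To make point 3b concrete, here is a minimal, hypothetical sketch of how a developer or auditor might surface that kind of subgroup gap. The diagnoses, predictions, and skin-tone labels below are invented for illustration; a real audit would use a held-out clinical dataset.

import numpy as np
from sklearn.metrics import recall_score

# Invented evaluation data: true diagnoses, model predictions, and
# illustrative skin-tone group labels for each patient.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 0, 1, 0])
group = np.array(["light", "light", "light", "dark",
                  "dark", "dark", "light", "dark"])

# Sensitivity (recall on positive cases) computed per subgroup: a large
# gap signals the kind of training-data bias described in point 3b.
for g in np.unique(group):
    mask = group == g
    sens = recall_score(y_true[mask], y_pred[mask])
    print(f"{g}: sensitivity = {sens:.2f}")

Run on this toy data, the model catches every case in the "light" group but misses both positive cases in the "dark" group, which is exactly the failure mode a skewed training set can produce.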


[Figure: AI detection of pneumonia in chest radiographs]
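
For a sense of how such a tool slots into the clinical workflow, below is a minimal, hypothetical sketch in which a previously trained image classifier scores a chest radiograph for pneumonia. The architecture, weight file, and image path are all assumptions for illustration, not a real product.

import torch
from torchvision import models, transforms
from PIL import Image

# Assumed: a ResNet-18 fine-tuned elsewhere for pneumonia detection,
# with its weights saved in "pneumonia_cnn.pt" (hypothetical file).
model = models.resnet18(num_classes=1)
model.load_state_dict(torch.load("pneumonia_cnn.pt"))
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # radiographs are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

x = preprocess(Image.open("chest_xray.png")).unsqueeze(0)  # add batch dimension

with torch.no_grad():
    prob = torch.sigmoid(model(x)).item()  # estimated probability of pneumonia

# Advisory output only: the physician reviews the score alongside the scan.
print(f"Model-estimated pneumonia probability: {prob:.2f}")

The design point matters for the liability debate above: the model emits a probability for the physician to weigh, not a final diagnosis, which fits a scheme that holds the physician responsible for the ultimate decision.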


My Take:

Personally, I believe that the physician or hospital should be held accountable for AI mistakes. At the end of the day, generative AI is merely a tool, not a crutch, and doctors reserve the right to reject an AI's decision. Furthermore, all patients should have the right to be notified when generative AI is used in their diagnosis.


Conclusion:

Any scheme of liability will incentivize or disincentivize certain behaviors, ultimately shaping the cost and spread of technology in the healthcare industry. As more powerful generative AI tools emerge, there is an urgent need to establish legal precedents concerning the nature and legal liabilities of AI as a service, to safeguard the well-being of patients.


Reference: The Milbank Quarterly. 2021 Sep;99(3):629–647. Published online 2021 Apr 6. doi:10.1111/1468-0009.12504


