Clinicians are increasingly adopting large language models such as ChatGPT, Gemini, Llama, and Grok, which are reshaping how physicians make decisions, communicate, and document care, and are improving efficiency in settings like the emergency department. Adoption has outpaced regulation: roughly 66% of US physicians now use AI tools, yet only about 23% of healthcare systems have HIPAA-compliant Business Associate Agreements (BAAs) in place.
HIPAA rests on three main rules: the Privacy Rule, the Security Rule, and the Breach Notification Rule, which together establish the legal framework for safeguarding protected health information (PHI). The Privacy Rule governs the use and disclosure of identifiable health data for treatment, payment, and operations; the Security Rule protects electronic PHI (ePHI); and the Breach Notification Rule requires that impermissible disclosures be reported, with expanded notification obligations when 500 or more individuals are affected.
Legal responsibility is shared, but not evenly. Covered entities and other healthcare organizations bear primary responsibility for HIPAA compliance, including vetting AI vendors and ensuring that appropriate BAAs are in place. When violations occur, regulators may investigate, require corrective action, and impose fines.
Individual clinicians most often face consequences through institutional policies, which may include corrective measures such as retraining or termination. Criminal liability is uncommon and generally reserved for intentional misuse. State laws may add to federal exposure, for example by creating civil liability for breaches involving sensitive information such as reproductive or mental health data.
A suspected breach should be reported promptly with all relevant details; the institution then conducts a risk assessment to determine whether a reportable breach occurred. Early reporting mitigates risk and demonstrates good-faith compliance.
Prevention is the first line of defense. Healthcare systems should adopt clear AI policies, vet vendors, prohibit entry of PHI into unapproved tools, and implement technical safeguards alongside access controls and staff training.
Clinicians should use AI cautiously, restricting themselves to approved tools and carefully reviewing any AI-generated content before relying on it.
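As one illustration of the technical safeguards mentioned above, an institution might screen free-text input for obvious PHI patterns before it reaches an external AI tool. The sketch below is hypothetical and deliberately simplistic: the function name and pattern list are assumptions, and real de-identification must cover all 18 HIPAA Safe Harbor identifiers, not the handful shown here.

```python
import re

# Hypothetical pre-submission filter: replace recognizable PHI patterns with
# placeholder tags before text is sent to an external AI tool. The pattern
# list below is illustrative only, not a complete Safe Harbor implementation.
PHI_PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # Social Security numbers
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),   # US phone numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),    # calendar dates
    (re.compile(r"\bMRN[:# ]*\d+\b", re.IGNORECASE), "[MRN]"), # medical record numbers
]

def redact_phi(text: str) -> str:
    """Replace PHI-like patterns with placeholder tags, in list order."""
    for pattern, placeholder in PHI_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

note = "Pt seen 3/14/2025, MRN 884210, callback 555-123-4567."
print(redact_phi(note))  # → Pt seen [DATE], [MRN], callback [PHONE].
```

A filter like this reduces accidental disclosure but cannot catch free-text identifiers such as names or rare conditions, which is why policy, training, and human review remain essential alongside any automated safeguard.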
Concerns about data exposure grow as AI tools, which store and process user input, become more widely used. Entering PHI into such tools without authorization remains a HIPAA violation, even when unintentional.
The legal landscape is evolving, with state and federal laws shaping liability. Healthcare institutions should build strong governance, including BAAs, clear policies, and secure AI tools.
Clinicians should exercise caution, avoid entering identifiable patient data into unapproved systems, and follow their institution's policies. Ultimately, balancing innovation with compliance is essential to protect patient privacy and maintain trust in clinical practice.
Reference: Schoolcraft D, Meltzer A, Sangal R, et al. Health Insurance Portability and Accountability Act liability in the age of generative artificial intelligence. J Am Coll Emerg Physicians Open. 2026;7(2):e00275. doi:10.1002/emp2.00275