The Akron Legal News


Generative AI and HIPAA

RICHARD WEINER
Technology for Lawyers

Published: August 25, 2023

The Science Blog has jumped into the discussion of the legal ramifications of generative AI, reporting on an article recently published in the Journal of the American Medical Association (JAMA) concerning a clash between generative AI tools (like ChatGPT) and HIPAA rules.
I don’t have a subscription to JAMA, so I couldn’t access that article. Maybe you can.
According to those articles, doctors’ offices are “turning to chatbots like… ChatGPT to organize notes, produce medical records or write letters to health insurers.” But these new practices may be violating HIPAA privacy rules.
Doctors are now using chatbots to create communications, many of which contain private patient data (under HIPAA, “protected health information,” or PHI).
Doctors are also using chatbots to help organize the notes they take during a patient consultation, diagnosis and the like. More PHI.
This creates a situation where confidential, private patient notes get typed into a chatbot, which then makes the doctor’s life easier by writing the things that the doctor used to have to write.
Of course, the chatbot runs on its developer’s servers, and the prompts a doctor types in may be retained there or even used to train future models. At that point, the PHI is no longer housed inside the security of the medical institution. It’s free!
Is that data secure out there?
The security of the large language models behind these chatbots is an open question in the first place.
But even if the PHI is technically secure, it doesn’t pass HIPAA muster for other reasons: namely, the “business associate” rules under 45 CFR 160.103.
Business associates are the insurance companies, lawyers, billing companies and other outside entities that handle PHI in the course of serving the medical profession.
PHI, of course, goes outside the confines of medical institutions for any number of reasons.
In this case, though, the PHI is being shared with a third party (OpenAI, in the case of ChatGPT) whose security is unknown and which would not be a “business associate” under the rules.
Before a doctor may share PHI with a business associate, that business associate has to sign a specific agreement, known as a business associate agreement (BAA), that spells out how the PHI will be used and safeguarded.
OpenAI, Google (for Bard) or whoever is running the chatbot the doctor is using has not signed that agreement.
Therefore, doctors who use generative AI to organize their notes are probably violating the HIPAA privacy rules governing business associates. And the generative AI platforms probably are, too.
Unless they have signed agreements with the chatbot companies.
But what do you think the odds are of that?
I’m sorry to keep harping on AI, but that’s what’s filling the legal tech news channels.
More to come, I’m sure.

