ChatGPT in Healthcare: Navigating the HIPAA-Cups[1]

Interest in Artificial Intelligence (AI) has skyrocketed thanks to the release of ChatGPT in November 2022. ChatGPT is a natural language processing AI chatbot created by OpenAI; the software is refined with user feedback in order to better understand human interaction. Since its release, various industries have been looking to implement ChatGPT and other, similar Generative AI (GAI) tools.

The healthcare sector has not been insulated from the recent GAI wave. Healthcare professionals have already begun exploring the various ways that GAI could be incorporated into their workflows. GAI can have a far-reaching impact across many areas of healthcare, including summarizing patients’ medical histories, creating treatment plans, suggesting possible diagnoses based on symptoms, and aiding in routine administrative tasks such as scheduling appointments and triaging patients. As with all new technology, however, GAI has drawbacks, and those in the healthcare sphere are best served by a guarded approach to its use.

One of the biggest concerns surrounding the use of GAI in healthcare is whether the tool complies with the Health Insurance Portability and Accountability Act (HIPAA). HIPAA requires certain “Covered Entities” to protect sensitive patient data: any health insurance company, clearinghouse, or provider that handles Protected Health Information (PHI) is required to comply with HIPAA regulations. “Business Associates,” which are third-party organizations that provide services for or on behalf of a Covered Entity and receive PHI from the Covered Entity in connection with those services, must also comply with HIPAA standards and sign “Business Associate Agreements” (BAAs) with Covered Entities agreeing to do so.

Previously, OpenAI warned against uploading confidential information to ChatGPT in its Terms of Service and on its FAQ page. However, on March 1, 2023, OpenAI updated its policies, noting that certain data submitted by users will not be used to train or improve its models unless individual users explicitly opt in to share that data. OpenAI has also announced that, upon request, it will sign BAAs in support of an applicable customer’s compliance with HIPAA. If a BAA is not in place between OpenAI and the Covered Entity, uploading any PHI to ChatGPT will violate HIPAA. Even if a BAA is in place with OpenAI, individual employees of a Covered Entity still need to proceed with caution: HIPAA requires every disclosure of PHI to a Business Associate to have a legitimate healthcare operations, treatment, or payment purpose, and each disclosure should use only the minimum amount of PHI necessary to accomplish the task.

An additional issue concerning ChatGPT is its tendency to “hallucinate,” creating false or misleading information that appears correct on its face but is unsupported. Put differently, GAI like ChatGPT can make mistakes. Thus, ChatGPT output must be reviewed by a healthcare expert; and disclosing the use of AI is not only a best practice, it is an OpenAI requirement. In its Usage Policies, OpenAI requires that “consumer-facing uses of our models in medical, financial, and legal industries . . . must provide a disclaimer to users informing them that AI is being used and of its potential limitations.” Beyond disclaimers, it is worth noting that, as of this writing, ChatGPT is trained only on information up to September 2021. Additionally, because ChatGPT is trained on human-generated material, it may, and does, perpetuate biases already found in healthcare.
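To make these two safeguards concrete, here is a minimal sketch, in Python, of how a consumer-facing tool might gate model output behind expert review and attach the required AI disclaimer before anything reaches a patient. The disclaimer wording, function name, and review workflow are illustrative assumptions, not OpenAI’s prescribed implementation.

```python
# Illustrative sketch only: the disclaimer text, function name, and
# review workflow below are assumptions for demonstration, not a
# prescribed or legally vetted implementation.

AI_DISCLAIMER = (
    "This response was generated with the assistance of artificial "
    "intelligence (AI). AI can produce inaccurate, incomplete, or "
    "outdated information and is not a substitute for the judgment "
    "of a qualified healthcare professional."
)


def release_to_patient(model_output: str, reviewed_by: str) -> str:
    """Gate model output behind expert review, then attach the disclaimer.

    `reviewed_by` records which clinician signed off on the output;
    releasing unreviewed text raises an error rather than failing silently.
    """
    if not reviewed_by:
        raise ValueError(
            "GAI output must be reviewed by a qualified healthcare "
            "professional before it is shown to a patient."
        )
    return f"{AI_DISCLAIMER}\n\n{model_output}"


# Example: the draft text below stands in for a model response.
draft = "Based on the symptoms described, possible causes include..."
print(release_to_patient(draft, reviewed_by="Dr. A. Smith"))
```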

Healthcare providers can avoid violating HIPAA when their employees use GAI tools by implementing policies that restrict or block staff use of ChatGPT. If a department or staff member identifies a legitimate business case for ChatGPT, access should be granted only after the employee is properly trained on its use and pitfalls. The Covered Entity should further explore whether a BAA with OpenAI is appropriate.

Healthcare providers should also have strict security measures for storing and transmitting PHI if using ChatGPT, such as the following:

  • Data should be encrypted, and the GAI platforms used should be secure and HIPAA compliant. Platforms such as gpt-MD and SuperOps.ai act as proxies for OpenAI and purport to allow healthcare providers to use ChatGPT in a HIPAA-compliant way; Covered Entities should independently confirm each platform’s HIPAA compliance before relying on a third-party proxy.
  • Regular risk assessments and audits should be conducted to ensure compliance with HIPAA and applicable state privacy laws.
  • Only authorized personnel should be able to interact with both the GAI platform and the data.
  • Properly de-identifying health data before it is uploaded to ChatGPT can mitigate the risk of improper disclosure of PHI and allow the data to be used without violating HIPAA regulations (a minimal de-identification sketch follows this list). Any output from ChatGPT or other GAI tools should be reviewed by a human expert to ensure its accuracy.
  • Healthcare providers should also provide training and education to all healthcare staff about the limitations and potential risks associated with AI in order to reduce the risk of a HIPAA violation by an uninformed staff member.
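As referenced in the list above, de-identification is the step most amenable to automation. The following is a minimal sketch, assuming Python, of scrubbing a few obvious identifier patterns from free text before it leaves the organization. It is not a complete HIPAA Safe Harbor de-identification, which requires removing eighteen categories of identifiers (names, geographic data, dates, and more) and generally calls for dedicated tooling plus expert review; the “MRN” format is a hypothetical record-number convention assumed for this example.

```python
import re

# Illustrative sketch only. These regular expressions catch a handful of
# obvious identifier formats; regex alone cannot reliably remove free-text
# names or addresses, so this is a risk-mitigation layer, not a complete
# Safe Harbor de-identification. The "MRN" pattern is a hypothetical
# record-number convention assumed for this example.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
}


def scrub(text: str) -> str:
    """Replace each matched identifier with a bracketed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REMOVED]", text)
    return text


note = (
    "Patient seen 03/14/2023, MRN: 4482913. Callback number "
    "555-867-5309; SSN 123-45-6789 on file."
)
print(scrub(note))
# -> Patient seen [DATE REMOVED], [MRN REMOVED]. Callback number
#    [PHONE REMOVED]; [SSN REMOVED] on file.
```

Note that the SSN pattern runs before the phone pattern so that 3-2-4 digit groupings are labeled correctly, and anything the patterns miss still benefits from the access controls and human review described above.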

When used correctly, AI can be a powerful tool that streamlines tasks and gives patients a better overall experience. However, it should still be used with caution. Healthcare providers need to be mindful of how they utilize AI, especially when it comes to the privacy of their patients.

[1] The authors of this article were having difficulty with a title, so we asked ChatGPT to write one for us. Another option was: "ChatGPT in Healthcare: 'HIPAA-potamus' - Navigating the Murky Waters of AI Confidentiality!"
