Healthcare / MedoraMD / March 2026

HIPAA-Compliant AI: What Healthcare Teams Need to Know

What HIPAA actually requires for AI scribes, what BAAs cover, and how to evaluate compliance before you deploy anything.

The most overused phrase in healthcare AI

"HIPAA compliant" is the most overused phrase in healthcare AI marketing. Every vendor slaps it on their landing page. Most of them couldn't pass a basic compliance audit if HHS walked through the door tomorrow.

Here's the problem: HIPAA compliance isn't a badge you earn once. It's not a feature you check off. It's an ongoing operational discipline that covers how you handle, store, transmit, and dispose of protected health information. And when you introduce AI into a clinical workflow, the compliance surface area expands dramatically.

If you're a practice administrator, compliance officer, or clinician evaluating AI tools, this guide covers what actually matters — not the marketing version, but the operational one.

What HIPAA requires for AI systems

HIPAA doesn't have a section titled "AI Rules." But the Security Rule, Privacy Rule, and Breach Notification Rule all apply the moment an AI system touches protected health information (PHI). Here's what that means in practice:

PHI handling. Any AI scribe, documentation tool, or clinical assistant that processes patient data — voice recordings, transcribed notes, demographic information, diagnoses — is handling PHI. The system must treat every piece of data with the same safeguards you'd apply to an electronic health record.

Encryption. The Security Rule lists encryption of electronic PHI (ePHI) at rest and in transit as an "addressable" specification — which in practice means you implement it or document an equally effective alternative, and for an AI vendor handling clinical data there is no credible alternative. If your AI vendor stores audio files on a server without encryption, they're in violation. If transcripts move from the AI engine to your EHR over an unencrypted connection, that's a breach waiting to happen.

Access controls. Only authorized individuals should be able to access PHI. That means role-based access controls (RBAC), unique user identification, automatic logoff, and authentication mechanisms. "Everyone on the team has the same login" is a compliance failure.

Audit logging. The Security Rule requires that covered entities and business associates maintain logs of who accessed what PHI, when, and what they did with it. Your AI vendor needs to produce these logs on demand — not generate them retroactively when an auditor asks.

Minimum necessary standard. The Privacy Rule requires that PHI use and disclosure be limited to the minimum necessary to accomplish the intended purpose. If your AI scribe is capturing the full audio of a visit but only needs the clinical discussion, it should have mechanisms to segment or limit what it processes and retains.

The BAA question

A Business Associate Agreement is not a formality. It's the legal document that makes your AI vendor accountable under HIPAA. Without a signed BAA, you have no legal basis for sharing PHI with that vendor — full stop.

Here's what to look for in a BAA:

  • Scope of permitted uses. The BAA should clearly define what the vendor can do with PHI. Processing for documentation? Yes. Using it for product analytics? That should be explicitly excluded.
  • Subcontractor obligations. If your AI vendor uses third-party cloud providers or sub-processors, the BAA must require that those subcontractors also comply with HIPAA and have their own BAAs in place.
  • Breach notification timelines. HIPAA requires notification within 60 days of discovering a breach. Your BAA should specify the vendor's obligation to notify you — ideally within 24-48 hours, not the maximum window.
  • Data return and destruction. When the relationship ends, what happens to your PHI? The BAA should require the vendor to return or destroy all PHI and provide certification of destruction.

Red flags: A vendor that won't sign a BAA, says they don't need one because they "don't store data," or offers a BAA that excludes their AI processing layer. If the AI model sees the data, it needs to be covered.

Encryption standards: what actually matters

Vendors love saying "we use encryption." That statement by itself is meaningless. Here's what you should verify:

At rest: AES-256. This is the gold standard for data encryption at rest. It's what the U.S. government uses for classified information. Your AI vendor should encrypt all stored PHI — audio recordings, transcripts, metadata, user data — with AES-256. If they're using AES-128 or something weaker, ask why.
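What AES-256 at rest looks like in code can be sketched in a few lines. This is an illustration only, using the widely used third-party `cryptography` package; the key generation and the sample transcript are placeholders — in production the key would come from a KMS or HSM, never be generated next to the data it protects:

```python
# Sketch: encrypting a stored transcript with AES-256-GCM.
# Requires the third-party `cryptography` package (pip install cryptography).
# Key handling is deliberately simplified for illustration.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 32-byte key = AES-256
aesgcm = AESGCM(key)

transcript = b"Patient reports intermittent chest pain..."
nonce = os.urandom(12)                      # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, transcript, None)

# The nonce is stored alongside the ciphertext; decryption needs both
# plus the key, which lives in the KMS, not beside the data.
assert aesgcm.decrypt(nonce, ciphertext, None) == transcript
```

GCM mode also authenticates the ciphertext, so tampering with a stored record causes decryption to fail rather than silently returning garbage.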

In transit: TLS 1.3. All data moving between the AI system and your network (or the cloud) should be encrypted with TLS 1.3. Older versions — TLS 1.0 and 1.1 — have known vulnerabilities and should be disabled entirely. TLS 1.2 is acceptable but 1.3 should be the target.
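The "disable older versions" advice is a one-line configuration in most stacks. As a sketch, here is how a Python client (3.7+, assuming an OpenSSL build with TLS 1.3 support) would refuse anything below TLS 1.3:

```python
# Sketch: a client-side TLS context that rejects TLS 1.0/1.1/1.2.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # TLS 1.3 or nothing

# Certificate verification and hostname checking stay on by default;
# turning either off would undermine the encryption entirely.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

The equivalent setting exists in every mainstream web server and load balancer; the point is that the floor is set explicitly rather than left to library defaults.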

Key management. Encryption is only as strong as its key management. Ask your vendor: Where are encryption keys stored? Who has access? Are keys rotated regularly? Do they use a hardware security module (HSM) or a cloud KMS like AWS KMS? If the vendor can't answer these questions clearly, their encryption posture is weaker than they claim.

"End-to-end encryption." This term gets misused constantly. True end-to-end encryption means data is encrypted on the sending device and only decrypted on the receiving device — no intermediary, including the vendor, can read it. Most AI vendors can't offer true E2EE because their AI model needs to process the data in plaintext to generate output. What most vendors actually mean is "we encrypt in transit and at rest." That's fine, but they should say so honestly.

Data residency and storage

Where your patient data physically lives matters more than most teams realize. Key questions:

Where does the AI process audio and text? Is speech-to-text happening on-device, in the cloud, or through a hybrid of the two? Cloud processing is common because it delivers better accuracy, but it means PHI is leaving your network. You need to know which cloud, which region, and what protections are in place.

Where is PHI stored? Some vendors store transcripts and audio files in their cloud indefinitely. Others delete them after processing. You need a clear data retention policy: how long is PHI stored, in what form, and what triggers deletion?

Who has access? "Our engineers can access the data for debugging" is a compliance problem if those engineers aren't background-checked, trained on HIPAA, and operating under the minimum necessary standard. Ask for specifics: how many employees can access production PHI, under what conditions, and is access logged?

Data sovereignty. If your vendor processes data through servers in multiple countries, you may face additional regulatory considerations. For U.S. healthcare organizations, PHI should stay within the continental United States unless you have a specific and documented reason to allow otherwise.

The AI model question: is your patient data training someone else's model?

This is the compliance and ethical issue that most healthcare teams don't ask about — but should. When an AI vendor processes your patient encounters, they're generating valuable training data. The question is: are they using it?

The risk. If your AI scribe vendor feeds de-identified (or insufficiently de-identified) patient data back into their model training pipeline, you've got problems. HIPAA's de-identification standards under the Safe Harbor method require removal of 18 specific identifiers. Expert determination is the alternative, but it requires a qualified statistical expert to certify the risk of re-identification is very small. Most AI vendors don't do either rigorously.
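To make "rigorously" concrete: even the easy part of Safe Harbor — the identifiers with recognizable formats — takes deliberate engineering. The sketch below is a naive, hypothetical scanner covering only four of the 18 identifier categories; real de-identification also has to find names, addresses, record numbers, and free-text dates, which pattern matching alone cannot do:

```python
# Sketch: a naive scan for a few Safe Harbor identifier formats.
# Illustrative only — regexes catch obvious formats, not the full
# 18 identifier categories the Safe Harbor method requires removing.
import re

PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "date":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def flag_identifiers(text: str) -> dict:
    """Return matches per identifier category, omitting empty categories."""
    hits = {name: pat.findall(text) for name, pat in PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

note = "Follow-up on 3/14/2026, call 555-123-4567, SSN 123-45-6789."
print(flag_identifiers(note))
# Flags the date, the phone number, and the SSN in this sample note.
```

A vendor claiming Safe Harbor de-identification should be able to describe something far more robust than this — and show how they validate it.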

What to ask:

  • Does your company use customer PHI to train, fine-tune, or improve your AI models?
  • If you use de-identified data, what de-identification method do you use — Safe Harbor or Expert Determination?
  • Can I opt out of data being used for model improvement?
  • Is your no-training policy contractually binding in the BAA, or just stated in your privacy policy?

A privacy policy is not a contract. If the vendor says "we don't use your data for training" but it's not in the BAA, it's not enforceable.

Audit and access controls

A compliance officer evaluating an AI vendor should verify the following:

Role-based access control (RBAC). Different users should have different access levels. A physician should see their patients' notes. A billing coordinator might see procedure codes but not full clinical narratives. An administrator might manage users but not access patient data at all. The AI system needs to support this granularity.
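The granularity described above reduces to a role-to-permission mapping with deny-by-default semantics. The role and permission names in this sketch are hypothetical; a real system would add per-patient scoping and enforce the check on every data-access path, not just in the UI:

```python
# Sketch: deny-by-default RBAC matching the roles described above.
# Role and permission names are illustrative, not a real schema.
ROLE_PERMISSIONS = {
    "physician":     {"read_notes", "write_notes", "read_codes"},
    "billing":       {"read_codes"},        # procedure codes, not narratives
    "administrator": {"manage_users"},      # user management, no patient data
}

def can(role: str, permission: str) -> bool:
    """Unknown roles and unknown permissions get no access."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert can("physician", "read_notes")
assert not can("billing", "read_notes")        # coordinator can't read narratives
assert not can("administrator", "read_codes")  # admin can't touch patient data
assert not can("intern", "read_notes")         # unrecognized role: denied
```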

Audit trails. Every access, modification, and deletion of PHI should be logged with a timestamp, user ID, and action performed. These logs should be immutable — no one, including administrators, should be able to alter or delete them. Ask how long audit logs are retained and whether they meet the HIPAA six-year retention requirement.
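One common way to make "immutable" verifiable rather than a promise is hash chaining: each log entry's hash covers the previous entry's hash, so altering any past record breaks verification from that point on. This sketch shows the idea, not a production design — tamper *evidence* is only half the story; true immutability also needs append-only, access-controlled storage:

```python
# Sketch: a tamper-evident audit trail via hash chaining.
import hashlib
import json
import time

def append_entry(log: list, user: str, action: str, resource: str) -> None:
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "ts": time.time(), "user": user,
        "action": action, "resource": resource,
        "prev": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify_chain(log: list) -> bool:
    """Recompute every hash; any edit to history breaks the chain."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list = []
append_entry(log, "dr_chen", "view", "note:8841")
append_entry(log, "dr_chen", "edit", "note:8841")
assert verify_chain(log)

log[0]["action"] = "delete"   # tampering with history...
assert not verify_chain(log)  # ...is detected on verification
```

When you ask a vendor how their audit trail is made immutable, an answer along these lines (hash chains, write-once storage, or both) is what "immutable" should cash out to.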

Breach notification procedures. The vendor should have a documented incident response plan. Ask for it. It should cover: how breaches are detected, who is responsible for investigation, what the notification timeline is, and how affected individuals will be notified. If the vendor says "we've never had a breach," that's not a security posture — it's either luck or a lack of monitoring.

Penetration testing and vulnerability assessments. Ask when the vendor last had an independent security assessment. Request the executive summary. Vendors who take security seriously will have recent third-party pen test results and a remediation timeline for any findings.

How MedoraMD handles compliance

We built MedoraMD with the assumption that compliance isn't a feature — it's the foundation. Here's what that looks like in practice:

  • BAA included. Every MedoraMD customer gets a signed BAA before any PHI is processed. It covers our AI processing pipeline, our cloud infrastructure, and all sub-processors.
  • AES-256 encryption at rest. All stored data — audio, transcripts, metadata — is encrypted with AES-256. Encryption keys are managed through AWS KMS with automatic rotation.
  • TLS 1.3 in transit. All data transmission between MedoraMD and your systems is encrypted with TLS 1.3. Older TLS versions are disabled.
  • SOC 2 Type II readiness. We've built our infrastructure and processes to SOC 2 Type II standards, covering security, availability, and confidentiality. Formal certification is in progress.
  • No PHI used for training. We do not use customer patient data to train, fine-tune, or improve our AI models. This is contractually binding in our BAA, not just a privacy policy statement.
  • Automated audit logging. Every access, modification, and export of PHI is automatically logged with immutable audit trails. Logs are retained for the HIPAA-required minimum and are available on demand.
  • On-premise option available. For organizations with strict data residency requirements, MedoraMD can be deployed on-premise so PHI never leaves your network.

Questions to ask any AI vendor before you sign

Before you deploy any AI tool that will touch patient data, have your compliance officer or practice administrator ask these questions. Get the answers in writing.

  1. Will you sign a BAA? If no, stop the conversation. If yes, request a copy for legal review before signing anything else.
  2. What encryption standards do you use at rest and in transit? Look for AES-256 and TLS 1.2/1.3 specifically. Ask about key management.
  3. Where is PHI processed and stored? Get the specific cloud provider, region, and data center locations. Verify data stays within the U.S.
  4. What is your data retention policy? How long is PHI stored? What triggers deletion? Can you request deletion on demand?
  5. Do you use customer data to train your AI models? If yes, understand the de-identification method and your opt-out rights. If no, get it in the BAA.
  6. Who on your team can access production PHI? Get a number. Ask about background checks, HIPAA training, and access logging.
  7. What does your incident response plan look like? Request the document. Verify breach notification timelines are faster than the HIPAA maximum.
  8. Do you have a recent third-party security assessment? Request the executive summary of the most recent pen test or SOC 2 report.
  9. What sub-processors handle PHI? Get the full list. Verify each has a BAA with the vendor.
  10. Can you support on-premise or private cloud deployment? If data residency is a hard requirement for your organization, this may be non-negotiable.

If a vendor can answer all ten of these clearly and back them up with documentation, they're taking compliance seriously. If they hedge, deflect, or say "we'll get back to you," treat that as the answer.

See how MedoraMD handles HIPAA compliance

We'll walk you through our security architecture, show you the BAA, and answer every question on the checklist above. No pitch deck — just the technical details your compliance team needs.
