HIPAA Meeting Transcription (2026)
Transcribe healthcare meetings without exposing PHI. On-device processing means no data leaves your Mac — no BAA, no cloud dependency, no breach vector.
TL;DR
Cloud meeting transcription tools require BAAs because they handle PHI on their servers. Resonant processes audio entirely on your Mac — PHI never leaves the device. No BAA required because there's no business associate. Structural privacy, not policy privacy.
The problem with cloud transcription in healthcare
Healthcare meetings — patient consultations, case conferences, clinical team discussions, insurance calls — contain Protected Health Information. When you use a cloud transcription tool, that PHI travels to and is processed on someone else's servers.
- BAA complexity. You need a Business Associate Agreement with any vendor handling PHI. This requires legal review, vendor vetting, and ongoing compliance monitoring.
- Breach exposure. Healthcare data breaches cost an average of $10.93M (IBM, 2023). Every vendor with access to PHI is a potential breach vector.
- Model training concerns. Some transcription vendors have faced questions about using customer data for AI model training. Even with opt-out, the data still resides on their infrastructure.
- Audit trails. HIPAA requires documentation of where PHI goes. Cloud transcription adds complexity to your data flow documentation.
On-device: a different compliance model
Resonant processes meeting audio on your Mac's Neural Engine. The audio never leaves the device. Transcripts are stored locally. There is no data transmission to Resonant or any third party.
This fundamentally changes the compliance conversation:
- No BAA needed. No data is shared with a business associate. There's no third party in the transaction.
- No breach vector. If data never leaves the device, it can't be breached on someone else's servers.
- Simple audit. Data stays on one device. The audit trail is: “audio was captured and processed locally on [device].”
- No model training concerns. We never have your data. There's nothing to train on.
Healthcare use cases
- Telehealth consultations. Transcribe patient calls running through your Mac without uploading audio to a cloud service.
- Clinical team meetings. Case conferences where patient details are discussed.
- Insurance and billing calls. Conversations involving patient identifiers and treatment details.
- Supervision and training. Clinical supervision sessions where patient cases are reviewed.
- Research interviews. Participant interviews where research data should remain controlled.
Important notes
- Resonant is a transcription tool, not a certified EHR or medical documentation system.
- Recording consent requirements vary by state and setting. Always follow applicable laws.
- Device security (encryption, password, physical access) remains your responsibility.
- Consult your compliance team or healthcare attorney for your specific workflow.
Related resources
- Medical dictation for Mac — clinical dictation workflows.
- Private meeting transcription — the privacy architecture in depth.
- Meeting transcription without a bot — how capture works invisibly.
Frequently asked questions
Is Resonant HIPAA compliant?
Resonant processes audio on your Mac with no data transmission. Since no PHI is shared with Resonant, the BAA requirement doesn't apply. Consult your compliance team for your specific use case.
Do I need a BAA?
No. A BAA is required when PHI is shared with a third party. Resonant never receives your data — there's no business associate relationship.
Can I transcribe telehealth calls?
Yes. Resonant captures system audio from any app. Telehealth calls running through your Mac (Zoom, Doxy.me, etc.) can be transcribed locally.
Where are transcripts stored?
Locally on your Mac only. Not in any cloud, not on any server. You control the data entirely.
What Resonant offers beyond dictation
Resonant isn't just a faster way to type. It's a voice workspace with capabilities no other dictation tool provides.
MCP server for AI tools
Resonant exposes 11 MCP tools that let any AI agent — Claude, Codex, and more — query your entire voice workspace: meetings, dictations, memos, ambient context, and daily journal. Your AI assistant knows what you said this morning. Learn more
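MCP tool calls travel as JSON-RPC 2.0 `tools/call` requests. As a rough sketch of what an agent sends to any MCP server (the tool name `search_meetings` and its arguments are hypothetical illustrations, not Resonant's actual tool names):

```python
import json

def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP (JSON-RPC 2.0) tools/call request as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical query: ask the voice workspace for this morning's meetings.
req = mcp_tool_call("search_meetings", {"query": "standup", "date": "today"})
print(req)
```

The agent never sees raw audio — it sends a request like this and gets back structured transcript data, all over a local connection.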
Meeting transcription with speaker labels
Dual-channel recording — your mic and system audio on separate channels. NVIDIA Sortformer diarization identifies who said what. No bot joins the call. No audio leaves your Mac. Learn more
Ambient context capture
Passively records which apps you use, window titles, URLs, and dwell time — all locally. This makes dictation context-aware and gives your AI tools a queryable work timeline. Learn more
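Because the timeline lives in a local structured store, it can be queried like any other data. A minimal sketch with an in-memory SQLite table — the table name and columns here are illustrative only, not Resonant's actual storage format:

```python
import sqlite3

# Hypothetical schema for a local, queryable work timeline.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE ambient_context (
    app TEXT, window_title TEXT, url TEXT, dwell_seconds INTEGER
)""")
db.executemany(
    "INSERT INTO ambient_context VALUES (?, ?, ?, ?)",
    [("Safari", "PubMed search", "pubmed.ncbi.nlm.nih.gov", 340),
     ("Zoom", "Case conference", None, 1800),
     ("Mail", "Insurance follow-up", None, 120)],
)
# Which apps did I spend the most time in today?
rows = db.execute(
    "SELECT app, SUM(dwell_seconds) FROM ambient_context "
    "GROUP BY app ORDER BY 2 DESC"
).fetchall()
print(rows)  # [('Zoom', 1800), ('Safari', 340), ('Mail', 120)]
```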
Two on-device speech models
NVIDIA Parakeet TDT v3 (0.6B, 25 languages) and Qwen3 ASR (0.6B, 30+ languages), both compiled to CoreML and running on Apple Neural Engine. Under 4% WER on English benchmarks. Learn more
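Word error rate (WER) is the standard ASR metric: the word-level edit distance (insertions, deletions, substitutions) between the reference and the hypothesis, divided by the reference length. A minimal sketch of the computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the patient was seen today", "the patient was seen"))  # 1 deletion / 5 words = 0.2
```

"Under 4% WER" means fewer than 4 word-level edits per 100 reference words on the benchmark test sets.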
Cloud cleanup with hallucination detection
Optional AI post-processing fixes STT errors and adapts to context (email, message, code). Guardrails detect when the LLM rewrites your meaning instead of cleaning your grammar. Learn more
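One simple guardrail of this kind — a sketch of the general technique, not Resonant's actual implementation — compares the cleaned text against the raw transcript: grammar cleanup preserves most of the original content words, while a hallucinated rewrite replaces them.

```python
def content_overlap(raw: str, cleaned: str) -> float:
    """Fraction of the raw transcript's words that survive into the cleaned text."""
    strip = lambda text: {w.strip(".,!?") for w in text.lower().split()}
    raw_words, cleaned_words = strip(raw), strip(cleaned)
    if not raw_words:
        return 1.0
    return len(raw_words & cleaned_words) / len(raw_words)

def looks_like_rewrite(raw: str, cleaned: str, threshold: float = 0.6) -> bool:
    """Flag LLM output that preserves too little of the original wording."""
    return content_overlap(raw, cleaned) < threshold

raw = "uh so the patient um was seen on tuesday"
good = "The patient was seen on Tuesday."
bad = "The clinic schedule looks busy this week."
print(looks_like_rewrite(raw, good))  # False: cleanup kept the content words
print(looks_like_rewrite(raw, bad))   # True: the meaning was replaced
```

A production guardrail would be more sophisticated (semantic similarity, named-entity checks), but the principle is the same: measure how far the output drifts from the input and reject cleanups that drift too far.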
Start with private Mac dictation
Local speech recognition is free and runs on your Mac. Pro adds cloud cleanup, rewrites, summaries, and sharing when you want the full workflow.