
Philosophy · Jan 28, 2025

Why We Built a Local-Only Voice App

Cloud dictation works. You talk, it transcribes, the text appears. The problem isn't accuracy or speed — it's where your words go.

Every cloud dictation service sends your audio to a remote server. Your voice, whether you're dictating therapy notes, journal entries, legal briefs, or private messages, passes through infrastructure you don't control. Most services say they don't store it. Some do. Either way, the data leaves your machine.

We didn't think that was necessary anymore.

The hardware caught up

Apple Silicon changed the equation. The Neural Engine in modern Macs can run speech recognition models that rival cloud services — entirely on-device, with no network connection required. The quality gap that justified cloud processing has closed.

Resonant uses these on-device models to process your voice locally. Your audio never leaves your Mac. There's no upload, no server, no account required. The transcription happens in milliseconds, right where you are.
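To make that concrete, here is a minimal sketch of what fully on-device transcription looks like with Apple's Speech framework. It is illustrative only, not Resonant's implementation; the function name and locale are placeholders, and it assumes the user has already granted speech-recognition permission via SFSpeechRecognizer.requestAuthorization.

```swift
import Speech

// Sketch: transcribe an audio file without the audio ever leaving the Mac.
// requiresOnDeviceRecognition tells the framework to fail rather than
// fall back to a server. (Illustrative example, not Resonant's code.)
func transcribeLocally(fileURL: URL, completion: @escaping (String?) -> Void) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else {
        completion(nil)  // on-device recognition unavailable for this locale
        return
    }

    let request = SFSpeechURLRecognitionRequest(url: fileURL)
    request.requiresOnDeviceRecognition = true  // never upload audio

    // Keep the returned task if you need to cancel it; here we let it run.
    _ = recognizer.recognitionTask(with: request) { result, error in
        if let result = result, result.isFinal {
            completion(result.bestTranscription.formattedString)
        } else if error != nil {
            completion(nil)
        }
    }
}
```

A real pipeline involves more than this, but the principle is the same: the recognition request is constrained to the device, so there is nothing to upload.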

Privacy as architecture

Most apps treat privacy as a policy. We treat it as architecture. Resonant doesn't have a server to send your data to. There's no telemetry, no analytics on your dictation content, no cloud sync of your voice recordings.

This isn't a feature toggle. It's how the app is built. Your words stay on your device because there's nowhere else for them to go.

What this means for you

You can dictate anything — medical notes, legal documents, personal thoughts, sensitive business communications — without wondering who else might be listening. The app works offline. It works on airplanes. It works in classified environments. It just works, locally.

That's why we built Resonant. Not because cloud dictation is broken, but because your voice deserves to stay yours.