Resonant

Resonant + Zed

Zed is fast.
Your typing is the bottleneck.

You picked Zed because you care about milliseconds. The editor renders at 120fps. Your LSP is tuned. Your shell is minimal. The one remaining bottleneck between thought and text is your keyboard — a 40-wpm interface pretending to be an IDE.

Resonant is the local voice layer for that bottleneck. Press a key, speak a commit message or a doc comment or an AI prompt, and clean text lands in Zed. On-device speech recognition. No cloud audio.

Used by developers at

Anthropic · Google · Nvidia · Stripe

Typed vs. dictated

Same moment in the editor.
A durable artifact instead of a shrug.

Commit message

Typed · 2 words — future you will hate it

fix bug

Dictated · 86 words — real history in 14 sec

Fix race condition in the session refresh worker. The previous implementation assumed that the refresh token mutation would settle before the next scheduled tick, but under load the scheduler would kick off a second refresh while the first one was still mid-flight, causing both calls to invalidate each other. The fix adds a single-flight guard keyed on the session ID so only one refresh can be in progress per session at a time. Also adds a regression test that simulates the timing.

Doc comment on a tricky function

Typed · 3 tokens — technical debt

// TODO: explain

Dictated · 79 words — context someone can act on

This function looks like a simple debounce but it is not. It holds the last argument, resets a timer on every call, and on trailing-edge fires the stored argument — but if the caller passes a new reference that is shallow-equal to the last one, it skips the fire entirely. This is an optimization we added after the ProfilePanel re-render storm of last month. Do not simplify without reading issue #482 first.
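For the curious, the function that comment describes might look like this minimal TypeScript sketch. The names are hypothetical, and the shallow-equal check is assumed to compare against the last argument actually fired:

```typescript
// Hypothetical sketch of the function the comment above describes:
// a trailing-edge debounce that skips firing when the new argument
// is shallow-equal to the last one it actually fired with.
type Arg = Record<string, unknown>;

function shallowEqual(a: Arg, b: Arg): boolean {
  const keysA = Object.keys(a);
  const keysB = Object.keys(b);
  if (keysA.length !== keysB.length) return false;
  return keysA.every((k) => a[k] === b[k]);
}

function debounceWithSkip(fn: (arg: Arg) => void, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let pending: Arg | undefined;
  let lastFired: Arg | undefined;

  return (arg: Arg) => {
    pending = arg;                   // hold the last argument
    if (timer) clearTimeout(timer);  // reset the timer on every call
    timer = setTimeout(() => {
      // Skip the trailing-edge fire if the pending argument is
      // shallow-equal to the one we last fired with.
      if (pending && lastFired && shallowEqual(pending, lastFired)) return;
      if (pending) fn((lastFired = pending));
    }, waitMs);
  };
}
```

The point of the comparison stands either way: none of this nuance fits in `// TODO: explain`.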

Zed AI chat prompt

Typed · 2 words — model guesses wildly

refactor this

Dictated · 82 words — spec in 14 sec

Refactor this module to extract the retry policy into its own struct, then inject it into the HTTP client. The goal is to make the retry behavior unit-testable without spinning up a real network, and to let the CLI override the default policy with a flag. Keep the existing public API shape — the struct should be the implementation detail, not the interface. Match the style of the existing BackoffPolicy in the auth module.
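The shape of the refactor that prompt specifies can be sketched in a few lines. This is a hypothetical illustration, not code from any real module (the prompt's wording suggests Rust; TypeScript is used here for brevity, and every name is invented):

```typescript
// Hypothetical sketch of the refactor the prompt asks for: retry
// behavior lives in an injectable policy, so it can be unit-tested
// with a fake request function and zero delays.
interface RetryPolicy {
  maxAttempts: number;
  delayMs(attempt: number): number;
}

// Default policy: exponential backoff. A CLI flag could swap this out.
const defaultPolicy: RetryPolicy = {
  maxAttempts: 3,
  delayMs: (attempt) => 100 * 2 ** attempt,
};

class HttpClient {
  // The policy is an implementation detail injected at construction;
  // the public request() signature does not change.
  constructor(private policy: RetryPolicy = defaultPolicy) {}

  async request<T>(send: () => Promise<T>): Promise<T> {
    let lastError: unknown;
    for (let attempt = 0; attempt < this.policy.maxAttempts; attempt++) {
      try {
        return await send();
      } catch (err) {
        lastError = err;
        await new Promise((r) => setTimeout(r, this.policy.delayMs(attempt)));
      }
    }
    throw lastError;
  }
}
```

Injecting the policy keeps the retry math and the network-facing code separately testable, which is exactly the outcome the dictated prompt asks for and the two-word prompt leaves to chance.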

The insight

You already tuned
every other layer.
Tune the input one.

The Zed crowd is the speed crowd. You benchmarked your terminal. You have opinions about how many milliseconds are acceptable between keystroke and paint. You moved off VS Code specifically to stop watching an Electron frame drop while you tried to write a for loop.

Everything in your stack has been optimized except one thing: the interface between your brain and the editor. You can think at thousands of words per minute. You can speak at two hundred. You can type at forty. Every doc comment, commit message, and AI prompt you write passes through that forty-wpm funnel, and the funnel is where the quality goes to die.

Voice dictation isn’t about writing code — code still flows through muscle memory and autocomplete. It’s about everything around the code: the prose that explains it, the prompts that steer AI to help with it, the messages that tell future teammates why it exists. That’s where speaking wins.

The keyboard ceiling

At 40 words per minute, every sentence you write costs attention. You start estimating whether the comment is worth the keystrokes. Almost nothing clears that bar, so almost nothing gets written. Your repo accumulates silent code and commit messages that say “wip.”

The problem isn’t discipline. It’s the input rate.

The voice upgrade

At 200 words per minute, the math reverses. The commit message gets written because it’s faster to say it than to decide it wasn’t worth saying. The Zed AI prompt gets the full context because talking the context out is how you were thinking about it anyway.

The editor stays fast. So does everything around it.

Where it fits

Six moments around the code
where voice earns its keep.

Commit messages

Good commit messages explain what changed and why. Bad commit messages say “fix stuff.” The difference is almost always typing fatigue. Dictate the commit while the change is still loaded in your head and your git history becomes a document worth reading.

Doc comments

The comments that matter most are the ones explaining non-obvious design choices and subtle invariants. Those comments require prose, and prose is what engineers skip. Voice makes a two-paragraph doc comment cost the same as a one-line one.

Zed AI prompts

The quality of a Zed AI response is directly proportional to the context in your prompt. Dictate the constraints, the adjacent style, the failure modes you want avoided, and the shape of the answer you want. You’ll run out of laziness before you run out of useful context.

Pull request descriptions

A PR description that explains the motivation, the approach, and the risk is worth ten rounds of review comments. Voice is how that description gets written in the two minutes between pushing and opening the PR.

README and docs edits

README files rot because keeping them current is a typing chore. Dictate the paragraph that explains the new env var or the new build flag while you’re still in the editor. The docs stay honest.

Scratchpad thinking

Every Zed user has a scratch buffer or a notes file they think out loud into. Voice makes that buffer useful — half-formed ideas, design notes, and debugging hypotheses get captured instead of evaporating when the terminal clears.

Architecture

Local models.
Nothing leaves the machine.

Resonant uses on-device speech models that run on the Neural Engine in Apple Silicon. There is no network call during transcription. Your audio buffer is processed in memory and discarded. What arrives in Zed is the same text you would have typed, just faster.

This matters for anyone working in a private repo. The prompt you dictate into Zed AI might include unreleased API names, proprietary algorithms, customer data schemas, or an internal security concern. A cloud dictation service would route all of that through a third-party speech pipeline. Resonant does not.

There’s no extension to install in Zed, no account to create, no background telemetry stream. Resonant is a macOS utility that writes text into whatever app has focus, and it treats your words the way a good editor treats your files — as yours.

Free. Local. On-device speech models.

The last speed upgrade
for Zed users.

Real commit messages, real doc comments, real AI prompts — at the speed you think, not the speed you type.

Requires macOS 14+ · Apple Silicon