Resonant

Resonant — Voice for AI-native work

Your AI tools
have no memory.
Give them one.

Resonant captures everything you say — dictations, meetings, memos — and makes it queryable by any AI agent, including Claude and Codex, via MCP. Your voice becomes a knowledge base your AI tools can search.

11 MCP tools. Ambient workspace context. On-device speech recognition. No audio leaves your Mac, ever.

Used by engineers at

Anthropic · Cursor · Google · Nvidia · Stripe

MCP — the memory layer

Your AI assistant
knows what you said
this morning.

Resonant exposes an MCP server with 11 tools. Claude Code, Codex, and more can query your meetings, dictations, memos, ambient context, and daily journal — automatically.

No copy-pasting transcripts. No “let me find my notes.” Your AI tool asks Resonant directly and gets structured data back — with timestamps, speaker labels, and app context.
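Connecting a client to a local MCP server uses standard MCP config. A hedged sketch of what registering Resonant in Claude's `mcpServers` config might look like — the binary path and `--mcp` flag are assumptions, not Resonant's documented values:

```json
{
  "mcpServers": {
    "resonant": {
      "command": "/Applications/Resonant.app/Contents/MacOS/Resonant",
      "args": ["--mcp"]
    }
  }
}
```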

What did I commit to in this morning's standup?

search("standup", type: "meeting")

From your 9:30am standup: You committed to finishing the JWT migration by Thursday and asked Sarah to review the webhook retry PR.

I described an API design earlier — find it and use it as the spec.

search("API design", type: "dictation")

Found dictation from 2:14pm in VS Code: "The endpoint should accept a Bearer token, validate against the JWKS endpoint, return a 401 with retry-after..."

What was I working on yesterday afternoon?

ambient_timeline(date: "yesterday", start: "12:00")

VS Code (auth-service) 12:00–14:30, Slack (eng-team) 14:30–14:45, Chrome (Grafana) 14:45–15:20, VS Code (webhook-retry) 15:20–17:00.
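On the wire, each of those queries is an MCP `tools/call` JSON-RPC request. A sketch of the timeline query above — the tool name comes from the example, but the exact argument schema is an assumption:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "ambient_timeline",
    "arguments": { "date": "yesterday", "start": "12:00" }
  }
}
```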

The speed gap

You type at 40 wpm.
You speak at 200.
Your prompts suffer.

Every Cursor prompt you abbreviate because typing the full thing takes too long — that's a worse outcome. Every Claude message where you cut the context because the keyboard made it feel like work — that's a worse response.

Voice removes the bottleneck between your thinking and the model. Say everything. Every constraint, every edge case. Resonant transcribes it locally in under a second.

Typed prompt · ~40 wpm

fix token validation order in auth middleware

Missing: which file, which tokens, which environment, what you already tried, why the order matters.

Voice prompt — Resonant · ~200 wpm

open the auth middleware in the api folder — there's an edge case where tokens issued before the schema migration aren't being validated correctly, the expiry check runs before the version check, swap the order and add a log when it catches an old token so we can see how often it's hitting in prod

Full context. Dictated in 8 seconds. Runs locally on your Mac.

Ambient context

Resonant remembers
your workday.

Passively records which apps you use, window titles, URLs, and dwell time — all locally. This data feeds your daily journal, makes dictation context-aware, and is queryable by your AI tools via MCP.

“What was I working on before the meeting?” is a question your AI tool can answer — because Resonant saw the apps, the files, the time you spent. Learn more →
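Because the timeline comes back as structured data, an AI tool can do more than quote it. A minimal Python sketch of summing dwell time per app from a timeline like the one above — the field names are assumptions, not Resonant's actual schema:

```python
# Hypothetical shape of an ambient_timeline result; keys are illustrative.
timeline = [
    {"app": "VS Code", "context": "auth-service", "start": "12:00", "end": "14:30"},
    {"app": "Slack", "context": "eng-team", "start": "14:30", "end": "14:45"},
    {"app": "Chrome", "context": "Grafana", "start": "14:45", "end": "15:20"},
    {"app": "VS Code", "context": "webhook-retry", "start": "15:20", "end": "17:00"},
]

def minutes(t: str) -> int:
    """Convert 'HH:MM' to minutes since midnight."""
    h, m = t.split(":")
    return int(h) * 60 + int(m)

# Total dwell time per app, in minutes.
dwell: dict[str, int] = {}
for entry in timeline:
    span = minutes(entry["end"]) - minutes(entry["start"])
    dwell[entry["app"]] = dwell.get(entry["app"], 0) + span

print(dwell)  # VS Code: 150 + 100 = 250 min; Slack: 15; Chrome: 35
```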

Where it fits

Six moments in an AI-native
engineer's day.

Architecture brainstorms

Talk through the tradeoffs before you commit to anything. Resonant captures the reasoning — not just the conclusion. Query it later via MCP when you need to remember why.

Cursor prompt drafts

Say everything the function needs to do. Every constraint, every edge case, every 'oh and also.' Voice removes the pressure to be brief — and brief prompts produce worse code.

Meeting recall

Your AI tool queries your meeting transcripts via MCP. What did we decide? Who owns what? What was the deadline? Exact quotes with timestamps.

Error context dumps

Describe the stack trace, the reproduction steps, the thing you already tried. Give Claude the full picture. Stop getting answers to the question you typed instead of the problem you have.

Ambient context lookups

"What was I working on before lunch?" Your AI tool queries the ambient timeline and gets a real answer — which apps, which files, how long.

Memo capture → future context

Record a voice memo during a walk. It's transcribed, titled, and searchable. Next week when you need that insight, your AI tool finds it via MCP.

Real prompts

What you'd type vs.
what you'd actually say.

What you'd type instead · Cursor

fix token validation order in auth middleware

Dictated in 8 seconds · Cursor

open the auth middleware in the api folder — there's an edge case where tokens issued before the schema migration aren't being validated correctly, the expiry check runs before the version check, swap the order and add a log when it catches an old token so we can see how often it's happening in prod

What you'd type instead · Claude

help me design a notifications data model

Dictated in 12 seconds · Claude

I'm thinking through the data model for the new notifications service — we have three event types, user actions, system events, and scheduled digests, and they have different retention policies and different fan-out patterns, I'm leaning toward three separate tables rather than a polymorphic design but I want to think through the foreign key implications before I commit to that

What you'd type instead · ChatGPT

why is my test failing with nil pointer

Dictated in 10 seconds · ChatGPT

I'm looking at this stack trace — it's a nil pointer on the cache layer, but only in the test environment, and only when the test suite runs in parallel — I think it's a race condition in the mock setup but I want to understand if there's a pattern here before I start patching individual tests

Architecture

Your code. Your
architecture. Your
machine.

Resonant processes everything on your Mac using the Apple Neural Engine. Audio never leaves your device. The MCP server runs locally — queries and responses stay on your machine.

The finished text — the prompt, the comment, the memo — is the only thing that leaves your machine. Everything else stays local. How the on-device AI works →

How it works

One key. Anywhere.

01

Press the hotkey

Works in any text field — Cursor, ChatGPT, Claude, Slack, Notion, a terminal. No app to open.

02

Speak the full context

Say everything. Every constraint, every edge case. Voice removes the pressure to be concise.

03

Processed on your Mac

The Apple Neural Engine transcribes locally. No cloud, no round-trip, no exposure.

04

Clean text + memory

Text pastes into the active field. The dictation is saved to your workspace — searchable and MCP-queryable.

Free. Local. Always.

Give your AI tools
the memory they need.

Voice workspace with MCP. No subscription. No cloud. Just a hotkey, a Mac, and AI tools that remember.

Download for Mac

Requires macOS 14+ · Apple Silicon