MCP Server — 11 tools
Resonant exposes an MCP server that lets any AI agent — including Claude and Codex — query everything you've said, done, and seen. Your voice history becomes a knowledge base your AI assistant can search.
No competitor does this. Your dictations, meetings, memos, app usage, and daily journal — all queryable by your AI tools, all stored locally on your Mac.
What is MCP
Model Context Protocol (MCP) is an open standard that connects AI assistants to external data sources. Instead of you copy-pasting context into prompts, your AI tool queries the source directly.
Resonant implements an MCP server that exposes your entire voice workspace — dictations, meetings, memos, ambient context, and journal entries — as structured, searchable data. Your AI assistant doesn't guess what you've been working on. It knows.
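Under the hood, MCP is JSON-RPC 2.0 over a transport like stdio. When your assistant calls one of the tools below, the exchange looks roughly like this — the method and envelope follow the MCP spec; the payload here is illustrative, mirroring the `search` examples later on this page:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": { "query": "standup", "type": "meeting", "limit": 1 }
  }
}
```

And the server replies with structured content the assistant can fold into its prompt:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [
      { "type": "text", "text": "From your 9:30am standup (Zoom, 12 min): ..." }
    ]
  }
}
```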
Without MCP
“In my standup this morning I mentioned the auth migration — I think I said we'd move to JWTs by Thursday? Can you check... actually let me find the notes first...”
Context lost. Time wasted searching. AI can't help because it doesn't know what you said.
With Resonant MCP
“What did I commit to in this morning's standup?”
Claude queries your meeting transcript, finds the JWT migration commitment, and gives you the exact quote with timestamp.
In practice
What did I commit to in this morning’s standup?
MCP tool call
search({ query: "standup", type: "meeting", limit: 1 })
From your 9:30am standup (Zoom, 12 min): You committed to finishing the JWT migration by Thursday, mentioned the rate limiter needs load testing before deploy, and asked Sarah to review the webhook retry PR.
In Cursor: “I described an API design earlier today — find it and use it as the spec for this endpoint.”
MCP tool call
search({ query: "API design", type: "dictation", limit: 5 })
Found a dictation from 2:14pm in VS Code (auth-service/routes.ts): "The endpoint should accept a Bearer token, validate against the JWKS endpoint, return a 401 with a retry-after header if the token is expired, and rate limit to 100 requests per minute per org..."
What was I working on yesterday afternoon?
MCP tool call
ambient_timeline({ date: "2026-03-28", start: "12:00", end: "18:00" })
Yesterday afternoon: VS Code (auth-service) 12:00–14:30, Slack (eng-team) 14:30–14:45, Chrome (Grafana dashboard) 14:45–15:20, back to VS Code (webhook-retry) 15:20–17:00. 3 dictations and 1 memo recorded during this window.
In Cursor: “I recorded a memo about the notification system architecture — use it as context for this refactor.”
MCP tool call
search({ query: "notification architecture", type: "memo" })
Found memo from Mar 26 (3 min 42s): "Three event types — user actions, system events, scheduled digests. Different retention policies. Leaning toward three separate tables over polymorphic. Fan-out for digests should be async via a job queue, not inline..."
Tool reference
Each tool is a structured API that your AI assistant can call. Search across your entire voice history, pull meeting transcripts, read journal entries, and understand your ambient work context.
Find anything across your voice history
Full-text search across all dictations, meetings, and memos. Returns matched items with timestamps, app context, and surrounding text.
"search for mentions of JWT migration"
Returns a feed of recent voice activity — dictations, memos, meetings — ordered by time. Configurable limit and type filters.
"what have I been working on today?"
Understand your work environment
Returns the ambient context timeline — a chronological record of app switches, window titles, URLs, and dwell time throughout the day.
"what was I doing between 2pm and 4pm?"
Aggregated app usage statistics from ambient context. Shows which apps you used, for how long, and when — grouped by time window.
"how much time did I spend in Slack today?"
Returns the current ambient context snapshot — active app, window title, URL, selected text, and visible screen content.
"what am I looking at right now?"
Access specific items and metadata
Retrieves the auto-generated daily journal for a specific date. Includes timeline, highlights, and task blocks grouped by app.
"show me my journal from yesterday"
Retrieves a specific dictation by ID with full metadata — raw transcript, cleaned text, app context, window title, URL, and duration.
"get the details of that dictation I did in VS Code"
Retrieves a full meeting transcript with speaker labels, timestamps, duration, and the detected meeting app (Zoom, Meet, Teams, etc.).
"pull the transcript from my 10am meeting"
Retrieves a voice memo with its auto-generated title, AI summary, transcript, and duration.
"find that memo I recorded about the API redesign"
Returns voice workspace statistics — total words, dictation count, meeting hours, streak data, and progression level.
"how many words have I dictated this week?"
Returns the current Resonant configuration — active model, language, trigger key, cloud cleanup status, and enabled features.
"what model is Resonant using?"
Compatibility
Claude Code — Auto-discovers
Cursor — Auto-discovers
VS Code — Manual config
Windsurf — Manual config
Any MCP-compatible client works. Resonant uses the standard stdio transport.
Setup
1. Download and run Resonant on your Mac. The MCP server starts automatically in the background.
2. Claude Code and Cursor auto-discover the MCP server. For VS Code and others, add the server config to your MCP settings.
3. Ask: “What did I discuss in my last meeting?” “What was I working on yesterday?” Your AI tool queries Resonant and gets real answers.
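For clients that need manual setup, the config is typically a short JSON entry. The shape below follows the `mcpServers` convention used by Cursor and Claude Code (VS Code uses a similar `servers` key in `mcp.json`); the command path is a placeholder — check Resonant's settings for the actual binary:

```json
{
  "mcpServers": {
    "resonant": {
      "command": "/Applications/Resonant.app/Contents/MacOS/resonant-mcp",
      "args": []
    }
  }
}
```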
Privacy
The MCP server runs locally inside Resonant. Your AI tool communicates with it over a local stdio pipe — no network requests, no cloud relay, no API keys. The voice data that feeds these tools never leaves your machine.
When Claude or Cursor queries your meeting transcript, the query and the response both stay on your device. The only thing that reaches the AI provider is the text your tool decides to include in its prompt — the same text you'd paste manually.
MCP doesn't create a new privacy surface. It automates the copy-paste you were already doing.
Free. Local. Always.
Everything you say, structured and searchable. No subscription. No cloud. Just your voice, your Mac, and your AI assistant.
Requires macOS 14+ · Apple Silicon