Resonant + Confluence

Capture the doc
while the memory is warm.

Confluence is where documentation goes to rot, because typing three thousand words after a long meeting is nobody's idea of a good time. Voice flips that. Dictate the runbook, the ADR, or the onboarding page while the context is still loaded in your head.

Press a key, narrate the page, and clean prose lands in Confluence. On device. No cloud audio. Works in Cloud and Data Center.

Used by teams at

Anthropic · Google · Nvidia · Stripe

Typed vs. dictated

Same page.
Dramatically different value.

Runbook section

Typed · 8 words — nobody can follow

restart the worker if queue backs up

Dictated · 128 words — complete procedure

If the background job queue exceeds 50,000 pending items for more than ten minutes, the worker pool is likely stuck on a poison message. First, check the dead-letter queue in the admin console for a message that has been retried more than five times — that's almost always the culprit. Move it to quarantine. Then, restart the worker pool from the deploy dashboard; do not use kubectl directly because the graceful shutdown hook depends on the sidecar container receiving a SIGTERM in a specific order. After restart, watch the queue depth for five minutes. If it doesn't drain, escalate to the platform team — there's a known issue with the Redis client under head-of-line blocking that may require a full Redis failover.

Architecture decision record

Typed · 6 words — no reasoning

we chose postgres over mongo

Dictated · 121 words — durable record

We evaluated Postgres and MongoDB for the events service. The decision was Postgres, for three reasons. First, 90% of our queries are relational — joins between events, users, and accounts — and the MongoDB queries were already looking like manual joins in application code. Second, the team already operates Postgres for four other services, so the on-call burden does not increase. Third, the JSONB support covers the schemaless portions of the event payload without giving up transactional guarantees. The tradeoff we accepted: write throughput is lower than MongoDB, and we will need to revisit partitioning when events cross 100 million rows. We estimate that is twelve months away based on current growth.

Onboarding page

Typed · 5 words — useless

ask your manager for access

Dictated · 113 words — actually onboards

On your first day, you'll need access to four systems before you can do anything useful: Okta (your manager provisions this before you arrive), GitHub (request through the IT portal — the template is called 'Engineering GitHub Access'), our staging AWS account (ping #platform-help with your Okta username; it takes about an hour), and the internal wiki (automatic once Okta is set up, but sometimes lags by a day). If any of these are missing by end of day one, message your onboarding buddy — don't wait until day two, because several of the provisioning flows are batch jobs that only run overnight and you'll lose a day.

The insight

Documentation
written cold
loses eighty percent of the detail.

Every runbook, ADR, and onboarding page has a half-life. The moment the meeting ends, the architecture review wraps, or the incident is resolved, the useful details start evaporating. An hour later, you remember the decision but not the tradeoffs you considered. A day later, you remember the outline but not the examples. A week later, the page gets written as a sanitized summary that skips the specific things future readers will need.

This is why documentation culture is so hard to build. The task isn't hard — the writing is just cold. And typing makes it worse: by the time you've typed the first paragraph, the context is already fading.

Voice captures documentation at the speed of speech. You dictate the runbook immediately after the incident, while every step is still sharp. You narrate the ADR while the argument is still fresh. The finished page contains the details that would have been lost in any other workflow.

Cold capture

Write the doc tomorrow, or next week. Half the specifics are gone. The examples get vague. The warnings get generic. The document becomes a placeholder that future readers still have to ask someone about.

The tribal knowledge stays tribal.

Warm capture

Dictate the doc within ten minutes of the event that caused it. The specific commands, the specific numbers, the specific edge cases are all still there. The document reads like someone who actually did the thing wrote it.

Because they did, in their own voice.

Where it fits

Six Confluence pages
voice should write.

Runbooks while the incident is fresh

The best time to write an operational runbook is thirty minutes after you fixed the outage. Every detail is still in your head. Voice captures it before the details fade and the runbook becomes a three-line stub.

ADRs after the architecture review

You just spent an hour deciding between three approaches. Dictate the decision record while the tradeoffs are still sharp — the reasons you rejected the other two, the risks you accepted, the conditions that would trigger a revisit.

Onboarding pages after running onboarding

The first engineer you onboard always gets the best experience. By the fifth, you've forgotten what was confusing. Voice lets you narrate the onboarding after each session, while the new hire's questions are still echoing.

Meeting recaps into the project page

A meeting ends and context scatters within minutes. Before the next one starts, dictate the decisions, the open questions, the owners, and the deadlines — into the Confluence page, not a Slack thread that disappears.

Research writeups and experiments

Research notes typed after the fact become sanitized summaries that omit the interesting parts. Voice captures the dead ends, the surprises, and the 'this didn't work but we should revisit' footnotes that make the document useful.

Policy and process documentation

Nobody wants to type a three-thousand-word policy document. Voice makes it tolerable — and because spoken language is more conversational, the finished page reads more naturally than anything you would have typed.

Architecture

On-device transcription.
Only text reaches Confluence.

All transcription runs locally on your Mac. No audio is sent to Resonant, to Atlassian, or to any speech-to-text service. The neural models execute on Apple Silicon in real time, and the audio buffer is discarded as soon as the text is produced.

Internal documentation routinely contains things that should never travel through a cloud audio pipeline: unreleased product names, customer details, architectural secrets, incident timelines, hiring notes. With Resonant, those details stay on the machine that produced them.

From Confluence's perspective, Resonant is indistinguishable from the keyboard. There is no integration, no API token, and no new vendor to vet. Security review becomes "it's a keyboard replacement" — and it passes.

Questions

What teams ask
before they roll it out.

Does Resonant work inside the Confluence editor?

Yes. Resonant types into whatever text field has focus, and Confluence's editor receives the text exactly as if you were typing it. Paragraphs, headings, bullet lists, code blocks, and inline formatting all work the same way.

Can I dictate into page comments and inline comments?

Yes. Page comments, inline comments, table cells, and the page title all accept dictated text. The same hotkey works in every field — Resonant does not need a Confluence integration to operate.

Is the audio from my dictation uploaded to a server?

No. Transcription runs on-device using local neural models (Whisper, Parakeet, or Moonshine, depending on your preference). No audio leaves your Mac. Only the finished text reaches Confluence, and it does so through normal keyboard input.

Does this work in Confluence Cloud and Data Center?

Both. Resonant does not care which Atlassian hosting model you use — it types into the focused field regardless. Data Center, Server, and Cloud all work identically from Resonant's perspective.

How long a document can I dictate in one session?

As long as you like. There is no hard limit on a single dictation session. Most people dictate in paragraph-length bursts, pause to think, then continue. The hotkey starts and stops recording on demand.

Free. Local. Works in every Confluence field.

Write the doc
before the context fades.

Runbooks that actually run. ADRs with real reasoning. Zero typing tax, zero cloud audio.

Requires macOS 14+ · Apple Silicon