Resonant + Confluence
Confluence is where documentation goes to rot, because typing three thousand words after a long meeting is nobody's idea of a good time. Voice flips that. Dictate the runbook, the ADR, or the onboarding page while the context is still loaded in your head.
Press a key, narrate the page, and clean prose lands in Confluence. On device. No cloud audio. Works in Cloud and Data Center.
Typed vs. dictated
Runbook section
Typed: “restart the worker if queue backs up”
Dictated: “If the background job queue exceeds 50,000 pending items for more than ten minutes, the worker pool is likely stuck on a poison message. First, check the dead-letter queue in the admin console for a message that has been retried more than five times — that's almost always the culprit. Move it to quarantine. Then, restart the worker pool from the deploy dashboard; do not use kubectl directly because the graceful shutdown hook depends on the sidecar container receiving a SIGTERM in a specific order. After restart, watch the queue depth for five minutes. If it doesn't drain, escalate to the platform team — there's a known issue with the Redis client under head-of-line blocking that may require a full Redis failover.”
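The dictated version is concrete enough to script. As a minimal sketch, here is the trigger condition it describes — queue depth above 50,000 for ten straight minutes — written as a small Python helper. The function name and the per-minute depth samples are hypothetical; you would feed it readings from your own monitoring.

```python
def should_escalate(depth_samples, threshold=50_000, window=10):
    """Return True when the queue has stayed above `threshold`
    for `window` consecutive per-minute samples — the runbook's
    signal that the worker pool is stuck on a poison message."""
    if len(depth_samples) < window:
        return False  # not enough history to judge yet
    return all(depth > threshold for depth in depth_samples[-window:])

# Steady backlog above 50k for ten minutes: follow the restart path.
assert should_escalate([60_000] * 10)

# Queue draining after a restart: keep watching, don't escalate.
assert not should_escalate([60_000] * 5 + [10_000] * 5)
```

The typed version — “restart the worker if queue backs up” — contains none of the numbers this sketch needs; the dictated version contains all of them.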
Architecture decision record
Typed: “we chose postgres over mongo”
Dictated: “We evaluated Postgres and MongoDB for the events service. The decision was Postgres, for three reasons. First, 90% of our queries are relational — joins between events, users, and accounts — and the MongoDB queries were already looking like manual joins in application code. Second, the team already operates Postgres for four other services, so the on-call burden does not increase. Third, the JSONB support covers the schemaless portions of the event payload without giving up transactional guarantees. The tradeoff we accepted: write throughput is lower than MongoDB, and we will need to revisit partitioning when events cross 100 million rows. We estimate that is twelve months away based on current growth.”
Onboarding page
Typed: “ask your manager for access”
Dictated: “On your first day, you'll need access to four systems before you can do anything useful: Okta (your manager provisions this before you arrive), GitHub (request through the IT portal — the template is called 'Engineering GitHub Access'), our staging AWS account (ping #platform-help with your Okta username; it takes about an hour), and the internal wiki (automatic once Okta is set up, but sometimes lags by a day). If any of these are missing by end of day one, message your onboarding buddy — don't wait until day two, because several of the provisioning flows are batch jobs that only run overnight and you'll lose a day.”
The insight
Every runbook, ADR, and onboarding page has a half-life. The moment the meeting ends, the architecture review wraps, or the incident is resolved, the useful details start evaporating. An hour later, you remember the decision but not the tradeoffs you considered. A day later, you remember the outline but not the examples. A week later, the page gets written as a sanitized summary that skips the specific things future readers will need.
This is why documentation culture is so hard to build. The task isn't hard — the writing is just cold. And typing speed makes it worse: by the time you've typed the first paragraph, the context is already fading.
Voice captures documentation at the speed of speech. You dictate the runbook immediately after the incident, while every step is still sharp. You narrate the ADR while the argument is still fresh. The finished page contains the details that would have been lost in any other workflow.
Write it later
Write the doc tomorrow, or next week. Half the specifics are gone. The examples get vague. The warnings get generic. The document becomes a placeholder that future readers still have to ask someone about.
The tribal knowledge stays tribal.
Dictate it now
Dictate the doc within ten minutes of the event that caused it. The specific commands, the specific numbers, the specific edge cases are all still there. The document reads like someone who actually did the thing wrote it.
Because they did, in their own voice.
Where it fits
The best time to write an operational runbook is thirty minutes after you fixed the outage. Every detail is still in your head. Voice captures it before the details fade and the runbook becomes a three-line stub.
You just spent an hour deciding between three approaches. Dictate the decision record while the tradeoffs are still sharp — the reasons you rejected the other two, the risks you accepted, the conditions that would trigger a revisit.
The first engineer you onboard always gets the best experience. By the fifth, you've forgotten what was confusing. Voice lets you narrate the onboarding after each session, while the new hire's questions are still echoing.
A meeting ends, context scatters within minutes. Before the next one starts, dictate the decisions, the open questions, the owners, and the deadline — into the Confluence page, not a Slack thread that disappears.
Research notes typed after the fact become sanitized summaries that omit the interesting parts. Voice captures the dead ends, the surprises, and the 'this didn't work but we should revisit' footnotes that make the document useful.
Nobody wants to type a three-thousand-word policy document. Voice makes it tolerable — and because spoken language is more conversational, the finished page reads more naturally than anything you would have typed.
Architecture
All transcription runs locally on your Mac. No audio is sent to Resonant, to Atlassian, or to any speech-to-text service. The neural models execute on Apple Silicon in real time, and the audio buffer is discarded as soon as the text is produced.
Internal documentation routinely contains things that should never travel through a cloud audio pipeline: unreleased product names, customer details, architectural secrets, incident timelines, hiring notes. With Resonant, those details stay on the machine that produced them.
From Confluence's perspective, Resonant is indistinguishable from the keyboard. There is no integration, no API token, and no new vendor to vet. Security review becomes "it's a keyboard replacement" — and it passes.
Questions
Does dictation work in the Confluence page editor?
Yes. Resonant types into whatever text field has focus, and Confluence's editor receives the text exactly as if you were typing it. Paragraphs, headings, bullet lists, code blocks, and inline formatting all work the same way.
Can I dictate into comments, tables, and titles?
Yes. Page comments, inline comments, table cells, and the page title all accept dictated text. The same hotkey works in every field — Resonant does not need a Confluence integration to operate.
Does my audio go to the cloud?
No. Transcription runs on-device using local neural models (Whisper, Parakeet, or Moonshine, depending on your preference). No audio leaves your Mac. Only the finished text reaches Confluence, and it does so through normal keyboard input.
Does it support Confluence Cloud and Data Center?
Both. Resonant does not care which Atlassian hosting model you use — it types into the focused field regardless. Data Center, Server, and Cloud all work identically from Resonant's perspective.
How long can a single dictation session be?
As long as you like. There is no hard limit on a single dictation session. Most people dictate in paragraph-length bursts, pause to think, then continue. The hotkey starts and stops recording on demand.
Free. Local. Works in every Confluence field.
Runbooks that actually run. ADRs with real reasoning. Zero typing tax, zero cloud audio.
Requires macOS 14+ · Apple Silicon