Transparency Report

OneMeet Transcription Accuracy: Methodology and Testing Results

How we measure, verify, and report our 98.7% transcription accuracy across nine languages — and what that number actually means in practice.

OneMeet Engineering & Research
98.7%

Average transcription accuracy (1 − WER) across nine languages

Measured against standard reference transcripts using industry-recognised test sets. English, Japanese, German, French, Korean, Spanish, Portuguese, Mandarin Chinese, and Dutch.

What "98.7% Accuracy" Means

Transcription accuracy is measured using Word Error Rate (WER), the industry-standard metric for speech recognition quality. WER counts the minimum number of word substitutions, deletions, and insertions needed to transform the system's transcript into the reference transcript, divided by the total number of words in the reference.

Formula

WER = (Substitutions + Deletions + Insertions) ÷ Total Reference Words

Accuracy = 1 − WER, so a reported accuracy of 98.7% corresponds to a WER of 1.3%

A WER of 1.3% means that, on average, about 1.3 of every 100 spoken words are transcribed incorrectly. In a 60-minute university lecture of approximately 9,000 words, this corresponds to roughly 117 word-level errors — typically minor substitutions (e.g., "affect" for "effect") rather than dropped words.
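To make the metric concrete, here is a minimal sketch of how WER can be computed as a word-level minimum edit distance. The function name `word_error_rate` is ours for illustration; OneMeet's published figures are scored with NIST SCLITE, not this snippet.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed via minimum edit distance over word sequences."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn hyp[:j] into ref[:i]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, `word_error_rate("the cat sat on the mat", "the cat sat on a mat")` yields one substitution over six reference words, i.e. a WER of about 16.7%.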

Test Conditions

All accuracy measurements were conducted under the following conditions:

| Parameter | Specification |
| --- | --- |
| Audio input | Microphone capture at 16 kHz, 16-bit PCM |
| Environment | Quiet room (≤30 dB ambient noise) and moderate lecture-hall noise (40–55 dB) |
| Speaker profile | Native, non-native, and mixed-accent speakers per language |
| Content type | Academic lecture excerpts, business meeting recordings, structured monologue |
| Reference transcripts | Human-annotated ground-truth transcripts reviewed by two independent annotators |
| Test set size | Minimum 2 hours of audio per language |
| Measurement tool | NIST SCLITE, the standard scoring toolkit for ASR evaluation |

Languages Tested and Per-Language Results

OneMeet was tested across all nine supported languages. Results are reported as accuracy (1 − WER), rounded to one decimal place.

| Language | Accuracy | Notes |
| --- | --- | --- |
| English | 99.2% | Broadest training data; highest accuracy |
| Spanish | 99.0% | Both Latin American and Castilian Spanish |
| Portuguese | 98.9% | Brazilian and European Portuguese |
| French | 98.8% | Including academic and formal register |
| German | 98.7% | Including compound nouns and Fachsprache |
| Mandarin Chinese | 98.6% | Simplified character output; scored as character error rate (CER) |
| Korean | 98.5% | Including agglutinative morphology |
| Dutch | 98.4% | Including code-switching with English |
| Japanese | 98.3% | Including kanji, hiragana, and katakana output |

The 98.7% figure reported in OneMeet marketing materials is the unweighted average across all nine languages under standard test conditions.
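The averaging is straightforward to verify from the per-language table; a minimal sketch, using the figures reported above:

```python
# Per-language accuracy figures from the table above (percent).
per_language = {
    "English": 99.2, "Spanish": 99.0, "Portuguese": 98.9,
    "French": 98.8, "German": 98.7, "Mandarin Chinese": 98.6,
    "Korean": 98.5, "Dutch": 98.4, "Japanese": 98.3,
}

# Unweighted mean across the nine languages, rounded to one decimal place.
average = round(sum(per_language.values()) / len(per_language), 1)
print(average)  # → 98.7
```

Note that this is an unweighted mean: each language contributes equally regardless of its test-set size or real-world usage share.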

Comparison Baseline

OneMeet's accuracy was benchmarked against two reference points:

  • Industry average for multilingual ASR (2024–2025): Published benchmarks for general-purpose multilingual speech recognition systems report average WER of 3–8% across similar language sets (corresponding to 92–97% accuracy). OneMeet's 1.3% WER represents a meaningful improvement over this baseline.
  • Human transcription accuracy: Professional human transcribers typically achieve 99.0–99.5% accuracy on clear audio. OneMeet's 98.7% average approaches human-level accuracy for clean audio conditions.

Known Limitations

Accuracy degrades in the following conditions. We report these transparently so users can set appropriate expectations:

  • Heavy background noise (>60 dB ambient): accuracy typically drops to 93–96%
  • Strong regional accents not well-represented in training data: accuracy can drop 2–4 percentage points
  • Code-switching mid-sentence (e.g., Dutch-English in a single sentence): partial accuracy reduction, typically 1–3 percentage points
  • Highly technical domain vocabulary not in the base model's vocabulary: proper nouns, very new terms, and niche technical jargon may be misrecognised
  • Low-quality microphones or lossy audio compression (e.g., .mp3 at <128kbps): accuracy reduction of up to 5 percentage points
  • Multiple simultaneous speakers: speaker diarisation accuracy is separate from transcription WER and varies by number of speakers

How We Continuously Improve

Accuracy metrics are reviewed quarterly. System updates are triggered when any language's accuracy drops below 98% on our internal test suite, or when a major model version becomes available. User-reported corrections contribute to our training pipeline through a privacy-preserving opt-in programme.

Using This Data

You are welcome to cite OneMeet's accuracy figures in academic work, journalism, or product comparisons. When citing, please reference this page and include the date you last accessed it. If you need raw WER numbers or test set details for research purposes, contact research@onemeet.ai.

Citation

OneMeet (2026). Transcription Accuracy Methodology. Retrieved from https://onemeet.ai/accuracy-methodology
