Understanding Reports
After a study concludes, OpenScouter compiles everything your testers experienced into a single, structured report. This page walks through every section so you know exactly what you are looking at and what to do next.
Accessibility Score
The score at the top of every report is a number from 0 to 100. It reflects the overall accessibility of your product based on barrier severity, how many testers encountered each barrier, and the strength of the corroborating evidence.
| Score range | Indicator | What it means |
|---|---|---|
| 0 to 49 | Red | Significant barriers are blocking testers. Immediate attention required. |
| 50 to 75 | Amber | Moderate barriers exist. Some testers are struggling in key areas. |
| 76 to 100 | Green | Testers completed tasks with few or no barriers. Minor improvements remain. |
The score changes every time you run a new test, so you can track progress across product releases.
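The score bands in the table above can be expressed as a small lookup. A minimal sketch for anyone processing exported scores (the function name `score_indicator` is illustrative, not part of any OpenScouter API):

```python
def score_indicator(score: int) -> str:
    """Map a 0-100 accessibility score to its report indicator.

    Bands follow the table above: 0-49 red, 50-75 amber, 76-100 green.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 49:
        return "red"
    if score <= 75:
        return "amber"
    return "green"
```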
Individual Findings
Each barrier OpenScouter detects becomes a finding in your report. Findings are not guesses. They are patterns observed across multiple testers, corroborated by behavioural and emotional signals.
Every finding includes the following fields.
Title - A short, plain-language label for the barrier (for example, “Form error messages are not announced to screen readers”).
Severity - One of three levels: critical, major, or minor. The priority matrix section below explains how severity is assigned.
WCAG criterion - The specific Web Content Accessibility Guidelines criterion that applies. For example, 1.4.3 Contrast (Minimum) or 4.1.3 Status Messages.
Evidence - A summary of what testers did and said. This includes clips of rage clicks, facial expression data, and verbatim quotes from testers.
Recommendation - A concrete fix. Where possible, recommendations include code examples or references to ARIA patterns.
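The fields above map naturally onto a simple record. A sketch of how a finding might be represented when working with report data outside OpenScouter (the class and field names are illustrative, not an official export schema):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One barrier from an OpenScouter report (illustrative structure)."""
    title: str            # short, plain-language label for the barrier
    severity: str         # severity tier, e.g. "critical"
    wcag_criterion: str   # e.g. "4.1.3 Status Messages"
    evidence: str         # summary of tester behaviour and quotes
    recommendation: str   # concrete fix, code example, or ARIA pattern

finding = Finding(
    title="Form error messages are not announced to screen readers",
    severity="critical",
    wcag_criterion="4.1.3 Status Messages",
    evidence="3 of 4 testers submitted the form without noticing the error.",
    recommendation="Announce the error summary via an ARIA live region.",
)
```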
Plain English Descriptions
Every finding has two descriptions side by side. The technical description uses WCAG language and is written for developers. The plain English description explains the same barrier as a human experience, for example: “When a tester using a screen reader submitted the form with an error, they had no way of knowing something went wrong.”
Use the plain English description when briefing stakeholders who are not familiar with accessibility standards.
Sentiment Timeline
The sentiment timeline shows how tester emotions changed across the full duration of the study. Time is divided into 30-second buckets. Each bucket shows the average emotional state of all testers at that moment.
Emotional states are inferred from a combination of facial expression analysis, voice tone, and interaction patterns. States shown include calm, focused, uncertain, frustrated, and distressed.
The timeline is useful for spotting where in a user journey frustration builds. A sharp spike in the frustrated state at the eight-minute mark, for example, often points directly to a checkout flow or form validation problem.
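The bucketing described above can be sketched in a few lines. Because the emotional states are categorical, this sketch assumes a numeric valence per state so that averaging is meaningful; the mapping and function name are illustrative, not how OpenScouter computes the timeline internally:

```python
from collections import defaultdict

# Illustrative valence scale; OpenScouter's internal weighting is not published.
VALENCE = {"calm": 0, "focused": 0, "uncertain": 1, "frustrated": 2, "distressed": 3}

def bucket_sentiment(samples, bucket_seconds=30):
    """Average tester emotion per 30-second bucket.

    `samples` is an iterable of (timestamp_seconds, state) pairs pooled
    across all testers; returns {bucket_index: mean_valence}.
    """
    buckets = defaultdict(list)
    for t, state in samples:
        buckets[int(t // bucket_seconds)].append(VALENCE[state])
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}
```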
Reading the Emotion Arc
Look for these patterns:
- Sustained frustration - Multiple consecutive buckets in a frustrated state suggest a persistent barrier, not a one-off confusion moment.
- Spike then recovery - A sharp frustrated spike followed by calm may indicate a confusing step that testers eventually worked around.
- Progressive deterioration - Emotion that worsens steadily across the test suggests cumulative friction, often caused by several moderate barriers compounding.
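The first of these patterns, sustained frustration, is straightforward to check programmatically. A sketch that flags runs of consecutive frustrated buckets (the three-bucket threshold is an assumption for illustration, not an OpenScouter default):

```python
def sustained_frustration(bucket_states, min_run=3):
    """Return (start, end) index pairs for runs of at least `min_run`
    consecutive 'frustrated' buckets in a sentiment timeline."""
    runs, start = [], None
    for i, state in enumerate(bucket_states):
        if state == "frustrated":
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i - 1))
            start = None
    # Close out a run that reaches the end of the timeline.
    if start is not None and len(bucket_states) - start >= min_run:
        runs.append((start, len(bucket_states) - 1))
    return runs
```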
Neurodivergent Stratification
OpenScouter recruits testers across specific neurodivergent profiles, including autistic testers, testers with ADHD, testers with dyslexia, testers with dyscalculia, and testers with anxiety disorders, among others.
The ND stratification section of the report breaks down which tester groups encountered which barriers. This is presented as a matrix showing each finding against each group.
For example, a finding about dense paragraphs of text may appear primarily in the dyslexia and ADHD columns, while a finding about unpredictable page transitions may appear primarily in the autistic and anxiety columns.
This view also helps product teams make the case for specific fixes internally. “This barrier affects our testers with ADHD and dyslexia” is more actionable than a general accessibility score.
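The matrix view can be reproduced from finding data if you need it in another tool. A sketch that tabulates which tester groups encountered which findings (the input shape is an assumption; OpenScouter's export format may differ):

```python
def stratification_matrix(records):
    """Build a {finding_title: {group: tester_count}} matrix.

    `records` is an iterable of (title, group) pairs, one per affected
    tester, e.g. ("Dense paragraphs of text", "dyslexia").
    """
    matrix = {}
    for title, group in records:
        row = matrix.setdefault(title, {})
        row[group] = row.get(group, 0) + 1
    return matrix
```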
Tester Frequency
Each finding in the report includes a tester frequency label, such as “3 of 4 testers found this issue.”
Tester frequency is a confidence indicator. A barrier found by one tester may still be valid, but a barrier found by four out of four testers is almost certain to affect your real users.
When two findings are both marked as major severity, use tester frequency to decide which to fix first. Higher frequency means higher confidence and typically higher real-world impact.
Corroboration Logic
OpenScouter does not rely on a single signal to flag a barrier. It uses a corroboration model that combines multiple data streams to determine how confident it is in each finding.
The three primary signals are:
- Rage clicks - Repeated rapid clicks on an element that is not responding as expected.
- Facial emotion - Frustration or distress detected through the tester’s camera during the relevant moment.
- Verbal frustration - Spoken expressions of confusion or difficulty captured via microphone.
When all three signals appear together around the same moment and the same interface element, the finding is marked high confidence. When only one signal is present, the finding is marked lower confidence and is surfaced for your review rather than automatically promoted to the top of the report.
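The corroboration rule above can be sketched as a simple function. Note that the two-signal case is not specified on this page, so the sketch labels it "medium" as an assumption:

```python
def corroboration_confidence(rage_clicks: bool, facial_emotion: bool,
                             verbal_frustration: bool) -> str:
    """Combine the three primary signals into a confidence label.

    All three signals together -> "high"; a single signal -> "low"
    (surfaced for manual review). "medium" for two signals is an
    assumption; the page does not specify that case.
    """
    count = sum([rage_clicks, facial_emotion, verbal_frustration])
    if count == 3:
        return "high"
    if count == 2:
        return "medium"
    if count == 1:
        return "low"
    return "none"
```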
Priority Matrix
Findings are sorted into the priority matrix automatically. The matrix has three tiers.
Critical - Barriers that prevent task completion entirely. Testers could not proceed without help. These always appear at the top of the report regardless of tester frequency.
Major - Barriers that substantially impair task completion or cause significant distress. Testers may have eventually completed the task but required far more effort than expected.
Minor - Barriers that cause mild friction or confusion but do not prevent task completion. These are worth fixing but do not block release.
Within each tier, findings are sorted by tester frequency, with the highest frequency issues listed first.
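The ordering described above (tier first, then tester frequency within each tier) can be expressed as a sort key. A sketch, with the finding shape assumed as a plain dict:

```python
TIER_ORDER = {"critical": 0, "major": 1, "minor": 2}

def sort_findings(findings):
    """Sort findings by priority tier, then by descending tester
    frequency within each tier. Each finding is a dict with
    "severity" and "frequency" keys (illustrative shape)."""
    return sorted(
        findings,
        key=lambda f: (TIER_ORDER[f["severity"]], -f["frequency"]),
    )
```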
The recommended approach is to fix all critical findings before addressing major findings, and major findings before minor ones. This order delivers the largest accessibility gains first.
Exporting Your Report
Every report can be exported as a PDF from the report toolbar. The PDF includes all findings, the sentiment timeline chart, the ND stratification matrix, and the full list of recommendations.
PDFs are formatted for sharing with developers, designers, and stakeholders who may not have access to your OpenScouter account. Each finding in the PDF includes its WCAG criterion, plain English description, and recommended fix.
To export, open the report and select Export as PDF from the top right of the report page. The file is generated immediately and downloads in your browser.