How It Works
OpenScouter combines neurodivergent testers, a Chrome extension, and AI agents to surface real accessibility barriers. Each study moves through four stages, from setup to final report.
The Four-Stage Workflow
1. Create a study
A business sets up a study by providing the target URL and specifying tester requirements. Requirements can include cognitive profiles, sensory sensitivities, or assistive technology preferences. OpenScouter uses these criteria to match the right neurodivergent experts to the study.
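The setup-and-matching step can be sketched roughly as follows. This is a minimal illustration, not OpenScouter's actual API: the `StudyRequest` and `Tester` shapes, the field names, and the simple overlap-based matching rule are all assumptions.

```typescript
// Hypothetical sketch of study setup and tester matching.
// All shapes and the matching rule are illustrative assumptions.

interface StudyRequest {
  targetUrl: string;
  requirements: {
    cognitiveProfiles: string[];    // e.g. ["ADHD", "dyslexia"]
    sensorySensitivities: string[]; // e.g. ["photosensitivity"]
    assistiveTech: string[];        // e.g. ["screen-reader"]
  };
}

interface Tester {
  id: string;
  profiles: string[];
  assistiveTech: string[];
}

// Match testers whose cognitive profile or assistive-tech preference
// overlaps the study's stated requirements.
function matchTesters(study: StudyRequest, pool: Tester[]): Tester[] {
  const wanted = new Set<string>([
    ...study.requirements.cognitiveProfiles,
    ...study.requirements.assistiveTech,
  ]);
  return pool.filter(
    (t) =>
      t.profiles.some((p) => wanted.has(p)) ||
      t.assistiveTech.some((a) => wanted.has(a)),
  );
}

const study: StudyRequest = {
  targetUrl: "https://example.com/checkout",
  requirements: {
    cognitiveProfiles: ["ADHD"],
    sensorySensitivities: [],
    assistiveTech: ["screen-reader"],
  },
};

const pool: Tester[] = [
  { id: "t1", profiles: ["ADHD"], assistiveTech: [] },
  { id: "t2", profiles: ["dyslexia"], assistiveTech: ["screen-reader"] },
  { id: "t3", profiles: ["dyslexia"], assistiveTech: [] },
];

const matched = matchTesters(study, pool);
console.log(matched.map((t) => t.id)); // ["t1", "t2"]
```

In practice, matching would also weigh availability and past study history; the overlap check above only captures the core idea of criteria-driven selection.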
2. Testers receive offers and test the site
Matched testers receive a study offer via Telegram. Once accepted, they test the target URL using the OpenScouter Chrome extension. The extension runs in the background and captures three live data streams while the tester navigates naturally.
3. AI agents analyze the data and the tester confirms findings
Four AI agents process the captured data, generate findings, and surface potential barriers. The tester then reviews each finding and confirms, rejects, or adds context. This human-in-the-loop step ensures accuracy and captures nuance that automated tools miss.
4. Report delivered with WCAG mappings and ND stratification
The final report arrives with every finding mapped to its WCAG success criterion. Plain English summaries explain each barrier in non-technical language. Findings are stratified by neurodivergent profile, so teams know which groups are most affected and where to prioritize fixes.
What the Extension Captures
The Chrome extension records three parallel data streams throughout the test. Together they give the AI agents a complete picture of what happened and how the tester experienced it.
Browser Events
The extension logs every interaction: clicks, scrolls, focus changes, keyboard input, and navigation. Timing data shows where testers paused, backtracked, or abandoned a task. This stream reveals friction points in the interface.
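One way to surface friction from this stream is a pause heuristic over the event timeline. The sketch below is illustrative: the event shape and the five-second threshold are assumptions, not the extension's real format.

```typescript
// Illustrative sketch of the browser-event stream and a simple
// friction heuristic. Event shape and threshold are assumptions.

interface BrowserEvent {
  type: "click" | "scroll" | "focus" | "keydown" | "navigate";
  target: string; // CSS selector or URL
  t: number;      // milliseconds since test start
}

// Flag gaps between consecutive events longer than thresholdMs —
// places where the tester paused or possibly got stuck.
function findPauses(events: BrowserEvent[], thresholdMs: number) {
  const pauses: { afterIndex: number; gapMs: number }[] = [];
  for (let i = 1; i < events.length; i++) {
    const gap = events[i].t - events[i - 1].t;
    if (gap > thresholdMs) pauses.push({ afterIndex: i - 1, gapMs: gap });
  }
  return pauses;
}

const events: BrowserEvent[] = [
  { type: "navigate", target: "/checkout", t: 0 },
  { type: "click", target: "#promo-code", t: 1200 },
  { type: "keydown", target: "#promo-code", t: 9800 }, // long pause before typing
  { type: "click", target: "#pay", t: 11000 },
];

console.log(findPauses(events, 5000)); // one pause of 8600 ms after index 1
```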
Facial Emotion Analysis
With tester consent, the extension uses the device camera to capture facial expressions during the test. The AI maps expressions to emotional states such as confusion, frustration, or relief. This stream surfaces moments of cognitive load that the tester may not verbalize.
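A simplified version of that mapping might take per-frame expression scores and emit a single state label. The label set, score shape, and confidence margin below are assumptions for illustration only.

```typescript
// Hypothetical sketch: collapse per-frame expression scores into one
// emotional-state label. Labels and the 0.15 margin are assumptions.

type EmotionScores = Record<
  "confusion" | "frustration" | "relief" | "neutral",
  number
>;

// Pick the highest-scoring state, falling back to "neutral" when no
// state clearly dominates.
function dominantState(scores: EmotionScores, margin = 0.15): string {
  const ranked = Object.entries(scores).sort((a, b) => b[1] - a[1]);
  const [top, second] = ranked;
  return top[1] - second[1] >= margin ? top[0] : "neutral";
}

console.log(
  dominantState({ confusion: 0.7, frustration: 0.1, relief: 0.05, neutral: 0.15 }),
); // "confusion"
console.log(
  dominantState({ confusion: 0.3, frustration: 0.28, relief: 0.2, neutral: 0.22 }),
); // "neutral" — no clear winner
```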
Voice Transcription
Testers narrate their experience aloud as they test. The extension transcribes this narration in real time. The transcript captures intent, confusion, and commentary that contextualizes the behavioral and emotional data.
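Because the transcript is timestamped, it can be aligned with the other two streams. A minimal sketch of that lookup, with an assumed segment shape:

```typescript
// Illustrative sketch: timestamped transcript segments and a lookup
// that returns what the tester was saying at a given moment.

interface TranscriptSegment {
  startMs: number;
  endMs: number;
  text: string;
}

function narrationAt(segments: TranscriptSegment[], tMs: number): string | null {
  const seg = segments.find((s) => tMs >= s.startMs && tMs < s.endMs);
  return seg ? seg.text : null;
}

const transcript: TranscriptSegment[] = [
  { startMs: 0, endMs: 4000, text: "Okay, looking for the checkout button." },
  { startMs: 4000, endMs: 9000, text: "Wait, where did the cart total go?" },
];

console.log(narrationAt(transcript, 5500));  // "Wait, where did the cart total go?"
console.log(narrationAt(transcript, 12000)); // null — no narration at that moment
```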
The AI Agent Pipeline
Four agents process each test in sequence. Each agent has a specific role, and the pipeline is designed so later agents build on earlier work.
Agent 1: Scouty (Initial Analysis)
Scouty ingests all three data streams immediately after the test completes. It identifies candidate barrier moments by correlating behavioral events with emotional signals and transcript content. Scouty produces a prioritized list of moments for deeper review.
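The correlation idea can be sketched as: flag a candidate moment whenever a long interaction gap overlaps a negative emotional signal. Everything below — shapes, state names, thresholds — is an illustrative assumption, not Scouty's actual logic.

```typescript
// Minimal sketch of cross-stream correlation: a candidate moment is a
// long interaction gap that overlaps a negative emotional signal.
// All shapes and rules here are assumptions.

interface Signal {
  tMs: number;
  state: "confusion" | "frustration" | "relief" | "neutral";
}
interface Gap {
  startMs: number;
  endMs: number;
}

function candidateMoments(gaps: Gap[], signals: Signal[]): Gap[] {
  const negative = new Set<string>(["confusion", "frustration"]);
  return gaps.filter((g) =>
    signals.some(
      (s) => negative.has(s.state) && s.tMs >= g.startMs && s.tMs <= g.endMs,
    ),
  );
}

const gaps: Gap[] = [
  { startMs: 1200, endMs: 9800 },
  { startMs: 15000, endMs: 21000 },
];
const signals: Signal[] = [
  { tMs: 5000, state: "confusion" }, // falls inside the first gap
  { tMs: 18000, state: "relief" },   // inside the second gap, but not negative
];

console.log(candidateMoments(gaps, signals)); // only the first gap survives
```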
Agent 2: Analyst (Deep Dive)
The Analyst takes Scouty’s candidate list and examines each moment in detail. It maps each barrier to the relevant WCAG success criterion and assesses severity. The Analyst also flags patterns, such as the same barrier appearing across multiple tasks.
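The pattern-flagging part of this step can be illustrated with a small sketch: given findings already mapped to success criteria, report the criteria that recur across more than one task. The `Finding` shape is an assumption; the WCAG numbers in the sample data are real criteria.

```typescript
// Sketch of the Analyst's pattern check: which WCAG criteria recur
// across multiple tasks? The Finding shape is an assumption.

interface Finding {
  task: string;
  wcag: string; // success criterion, e.g. "3.3.2 Labels or Instructions"
  level: "A" | "AA" | "AAA";
  severity: "low" | "medium" | "high";
}

function recurringCriteria(findings: Finding[]): string[] {
  const tasksByCriterion = new Map<string, Set<string>>();
  for (const f of findings) {
    const tasks = tasksByCriterion.get(f.wcag) ?? new Set<string>();
    tasks.add(f.task);
    tasksByCriterion.set(f.wcag, tasks);
  }
  return [...tasksByCriterion.entries()]
    .filter(([, tasks]) => tasks.size > 1)
    .map(([criterion]) => criterion);
}

const findings: Finding[] = [
  { task: "signup",   wcag: "3.3.2 Labels or Instructions", level: "A",  severity: "high" },
  { task: "checkout", wcag: "3.3.2 Labels or Instructions", level: "A",  severity: "medium" },
  { task: "checkout", wcag: "1.4.3 Contrast (Minimum)",     level: "AA", severity: "low" },
];

console.log(recurringCriteria(findings)); // ["3.3.2 Labels or Instructions"]
```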
Agent 3: Report Writer
The Report Writer transforms the Analyst’s structured findings into readable output. It generates both technical descriptions and Plain English summaries for every finding. This dual output ensures the report is useful for developers, designers, and non-technical stakeholders alike.
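The dual-output idea reduces to rendering one structured finding two ways. The `AnalyzedFinding` shape and the templates below are illustrative assumptions; only the dual-rendering concept comes from the pipeline description above.

```typescript
// Illustrative sketch: one structured finding rendered as both a
// technical note and a Plain English summary. Shapes are assumptions.

interface AnalyzedFinding {
  wcag: string;
  element: string;
  problem: string;      // technical phrasing
  plainProblem: string; // non-technical phrasing
}

function renderTechnical(f: AnalyzedFinding): string {
  return `[${f.wcag}] ${f.element}: ${f.problem}`;
}

function renderPlain(f: AnalyzedFinding): string {
  return f.plainProblem;
}

const finding: AnalyzedFinding = {
  wcag: "2.4.7 Focus Visible (AA)",
  element: "#search-input",
  problem: "Focus indicator removed via outline:none with no replacement style.",
  plainProblem:
    "When moving through the page with the keyboard, you can't see which field is selected.",
};

console.log(renderTechnical(finding));
console.log(renderPlain(finding));
```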
Agent 4: Synthesizer (Cross-Test Analysis)
The Synthesizer runs after multiple tests on the same study are complete. It compares findings across testers to identify which barriers are widespread and which are profile-specific. This cross-test view produces the ND stratification data in the final report.
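A rough sketch of that aggregation: group confirmed findings by barrier and count the testers and profiles affected. The data shapes are assumptions made for illustration.

```typescript
// Sketch of cross-test stratification: per barrier, which profiles
// were affected and how many testers hit it. Shapes are assumptions.

interface ConfirmedFinding {
  barrierId: string;
  testerId: string;
  profile: string; // the reporting tester's neurodivergent profile
}

function stratify(findings: ConfirmedFinding[]) {
  const byBarrier = new Map<
    string,
    { testers: Set<string>; profiles: Set<string> }
  >();
  for (const f of findings) {
    const entry =
      byBarrier.get(f.barrierId) ??
      { testers: new Set<string>(), profiles: new Set<string>() };
    entry.testers.add(f.testerId);
    entry.profiles.add(f.profile);
    byBarrier.set(f.barrierId, entry);
  }
  return [...byBarrier.entries()].map(([barrierId, e]) => ({
    barrierId,
    testerCount: e.testers.size,
    profiles: [...e.profiles].sort(),
  }));
}

const confirmed: ConfirmedFinding[] = [
  { barrierId: "b1", testerId: "t1", profile: "ADHD" },
  { barrierId: "b1", testerId: "t2", profile: "dyslexia" },
  { barrierId: "b2", testerId: "t2", profile: "dyslexia" },
];

console.log(stratify(confirmed));
// b1 affects 2 testers across ADHD and dyslexia; b2 is dyslexia-specific
```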
Human-in-the-Loop Confirmation
Before any finding reaches the final report, the tester who discovered it reviews the AI’s interpretation. The tester can confirm the finding, mark it as inaccurate, or add written context. This step is not optional. It is a core quality gate that keeps neurodivergent expertise at the center of every report.
The confirmation interface presents each finding as a simple card with the AI’s summary, a clip of the relevant screen recording, and the tester’s own transcript at that moment. Testers can respond in under a minute per finding.
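The review step amounts to a small state transition on each finding card. The statuses and fields below are assumptions, not the product's actual data model.

```typescript
// Minimal sketch of the confirmation step as a state transition on a
// finding card. Statuses and fields are illustrative assumptions.

type Status = "pending" | "confirmed" | "rejected";

interface FindingCard {
  id: string;
  status: Status;
  testerNote?: string;
}

// A finding may be reviewed exactly once; added context travels with it.
function review(
  card: FindingCard,
  action: "confirm" | "reject",
  note?: string,
): FindingCard {
  if (card.status !== "pending") throw new Error("finding already reviewed");
  return {
    ...card,
    status: action === "confirm" ? "confirmed" : "rejected",
    testerNote: note,
  };
}

const card: FindingCard = { id: "f-12", status: "pending" };
const reviewed = review(card, "confirm", "Also happens on the mobile layout.");
console.log(reviewed.status); // "confirmed"
```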
What the Report Contains
Every OpenScouter report includes:
- WCAG mappings for each finding, referenced by success criterion and level (A, AA, or AAA)
- Plain English summaries written for non-technical readers
- ND stratification showing which findings affect specific neurodivergent profiles
- Severity ratings so teams can triage fixes effectively
- Screen recording clips linked to each finding for full context
Reports are delivered as structured data via the API and as a formatted PDF. Both formats are available from the study dashboard as soon as the Synthesizer agent completes its run.
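As a consumer of the structured data, a team might filter and sort findings for triage. The field names below are assumptions about the schema, not OpenScouter's documented format; the WCAG numbers and levels in the sample are real criteria.

```typescript
// Hypothetical sketch of consuming the structured report: keep Level A
// and AA findings and sort by severity for triage. Field names are
// assumptions about the schema.

interface ReportFinding {
  wcag: string;
  level: "A" | "AA" | "AAA";
  severity: "low" | "medium" | "high";
}

const severityRank = { high: 0, medium: 1, low: 2 } as const;

function triage(findings: ReportFinding[]): ReportFinding[] {
  return findings
    .filter((f) => f.level !== "AAA") // AAA is usually out of conformance scope
    .sort((a, b) => severityRank[a.severity] - severityRank[b.severity]);
}

const report: ReportFinding[] = [
  { wcag: "1.4.3", level: "AA",  severity: "medium" },
  { wcag: "2.1.1", level: "A",   severity: "high" },
  { wcag: "1.4.6", level: "AAA", severity: "low" },
];

console.log(triage(report).map((f) => f.wcag)); // ["2.1.1", "1.4.3"]
```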