Glossary

This page defines the key terms used throughout the OpenScouter platform and documentation. If you encounter an unfamiliar term, this is the place to start.

Core Concepts

| Term | Definition | Also Known As |
|------|------------|---------------|
| Study | A commissioned accessibility evaluation of a website or app. A study includes multiple testers, each completing their own test. | “job” in codebase |
| Test | A single tester’s evaluation within a study. Each tester produces one test, which becomes the basis for their individual report. | “session” in codebase |
| Barrier | An accessibility obstacle identified during testing. Barriers are documented with severity, context, and recommended fixes. | Accessibility issue, finding |
| Accessibility Score | A 0–100 rating of a site’s accessibility, calculated from the barriers found during a study. Higher scores indicate fewer and less severe barriers. | Score |
| Tester | A neurodivergent expert who performs accessibility testing on behalf of a client. Testers bring lived experience alongside structured evaluation skills. | ND tester |
| Client | A business or organisation that commissions an accessibility study. Clients receive a final report summarising findings across all testers. | Organisation |
| ND Category | One of 13 neurodivergent categories used to group testers by their primary neurodivergent identity. Examples include ADHD, Autism, and Dyslexia. | Neurodivergent category |

AI Agents

OpenScouter uses a pipeline of four AI agents to process test data and generate reports. Each agent has a distinct role.

| Term | Definition | Also Known As |
|------|------------|---------------|
| Scouty | The first AI agent in the pipeline. Scouty performs an initial analysis of raw test data, identifying patterns and flagging potential barriers. Also the name of the OpenScouter fox mascot. | Agent 1 |
| Analyst | The second AI agent. The Analyst performs a deep-dive analysis of the data surfaced by Scouty, adding context and severity assessments to each finding. | Agent 2 |
| Report Writer | The third AI agent. The Report Writer generates the final accessibility report for each individual test, translating findings into clear, structured documentation. | Agent 3 |
| Synthesizer | The fourth AI agent. The Synthesizer combines findings from all tests within a study into a single cohesive report for the client. | Agent 4 |
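The flow above is a sequential chain: each test's data passes through Scouty, the Analyst, and the Report Writer, and the Synthesizer then merges the per-test results into one study-level report. The sketch below illustrates that ordering only; all function names, signatures, and data shapes are assumptions for illustration, not the actual OpenScouter implementation:

```python
# Illustrative sketch of the four-agent pipeline ordering.
# All names and data shapes here are assumptions, not the real API.

def scouty(raw_test_data):
    """Agent 1: initial analysis -- flag potential barriers."""
    return [{"barrier": item} for item in raw_test_data]

def analyst(findings):
    """Agent 2: add context and a severity assessment to each finding."""
    return [dict(finding, severity="assessed") for finding in findings]

def report_writer(findings):
    """Agent 3: turn one test's findings into a structured report."""
    return {"report": findings}

def synthesizer(per_test_reports):
    """Agent 4: combine all per-test reports into one client report."""
    return {"study_report": per_test_reports}

def run_study(tests):
    """Each test flows through Agents 1-3; Agent 4 merges the results."""
    per_test_reports = [report_writer(analyst(scouty(t))) for t in tests]
    return synthesizer(per_test_reports)
```

Note that Agents 1 through 3 operate per test, while the Synthesizer is the only agent that sees the whole study, which is why it runs once at the end.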

Process Terms

| Term | Definition | Also Known As |
|------|------------|---------------|
| Capacity Quiz | An assessment completed before a tester begins a study. It confirms the tester understands their rights and the evaluation process. It is not a test of ability or knowledge. | Onboarding quiz |
| Human-in-the-Loop | The requirement for a tester to review and confirm AI-generated findings before a report is finalised and sent. No report is generated without explicit tester approval. | HITL |
| Plain English | Non-technical summaries of accessibility findings, written so that clients without a technical background can understand and act on the results. | Plain language summary |

Platform Features

| Term | Definition | Also Known As |
|------|------------|---------------|
| Calm Mode | A display option that removes decorative visual elements and reduces overall visual noise in the platform interface. Designed for testers who find busy layouts distracting or overwhelming. | Reduced motion mode |
| Scouty | The OpenScouter fox mascot. Also the nickname for Agent 1, which performs the initial analysis of test data. Scouty appears in the testing extension as a friendly companion. | Agent 1, fox mascot |