FAQ for Businesses
Got questions about how OpenScouter works? Here are answers to the most common things businesses ask before running their first study.
How long does a study take?
Most studies return results within 48 hours of launch. Once you publish your study, matched testers are notified immediately and can begin testing. The majority of tests are submitted within the first 24 hours.
Complex studies with a high tester count or specialized ND categories may take slightly longer. You will receive a notification as each tester submits their report, so you can start reviewing findings before the study is fully complete.
If a study has not reached completion within 72 hours, our team will follow up to ensure you are not kept waiting.
How many testers do I need?
The minimum for a valid study is 3 testers. However, we recommend 5 or more, which is the point where findings become reliable and repeatable rather than anecdotal.
With 5 testers you begin to see consistent patterns emerge across different users, which makes your accessibility score more reliable and your barrier list more representative. Studies with fewer testers are useful for quick directional feedback, but may miss edge cases tied to specific assistive technology configurations.
For large or high-stakes products, studies with 8 to 10 testers give you the most confidence in the findings before a major release or compliance review.
Can I choose specific neurodivergent categories?
Yes. OpenScouter supports 13 neurodivergent categories, and you can select any combination when setting up your study.
Available categories include ADHD, dyslexia, autism spectrum, dyscalculia, dyspraxia, auditory processing disorder, and more. Each tester in our community has completed a capacity quiz that maps their profile to relevant categories, so your selections are matched against real, verified experiences.
If you are targeting a specific user group, such as users with dyslexia for a reading-heavy product, you can filter for that category specifically. If you want broad accessibility coverage, selecting multiple categories will give you a more complete picture.
What happens if a tester does not complete their test?
If a tester accepts your study but does not submit their report within the required window, they are automatically replaced at no charge to you.
Replacement testers are matched using the same criteria as the original selection, so your ND category balance is maintained. The replacement process runs automatically and does not require any action on your part.
You are only charged for completed, reviewed tests. Incomplete tests do not count toward your study quota or your invoice.
How is the accessibility score calculated?
The accessibility score is a number from 0 to 100 that reflects how accessible your product is based on the barriers testers encountered.
The score is calculated from three factors: the total number of barriers reported, the severity of each barrier (critical, major, or minor), and whether the barrier maps to a WCAG 2.1 success criterion. Critical barriers that block a core task and directly violate a WCAG criterion have the highest weight.
A score of 100 means no barriers were reported. A score below 50 indicates significant friction that is likely affecting real users and creating compliance exposure. Scores are recalculated each time you run a new study, so you can track improvement over time.
The score is a summary signal, not a substitute for reading the full barrier report. Always review the detailed findings alongside the score.
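To make the weighting described above concrete, here is a minimal sketch of how such a score could be computed. The severity weights and WCAG multiplier below are illustrative assumptions, not OpenScouter's actual formula.

```python
# Illustrative sketch only: the weights and multiplier are assumptions,
# not OpenScouter's real scoring formula.
SEVERITY_WEIGHT = {"critical": 10.0, "major": 5.0, "minor": 2.0}
WCAG_MULTIPLIER = 1.5  # barriers mapping to a WCAG 2.1 criterion weigh more


def accessibility_score(barriers):
    """Compute a 0-100 score from a list of reported barriers.

    Each barrier is a dict with a 'severity' key ('critical', 'major',
    or 'minor') and a boolean 'wcag' key indicating whether it maps to
    a WCAG 2.1 success criterion.
    """
    penalty = 0.0
    for barrier in barriers:
        weight = SEVERITY_WEIGHT[barrier["severity"]]
        if barrier["wcag"]:
            weight *= WCAG_MULTIPLIER
        penalty += weight
    # No barriers -> 100; the penalty is capped so the score stays >= 0.
    return max(0.0, 100.0 - penalty)
```

Note how a critical barrier that also violates a WCAG criterion carries the largest single penalty, matching the weighting described above.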
Can I integrate OpenScouter with my existing tools?
Yes. OpenScouter provides a REST API and supports webhooks for integration with your existing workflows.
With the API you can programmatically create and manage studies, retrieve reports, and pull barrier data into your own dashboards or accessibility tracking systems. Webhooks allow you to trigger automated actions when a study completes or when a new tester report is submitted.
Common integrations include Jira for automatic ticket creation from reported barriers, Slack for study status notifications, and CI/CD pipelines for gating releases on accessibility scores. Full API documentation is available in the developer section of this site.
If you need a custom integration or have questions about what is possible, contact our team directly.
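As a sketch of the CI/CD gating pattern mentioned above, the handler below parses a study-completion webhook and decides whether a release gate passes. The event type, field names, and payload shape are hypothetical; consult the developer documentation for the real webhook schema.

```python
import json

# Hypothetical payload shape -- OpenScouter's actual webhook schema,
# event names, and field names may differ.


def handle_webhook(raw_body, score_threshold=70):
    """Parse a study-completion webhook body and decide whether the
    release gate passes based on the accessibility score."""
    event = json.loads(raw_body)
    if event.get("type") != "study.completed":
        return None  # ignore other events, e.g. per-report notifications
    data = event["data"]
    score = data["accessibility_score"]
    return {
        "study_id": data["study_id"],
        "score": score,
        "gate_passed": score >= score_threshold,
    }
```

In a real pipeline this handler would sit behind an HTTP endpoint, and a failed gate would block the deploy step or open a Jira ticket for the reported barriers.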
What is the difference between a free audit and a full study?
The free audit is an automated scan. It checks your site against a predefined set of technical rules, such as missing alt text, incorrect heading structure, and low color contrast ratios. It runs in seconds and gives you a starting baseline.
A full study involves real people with lived neurodivergent experience testing your product as actual users. They discover barriers that automated tools cannot detect, including confusing navigation flows, overwhelming layouts, unclear error messages, and interaction patterns that cause cognitive overload.
Research consistently shows that automated tools catch only around 30 to 40 percent of accessibility issues; the rest are found only through human testing. If you want to understand the real user experience, a full study is the right tool.
The free audit is a good first step. A full study is what you need before a product launch, compliance review, or significant update.
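To illustrate the kind of predefined technical rule an automated scan applies, here is a minimal missing-alt-text check. This is a simplified sketch, not OpenScouter's audit engine: real checkers, for example, treat an empty alt attribute as intentionally decorative, while this sketch flags it.

```python
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Flag <img> tags without a non-empty alt attribute -- the kind of
    rule-based check an automated scan performs in seconds."""

    def __init__(self):
        super().__init__()
        self.missing = []  # src values of images lacking alt text

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.missing.append(attr_map.get("src", "<unknown>"))


def find_missing_alt(html):
    """Return the src of every <img> in the markup with missing alt text."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing
```

A rule like this can tell you an image has no alt text, but only a human tester can tell you whether the alt text that is present actually makes sense in context.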
Do you test mobile apps or just websites?
OpenScouter primarily focuses on web-based products. Full support is available for desktop websites and mobile web experiences accessed through a browser.
Native mobile apps (iOS and Android) are not currently supported. If your product has a mobile web version or a responsive design, testers can evaluate that experience using their mobile devices during the study.
If mobile web accessibility is important to you, we recommend specifying this when setting up your study so testers can include mobile-specific observations in their reports.
Support for native mobile app testing is on our roadmap. If this is a priority for your team, reach out and we can discuss options.
How do you ensure tester quality?
Every tester on the OpenScouter platform goes through a qualification process before they can participate in paid studies.
The process starts with a capacity quiz that verifies the tester’s neurodivergent profile and assesses their ability to describe accessibility barriers clearly and specifically. Testers who pass are onboarded with guidance on how to write useful, actionable reports.
After each study, tester reports are reviewed for completeness and quality. Testers who consistently submit vague, incomplete, or unhelpful reports are flagged and removed from the active pool. Testers who perform well accumulate experience ratings that are visible to our matching algorithm.
This combination of upfront qualification, structured onboarding, and ongoing performance review keeps the tester pool reliable and the reports you receive genuinely useful.
What compliance frameworks do you support?
OpenScouter reports map findings to WCAG 2.1 Level AA by default. This is the most widely referenced standard for web accessibility and the baseline required by most regulations and procurement policies.
We also support FCA Consumer Duty compliance reporting. The FCA expects firms to demonstrate that their products and services are accessible to all customers, including those with characteristics of vulnerability. OpenScouter studies generate evidence of accessibility testing with real users, which directly supports that obligation.
If you need to demonstrate compliance to a specific framework, partner, or regulator, contact us. We can discuss what documentation and report formats will be most useful for your situation.