Automated scanning as a starting point
Every audit begins with automated scanning (axe-core, WAVE, Lighthouse, and Pa11y) across a representative sample of page templates and application flows. Scanners find what can be detected mechanically: missing alt attributes, empty form labels, obvious contrast failures, orphaned ARIA references. That typically accounts for 25–35% of the real issues on a site. Necessary, but nowhere near sufficient.
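As a minimal sketch of how the axe-core pass can be wired into a template sweep, here is one approach using Playwright; the URL list is a hypothetical placeholder for the real template sample:

```typescript
import { chromium } from 'playwright';
import { AxeBuilder } from '@axe-core/playwright';

// Hypothetical template sample; a real audit enumerates the site's
// actual page templates and application flows.
const templates = [
  'https://example.com/',
  'https://example.com/search',
  'https://example.com/checkout',
];

async function scan(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  for (const url of templates) {
    await page.goto(url);
    // Restrict to WCAG 2.x A/AA rules so findings map onto success criteria.
    const results = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa', 'wcag21aa', 'wcag22aa'])
      .analyze();
    for (const v of results.violations) {
      console.log(`${url}\t${v.id}\t${v.impact}\t${v.nodes.length} node(s)`);
    }
  }

  await browser.close();
}

scan().catch(console.error);
```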
Keyboard-only traversal
I then work through the site using only a keyboard. Every interactive element must be reachable, the focus order must match the visual order, focus must be visible at all times, and focus must return to a sensible location when modal dialogs close, when form errors appear, and when navigation changes the page. A surprising number of “accessible” sites fail quietly here: the keyboard user gets lost, or trapped, or has to guess where focus went.
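This is the pattern the modal check looks for, sketched here with a native `<dialog>` element. Recent browsers restore focus automatically when a dialog opened with `showModal()` closes, but custom modal implementations have to do the restore step by hand:

```typescript
// Sketch: a dialog that remembers its trigger and returns focus on close.
let previouslyFocused: HTMLElement | null = null;

function openDialog(dialog: HTMLDialogElement): void {
  // Remember where the keyboard user was before the dialog took over.
  previouslyFocused = document.activeElement as HTMLElement | null;
  dialog.showModal(); // showModal() makes the dialog modal and the rest of the page inert
}

function closeDialog(dialog: HTMLDialogElement): void {
  dialog.close();
  // Return focus to the trigger so the user isn't dropped at the top
  // of the document and left to guess where they are.
  previouslyFocused?.focus();
}
```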
Screen-reader testing across real pairings
I test with the screen-reader/browser pairings real users actually use: NVDA with Firefox and Chrome on Windows, JAWS with Chrome on Windows, VoiceOver with Safari on macOS and iOS, and TalkBack with Chrome on Android. What reads correctly in one pairing can be mangled or silent in another. I document what I find and how to reproduce it.
Cognitive & low-vision review
I check browser zoom at 200% and 400%, reflow at a 320 CSS pixel viewport, content rendered in forced-colors mode (high contrast), the reading level and cognitive load of critical flows, and the error messages and recovery paths, because someone who just hit a form validation error for the fifth time is the edge case that matters most.
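The mechanical half of this pass can be scripted. A sketch, assuming Playwright and a placeholder URL; the visual judgment itself still has to be human:

```typescript
import { chromium } from 'playwright';

// A 320 CSS pixel viewport for the WCAG 1.4.10 (Reflow) check, plus
// forced-colors emulation for a first look at high-contrast rendering.
async function lowVisionPass(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage({ viewport: { width: 320, height: 640 } });

  await page.goto(url);
  // Horizontal scrolling at 320px usually signals a reflow failure.
  const overflow = await page.evaluate(
    () => document.documentElement.scrollWidth > document.documentElement.clientWidth
  );
  console.log(`horizontal overflow at 320px: ${overflow}`);

  // Screenshot under forced colors for manual review.
  await page.emulateMedia({ forcedColors: 'active' });
  await page.screenshot({ path: 'forced-colors.png', fullPage: true });

  await browser.close();
}

lowVisionPass('https://example.com/checkout').catch(console.error);
```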
Reporting you can use
Every issue is documented with: the specific WCAG 2.2 success criterion and severity, exact reproduction steps, a screenshot or recording, the affected user group, and a concrete recommended fix. I don’t hand over a 200-page PDF of machine output — I hand over a remediation plan your team can work through, prioritized by user impact and implementation effort.
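As an illustration of the record each issue gets, here is a hypothetical shape; the field names are illustrative, but the contents match the list above:

```typescript
// Hypothetical shape of one finding in the remediation plan.
interface Finding {
  criterion: string;                  // e.g. "2.4.7 Focus Visible (WCAG 2.2 AA)"
  severity: 'critical' | 'serious' | 'moderate' | 'minor';
  reproduction: string[];             // exact step-by-step repro
  evidence: string;                   // path to the screenshot or recording
  affectedUsers: string;              // e.g. "keyboard and switch users"
  recommendedFix: string;             // concrete remediation guidance
  effort: 'low' | 'medium' | 'high';  // implementation effort, for prioritization
}
```

Sorting these records by severity, then by effort, is what produces the prioritized plan rather than a flat dump of findings.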