As colleges and universities expand the use of AI to support digital accessibility, questions about human validation in AI accessibility work have moved to the foreground. Recent changes to Title II of the Americans with Disabilities Act have added urgency to these conversations, particularly for faculty members and instructional designers facing large volumes of existing course materials. The challenge is no longer whether accessibility matters, but how to address it at scale without turning course development into an unmanageable remediation project.
Designing Accessible Learning with AI Inside and Outside the LMS, a recent webinar hosted by Every Learner Everywhere in partnership with the Northwest Higher Education Accessibility Technology Group, looked at how artificial intelligence is being used today to support accessibility work, particularly where it can reduce workload without displacing human judgment. Rather than positioning AI as a comprehensive solution, panelists emphasized its value as a practical support for tasks that are time intensive, repetitive, and increasingly urgent under new compliance timelines.
The discussion featured Michele Bromley, Manager of Digital Accessibility and Content at Portland State University; April Crenshaw, Associate Professor at Chattanooga State Community College; and Erik Ducker, Senior Director of Product Marketing at 3Play Media. Across different institutional roles, their examples illustrated a common idea: AI can help institutions support accessibility efforts when it is paired with clear workflows and consistent human validation.
Using AI to reduce remediation workload
One pressure institutions face is the volume of existing instructional materials that must be reviewed. Bromley described a framework her team uses to approach this work: reduce, rebuild, and remediate. The first step is deciding what content is still necessary and what can be archived or removed. That decision alone can significantly reduce the scope of accessibility work.
For materials that remain in use, Bromley said AI can support bulk conversion of documents into HTML, which is generally easier to make accessible and maintain over time. Generative AI tools can handle much of this conversion work quickly, including scanned documents that would otherwise require hours of manual effort. However, she emphasized that this efficiency only holds if institutions treat AI outputs as drafts rather than finished products.
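The "drafts rather than finished products" principle can be sketched in a few lines of Python. Everything here is illustrative: `ai_convert_to_html` is a hypothetical stand-in for whatever conversion tool an institution adopts, and the status names are invented. The point the sketch makes is structural, in that nothing reaches students while it is still an unreviewed draft.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConvertedDocument:
    """An AI-generated HTML conversion, held as a draft until a human reviews it."""
    source_name: str
    html: str
    status: str = "draft"            # becomes "approved" only after human review
    reviewer: Optional[str] = None

def ai_convert_to_html(source_name: str) -> ConvertedDocument:
    # Hypothetical stand-in for a generative AI conversion service.
    # A real implementation would call the institution's chosen tool.
    return ConvertedDocument(source_name, f"<article><h1>{source_name}</h1></article>")

def approve(doc: ConvertedDocument, reviewer: str) -> ConvertedDocument:
    """A human validates structure, accuracy, and accessibility details."""
    doc.status = "approved"
    doc.reviewer = reviewer
    return doc

def publishable(doc: ConvertedDocument) -> bool:
    # The gate: AI output is never delivered while still in "draft".
    return doc.status == "approved"
```

In use, `publishable(ai_convert_to_html("syllabus.docx"))` is false until a named reviewer calls `approve`, which mirrors the workflow Bromley describes.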
Human validation, Bromley said, is not optional. “Using an AI tool is like working with an extremely competent assistant or technician on their first day,” she said. The output may be strong, but it should never be delivered to students without review.
“But it is always their first day,” she explained. “Their work will probably be quite good based on their interview and work portfolio, but you would never ship their outputs on their first day to an employee or a student population before validating and either returning the work to them for correction or applying the correction yourself.”

In practice, this means using AI to perform the initial transformation and reserving expert time for checking structure, accuracy, and accessibility details that automated systems cannot reliably judge.

Ducker described a similar approach in the context of audio and video accessibility. At 3Play Media, AI is commonly used to generate first‑pass captions or transcripts, which are then evaluated for accuracy. Files that meet established thresholds can move forward quickly, while others are flagged for human review or correction. This layered process allows institutions to manage large volumes of media while maintaining quality standards.
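The layered triage described here, where high‑confidence machine output moves forward and the rest is flagged for people, amounts to a simple threshold check. The sketch below is a minimal illustration, not 3Play Media's implementation; the function name and the 99% threshold are assumptions made for the example.

```python
def route_media_files(files, accuracy_threshold=0.99):
    """Split first-pass AI caption/transcript outputs by measured accuracy.

    `files` is an iterable of (name, accuracy) pairs, where accuracy is the
    estimated fraction of words the AI output got right (0.0 to 1.0).
    Returns two lists: files that can move forward, and files flagged
    for human review or correction.
    """
    auto_approved, needs_review = [], []
    for name, accuracy in files:
        if accuracy >= accuracy_threshold:
            auto_approved.append(name)      # meets the established threshold
        else:
            needs_review.append(name)       # flagged for a human editor
    return auto_approved, needs_review
```

The design choice worth noting is that the threshold gates speed, not responsibility: even auto‑approved files exist inside a process where humans set and audit the threshold itself.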
Clarifying what AI can and cannot do
The panelists noted that conversations about AI in higher education often swing between extremes. Bromley observed that institutions tend to treat AI either as a flawless solution that can resolve accessibility, security, and training challenges all at once, or as a threat that undermines creativity, academic integrity, and employment. Neither view reflects how AI systems actually function in day‑to‑day accessibility work.
Part of the confusion, Ducker explained, comes from using the term “AI” to describe very different technologies. Generative systems, such as large language models, produce new content and therefore require careful oversight. Other forms of AI, including speech recognition and image detection tools, are designed to classify or transcribe existing content and operate with different risk profiles. Understanding these distinctions helps institutions select tools that align with specific tasks and avoid unnecessary cost or complexity.
Data privacy concerns also shape institutional decision making. While recent legal disputes have heightened awareness about how some AI systems are trained, panelists cautioned against assuming that all AI‑enabled accessibility tools operate by harvesting or repurposing institutional content. Many tools process materials on behalf of colleges and universities under defined security controls. The challenge for institutions is not whether to use AI at all, but how to evaluate tools carefully and deploy them within clear governance structures.
“Not everyone’s trying to steal your data,” Ducker said. “We are really just trying to help you move faster and that’s our only goal.”
Accessibility beyond compliance
While compliance requirements provided the backdrop for the webinar, the discussion repeatedly returned to how accessibility affects students’ actual learning experiences. Crenshaw framed accessibility as encompassing availability and affordability. In her view, accessible design includes whether materials are readable across devices, written in language students can understand, and structured in ways that support engagement.
She noted that institutions can “check every box” on an accessibility checklist and still produce courses that are difficult for students to navigate or afford, because technical compliance does not guarantee that materials are well organized, affordable, or written at a level students can readily engage with.
Checklists and guidelines establish important baselines, but they do not account for differences in how students process information or make sense of course expectations. From this perspective, accessibility work is inseparable from instructional design decisions about clarity, relevance, and cognitive load.
“Accessibility is innately a human experience. We use AI to help us, but ultimately what’s accessible is a human experience. So we have to keep in mind that we have humans on the other side of our content who are engaging with you,” said Ducker.
Why human validation remains central
Across the webinar, human validation emerged as the common thread connecting otherwise diverse use cases for AI. Whether converting documents, generating captions, or drafting alt text, panelists consistently described AI as a way to manage scale, not eliminate responsibility. Automated tools can accelerate progress, but they cannot be relied on to anticipate how students will experience course materials.
Several speakers stressed the importance of institutional support structures that make this approach feasible. Faculty are more likely to engage productively with accessibility work, Crenshaw said, when they know they are not working in isolation and have access to training, vetted tools, and knowledgeable colleagues. Collaboration between instructional staff, accessibility specialists, and IT teams plays a critical role in ensuring that AI is used appropriately and effectively.
The full webinar includes additional discussion of tool selection, LMS‑based accessibility reporting, and emerging approaches to audio description. Readers interested in these topics, as well as in hearing directly from the panelists, can view the complete recording.
Download Where AI Meets Accessibility: Considerations for Higher Education
