Every Learner Everywhere
WCET

Decoding Generative AI and Equity in Higher Education

Since consumer AI tools like ChatGPT and DALL-E were released over the last two years, much has been said and written about the influence this technology might have on higher education. For example, many articles address anxiety about academic dishonesty or about teaching jobs being replaced. Others explore opportunities for innovative assignment formats and assessments. Few of these discussions look at generative AI specifically from an equity perspective, and when we consider AI and equity together, we may see a different set of opportunities and challenges.

Opportunities for accessibility and digital learning

“One thing exciting about AI has to do with accessibility,” says Van Davis, Chief Strategy Officer of WCET (WICHE Cooperative for Educational Technologies).

“We’re seeing really good text-to-speech development, and we’re starting to see speech-to-text as well. So, for folks who may struggle with certain forms of communication, AI has the opportunity to provide a new tool for accessibility.”

For students with learning disabilities, generative AI has the potential to offer an engaging, efficient alternative to traditional classroom tools. Davis says he’s observed educators setting parameters within ChatGPT to build interactive scenarios that place students in different historical periods, such as Renaissance Italy. Essentially, the chat tool is modified to engage with students and present material in a new way.

Another example of expanding accessibility is a faculty member with ADHD who wrote in Inside Higher Ed about using generative AI tools for their own routine tasks like writing conference proposals. While the AI-generated product still required review, it streamlined the faculty member’s process.

Pairing generative AI and equity also has many implications for digital courseware. Davis says many current integrations are on the administrative side of the tool, where faculty get assistance creating course outlines or quizzes. Some institutions are also experimenting with ways that generative AI can be used in developing assignments, he says.

“Particularly with composition courses, there are some really interesting things being done with how students are asked to use the technology in a way that is both pedagogically appropriate for the student, but also in a way that doesn’t succumb to academic integrity issues,” Davis states.

For example, in one assignment he recently observed a colleague use, students prompted an AI tool to generate a paper, and their original work was a critique of that paper.

“They basically grade it and fix it and learn to be able to have a reflective conversation about it,” Davis says. “So it tests a student’s subject-matter expertise, but it also helps their metacognitive abilities.”

Related reading — Using Digital Multimodal Composition to Achieve Greater Equity in the Classroom

Pitfalls of generative AI in education

For many educators, the initial fear with generative AI stems from academic dishonesty. In response, many institutions are using software products that claim to detect plagiarism. As many others have argued, the discourse about plagiarism in higher ed is not race neutral, so it’s unlikely that conversations about controlling the use of AI will be.

“Oftentimes our conversations sort of start and stop with academic integrity,” Davis says. “Yes, we need to be aware of that. But from an equity perspective, there are bigger issues faculty need to talk to their students about.”

One issue is algorithmic bias: systematic, repeatable errors in a computer program’s recommendations or predictions that produce unfair results privileging one group of users over another. An algorithm will appear impartial because biased instructions are not explicitly written into it, but the data the algorithm learns from can have structural and historical bias baked in.

For example, suppose a college admissions office wants to use AI to identify applicants who are likely to succeed at their college and the AI relies on that institution’s previous admissions and graduation data. If that data has a lot of students with high school AP classes in it, the software will train itself to treat more AP classes as a signal of quality — even without explicit instructions — and thereby replicate an existing structural bias.
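The admissions scenario above can be sketched in a few lines of code. This is a toy illustration with made-up data and a deliberately simple nearest-mean classifier, not any real admissions system: the model is never told to prefer AP classes, yet because historically admitted students in the training data happened to have more of them, it learns that preference anyway.

```python
# Toy illustration of algorithmic bias (hypothetical data, not a real system).
# Historical records: (ap_classes, gpa, admitted). Past admissions tracked
# AP count, which reflects a school's resources more than student ability.
history = [
    (5, 3.6, True), (4, 3.4, True), (6, 3.9, True), (3, 3.8, True),
    (0, 3.9, False), (1, 3.7, False), (0, 3.5, False), (2, 3.3, False),
]

def train(records):
    """Compute per-feature means for each outcome (a minimal nearest-mean model)."""
    def mean_of(label):
        rows = [(ap, gpa) for ap, gpa, admitted in records if admitted == label]
        n = len(rows)
        return (sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n)
    return mean_of(True), mean_of(False)

def predict(model, ap, gpa):
    """Admit if the applicant is closer to the historical 'admitted' profile."""
    (ap_y, gpa_y), (ap_n, gpa_n) = model
    dist_yes = (ap - ap_y) ** 2 + (gpa - gpa_y) ** 2
    dist_no = (ap - ap_n) ** 2 + (gpa - gpa_n) ** 2
    return dist_yes < dist_no

model = train(history)
# A strong student (4.0 GPA) from a school that offers no AP classes:
print(predict(model, ap=0, gpa=4.0))  # False: rejected despite the top GPA
# A weaker student (3.2 GPA) with many AP classes available to them:
print(predict(model, ap=5, gpa=3.2))  # True: admitted on the AP signal
```

Nothing in the code mentions AP classes as a criterion; the bias arrives entirely through the training data, which is exactly what makes it hard to spot.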

“We have these technologies that we think are dispassionate and incapable of oppression,” Davis says. “In reality, they’re extraordinarily biased. The danger is that we are trained to think it’s unbiased and to trust it more than we trust humans.”

Looking at generative AI from an equity perspective also means thinking about access, since many tools are behind paywalls. For example, many students are using ChatGPT, but some are using ChatGPT Plus — which is more flexible and more accurate — at a cost of $20/month.

“We run the risk of exacerbating our existing digital divide,” Davis says. “We’re going to see some students have access to the best, know how to use it, and have an advantage searching for jobs or trying to get into graduate programs.”

Related reading — How College Faculty Can Confront Unconscious Bias in Edtech Tools

Striking the right balance between AI and equity

“Generative AI is not the Terminator,” says Davis. “It uses probability to predict the next word in a sequence and create new material. There’s a future where generative AI may help develop content, but it’s never going to usurp the role of subject matter experts.”
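Davis’s description of predicting the next word can be made concrete with a minimal sketch. This hypothetical bigram model simply counts which word follows which in a tiny corpus and picks the most frequent successor; real generative AI systems use neural networks over far more context, but the underlying idea of probabilistic next-word prediction is the same.

```python
from collections import Counter, defaultdict

# Count, for each word in a tiny corpus, which words follow it and how often.
corpus = "the student reads the book and the student writes the essay".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the most frequent word observed after `word`, or None."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "student": it follows "the" most often here
```

With more data, the counts become a probability distribution over possible next words, which is what a large language model is sampling from.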

As Davis has written earlier on this topic, the technology isn’t neutral, nor are our responses to it. Generative AI has the opportunity to revolutionize the classroom and digital learning. But to ensure those changes are for the better, it’s important to assess the risks and the opportunities from an equity perspective.

Professional development opportunity: Use data for equitable teaching and learning