How College Faculty Can Confront Unconscious Bias in Edtech Tools

Educational technology has the potential to support personalized instruction in the classroom, but it is not without its biases. The algorithms behind digital learning technology are programmed by human developers, so a lack of diversity in software development can mean technologies don’t account for the experiences of every student.

In addition, the data sets used to train artificial intelligence and machine learning models can reinforce inequity, a phenomenon called algorithmic bias. For example, hiring software trained on résumés from a male-dominated workforce tends to discriminate against female candidates, and facial recognition software trained mostly on Caucasian faces is 10 to 100 times more likely to misidentify people of color.

Every Learner Everywhere is focused on the intersection of equity efforts, digital learning technology, and high-stakes gateway courses in higher education. Edtech tools can be powerful resources to help every student reach their full potential. But, says Jessica Rowland Williams, Director of Every Learner Everywhere, if we don’t also step back and interrogate the design of those tools, the equity work will fall short.

In fact, scaling up powerful learning technologies will be counterproductive if they have traditional inequalities built into them. One common concern is that the AI in the learning management systems colleges use could misidentify students as low-performing, which could lead their professors to treat them differently or otherwise put them at a disadvantage. When someone decides what constitutes a “low-performing” student, that opinion is embedded into widely distributed software. Unchecked, this has the potential to amplify bias by several orders of magnitude and hide it under a veneer of “impartial” technology.

In this interview (edited for clarity and length), Williams argues that the edtech industry faces serious risks if it doesn’t become more diverse and equity centered. In the meantime, instructors risk perpetuating inequities if they don’t have an effective way to check and correct the biases of their edtech tools, and she offers practical tips for working with digital learning technologies.

How can bias get baked into educational technology tools?

We all have biases, and if we’re not conscious of those, they will inevitably be in whatever we’re building. Biases are always embedded in technology — even educational technology — because people build technology. Algorithms are only as informed as the programmers and developers who design them.

I’ve noticed there’s a huge disconnect between the populations being developed for and the actual developers. Tech is largely dominated by white men, with some diversity from Asian developers. But many of the most vulnerable populations, including Black and Latino students, are not represented in the edtech development space, even though they are a large share of the users.

To avoid embedding inequity in digital learning technology, developers must intentionally avoid status quo design or designing for the “average student,” who is often thought of as white, male, and middle- to upper-income. They also must diversify their pool of developers to include a broader range of perspectives and lived experiences in the development process.

What is the impact on student learning if the team of software developers doesn’t have diverse experiences?

One example is the assumptions made about how long an assignment should take. Many tools use a metric called time-on-task, and students may be automatically flagged as “at risk” or needing additional help if they take too long. It’s important to interrogate whether the assumption we bake into the AI about how long an assignment should take is really accurate. How was that decision made? Whose biases informed that decision?
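To make that concrete, here is a minimal, hypothetical sketch of how a fixed time-on-task expectation can turn a developer’s assumption into an “at risk” label. The 30-minute baseline, the multiplier, and the field names are invented for illustration, not drawn from any actual product; the point is simply that someone chose those numbers.

```python
# Hypothetical sketch: an unexamined time-on-task assumption becoming an
# "at risk" flag. The threshold values below are invented for illustration.

from dataclasses import dataclass

EXPECTED_MINUTES = 30      # a developer's assumption about how long the work "should" take
AT_RISK_MULTIPLIER = 2     # flag anyone who takes more than twice the expected time


@dataclass
class Attempt:
    student_id: str
    minutes_on_task: float


def flag_at_risk(attempts: list[Attempt]) -> list[str]:
    """Return students whose time-on-task exceeds the baked-in threshold.

    A student on a slow rural connection, or one working around a job or
    caregiving schedule, can exceed the cutoff for reasons that have nothing
    to do with how well they understand the material.
    """
    cutoff = EXPECTED_MINUTES * AT_RISK_MULTIPLIER
    return [a.student_id for a in attempts if a.minutes_on_task > cutoff]


if __name__ == "__main__":
    attempts = [
        Attempt("student_a", 25),   # fast connection, quiet study space
        Attempt("student_b", 75),   # intermittent internet; reloaded the page twice
    ]
    print(flag_at_risk(attempts))   # ['student_b'] -- flagged by the threshold, not by learning
```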

Adaptive learning works by responding to inputs, and the reality is the way I respond to something and the way a rural student with limited internet access responds to something might be completely different. We are defining what’s normal without exploring where those norms come from.

AI and adaptive learning software can help instructors quickly assess student learning and support individualized instruction, but there are many factors the algorithm cannot take into account, such as a student’s living situation or cognitive abilities, which also affect performance. If an algorithm routinely places a student in a learning track that doesn’t align with their learning needs, it can ultimately hinder their growth.

Apart from the underlying algorithms, how is the user interface design also important?

One of our recent surveys of students shows that the way students experience a tool matters: whether they find it simplistic, whether it’s easy to understand, and whether the language being used resonates with them.

If developers do not have lived experiences that orient them to what it may be like for those students to use these digital tools, then the tools are developed without that vantage point ever being taken into account.

How does bias influence the implementation stage once colleges have a tool in hand?

Bias is built into everything related to a course, from the way the syllabus is designed, all the way to how students are assessed.

One concrete example might be a policy requiring that students keep their cameras on during class discussions. That involves a lot of assumptions around wifi and broadband access, but also an assumption about how comfortable a student feels inviting their entire class into their learning space.

I never thought about this until a student shared their lived experience with me. We take for granted that students have the freedom to show up the same way in school and at home. But an LGBTQ student, for example, may be out around their classmates but not out at home. If how technology is implemented puts you in an environment where you need to be speaking to your class while your mom’s in the kitchen, that’s a very tricky situation.

Related reading: Reflect and Check: 3 Ways to Center Equity in Course Design

What steps can instructors and institutions take to mitigate this problem?

Use Resources. Some of the biases built into courseware will be mitigated by a well-designed course and effective instructional practices. Every Learner Everywhere and our network partners discuss this more in our Adaptive Courseware Implementation Guide. It recommends practices like creating student-friendly tech support, thinking through how students purchase the software, and monitoring the log-in data to uncover problems early (a rough sketch of what that monitoring might look like follows these steps). Other resources that support faculty in centering equity in digital learning can be found in our Resource Library.

Educate Yourself. Faculty can also raise their personal awareness about algorithmic bias by becoming familiar with the work of the AI Now Institute at New York University and The Algorithmic Justice League.

Ask Questions. Finally, be intentional about seeking out edtech tools that have been developed by people of color. Ask vendors and edtech developers how they’ve embedded equity into their products, and hold them accountable by letting them know this is an important factor in your decision making.
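As a rough illustration of the log-in monitoring mentioned in the first step, the sketch below compares first-week log-in rates across student groups to surface gaps early. The column names, file format, and alert threshold are assumptions made for the example, not fields from any particular learning management system; adapt them to whatever your campus tools actually export.

```python
# Hypothetical sketch of monitoring early log-in data, per the implementation
# guidance above. Column names and the alert threshold are invented; adjust
# them to match your own roster or LMS export.

import csv
from collections import defaultdict

ALERT_GAP = 0.15  # flag groups whose log-in rate trails the course average by 15+ points


def login_rates_by_group(path: str) -> dict[str, float]:
    """Read a roster CSV with 'group' and 'logged_in_week1' columns and
    return the share of each group that logged in during week one."""
    totals, logged_in = defaultdict(int), defaultdict(int)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["group"]] += 1
            logged_in[row["group"]] += int(row["logged_in_week1"] == "yes")
    return {g: logged_in[g] / totals[g] for g in totals}


def groups_needing_follow_up(rates: dict[str, float]) -> list[str]:
    """Surface groups lagging the overall average, so access problems
    (cost, tech support, broadband) can be addressed before grades suffer."""
    average = sum(rates.values()) / len(rates)
    return [g for g, r in rates.items() if average - r >= ALERT_GAP]
```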

Students deserve tools designed with them in mind, and that has to be done intentionally. That’s not something that’s just going to happen naturally. How you implement the tool is important, but if you don’t ever take a step back to interrogate the design of the tool, then your equity work is incomplete.