Every Learner Everywhere

Principles for Understanding AI in the Classroom

Marc Watkins at the University of Mississippi Center for Excellence in Teaching and Learning works to bring together both AI skeptics and early adopters to help them think about how the technology is reshaping writing, reading, research, teaching, and learning.

“You have the choice to use the technology or not,” he says. “You can assign readings about the ethics of using AI if you want to teach about AI versus teaching with AI. That’s perfectly appropriate. But you must have a framework in mind. What you can’t do is bury your head in the sand and try to ostrich your way through this.”

As Academic Innovation Fellow and Lecturer of Writing and Rhetoric, Watkins runs the University of Mississippi’s Summer and Winter AI Institutes for Teachers.

He also recently led an Every Learner Everywhere workshop on Building AI Literacy with Students, which is part of the Digital Learning Workshops series. These workshops help administrators, instructional designers, and faculty redesign learning experiences to incorporate digital-age tools for student success.

He says one effect of the Institute is to bring everyone together to get them talking and thinking about ways to use AI technology to help students. “But we also want them to think about what guardrails and boundaries they want to put in place to preserve their teaching and not lose their minds,” he adds. “AI is not only a new tool, but a new way of dealing with different information in the digital space.”

The fast-changing landscape is one of the biggest challenges in adopting AI. The developers of the foundational models, like OpenAI’s ChatGPT or Google’s Gemini, continuously release new versions of the software with powerful new features. Educators need a way to understand AI, discuss the options available, and experiment with the tools before trying them out in the classroom.

For these reasons, educators need frameworks more than absolute rules.

Humans working with the machine

Watkins says students often share their instructors’ concerns about academic integrity, and they want to know how the technology will affect their career readiness.

“They aren’t chatbots,” he says. “They’re there to learn, so they will look to their instructors for guidance about using these tools.”

He believes faculty are often frustrated that there are no simple answers when dealing with AI. A 15-minute lunch-and-learn presentation with top tips for resolving AI issues in the classroom won’t be sufficient.

Watkins encourages instructors to be as engaged as possible but reminds them to approach the technology from a philosophy of human agency.

Adopting a thoughtful AI policy

As one beneficial use of AI, Watkins points to video conferencing tools that can now transcribe meetings in real time and generate memos with action items.

“This can be a godsend for students who need help taking notes,” he says. “However, we don’t want unintentional adoption of the technology because it threatens our educational system for students and faculty.”

Finding the balance is the challenge. “AI is so vast with so much material at one point in time that there’s a threat it will overwhelm people,” Watkins says.

“Our institutes let people unpack all of this, talk about it, sit down, and think about what it means.”

A strict no-use policy is probably impractical since AI is becoming embedded as a feature in common tools like Microsoft Word, Blackboard, and Google Docs. And hardline approaches also don’t suit the fast rate of change.

Understanding AI assessment in the classroom

Watkins cautions instructors against trying to label student work as AI generated since AI detection tools are notoriously unreliable. OpenAI even shut down its AI detection tool because it gave too many false positives.

They also exacerbate inequities, since false positives disproportionately affect non-native English writers and students with disabilities.

Watkins walks participants in his workshops through exercises designed to illustrate how difficult it is to determine whether text is AI generated or if photos are real or fake. People quickly realize the difficulty of the task.

Trying to “catch” students isn’t the answer; it destroys the trust between students and instructors. Instead, Watkins and the University of Mississippi AI Institute encourage educators to talk to students about what is and is not acceptable, give students a mechanism for using AI, and encourage them to disclose that use in their work.

The most essential ingredient is to be transparent and talk to students. Instructors must explain their requirements for student use of AI and discuss how they plan to use AI in the classroom.

Watch our workshop page for more events on equitable teaching with AI