One challenge of incorporating AI into college courses is helping students understand that while AI has its uses, they should be careful about the kinds of work they hand over to it. It may be “intelligent” about some tasks — summarizing and reporting, for example — but certainly not all tasks. Happily, I am discovering that students can learn to make that critical distinction and even decide to forgo using AI in some cases. However, students may need multiple opportunities to learn these lessons.
In an earlier article, I wrote about my experience incorporating generative AI into the Pharmacy Ethics course I teach at the University of Mississippi School of Pharmacy. I argued that educators should be less worried about students offloading the research and summary parts of college work and more worried about them offloading the work of forming personal opinions. I explained how in my assignments, which permit the use of AI (as if I could stop it anyway), students struggled to situate their own opinion alongside AI outputs or to use those outputs as a platform for developing their own ideas.
That was at the transition from the fall 2024 term to spring 2025, and I wrote that I planned to make adjustments to my course to account for this problem. Could I design assignments so that AI played a supportive role in the way students develop and express individual opinions on ethical questions?
The spring 2025 term recently finished and, while I am glad to say I am somewhat further down the road in incorporating AI, I will certainly still need to make adjustments the next time I teach the course.
Revising assignments that use AI
In this self-paced online course, students consider ethical questions — the interplay of confidentiality, choice, consent, community, rights, safety, and law — that will be important to them when they graduate and are working as pharmacists. They learn to examine these questions through frameworks like the Belmont Report from the U.S. Department of Health and Human Services and the Pharmacist’s Oath. They develop command of these frameworks by reading and writing about topics like end-of-life care, reproductive health, and gender-affirming care.
A multi-part writing activity typically begins with information gathering, like understanding how local elected representatives have explained their votes on healthcare issues, and AI is often helpful for that. Oftentimes elected officials do not make formal statements on pending legislation or on their voting record. AI helps students gather data on elected officials’ platforms or speeches to determine how they might justify a particular vote on a healthcare issue.
Similarly, as I wrote in the earlier article, I use AI myself to develop many of the summaries of legislation and court cases on these topics, and, of course, I acknowledge my use of AI and provide citations to verify the AI output.
The second part of the assignment often asks students to write an opinion about a challenging problem arising from these topics and to discuss that opinion in the context of the ethical frameworks we are learning about. For example, in a unit with readings about quality-of-life questions, I ask students to discuss whether they would rather have physical or cognitive limitations and to analyze their perspective through an ethical framework such as the three ways to value human life (human life as the supreme value, the sanctity or inviolability of life approach, and the instrumental or quality-of-life approach).
I wouldn’t think AI would be needed or even helpful to answer a question that begins “Would you rather …” or “Do you think …” Educators will recognize those as the kinds of questions where we don’t have a right answer in mind. The point of the exercise is to demonstrate the ability to work through a challenging problem. It seems obvious to me that no AI product has the context to summarize a personal opinion before a student expresses it themselves, especially on a topic they only recently learned about.
But AI has become the first place many students turn when they begin an assignment, and, as I explained in the earlier article, I let students know they can use AI to assist them, but they need to include a statement that they used it. Two results of this troubled me.
First, students often failed to include the acknowledgement. It was as if an invisible barrier kept them from “admitting” to doing something they were not prevented from doing.
Second, their opinion was often simply the output of the AI. Many would paste the “Would you rather …” or “Do you think …” question into the query box and turn in the output as their answer. Those students often had nothing or very little of their own to add beyond agreeing with the “opinion” AI had produced.
(As a matter of interest, 95 percent of the time, this was from ChatGPT.)
In the second iteration of the class revised to integrate generative AI, I made a couple of adjustments. The first was using formatting, lots of repetition, and point deductions to make acknowledging the use of AI a more consistent practice, and that worked for most students.
Second, in addition to asking students to acknowledge the use of AI, the prompt now also asks them to explain how they verified the accuracy of the output. This encouraged them to go beyond a simple “Yes, I used it” and to reflect on how they used it. Students varied in their ability to use authoritative sources to verify AI outputs. Some only said “I Googled it” or pointed to non-authoritative sources such as unnamed professors, social media posts, and their own knowledge of a topic.
But most improved over the term as I provided feedback on the features of appropriate source verification: authors who are qualified to write authoritatively on the topic, minimal evidence of bias in the content, and widely known and respected publication venues such as .gov websites or mainstream news outlets.
Is AI even helpful here?
I didn’t make any other formal changes to the reflection activities asking students to develop an opinion, apart from repeatedly emphasizing the “no wrong answers” point. I was also more conscious of how I commented on their response activities, typically engaging more with students’ original ideas than with the AI output, however limited or undeveloped those ideas might be.
During modules at the beginning and middle of the term, many students did write a little of their own opinion alongside the AI outputs — and then gradually a little more. Over the term, their own contribution became more developed and in-depth. The combination of acknowledging the use of AI, doing the work to verify its outputs, and getting reminders in my feedback seemed to slowly have the desired effect.
Then a surprising thing began to happen in later modules. Many students began to skip the step of using AI altogether and just went directly to responding to the question on their own. AI got weeded out of the workflow, and they were back to doing what students did before generative AI.
I have a few tentative explanations for this:
- They discovered that when it comes to writing an opinion versus a “report,” AI doesn’t actually save time. It was an extra step rather than a shortcut.
- They began to feel more confident in their ability to do what they needed to do to get the grade they wanted. They saw that when they worked without the AI safety net they would be okay.
- They saw that the more they expressed their own ideas, the more engaged a response they got from me, which some of them enjoyed.
Resetting expectations
On reflection, I now think it’s a mistake to assume students know what is being asked of them in an opinion-based response essay. “Would you rather …” or “Do you think …” is not necessarily the obvious rhetorical signal I thought it was.
Put another way, students probably need help understanding the difference between “right answer” and “no wrong answer” questions, and it may take a whole semester to learn that distinction. Simply telling them there is no wrong answer isn’t sufficient.
Likewise, it probably takes more than a single exercise to learn to question the authority of AI and to compare it against authoritative sources. As a teacher, I need to be patient while nudging them toward that ability.
With that in mind, the next time I teach this course, I plan to clarify for students the difference between a prompt that requires external research and one that requires their own thoughts on an issue. I will provide more opportunities for students to develop their opinions on topics before having to share them in assignments. And I will model for students how they can verify AI outputs and summarize that verification process in their submissions.
Students are very grade driven and are trying to do what is necessary to get the grade they want. That’s understandable, and it’s wishful thinking to expect they won’t turn to AI to help them with that. On the other hand, students also enjoy the freedom to explore new ideas and wrestle with big questions in a safe environment.
My experience teaching, for two terms, a course that integrates AI into practice activities and assessments tells me that AI is creating an interesting new challenge for educators — to help students develop an understanding of how AI can help organize, analyze, and summarize their thoughts, while not allowing AI to shape or, even worse, replace their thoughts.