The student learning data generated by digital technologies creates opportunities to evaluate courses if faculty and an institution’s Office of Institutional Research (IR) are able to cooperate effectively. But Julie Neisler, Quantitative Researcher, Learning Sciences Research at Digital Promise, says some faculty mistakenly believe they simply need to put in a request to IR at the end of the semester and they’ll get the data they need.
That’s a recipe for incomplete or less insightful data collection and analysis of the effects of changes made in a course. Instead, say Neisler and her colleague Barbara Means, Executive Director, Learning Sciences Research at Digital Promise, it’s critical that faculty build a relationship with IR early in the course design process in order to evaluate the course later.
As the repository for student records, IR has the student demographics needed to generate an unbiased estimate of the impact of your adaptive course. Those records include the student ID, Pell eligibility, prior academic achievement, race, age, and more. IR can also help you prepare for a focused analysis later and anticipate problems collecting and disaggregating student data, which is key to knowing how equitable a course is and whether it is closing or exacerbating equity gaps.
If you are a college or university instructor implementing adaptive learning or other digital learning technology in a course, how can you work effectively with IR at your institution to evaluate your results and use that insight for continuous improvement?
Identify the research question
Means says, “Typically faculty are not in a position to produce convincing evidence of impact” when implementing significant changes to instruction.
A key step to evaluating courses is helping IR understand the research question you are trying to answer. That works best if they are involved from the course design stage. Having someone on your team who understands research design and how to set up a reasonable comparison group means that, at the end of the semester, you can extract reliable data both for the students in your course and for those in the comparison group.
Know your student characteristics
College instructors evaluating a pilot project may not know important characteristics of their students that are available in IR records, such as whether a student is attending the college full time or part time.
One of the most important characteristics is prior achievement, a strong predictor of academic success. Prior achievement can be measured by a placement test in, for example, mathematics or language arts, or by a student’s GPA from high school.
Depending on your research question, other characteristics that may be useful to know are how many students in your class are veterans, Pell eligible, or attending part time. Part-time students are more likely to be juggling work and family responsibilities, which can be a factor in learning outcomes. Not accounting for these student characteristics when comparing course outcomes for different groups of students can skew the data comparison and analysis at the end of the semester.
Neisler recalls working with a faculty member who went from teaching a daytime section of a course without adaptive courseware to teaching an evening section with adaptive courseware. As a result, her students went from almost all full time to almost all part time.
But the change in student characteristics was less visible than the change in modality. If the instructor evaluated the course by comparing grades for the two semesters and found that they were lower for the evening students, she might incorrectly infer that the courseware had depressed performance. Using data from IR, she can compare part-time day students to part-time evening students to get a cleaner comparison.
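To make the contrast concrete, here is a minimal sketch of that kind of subgroup comparison in Python with pandas. The file name and the columns (section, enrollment_status, final_grade) are hypothetical stand-ins for whatever fields your merged courseware and IR data actually contain.

```python
# A minimal sketch of the subgroup comparison described above, assuming a
# merged dataset with hypothetical columns: section ("day" or "evening"),
# enrollment_status ("full-time" or "part-time"), and a numeric final_grade.
import pandas as pd

students = pd.read_csv("merged_course_data.csv")  # hypothetical file

# Naive comparison: mixes mostly full-time day students with mostly
# part-time evening students, so any difference is confounded.
naive = students.groupby("section")["final_grade"].mean()

# Cleaner comparison: hold enrollment status constant and compare like with like.
part_time = students[students["enrollment_status"] == "part-time"]
cleaner = part_time.groupby("section")["final_grade"].agg(["mean", "count"])

print("All students, by section:\n", naive)
print("\nPart-time students only, by section:\n", cleaner)
```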
Define your outcome measures
At some point, faculty and IR will be merging data that each has collected, and that merge should be guided by what outcomes will illuminate the research question. The outcome measures that matter may be closing student equity gaps, reducing DFW or DFWI rates (the share of students who receive a D or F, withdraw, or take an incomplete), or improving equity in gateway courses. Neisler advises, “Whatever your goal for your course is, the adaptive courseware should include those measures.”
Meanwhile, define which outcome measures require input from IR. To evaluate the course, do you need to sort data by gender? Is knowing a student’s race or ethnicity important? Are you seeking to improve outcomes for Pell-eligible students?
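If reducing the DFW rate for particular groups is part of your research question, a disaggregated version of that measure might be computed along these lines once courseware and IR data are merged. This is only a sketch: the file and column names (letter_grade, pell_eligible) are assumptions, not a format IR will necessarily provide.

```python
# A hedged sketch of a disaggregated outcome measure. "DFW" here counts
# final grades of D or F plus withdrawals (W); column names are illustrative.
import pandas as pd

students = pd.read_csv("merged_course_data.csv")  # hypothetical file
students["dfw"] = students["letter_grade"].isin(["D", "F", "W"])

overall_rate = students["dfw"].mean()
by_pell = students.groupby("pell_eligible")["dfw"].mean()

print(f"Overall DFW rate: {overall_rate:.1%}")
print("DFW rate by Pell eligibility:")
print(by_pell.map("{:.1%}".format))
```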
Tying the data together
You need three things to merge courseware data with IR records — the student login name within the learning management system, the student’s actual name, and their institutional ID.
One challenge is that the student’s login name may not be the same as their actual name. Another is that the courseware typically doesn’t carry the institutional ID, though some institutions arrange with their vendor to tie the student login ID to the institutional ID.
Resolving this is rarely a “push button” process, which is another reason why it’s important for faculty to coordinate with IR as early as possible.
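In code, the merge itself is straightforward once the identifiers line up. The sketch below assumes three hypothetical files: courseware activity keyed by LMS login, an LMS roster that maps login to institutional ID, and an IR extract keyed by institutional ID. The hard part in practice is getting that crosswalk right, which is where IR comes in.

```python
# A minimal sketch of merging courseware data with IR records, using the
# LMS roster as a crosswalk from login name to institutional ID. File and
# column names are illustrative assumptions.
import pandas as pd

courseware = pd.read_csv("courseware_activity.csv")  # lms_login, modules_completed, ...
roster = pd.read_csv("lms_roster.csv")               # lms_login, student_name, institution_id
ir_extract = pd.read_csv("ir_extract.csv")           # institution_id, pell_eligible, enrollment_status, ...

merged = (
    courseware
    .merge(roster, on="lms_login", how="left")
    .merge(ir_extract, on="institution_id", how="left")
)

# Records that fail to match usually reflect login names that differ from
# official records and have to be resolved by hand with IR.
unmatched = merged[merged["institution_id"].isna()]
print(f"{len(unmatched)} courseware records could not be matched to IR records")
```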
Designing your evaluation report
What does a model course evaluation report look like? Means says there isn’t a standard, because “there’s a wide variance, depending on whether you’re teaching a high-enrollment class or you’re working with a lot of minoritized students in a community college.”
But she does describe an ideal scenario for defining and measuring student outcomes: two faculty members teaching the same course who embark on an adaptive learning redesign together.
In that case, they share a common course structure, learning outcomes, midterm and final exams, courseware, and instructional strategies. They’ve engaged the IR office early in the design process and are getting feedback from students by asking questions on a regular basis. This works especially well when both instructors discuss that feedback and use that data to make course improvements during the semester.
Prepare for privacy
Data from IR typically comes back to the instructor anonymized. For example, you may know you have three veterans and two Pell-eligible students in your class, as well as data about their prior achievement, but not who they are.
With small classes, IR has to be careful that the data it provides doesn’t make it obvious which characteristics belong to which students. For example, IR might avoid showing characteristics such as race and Pell eligibility status to avoid feeding into an implicit bias.
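One common safeguard here is small-cell suppression: subgroup results are simply withheld whenever the group is too small to report without effectively identifying individual students. A minimal sketch, with an illustrative threshold and hypothetical column names rather than any particular institution’s policy:

```python
# Suppress reported averages for any subgroup smaller than a minimum cell
# size. The threshold of 5 and the column names are illustrative assumptions.
import pandas as pd

MIN_CELL_SIZE = 5

students = pd.read_csv("merged_course_data.csv")  # hypothetical file
summary = students.groupby("pell_eligible")["final_grade"].agg(["mean", "count"])

# Mask averages for groups too small to report safely.
summary.loc[summary["count"] < MIN_CELL_SIZE, "mean"] = float("nan")
print(summary)
```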
As an instructor, you want to know what data the software is collecting about students and how it is being used. Make sure consent to data collection is informed consent, not just a student checking a box without understanding how the data will be used.
To evaluate courses, prepare for success
Faculty wanting to evaluate the impact of new digital learning technologies should be cognizant of the resources, reporting requirements, and workload of the institutional research office. Remember that IR is processing grades for every course at the end of the semester and has its own high-stakes reporting deadlines to meet. For example, IR works on many of the regulatory requirements that higher ed must comply with.
As a result, it’s not unusual for faculty to wait a month or two for that final report on student outcomes. “Most IR offices were set up to do mandatory reporting to the federal government,” Means explains. “In some cases, they’re quite understaffed. This is a new role for IR to support better teaching and learning.”
Adaptive courseware, when designed and used effectively, can result in better student outcomes. But it is not plug-and-play. Faculty and administrators need to get the right people around the table at the right time to evaluate courses that have been redesigned to use digital learning technologies and to close equity gaps. If the expectation is that data analysis will drive improvement at your institution, be sure to include your Office of Institutional Research.
Download our guide Improving Critical Courses Using Digital Learning & Evidence-based Pedagogy
Originally published December 2020. Updated September 2021 with additional information and references.