
How Faculty and IR Can Work Together to Evaluate Courses Using Digital Learning Tech

Analyzing and using student learning data generated by digital technologies often depends on cooperation between faculty and an institution’s Office of Institutional Research (IR). But Julie Neisler, Quantitative Researcher, Learning Sciences Research at Digital Promise, says some faculty mistakenly believe they simply need to put in a request to IR at the end of the semester and they’ll get the data they need.

That’s a recipe for incomplete or less insightful collection and analysis of data on the effects of changes to a course. Instead, say Neisler and her colleague Barbara Means, Executive Director, Learning Sciences Research at Digital Promise, it’s critical that faculty build a relationship with IR early in the course design process.

As the repository for student records, IR has the student demographics needed to generate an unbiased estimate of the impact of your adaptive course. Those records include the student ID, Pell eligibility, prior academic achievement, race, age, and more. IR staff can also help you prepare for a focused analysis later and anticipate problems with collecting and sorting data.

If you are implementing adaptive learning technology in a college course, how can you work effectively with IR at your institution to evaluate your results and use that insight for continuous improvement?

Identify the research question

Means says, “Typically faculty are not in a position to produce convincing evidence of impact” when implementing significant changes to instruction.

A key step to a successful evaluation is helping IR understand the research question you are trying to answer. That works best if they are involved from the course design stage. Having someone on your team who understands research design and how to set up a reasonable comparison group makes it possible, at the end of the semester, to extract reliable data both for your own students and for the students in the comparison group.

Know your student characteristics

Instructors may not know important characteristics of their students that are available in IR records, such as whether a student is attending your college full time or part time.

One of the most important characteristics is prior achievement, a strong predictor of academic success. Prior achievement can be measured by a placement test, for example, in mathematics or language arts, or by a student’s high school GPA.

Depending on your research question, other characteristics that may be useful to know are how many students in your class are veterans, Pell eligible, or attending part time. Part-time students are more likely to be juggling work and family responsibilities, which can be a factor in learning outcomes. Failing to account for these student characteristics when comparing course outcomes for different groups of students can skew the comparison and analysis at the end of the semester.

Neisler recalls working with a faculty member who went from teaching a daytime section of a course to teaching an evening section with adaptive courseware. As a result, her students went from almost all full time to almost all part time. If she compared grades for the two semesters and found that they were lower for the evening students, she might incorrectly infer that the courseware had depressed performance. Using data from IR, she can compare part-time day students to part-time evening students to get a cleaner comparison.
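To make that point concrete, here is a minimal sketch of this kind of stratified comparison in Python with pandas. The file name and columns (section, enrollment_status, grade_points) are illustrative assumptions, not from the article:

```python
import pandas as pd

# Hypothetical merged dataset: one row per student, with the section taken,
# enrollment status from IR records, and a numeric final grade.
df = pd.read_csv("merged_outcomes.csv")

# A naive comparison of section averages mixes full-time and part-time students,
# so a difference may reflect enrollment status rather than the courseware.
naive = df.groupby("section")["grade_points"].mean()

# Stratifying by enrollment status holds that characteristic constant:
# part-time day students are compared with part-time evening students, and so on.
stratified = df.groupby(["enrollment_status", "section"])["grade_points"].agg(["mean", "count"])

print(naive)
print(stratified)
```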

Define your outcome measures

At some point, faculty and IR will be merging data that each has collected, and that merge should be guided by which outcomes will illuminate the research question. The outcomes that matter may be closing gaps between different student groups, reducing DFWI (drop, fail, withdraw, incomplete) rates, or improving equity in gateway courses. Neisler advises, “Whatever your goal for your course is, the adaptive courseware should include those measures.”
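As a concrete illustration, a DFWI rate can be computed directly from final grade codes. This is a minimal sketch assuming a hypothetical grade export; the file name, column name, and grade codes are illustrative:

```python
import pandas as pd

# Hypothetical grade records: one row per student with a final grade code.
grades = pd.read_csv("final_grades.csv")  # assumed column: grade, with codes A-F, W, I

# DFWI rate: the share of students who earn a D or F, withdraw, or take an incomplete.
dfwi_rate = grades["grade"].isin(["D", "F", "W", "I"]).mean()
print(f"DFWI rate: {dfwi_rate:.1%}")
```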

Meanwhile, define which outcome measures require input from IR. Do you need to sort data by gender? Is knowing a student’s race or ethnicity important? Are you seeking to improve outcomes for Pell-eligible students?

Tying the data together

You need three things to merge courseware data with IR records — the student login name within the learning management system, the student’s actual name, and their institutional ID.

One challenge is that the student’s login name may not be the same as their actual name. Another is that the courseware typically doesn’t carry the institutional ID, though some institutions arrange with their vendor to tie the student login ID to the institutional ID.

Resolving this is rarely a “push button” process, which is another reason why it’s important for faculty to coordinate with IR as early as possible.
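When the institution has arranged a crosswalk from login names to institutional IDs, the merge itself can be scripted; the manual matching of leftover logins is where the “push button” analogy breaks down. A minimal sketch, assuming three hypothetical CSV exports with illustrative column names:

```python
import pandas as pd

# Hypothetical exports; file and column names are illustrative.
courseware = pd.read_csv("courseware_export.csv")  # lms_login, mastery_score, ...
crosswalk = pd.read_csv("ir_crosswalk.csv")        # lms_login, institutional_id
ir_records = pd.read_csv("ir_records.csv")         # institutional_id, enrollment_status, ...

# Step 1: attach the institutional ID to each courseware row via the login name.
merged = courseware.merge(crosswalk, on="lms_login", how="left")

# Step 2: flag logins that failed to match so they can be resolved by hand,
# e.g., a preferred-name login that differs from the official student record.
unmatched = merged[merged["institutional_id"].isna()]
print(f"{len(unmatched)} courseware rows need manual matching")

# Step 3: join the IR demographic and enrollment fields on the shared ID.
merged = merged.merge(ir_records, on="institutional_id", how="left")
```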

Designing your report

What does a model report look like? Means says there isn’t a standard, because “there’s a wide variance, depending on whether you’re teaching a high-enrollment class or you’re working with a lot of minoritized students in a community college.”

But she does describe an ideal scenario for defining and measuring student outcomes: two faculty members teaching the same course and embarking on an adaptive learning implementation together.

In that case, they share a common course structure, the same learning outcomes, the same mid-term and final exams, the same courseware, and the same instructional strategies. They’ve engaged the IR office early in the design process and are getting feedback from students by asking questions on a regular basis. This works especially well when both instructors discuss that feedback and use it to make course improvements during the semester.

Prepare for privacy

Data from IR typically comes back to the instructor anonymized. For example, you may know you have three veterans and two Pell-eligible students in your class, as well as data about their prior achievement, but not who they are.

With small groups, IR has to be careful that the data it provides doesn’t make it obvious which characteristics belong to which students. For example, IR might avoid showing characteristics such as race and Pell-eligibility status to avoid feeding into an implicit bias.
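One common safeguard is small-cell suppression: withholding any breakdown whose group falls below a minimum size. A minimal sketch with an illustrative threshold and hypothetical column names:

```python
import pandas as pd

# Hypothetical anonymized summary from IR: counts by student characteristic.
counts = pd.read_csv("ir_summary.csv")  # assumed columns: characteristic, n_students

# Use a nullable integer dtype so suppressed cells can hold a missing value.
counts["n_students"] = counts["n_students"].astype("Int64")

# Suppress any cell small enough that individuals could be re-identified.
# The threshold of 5 is illustrative; each IR office sets its own rule.
MIN_CELL = 5
counts.loc[counts["n_students"] < MIN_CELL, "n_students"] = pd.NA

print(counts)
```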

As an instructor, you want to know what data the software is collecting about students and how it is being used. Make sure consent to collecting data is informed consent, not just a student checking a box without understanding how the data is being used.

Prepare for success

Faculty should be cognizant of IR’s resources, reporting requirements, and workload at the end of the semester. Remember that IR is processing grades for every course at the end of the semester and has its own high-stakes reporting deadlines to meet. It’s not unusual for faculty to wait a month or two for that final report on student outcomes.

“Most IR offices were set up to do mandatory reporting to the federal government,” Means explains. “In some cases, they’re quite understaffed. This is a new role for IR to support better teaching and learning.”

Adaptive courseware, when designed and used effectively, can result in better student outcomes. But it is not plug-and-play. Faculty and administrators need to get the right people around the table at the right time to evaluate redesigned courses and to make progress. If data analysis is going to drive improvement at your institution, be sure to include your Office of Institutional Research.

Additional resources