The Research Brief is a short take about interesting academic work.
The big idea
Students don’t have to be supervised during online exams. That’s because unsupervised online exams can accurately assess student learning, according to our study published in July 2023 in the Proceedings of the National Academy of Sciences.
Our data set comprised nearly 2,000 students at a public university in the Midwest. We analyzed exam scores from the first half of the spring 2020 semester, when tests were administered in person, and from the second half, when the pandemic forced schools to shift online. This let us compare how students performed on in-person exams versus online exams in the same course, taught by the same instructor.
Our data showed a strong correlation between the scores students earned on unsupervised online exams and those they earned on supervised in-person exams. In other words, students who got the best scores on in-person exams also got the best scores on online exams.
We also examined whether this correlation varied with students' stage in their college careers, the course discipline, the class size, or whether the exams featured mainly multiple-choice or short-answer questions. None of those factors significantly affected how well online exams assessed student learning.
We further analyzed our data for clear signs of cheating during online exams. Because students who are doing poorly in a course are more likely to cheat, we predicted that students who had done poorly on the in-person exams during the first half of the semester would show larger score gains on the online exams if they were cheating.
We found no evidence of this type of cheating. This is important because most people expect students to cheat on online exams. For example, a recent survey found that more than 70% of college faculty believed cheating to be a significant problem for online exams, but only 8% believed the same for in-person exams.
Why it matters
COVID-19 has accelerated the adoption of online teaching and assessments. For that reason, we thought it was important to examine whether unsupervised online exams can accurately assess learning.
Previous studies have shown that students tend to score higher on online exams than on in-person exams. Those results have sometimes been taken as evidence of cheating, which calls into question the suitability of online exams as a form of assessment.
But to judge whether online exams accurately assess learning, we must show that a student who earns high marks on in-person exams does the same on online exams, and vice versa. In other words, the two exam formats should rank-order students similarly, which is exactly what we found in our data.
What’s next
Although this data shows that online exams, even when unproctored, can accurately assess student learning at a relatively broad scale, it all comes from a single university. For that reason, caution is needed before drawing general conclusions.
Moreover, much has changed in online education over just the past year with the rise in popularity of generative AI tools like ChatGPT, which can facilitate cheating.
We want to obtain a larger data set to determine whether our results hold beyond a single university.
Jason C.K. Chan receives funding from the National Science Foundation.
Dahwi Ahn does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.