Abstract
Motivated in part by the COVID-19 pandemic, participation in online education at all levels has increased significantly and continues to grow. Although online education expands equal learning opportunities, one major problem it faces is cheating. This concern, supported by data from the pandemic, is now heightened by the emergence of free, online, AI-based resources such as ChatGPT. To ensure the integrity of online education, educators must design assessments that effectively and reliably limit the extent to which students can cheat using ChatGPT. This paper proposes four approaches that can be incorporated into the design of assessments for statistics and data science courses. These approaches leverage the fact that ChatGPT is not designed to answer queries based on information outside its training data. The first class of assessments, “self-referential queries,” is conditioned on prior information covered in the class, of which ChatGPT has no direct knowledge. The second class, “information-starved queries,” leverages idiosyncratic approaches an educator uses that are not part of the mainstream curriculum. The third class, “temporally based queries,” exploits the fact that ChatGPT is not trained on current data. The fourth approach uses online teamwork platforms such as Notion and Slack, with embedded web bots that monitor teamwork and track who did what. We present examples of the four approaches and address practical issues that might limit their implementation in current online assessment platforms.
Presenters
Hong Liu
Professor, Mathematics, Embry-Riddle Aeronautical University, Florida, United States
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
2024 Special Focus—People, Education, and Technology for a Sustainable Future
KEYWORDS
ChatGPT, Data Science Education, Education Technologies, Learning Assessment