Assessment for Learning MOOC’s Updates

The Coursera MOOC platform as an example of applied standardized tests

The Coursera MOOC [1] "Assessment for Learning" [2] introduces canonical categories of standardized tests, namely "select-response" and "supply-response". This text was written as an "update", i.e. a "supply-response assignment", in said MOOC, on the following topic:

"Parse" a standardized test. Or describe the implementation of a standardized test in practice. What are its strengths and weaknesses?

The Coursera platform provides an interesting opportunity for studying standardized tests "in the wild". As a student with a company-sponsored Coursera account (and an individual who indulges in an "all-you-can-eat buffet" of courses), the author has been subject to a wide range of standardized tests:

  • regular "select-response" items with more or less well-designed distractors
  • "supply-response" in the disguise of "select-response" (e.g., selecting an answer that requires at least some analytical or programming work unless guessed at random)
  • regular "supply-response" items, from "fill in a word or a number" to peer-review assignments with a regular rubric
  • "supply-response" items, e.g., programming assignments with automated grading through "code tests"
  • Coursera "SkillSets" [3] as automated assessment tests, designed around "select-response" items from selected courses in a domain of skills (sometimes including items that are in fact "supply-response" in disguise)

In this framework it is obvious that the supplier (and designer) of a MOOC is responsible not just for the quality of the course but also for the validity of a range of assignments or "quizzes". This is challenging, as it may have a direct impact on the "recommendation score" of courses, and thus on their success on the platform.

The Coursera "SkillSets", designed as a tool for automated assessment within an employer-defined curriculum, can be regarded as an attempt at applying a range of methods introduced in this course. The learner starts with a "SkillSets Score" of zero and works towards employer-defined "proficiency" levels on a scale of up to 500, e.g.,

  • Conversant (1-50)
  • Beginner (51-150)
  • Intermediate (151-350)
  • Advanced (351-500)
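The banding above can be sketched as a simple score-to-level mapping. Note that the treatment of a score of exactly zero ("no level yet") is the author's assumption, not documented Coursera behaviour:

```python
def proficiency_level(score: int) -> str:
    """Map a SkillSets score (0-500) to its proficiency band.

    Band boundaries follow the levels quoted above; the label for a
    score of 0 is an assumption for illustration only.
    """
    if score <= 0:
        return "No level"
    if score <= 50:
        return "Conversant"
    if score <= 150:
        return "Beginner"
    if score <= 350:
        return "Intermediate"
    return "Advanced"

print(proficiency_level(40))   # Conversant
print(proficiency_level(477))  # Advanced
```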

Based on a selected "SkillSets" item the MOOC platform "recommends" courses. The learner is prompted "Looking to improve a particular skill? Improve your skill score with any one of these top recommendations."

The assessment tool allows the individual learner to "leap ahead" by taking assessment tests in a range of topics, e.g., "Data Visualization", "Computer Programming", "Probability & Statistics" or "Mathematics". Such tests are available for a limited number of "SkillSets skills" and can be taken twice.

While a formal validation of such a tool is certainly very challenging, epistemologically speaking it is easy to cast doubt on the validity of a theory or its implementation with a few anecdotal but confirmed counter-examples.

There are a number of peculiarities that may have an undesired effect on the validity of the assessment - no matter whether it is knowledge, skill, or even aptitude for a task that is to be assessed.

  • recommended courses, e.g., for the "Skill" "Mathematics", may not increase the "SkillSets Score" at all *)
  • automated "SkillSets skill" assessments are rather unpredictable
  • "SkillSets" assessment tests ("leap ahead") may not always correctly identify a "supply-response" task "in disguise", leading to ill-calibrated tests with regard to "Item Response Theory"
  • working knowledge in areas like Epistemology may give assessees an unfair advantage in "SkillSets skills" like "Machine Learning" **)
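The calibration concern can be illustrated with the two-parameter logistic (2PL) model commonly used in Item Response Theory. The parameter values below are purely illustrative and are not taken from any Coursera data:

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL model: probability that a learner of ability theta answers
    an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An item calibrated as an easy "select-response" question (b = -1.0)
# versus the same item's actual behaviour as a "supply-response in
# disguise" task (b = +1.5): the predicted success rate for an average
# learner (theta = 0) diverges sharply, so the calibration is off.
print(round(p_correct(0.0, 1.0, -1.0), 2))  # 0.73
print(round(p_correct(0.0, 1.0, 1.5), 2))   # 0.18
```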

The author hypothesizes that the "SkillSets" tool is based on NLP (Natural Language Processing) using ML (Machine Learning) models rather than on a formal ontology (e.g., using the BFO framework).

Based on anecdotal evidence, including help-desk interaction, it may be permissible to conclude that the Coursera SkillSets tool stands out as "validity challenged".

*) The author selected the "SkillSets" item "Data Scientist" and found that the course "Data Science Math Skills", "recommended" by Coursera for improving the skill "Mathematics", only contributed to the skill "Probability & Statistics" (which is, arguably, applied mathematics). On request, Coursera support was unable to do anything about this and recommended taking a course with "Mathematics" in its name, e.g., "Mathematics for Machine Learning: Linear Algebra". Completing that course, unsurprisingly, contributed about 40 points each to the scores for "Machine Learning" and "Linear Algebra" - but only 5 points to "Mathematics".

**) With an amateur background in pragmatic philosophy, semiotics and epistemology, and some courses in Bayesian statistics, but no hands-on experience with "Deep Learning", the author scored 477/500.

[1]: https://en.wikipedia.org/wiki/Massive_open_online_course
[2]: https://www.coursera.org/learn/assessmentforlearning
[3]: https://www.coursera.support/s/article/360057142192-SkillSets?language=en_US