H5P Multiple-Choice & learnr

The following example comes from GitHub issue 564 for the R `learnr` package, which contains the following text:

Multiple Choice: Individual feedback for all answer options

From an educational perspective, it is essential to give individual feedback for all answer options, whether correct or incorrect. Rich feedback is an important feature of Multiple-Choice Questions (MCs) because teachers not only want to give individual feedback for wrong answers ("distractors") but also want to explain why the correct options are correct.

As far as I can see, individual feedback for all answer options is not possible for MCs in the `learnr` package. A message can be displayed only for a wrong answer, and only when students choose that wrong option. Therefore, the general wording for incorrect feedback in `try_again` or `incorrect` must cover all possible combinations, including partly correct selections. The result is awkward wording that can never cover all combinations.

Pizza toppings example with learnr

Look, for instance, at the pizza toppings example.

```{r checkbox-example}
library(learnr)  # question() and answer() come from the learnr package

question(
  "Select all the toppings that belong on a Margherita Pizza:",
  answer("tomato", correct = TRUE),
  answer("mozzarella", correct = TRUE),
  answer("basil", correct = TRUE),
  answer("extra virgin olive oil", correct = TRUE),
  answer("pepperoni", message = "Great topping! ... just not on a Margherita Pizza"),
  answer("onions"),
  answer("bacon"),
  answer("spinach"),
  random_answer_order = FALSE,
  allow_retry = TRUE,
  try_again = "Be sure to select all toppings!"
)
```

Pizza toppings example with H5P

The general wrong-answer feedback `try_again = "Be sure to select all toppings!"` does not make sense if students choose all correct options together with one or more wrong options. General wrong-answer feedback like "Incorrect" is not a solution either, as choosing several correct options (but not all) is better characterized as "only partly correct" than as incorrect. I could not come up with wording for a general wrong-answer message that covers all possible outcomes.

Another side effect of the MC design in `learnr` is that you cannot grade an answer as partially correct. Nor is fine grading with points possible, for instance by calculating the number of chosen correct options minus the number of chosen wrong options.
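
For illustration, such fine grading could look like the following minimal sketch; `partial_score()` is just a hypothetical helper, not a `learnr` function:

```{r partial-score-sketch}
# Hypothetical partial-credit score (not a learnr function):
# number of chosen correct options minus number of chosen wrong options.
partial_score <- function(chosen, correct_options) {
  sum(chosen %in% correct_options) - sum(!(chosen %in% correct_options))
}

correct <- c("tomato", "mozzarella", "basil", "extra virgin olive oil")
partial_score(c("tomato", "basil", "pepperoni"), correct)  # 2 correct - 1 wrong = 1
partial_score(correct, correct)                            # 4: fully correct
```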

I believe that MCs need individual feedback for all chosen options and fine grading. To demonstrate the difference and how it could be implemented, I have prepared an example of the pizza toppings MC with precisely the same choices as in `learnr`, but designed with H5P.

> H5P is a free and open-source content collaboration framework based on JavaScript. H5P is an abbreviation for HTML5 Package and aims to make it easy for everyone to create, share and reuse interactive HTML5 content. (Wikipedia)

If you scroll down past the MC exercise, you see a screenshot demonstrating that, in H5P, teachers can set different options and messages for every choice:

Figure 1: Screenshot showing part of the H5P tool for Multiple-Choice Questions
  • correct or incorrect
  • feedback if the answer was chosen
  • feedback if the answer was not chosen
  • help text for the option

But more important for my example: after answering, students get feedback in three different ways:

  • which options were chosen correctly
  • which options were chosen incorrectly 
  • how many choices would be correct

Teachers can optionally calculate points per correct/incorrect option, or award at most one point by requiring that all options be chosen correctly.
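
The single-point mode can be contrasted with the partial score above in a small sketch; `single_point_score()` is again a hypothetical helper, not actual H5P or `learnr` code:

```{r single-point-sketch}
# Hypothetical all-or-nothing score: one point only if exactly the correct
# options were chosen, zero otherwise (not actual H5P or learnr code).
single_point_score <- function(chosen, correct_options) {
  as.integer(setequal(chosen, correct_options))
}

correct <- c("tomato", "mozzarella", "basil", "extra virgin olive oil")
single_point_score(c("tomato", "basil", "pepperoni"), correct)  # 0
single_point_score(correct, correct)                            # 1
```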

Summary

So my two questions are: 

  1. Did I overlook something, so that my perception of the feedback problem in `learnr` is wrong, or could it be solved with the current design?
  2. If not: Wouldn't it be nice to change the somewhat restricted design of `learnr` MCs?

PS.: I'm using the GitHub version of `learnr`: 0.10.1.9009.