3260 papers • 126 benchmarks • 313 datasets
These leaderboards are used to track progress in skill-mastery-10
This paper quantitatively evaluates option tracing methods on two large-scale student response datasets, and qualitatively evaluates their ability to identify common student errors in the form of clusters of incorrect options, drawn from different questions, that correspond to the same underlying error.
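The qualitative evaluation described above groups incorrect options from different questions into clusters that reflect the same student error. A minimal sketch of that grouping step, assuming option embeddings have already been produced by some trained option tracing model (the embeddings, labels, and cluster count here are invented for illustration):

```python
# Hypothetical sketch: cluster incorrect answer options from several
# questions so that options in the same cluster are interpreted as
# reflecting the same common student error. The embeddings are random
# stand-ins for representations a trained option tracing model would give.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy embeddings: 4 questions x 3 incorrect options each, 8-dim vectors.
option_embeddings = rng.normal(size=(12, 8))
option_labels = [f"q{q}_opt{o}" for q in range(4) for o in range(3)]

# Partition the options; each cluster is a candidate "common error".
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
clusters = kmeans.fit_predict(option_embeddings)

for c in range(3):
    members = [lab for lab, k in zip(option_labels, clusters) if k == c]
    print(f"error cluster {c}: {members}")
```

In practice the interesting signal is when options from *different* questions land in the same cluster, since that suggests a misconception that spans items.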
This work investigates which choices matter for learning general vision-based agents in simulation and which affect optimal transfer to the real robot, then leverages data collected by such policies to improve upon them with offline RL.
Interpretable Knowledge Tracing is presented: a simple model that uses data mining techniques to derive three meaningful features, namely individual skill mastery, ability profile (learning transfer across skills), and problem difficulty. It predicts student performance better than deep-learning-based student models without requiring a huge number of parameters.
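The model above predicts response correctness from three interpretable features. A hedged sketch of that idea as a logistic predictor over skill mastery, ability, and difficulty; the weights, feature values, and model form are illustrative assumptions, not the paper's exact method:

```python
# Illustrative sketch (NOT the paper's exact model): predict the
# probability of a correct response from three interpretable features.
# Weights are hand-picked assumptions: mastery and ability help,
# difficulty hurts.
import math

def predict_correct(skill_mastery, ability, difficulty,
                    w=(2.0, 1.0, -1.5), bias=0.0):
    """Probability of a correct answer from three interpretable features."""
    z = bias + w[0] * skill_mastery + w[1] * ability + w[2] * difficulty
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# A strong student on a mastered, easy skill vs. a weak student on a
# hard problem: the first probability should be clearly higher.
p_high = predict_correct(skill_mastery=0.9, ability=0.8, difficulty=0.2)
p_low = predict_correct(skill_mastery=0.2, ability=0.1, difficulty=0.9)
print(round(p_high, 3), round(p_low, 3))
```

Because each weight attaches to a named feature, the prediction can be read off directly (e.g. how much difficulty lowered the probability), which is the interpretability advantage over opaque deep models.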