Document | DatasetInfo | PaperCollection | Experiment | RankingList
NOTE

The experiments in the Experiment section are based on this code repository. Currently, I am refactoring the repository to eliminate redundant code and redesign the dataset division, in order to create a more unified framework for Knowledge Tracing (KT), Cognitive Diagnosis (CD), and Exercise Recommendation (ER) tasks. As a result, the results in the RankingList may differ from those in the Experiment section. For the KT models, I have retained the hyperparameters used in the original experiments.

The new code repository is still under active development. Once it is complete, I will release all divided datasets, model training parameters, and model weights. Below is a flowchart illustrating the experimental design of the new repository.

If you are interested in accessing the new code repository, please feel free to email me, and I will grant you access.
A library of algorithms for reproducing knowledge tracing, cognitive diagnosis, and exercise recommendation models.
- Initialize the project
  - Create the file `settings.json` in the root directory.
  - Edit the environment configuration file `settings.json` (`LIB_PATH` is the project root path; `FILE_MANAGER_ROOT` is any directory used to store data and models — note that JSON does not allow comments):

    ```json
    {
      "LIB_PATH": ".../dlkt-main",
      "FILE_MANAGER_ROOT": "any_dir"
    }
    ```

  - Run `set_up.py`:

    ```
    python set_up.py
    ```
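Before running `set_up.py`, it can be useful to confirm that `settings.json` parses and that its paths exist. The helper below is a hypothetical sanity check, not part of the repository:

```python
import json
from pathlib import Path

def check_settings(path="settings.json"):
    """Load settings.json and return a list of obvious problems (empty if OK)."""
    cfg = json.loads(Path(path).read_text())
    problems = []
    for key in ("LIB_PATH", "FILE_MANAGER_ROOT"):
        if key not in cfg:
            problems.append(f"missing key: {key}")
        elif not Path(cfg[key]).is_dir():
            problems.append(f"{key} is not an existing directory: {cfg[key]}")
    return problems
```

An empty result means both keys are present and point to existing directories.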
- Place the original files of the dataset in the corresponding directory (please refer to Document, Section 1.3, for details).
- Data preprocessing: run `example/preprocess.py`, for example: `python preprocess.py --dataset_name assist2009`
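Conceptually, preprocessing turns raw interaction logs into per-student, time-ordered answer sequences. The sketch below illustrates that idea only; the field names and output format are illustrative, not the repository's actual schema:

```python
from collections import defaultdict

def build_sequences(rows):
    """Group raw records (dicts with user_id, timestamp, question_id, correct)
    into per-user sequences ordered by timestamp."""
    by_user = defaultdict(list)
    for r in rows:
        by_user[r["user_id"]].append(r)
    seqs = {}
    for uid, items in by_user.items():
        items.sort(key=lambda r: r["timestamp"])  # chronological order per student
        seqs[uid] = [(r["question_id"], r["correct"]) for r in items]
    return seqs
```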
- Divide the dataset according to the specified experimental setting: run `example4knowledge_tracing/prepare_dataset/akt_setting.py`, for example: `python akt_setting.py`
  - For details on dataset partitioning, please refer to Document (Section 1.6).
- Train a model: run a file under `example/train`; for example, to train a DKT model: `python dkt.py`
  - For the meaning of the parameters, please refer to Document (Section 2).
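As background on what a DKT model consumes, the standard DKT input encoding (from the original DKT paper; the repository's exact encoding may differ) maps each (question, correctness) pair to one index in a 2·Q one-hot vector:

```python
def encode_interaction(question_id, correct, num_questions):
    """DKT-style index: q for a wrong answer, q + num_questions for a correct one."""
    assert 0 <= question_id < num_questions and correct in (0, 1)
    return question_id + correct * num_questions

def one_hot(index, size):
    """One-hot vector of the given size with a single 1.0 at `index`."""
    v = [0.0] * size
    v[index] = 1.0
    return v
```

With `num_questions = Q`, each timestep becomes a length-2Q one-hot vector fed to the recurrent network.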
- Divide the dataset according to the specified experimental setting: run `example4cognitive_diagnosis/prepare_dataset/ncd_setting.py`, for example: `python ncd_setting.py`
- Train a model: run a file under `example4cognitive_diagnosis/train`; for example, to train an NCD model: `python ncd.py`
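For orientation, NCD's first interaction layer combines student proficiency, question difficulty, and discrimination under the Q-matrix mask; the full model then stacks positively-weighted layers on top. The sketch below shows only that first layer, greatly simplified:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def ncd_input_layer(proficiency, difficulty, discrimination, q_mask):
    """NCD-style first layer: x_k = q_k * disc * (prof_k - diff_k) per concept k.
    proficiency/difficulty are per-concept values in (0, 1); q_mask is 0/1."""
    return [q * discrimination * (p - d)
            for p, d, q in zip(proficiency, difficulty, q_mask)]
```

Concepts not required by the question (`q_mask` = 0) contribute nothing, so the diagnosis stays interpretable per concept.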
- Divide the dataset according to the specified experimental setting: run `example4exercise_recommendation/prepare_dataset/kg4ex_setting.py`, for example: `python kg4ex_setting.py`
- Train or evaluate the different models and methods:
- KG4EX
  - Step 1: train a `DKT` model to get mlkc
  - Step 2: train a `DKT_KG4EX` model to get pkc
  - Step 3: run `example4exercise_recommendation/kg4ex/get_mlkc_pkc.py`
  - Step 4: run `example4exercise_recommendation/kg4ex/get_efr.py`
  - Step 5: run `example4exercise_recommendation/kg4ex/get_triples.py`
  - Step 6: run `example4exercise_recommendation/train/kg4ex.py`
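Steps 1–5 produce per-concept scores (mlkc, pkc, efr) that step 5 serializes into knowledge-graph triples for the embedding model trained in step 6. The sketch below only illustrates the general idea of turning a score dictionary into (head, relation, tail) triples; the actual head/relation/tail layout used by KG4EX may differ:

```python
def scores_to_triples(student_id, scores, relation):
    """Serialize a {concept_id: score} dict into illustrative
    (student, relation-with-score, concept) triples, sorted by concept id."""
    return [(f"stu{student_id}", f"{relation}{score:.2f}", f"kc{cid}")
            for cid, score in sorted(scores.items())]
```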
- EB-CF (exercise-based collaborative filtering)
  - Step 1: modify `example4exercise_recommendation/eb_cf/load_data` to get the users' history data
  - Step 2: run `example4exercise_recommendation/eb_cf/get_que_sim_mat.py` to get the question similarity matrix
  - Step 3: run `example4exercise_recommendation/eb_cf/evaluate.py`
- SB-CF (student-based collaborative filtering)
  - Similar to EB-CF: run the code in `example4exercise_recommendation/sb_cf`
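The core of both CF variants is a similarity matrix: over question response vectors for EB-CF, over student response vectors for SB-CF. One common choice is cosine similarity, sketched below; the repository's `get_que_sim_mat.py` may use a different measure:

```python
import math

def cosine(a, b):
    """Cosine similarity of two equal-length vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def similarity_matrix(vectors):
    """Pairwise cosine similarities; for EB-CF, vectors[q] holds question q's
    per-student outcomes, and the result is the question similarity matrix."""
    n = len(vectors)
    return [[cosine(vectors[i], vectors[j]) for j in range(n)] for i in range(n)]
```

Recommendation then scores unseen questions by their similarity to questions the student answered incorrectly (EB-CF) or borrows histories from similar students (SB-CF).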
Please let us know if you encounter a bug or have any suggestions by filing an issue.

We welcome all contributions, from bug fixes to new features and extensions. We expect all contributions to be discussed in the issue tracker first and to go through PRs.