This is the official implementation for the paper: BRUCE - Bundle Recommendation Using Contextualized item Embeddings
The BRUCE architecture is modular and combines several independent components, which can be configured to best match the data and task.
Here are the best configurations for running BRUCE on each dataset:
Steam:
- Train BPR model:
MFbaseline/trainMF.py --train_only --size=12 --dataset_string=Steam --avg_items
- Train the model using pretrained BPR embeddings:
Main.py --dataset_string=Steam --description=bestConfig --op_after_transformer=avg --num_epochs=10000 --num_transformer_layers=1 --start_val_from=8000 --pretrained_bpr_path=<bprModelPath>.pkl --use_pretrained --dont_multi_task
Youshu:
- Train BPR model:
MFbaseline/trainMF.py --train_only --size=24 --dataset_string=Youshu --avg_items
- Train the model using pretrained BPR embeddings:
Main.py --dataset_string=Youshu --description=BestConfig --embed_shape=24 --weight_decay=0.0075 --useUserBertV2 --num_epochs=7000 --start_val_from=4000 --pretrained_bpr_path=<bprModelPath>.pkl --use_pretrained
NetEase:
- Train BPR model:
MFbaseline/trainMF.py --train_only --size=24 --dataset_string=NetEase --avg_items
- Train the model using pretrained BPR embeddings:
Main.py --dataset_string=NetEase --description=BestConfig --seed=111 --embed_shape=24 --weight_decay=0.0075 --useUserBert --num_epochs=7000 --batch_size=2048 --start_val_from=4000 --evaluate_every=500 --use_pretrained --pretrained_bpr_path=<bprModelPath>.pkl
The BRUCE code is modular and can be adapted to fit your needs and task.
The default configuration is to randomly initialize item embeddings.
To use pretrained embeddings, follow these steps:
a. Train a BPR model:
MFbaseline/trainMF.py --train_only --size=<12-48> --dataset_string=<Youshu/NetEase/Steam> --avg_items
The saved BPR model path should look like TrainedModels/bpr_user_avg_items_.pkl
b. Train with pretrained BPR embeddings by adding the parameters --use_pretrained --pretrained_bpr_path=<modelPath>.pkl (a minimal loading sketch follows).
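For orientation, below is a minimal PyTorch sketch of what seeding item embeddings from a pretrained BPR checkpoint can look like. This is not the repo's actual loader: the pickle layout (a dict holding an "item_embeddings" matrix) and the function name are assumptions made for illustration only.

import pickle
import torch
import torch.nn as nn

def build_item_embeddings(num_items, embed_dim, pretrained_bpr_path=None):
    # Default behavior: randomly initialized item embeddings.
    emb = nn.Embedding(num_items, embed_dim)
    if pretrained_bpr_path is not None:
        # Hypothetical pickle layout; the real .pkl written by trainMF.py may differ.
        with open(pretrained_bpr_path, "rb") as f:
            state = pickle.load(f)
        weights = torch.as_tensor(state["item_embeddings"], dtype=torch.float32)
        assert weights.shape == (num_items, embed_dim)
        emb.weight.data.copy_(weights)
    return emb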
The following user integration techniques are supported (a toy sketch of options a and b follows this list):
a. Concatenation of the user to each item (the default option). You also need to pass --op_after_transformer, elaborated in the next section.
The models' code is under the PreUL dir.
b. User first, by passing --useUserBert or --useUserBertV2 (the former shares the Transformer layer with the auxiliary item recommendation task, while the latter does not).
The models' code is under the UserBert dir.
c. Post-Transformer integration, by passing --usePostUL. You also need to pass --op_after_transformer, elaborated in the next section.
The models' code is under the PostUL dir.
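The toy PyTorch sketch below contrasts techniques (a) and (b); tensor names and shapes are illustrative assumptions, and the code is not taken from the PreUL or UserBert dirs.

import torch

def concat_user_to_items(user_emb, item_embs):
    # (a) Concatenate the user embedding to every item embedding,
    # doubling the feature dimension of each Transformer input token.
    # user_emb: (batch, d), item_embs: (batch, n_items, d) -> (batch, n_items, 2 * d)
    user_tokens = user_emb.unsqueeze(1).expand(-1, item_embs.size(1), -1)
    return torch.cat([item_embs, user_tokens], dim=-1)

def user_first(user_emb, item_embs):
    # (b) "User first": prepend the user as an extra token, BERT-style.
    # user_emb: (batch, d), item_embs: (batch, n_items, d) -> (batch, n_items + 1, d)
    return torch.cat([user_emb.unsqueeze(1), item_embs], dim=1)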
The following aggregation methods, performed after the Transformer layer, are supported (an illustrative sketch follows this list):
a. Concatenation: --op_after_transformer=concat
b. Summation: --op_after_transformer=sum
c. Averaging: --op_after_transformer=avg
d. First item (BERT-like) aggregation: --op_after_transformer=bert
e. Bundle embedding BERT-like aggregation: --bundleEmbeddings --op_after_transformer=bert
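The sketch below shows one way these four operations over Transformer outputs could be implemented; the flag-to-operation mapping follows the list above, but the code itself is illustrative rather than taken from the repo.

import torch

def aggregate(hidden, op):
    # hidden: Transformer output of shape (batch, seq_len, d)
    if op == "concat":
        return hidden.reshape(hidden.size(0), -1)  # flatten positions into one vector
    if op == "sum":
        return hidden.sum(dim=1)                   # element-wise sum over positions
    if op == "avg":
        return hidden.mean(dim=1)                  # element-wise mean over positions
    if op == "bert":
        return hidden[:, 0, :]                     # BERT-like: first position only
    raise ValueError(f"unknown op_after_transformer: {op}")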
By default, BRUCE learns to predict user bundle preferences together with user item preferences (multi-task learning). You can disable the multi-task learning process with the --dont_multi_task flag.
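As a rough illustration, a jointly trained objective could look like the sketch below; the BPR-style loss form and the equal weighting of the two tasks are assumptions, not the repo's implementation.

import torch.nn.functional as F

def multi_task_loss(pos_bundle, neg_bundle, pos_item, neg_item, dont_multi_task=False):
    # Main task: rank a positive bundle above a sampled negative bundle.
    bundle_loss = -F.logsigmoid(pos_bundle - neg_bundle).mean()
    if dont_multi_task:
        return bundle_loss  # --dont_multi_task: train on the bundle task only
    # Auxiliary task: rank a positive item above a sampled negative item.
    item_loss = -F.logsigmoid(pos_item - neg_item).mean()
    return bundle_loss + item_loss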
If you use this code, please cite our paper. Thanks!
@inproceedings{avnybrosh2022bruce,
  author    = {Tzoof Avny Brosh and
               Amit Livne and
               Oren Sar Shalom and
               Bracha Shapira and
               Mark Last},
  title     = {BRUCE - Bundle Recommendation Using Contextualized item Embeddings},
  year      = {2022}
}
Portions of this code are based on the BGCN paper's code and the DAM paper's code.