
datasets #74

Open

Cherryjingyao opened this issue Apr 7, 2024 · 1 comment

Comments

@Cherryjingyao

Why does this benchmark have both a vision dataset and a language dataset, and why does the vision dataset come without language annotations?
I am also curious which datasets the benchmark results are trained and tested on.

@lukashermann
Collaborator

The language dataset is the 1% of the vision dataset that has been labeled with language instructions. In Multi-Context Imitation Learning, the agent is trained with different goal modalities: either a goal image or a language instruction. Training is performed on both datasets (which is why we use the combined dataloader), and the evaluation uses exclusively language instructions (= language goals). It seems to me that you still haven't read the original papers that I linked in your original issue in the hulc repo.
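To illustrate the idea of the combined dataloader described above, here is a minimal sketch of mixing the two goal modalities in one batch. The episode dict structure, function names, and the 50/50 mixing ratio are all hypothetical assumptions for illustration; the actual CALVIN/HULC dataloader classes differ.

```python
import random

def make_goal(episode, use_language):
    """Build a goal for one episode: the language instruction if requested
    and available, otherwise the episode's final frame as a goal image.
    (Hypothetical episode structure: {"frames": [...], "lang": str | None}.)"""
    if use_language and episode.get("lang") is not None:
        return {"modality": "lang", "goal": episode["lang"]}
    return {"modality": "vis", "goal": episode["frames"][-1]}

def combined_batch(vis_episodes, lang_episodes, batch_size, lang_ratio=0.5, seed=0):
    """Sample one mixed batch: a lang_ratio fraction of episodes comes from
    the language-annotated subset (language goals), the rest from the full
    vision dataset (image goals)."""
    rng = random.Random(seed)
    n_lang = int(batch_size * lang_ratio)
    batch = []
    for _ in range(batch_size):
        if len(batch) < n_lang and lang_episodes:
            batch.append(make_goal(rng.choice(lang_episodes), use_language=True))
        else:
            batch.append(make_goal(rng.choice(vis_episodes), use_language=False))
    return batch
```

At evaluation time, only the language branch would be exercised, matching the benchmark's language-goal-only rollouts.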
