PyTorch implementation of Random Network Distillation
Curiosity-driven Exploration by Self-supervised Prediction for Street Fighter III Third Strike
Curiosity-driven Exploration by Self-supervised Prediction
🎮 [IJCAI'20][ICLR'19 Workshop] Flow-based Intrinsic Curiosity Module. Playing Super Mario with an RL agent and FICM!
DQN, DDDQN, A3C, PPO, and Curiosity applied to the game DOOM
Attention-based Curiosity-driven Exploration in Deep Reinforcement Learning
Implementation of our IJCAI 2022 paper "CCLF: A Contrastive-Curiosity-Driven Learning Framework for Sample-Efficient Reinforcement Learning".
Implementation of Curiosity-Driven Exploration with PyTorch
A curiosity-driven PPO + ICM reinforcement learning agent for autonomous maze exploration and victim rescue — built to evolve into a full SLAM-based search and rescue system.
Reinforcement learning to manage the logistics and resources of a warehouse.
Minimalist, flexible Python reinforcement learning framework. Out-of-the-box support for OpenAI Gym and TensorFlow.
This website is a private project for trying out different things. If you find any issues, please let me know.
Recommends one (or several) articles per day about one (or several) completely random subjects, in order to expand your mind.
TwinHead-RL is a research project on curiosity-driven deep RL. We use a multi-head agent that learns both policy and future state prediction, using prediction error as a dynamic intrinsic reward for efficient exploration and representation learning.
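The common mechanism behind several of the projects above (Random Network Distillation, ICM-style curiosity, TwinHead-RL) is using a prediction error as an intrinsic reward. Below is a minimal, hedged PyTorch sketch of that idea in the RND style; it is not taken from any of the listed repositories, and the class name `RNDCuriosity`, network sizes, and dimensions are illustrative assumptions only.

```python
# Minimal sketch: prediction error as intrinsic reward (RND-style).
# All names and dimensions here are hypothetical, for illustration only.
import torch
import torch.nn as nn


class RNDCuriosity(nn.Module):
    """Intrinsic reward = error of a trainable predictor against a frozen random target."""

    def __init__(self, obs_dim: int, feature_dim: int = 64):
        super().__init__()
        # Fixed, randomly initialized target network (never trained).
        self.target = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feature_dim)
        )
        for p in self.target.parameters():
            p.requires_grad_(False)
        # Predictor network trained to match the frozen target's output.
        self.predictor = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feature_dim)
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Per-observation prediction error, used as the intrinsic reward:
        # novel states are poorly predicted, hence more rewarding to visit.
        with torch.no_grad():
            target_feat = self.target(obs)
        pred_feat = self.predictor(obs)
        return (pred_feat - target_feat).pow(2).mean(dim=-1)


if __name__ == "__main__":
    curiosity = RNDCuriosity(obs_dim=8)
    obs = torch.randn(4, 8)            # a batch of 4 observations
    intrinsic_reward = curiosity(obs)  # shape: (4,)
    # The predictor is trained on the same error it reports as reward,
    # so the bonus decays for states the agent has already visited often.
    loss = intrinsic_reward.mean()
    loss.backward()
    print(intrinsic_reward.detach())
```

In practice this intrinsic reward would be added (usually with a scaling coefficient) to the environment's extrinsic reward before being passed to a policy-gradient or Q-learning update, as the PPO + ICM and RND projects above do.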