RL researcher
- ZJU
- Hangzhou, Zhejiang, China (UTC +08:00)
- jtd.acad@gmail.com
Pinned
- PKU-Alignment/safe-rlhf: Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
- PKU-Alignment/omnisafe: JMLR: OmniSafe is an infrastructural framework for accelerating SafeRL research.
- PKU-Alignment/beavertails: BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).
- PKU-Alignment/safe-sora: SafeSora is a human preference dataset designed to support safety alignment research in the text-to-video generation field, aiming to enhance the helpfulness and harmlessness of Large Vision Models…