BrightVQ is a large-scale subjective video quality dataset dedicated to HDR User-Generated Content (UGC) video quality assessment. It consists of 300 HDR source videos and 2,100 transcoded versions, with 73,794 subjective quality ratings collected through a crowd-sourced subjective study. BrightVQ serves as a benchmark for No-Reference (NR) HDR UGC VQA models.
Based on BrightVQ, we introduce BrightRate, a novel No-Reference (NR) Video Quality Assessment (VQA) model designed to capture UGC-specific distortions and HDR-specific artifacts.
BrightRate integrates UGC-specific features from a pretrained CONTRIQUE model, semantic cues from a CLIP-based encoder, HDR-specific features extracted via a piecewise non-linear luminance transform, and temporal-difference features, all of which are regressed to a Mean Opinion Score (MOS).
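The exact form of the piecewise non-linear luminance transform is not specified in this README; the sketch below is only an illustrative placeholder (the breakpoints `low`/`high` and exponent `gamma` are our assumptions, not BrightRate's actual parameters). It shows the general idea of expanding the dark and bright luminance ranges, where HDR-specific artifacts tend to concentrate, while passing mid-tones through unchanged:

```python
import numpy as np

def piecewise_luminance(y, low=0.25, high=0.75, gamma=0.5):
    """Hypothetical piecewise non-linear luminance transform (illustrative
    only, not BrightRate's actual transform): expand shadow and highlight
    ranges, pass mid-tones through. `y` is luma normalized to [0, 1]."""
    out = np.asarray(y, dtype=np.float64).copy()
    dark = out < low
    bright = out > high
    # Concave power curve stretches the dark region toward `low`.
    out[dark] = low * (out[dark] / low) ** gamma
    # Mirror-image curve stretches the bright region toward `high`.
    out[bright] = 1.0 - (1.0 - high) * ((1.0 - out[bright]) / (1.0 - high)) ** gamma
    return out
```

The transform is continuous at the breakpoints (`y = low` maps to `low`, `y = high` to `high`), so no artificial banding is introduced at the segment boundaries.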
Dataset | CONTRIQUE | DOVER | FastVQA | HIDROVQA | BrightRate |
---|---|---|---|---|---|
BrightVQ | 0.7081 | 0.7745 | 0.8094 | 0.8526 | 0.8887 |
LIVE-HDR | 0.8170 | 0.6303 | 0.5182 | 0.8793 | 0.8907 |
SFV+HDR | 0.5901 | 0.6001 | 0.7130 | 0.7003 | 0.7328 |
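The table does not name its metric; assuming it reports Spearman rank-order correlation (SROCC), the most common VQA benchmark metric, a new model's score against ground-truth MOS can be computed as below (a minimal tie-free implementation for illustration; production code should use `scipy.stats.spearmanr`, which handles ties):

```python
import numpy as np

def srocc(pred, mos):
    """Spearman rank-order correlation between predicted scores and MOS.
    Simple ranking without tie handling, for illustration only."""
    def ranks(x):
        order = np.argsort(x)
        r = np.empty(len(x), dtype=np.float64)
        r[order] = np.arange(len(x))
        return r
    rp, rm = ranks(np.asarray(pred)), ranks(np.asarray(mos))
    rp = (rp - rp.mean()) / rp.std()
    rm = (rm - rm.mean()) / rm.std()
    return float(np.mean(rp * rm))
```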
To run inference, please follow Demo_Inference.
Direct download link for dataset: COMING SOON
Each video is hosted on AWS S3 and can be accessed using:
wget https://ugchdrmturk.s3.us-east-2.amazonaws.com/videos/VIDEO.mp4
Replace VIDEO with a hashed video ID from BrightVQ.csv or BrightVQ.txt.
Example:
Police: https://ugchdrmturk.s3.us-east-2.amazonaws.com/videos/ad8affdd94b3c44ae83169fb668ea5c6.mp4
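The same URL pattern can also be built programmatically; a minimal Python helper (the function name is ours):

```python
# Base URL of the public S3 bucket hosting BrightVQ videos.
BASE = "https://ugchdrmturk.s3.us-east-2.amazonaws.com/videos"

def video_url(video_id: str) -> str:
    """Return the download URL for a hashed BrightVQ video ID."""
    return f"{BASE}/{video_id}.mp4"

# Example: the "Police" video from above.
print(video_url("ad8affdd94b3c44ae83169fb668ea5c6"))
```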
To download all videos:
while read -r video; do
  aws s3 cp "s3://ugchdrmturk/videos/${video}.mp4" ./BrightVQ/
done < BrightVQ.txt
To download a single video:
aws s3 cp s3://ugchdrmturk/videos/VIDEO.mp4 ./BrightVQ/
To download selected videos, create a text file listing the video IDs (one per line), e.g. sample-video.txt:
while read -r video; do
  aws s3 cp "s3://ugchdrmturk/videos/${video}.mp4" ./BrightVQ/
done < sample-video.txt
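If the AWS CLI is not available, the public bucket can also be fetched over plain HTTPS; a hedged Python sketch (function and argument names are ours) that downloads every listed ID and skips files already present locally:

```python
import os
import urllib.request

# Base URL of the public S3 bucket hosting BrightVQ videos.
BASE = "https://ugchdrmturk.s3.us-east-2.amazonaws.com/videos"

def download_videos(id_file, out_dir="BrightVQ"):
    """Download each hashed video ID listed (one per line) in `id_file`
    into `out_dir`, skipping videos that were already downloaded."""
    os.makedirs(out_dir, exist_ok=True)
    with open(id_file) as f:
        for line in f:
            vid = line.strip()
            if not vid:
                continue  # ignore blank lines
            dest = os.path.join(out_dir, f"{vid}.mp4")
            if os.path.exists(dest):
                continue  # already downloaded
            urllib.request.urlretrieve(f"{BASE}/{vid}.mp4", dest)
```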
Sample HDR videos from our dataset are listed in the table below:
Category | Video ID | MOS Score | Resolution | Link |
---|---|---|---|---|
Screen Recording | 5a4685e693378cbcc94c4533d95a96aa | 27.55 | 360p | ▶ Watch Video |
Train | 1fb6cd6866e7bfd289b65ed66d5e4397 | 36.54 | 720p | ▶ Watch Video |
Dog | f95c073a8958c61dcc365c88fbfb7e25 | 41.41 | 1080p | ▶ Watch Video |
Nature | 92c0376638d57e94f0f98376991caa96 | 65.43 | 1080p | ▶ Watch Video |
Bridge | 51f5e007f3636c77a4e3c91379745f1e | 57.13 | 1080p | ▶ Watch Video |
Game | 26d76f2fcaaffd1a2cf5340fd77ead4d | 67.42 | 1080p | ▶ Watch Video |
Please check out the full dataset.
- BrightVQ provides the first large-scale dataset for HDR UGC quality evaluation, enabling researchers to develop and compare No-Reference (NR) VQA models for HDR content.
- Streaming platforms (e.g., YouTube, Netflix, Prime Video) can optimize their encoding pipelines by using BrightRate and BrightVQ to assess perceptual quality at different bitrates and HDR processing techniques.
- BrightVQ can contribute to the development of new HDR quality standards, potentially influencing ITU-T, MPEG, and industry-led VQA benchmarks.
Please cite us if this work is helpful to you.
COMING SOON
BrightVQ is released under a Creative Commons Attribution-NonCommercial (CC BY-NC 4.0) License.
For questions, please reach out: 📧 [Redacted for Blind Review]