Hossein Shakibania*, Sina Raoufi*, and Hassan Khotanlou
* Equal contribution
Abstract: Low-light images, characterized by inadequate illumination, suffer from diminished clarity, muted colors, and reduced detail. Low-light image enhancement, an essential task in computer vision, aims to rectify these issues by improving brightness, contrast, and overall perceptual quality, thereby facilitating accurate analysis and interpretation. This paper introduces the Convolutional Dense Attention-guided Network (CDAN), a novel solution for enhancing low-light images. CDAN integrates an autoencoder-based architecture with convolutional and dense blocks, complemented by an attention mechanism and skip connections, ensuring efficient information propagation and feature learning. A dedicated post-processing phase further refines color balance and contrast. Our approach shows notable progress over state-of-the-art methods in low-light image enhancement and remains robust across a wide range of challenging scenarios. The model performs remarkably well on benchmark datasets, effectively mitigating under-exposure and restoring textures and colors in diverse low-light conditions. These results underscore CDAN's potential for downstream computer vision tasks, notably robust object detection and recognition in challenging low-light conditions.
Figure 1: The overall structure of the proposed model.
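As a rough illustration of the building blocks described in the abstract, the following PyTorch sketch shows one attention-guided convolutional block. It is illustrative only: the layer sizes and the squeeze-and-excitation-style channel attention are assumptions, not the paper's exact modules.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative, not the paper's exact module)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each feature channel by a learned importance score.
        return x * self.fc(self.pool(x))

class AttentionConvBlock(nn.Module):
    """Conv block followed by channel attention, as might appear in an attention-guided encoder."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.attn = ChannelAttention(out_ch)

    def forward(self, x):
        return self.attn(self.conv(x))

# Example: a 256x256 RGB low-light image passing through one block.
x = torch.randn(1, 3, 256, 256)
y = AttentionConvBlock(3, 64)(x)  # -> torch.Size([1, 64, 256, 256])
```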
In this section, we present the experimental results obtained by training our CDAN model using the LOw-Light (LOL) dataset and evaluating its performance on multiple benchmark datasets. The purpose of this evaluation is to assess the robustness of our model across a spectrum of challenging lighting conditions.
| Dataset | No. of Images | Paired | Characteristics |
|---|---|---|---|
| LOL | 500 | ✅ | Indoor |
| ExDark | 7363 | ❌ | Extremely dark, indoor, outdoor |
| DICM | 69 | ❌ | Indoor, outdoor |
| VV | 24 | ❌ | Severely under/overexposed areas |
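A paired dataset such as LOL provides a low-light and a normal-light version of each scene, while the unpaired sets above are used for qualitative evaluation. Below is a minimal, hypothetical PyTorch `Dataset` for paired low/normal folders; the directory layout (`low/`, `high/`) and transforms are assumptions, not the repository's actual loader.

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class PairedLowLightDataset(Dataset):
    """Loads (low-light, normal-light) image pairs from two folders with matching file names."""
    def __init__(self, root: str, image_size: int = 256):
        self.low_dir = os.path.join(root, "low")    # assumed layout
        self.high_dir = os.path.join(root, "high")  # assumed layout
        self.names = sorted(os.listdir(self.low_dir))
        self.tf = transforms.Compose([
            transforms.Resize((image_size, image_size)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        low = self.tf(Image.open(os.path.join(self.low_dir, name)).convert("RGB"))
        high = self.tf(Image.open(os.path.join(self.high_dir, name)).convert("RGB"))
        return low, high
```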
| Learning method | Method | Avg. PSNR ↑ | Avg. SSIM ↑ | Avg. LPIPS ↓ |
|---|---|---|---|---|
| Supervised | LLNET | 17.959 | 0.713 | 0.360 |
| Supervised | LightenNet | 10.301 | 0.402 | 0.394 |
| Supervised | MBLLEN | 17.902 | 0.715 | 0.247 |
| Supervised | Retinex-Net | 16.774 | 0.462 | 0.474 |
| Supervised | KinD | 17.648 | 0.779 | 0.175 |
| Supervised | KinD++ | 17.752 | 0.760 | 0.198 |
| Supervised | TBEFN | 17.351 | 0.786 | 0.210 |
| Supervised | DSLR | 15.050 | 0.597 | 0.337 |
| Supervised | LAU-Net | 21.513 | 0.805 | 0.273 |
| Semi-supervised | DRBN | 15.125 | 0.472 | 0.316 |
| Unsupervised | EnlightenGAN | 17.483 | 0.677 | 0.322 |
| Zero-shot | ExCNet | 15.783 | 0.515 | 0.373 |
| Zero-shot | Zero-DCE | 14.861 | 0.589 | 0.335 |
| Zero-shot | RRDNet | 11.392 | 0.468 | 0.361 |
| Supervised | Proposed (CDAN) | 20.102 | 0.816 | 0.167 |
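The scores above are full-reference metrics computed between each enhanced image and its ground-truth counterpart. Below is a minimal sketch of how PSNR, SSIM, and LPIPS can be computed with common libraries (`scikit-image` and the `lpips` package); the exact evaluation code in the repository may differ.

```python
import numpy as np
import torch
import lpips  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # perceptual distance, lower is better

def evaluate_pair(enhanced: np.ndarray, reference: np.ndarray) -> dict:
    """enhanced/reference: HxWx3 float arrays in [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=1.0)
    ssim = structural_similarity(reference, enhanced, channel_axis=2, data_range=1.0)
    # LPIPS expects NCHW tensors scaled to [-1, 1].
    to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1).unsqueeze(0).float() * 2 - 1
    with torch.no_grad():
        lp = lpips_fn(to_tensor(enhanced), to_tensor(reference)).item()
    return {"PSNR": psnr, "SSIM": ssim, "LPIPS": lp}
```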
Figure 2: Visual comparison of state-of-the-art models on ExDark dataset.
Figure 3: Visual comparison of state-of-the-art models on DICM dataset.
To get started with the CDAN project, follow these steps:
You can clone the repository using Git. Open your terminal and run the following command:
```bash
git clone git@github.com:SinaRaoufi/CDAN.git
```
After cloning, navigate to the project directory and locate the `config/default.json` file. This file contains all the configuration settings for the CDAN model, including model architecture, training parameters, and dataset paths. You can customize these settings according to your requirements.
Key configuration settings:
- Model Settings: Define the model architecture and its parameters
- Training Settings:
  - `device`: Training device (cuda/mps/cpu)
  - `n_epoch`: Number of training epochs
  - `lr`: Learning rate
  - Dataset paths and dataloader configurations
- Testing Settings:
  - Dataset paths and configurations
  - Post-processing options
  - Output paths for generated images
Modify these settings according to your setup, particularly:
- Update the dataset paths to point to your data
- Adjust the training parameters if needed
- Configure the output paths for saved models and results
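As a quick sanity check, you can load and inspect the configuration before training. The snippet below only assumes the file is standard JSON; the exact key names follow the listing above and may differ in the repository.

```python
import json

# Load the default configuration and print it for inspection.
with open("config/default.json") as f:
    cfg = json.load(f)

print(json.dumps(cfg, indent=2))      # full configuration
train_cfg = cfg.get("train", {})      # "train" key name is an assumption
print(train_cfg.get("device"), train_cfg.get("n_epoch"), train_cfg.get("lr"))
```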
You can install project dependencies using pip:
```bash
pip install -r requirements.txt
```
You are now ready to run the CDAN project. To start the training, use the following command:
```bash
python run.py -p train -c config/default.json
```
To test the trained model, run:
```bash
python run.py -p test -c config/default.json
```
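The `-p` flag selects the phase (train or test) and `-c` points to the configuration file. Below is a hypothetical sketch of how such an entry point is typically wired up with `argparse`; the actual `run.py` in the repository may be organized differently.

```python
import argparse
import json

def main():
    parser = argparse.ArgumentParser(description="CDAN entry point (illustrative sketch)")
    parser.add_argument("-p", "--phase", choices=["train", "test"], required=True)
    parser.add_argument("-c", "--config", default="config/default.json")
    args = parser.parse_args()

    with open(args.config) as f:
        cfg = json.load(f)

    if args.phase == "train":
        # train(cfg)  # hypothetical training routine
        print("training with", args.config)
    else:
        # test(cfg)   # hypothetical evaluation routine
        print("testing with", args.config)

if __name__ == "__main__":
    main()
```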
The following hardware and software were used for training the model:
- GPU: NVIDIA GeForce RTX 3090
- RAM: 24 GB
- Operating System: Ubuntu 22.04.2 LTS
- Python version: 3.9.15
- PyTorch version: 2.0.1
- PyTorch CUDA version: 11.7
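To confirm that your environment roughly matches the setup above, you can query PyTorch directly; these calls are part of the standard PyTorch API.

```python
import platform
import torch

print("Python:", platform.python_version())
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA build version:", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```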
```bibtex
@article{SHAKIBANIA2025104802,
  title = {CDAN: Convolutional dense attention-guided network for low-light image enhancement},
  journal = {Digital Signal Processing},
  volume = {156},
  pages = {104802},
  year = {2025},
  issn = {1051-2004},
  doi = {10.1016/j.dsp.2024.104802},
  url = {https://www.sciencedirect.com/science/article/pii/S1051200424004275},
  author = {Hossein Shakibania and Sina Raoufi and Hassan Khotanlou},
}
```