Intrinsic Light Field Decomposition and Disparity Estimation with Deep Encoder-Decoder Network

We present an encoder-decoder deep neural network that solves non-Lambertian intrinsic light field decomposition, recovering all three intrinsic components: albedo, shading, and specularity. We learn a sparse set of features from 3D epipolar volumes and use them in separate decoder pathways to reconstruct the intrinsic light fields. Although trained only on synthetic data generated with Blender, our model generalizes to real-world examples captured with a Lytro Illum plenoptic camera. The proposed method outperforms state-of-the-art approaches for single images and achieves competitive accuracy with recent modeling methods for light fields.

Project description

Our project consists of two steps:

  1. Divide the input light fields into 3D patches and create the network inputs with the DataRead_code project.
  2. Train and evaluate the network with the intrinsic_autoencoder_eusipco2018_code project.

Prerequisites

  1. Python 3.5
  2. TensorFlow with GPU support

1. Creating the data

Depending on the type of data, use the corresponding script to create the network inputs (an .hdf5 data container):

  • for synthetic data, use create_training_data_intrinsic.py
  • for real-world data, use create_training_data_lytro_intrinsic.py

Both scripts are configured through a few parameters at the top of the file, for example:
import h5py

px = 96      # patch width in pixels
py = 96      # patch height in pixels
nviews = 9   # number of views along each light field axis
sx = 32      # block step size in x (stride between patches)
sy = 32      # block step size in y

# output container for the network inputs
training_data_dir = "./trainData/"
training_data_filename = 'lf_patch_autoencoder1.hdf5'
file = h5py.File(training_data_dir + training_data_filename, 'w')

# folder with the source light fields
data_source = "./CNN_data/1"

We provide synthetic data that can be used for training; for more training data, please contact our research group. If a dataset contains the whole light field, read_lightfield_intrinsic.py should be used to read the intrinsic components. If it contains only a crosshair-shaped subset of 7 views, use read_lightfield_intrinsic_crosshair.py instead.
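To sanity-check a generated container, you can list its contents with h5py; the exact dataset keys depend on the creation script, so treat the snippet below as a generic inspection sketch:

import h5py

with h5py.File('./trainData/lf_patch_autoencoder1.hdf5', 'r') as f:
    # print every dataset name and its shape stored in the container
    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))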

2. Run the network

To train the network, you need to specify all training options in config_autoencoder_v9_final.py. You also need to specify the patch size and the minimum and maximum disparity values in config_data_format.py. In cnn_autoencoder.py, you specify the coordinates that are taken into account when the loss is computed. For example, if the input patch size is 96x96 and we select loss_min_coord_3D = 0 and loss_max_coord_3D = 40, then the last 8 pixels will be omitted when computing the loss.
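To illustrate the cropping, the following sketch shows how a loss restricted to the window [loss_min_coord_3D, loss_max_coord_3D) could be computed; the tensor layout and function name are assumptions, not the repository's actual code:

import tensorflow as tf

loss_min_coord_3D = 0
loss_max_coord_3D = 40

def cropped_l2_loss(pred, target):
    # pred, target: assumed shape (batch, views, height, width, channels)
    # evaluate the loss only inside the selected spatial window
    p = pred[:, :, loss_min_coord_3D:loss_max_coord_3D,
             loss_min_coord_3D:loss_max_coord_3D, :]
    t = target[:, :, loss_min_coord_3D:loss_max_coord_3D,
               loss_min_coord_3D:loss_max_coord_3D, :]
    return tf.reduce_mean(tf.square(p - t))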

To use the trained model, please download the model archive current_full.zip and extract it to the ./networks/ folder. We provide some test examples in intrinsic_test.zip, which should be extracted to the ./test_data/ folder.
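Assuming the archive contains a standard TensorFlow checkpoint (the file names below are hypothetical), restoring it would look roughly like this:

import tensorflow as tf

with tf.Session() as sess:
    # rebuild the graph from the exported meta file, then load the weights
    saver = tf.train.import_meta_graph('./networks/model.meta')  # hypothetical file name
    saver.restore(sess, tf.train.latest_checkpoint('./networks/'))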

To download our results on the test data in HDF5 format, use this link: intrinsic_results.zip

To create your own test examples, please use the DataRead_code project.

References

Intrinsic Light Field Decomposition and Disparity Estimation with Deep Encoder-Decoder Network, EUSIPCO 2018.
