Below is the information for preprocessing, training, and testing the baseline methods from the paper.

Training each baseline requires one (and only one) of the two environments below:
- Environment `dmodel`, created following NVDiffRec:

  ```shell
  conda env create -f envs/dmodel.yml
  ```

- Environment `neuralpil`, created following Neural-PIL:

  ```shell
  conda env create -f envs/neuralpil.yml
  ```
Testing NVDiffRec and NVDiffRecMC requires the following environment:

- Environment `dmodel3`, created following NVDiffRec, with `pytorch3d` installed:

  ```shell
  conda env create -f envs/dmodel3.yml
  ```
In each environment, also install a utility package:

```shell
pip install git+https://github.com/zzyunzhi/tu2
```
| No. | Method | Format | Script |
|---|---|---|---|
| 1 | IDR | llff_format_LDR | Script |
| 2 | PhySG | llff_format_HDR | Script |
| 3 | InvRender | blender_format_HDR | Script |
| 4 | NeRD | blender_format_LDR | Script |
| 5 | Neural-PIL | blender_format_LDR | Script |
| 6 | NeRF | blender_format_LDR | Script |
| 7 | NeRFactor | blender_format_LDR | Script |
| 8 | NVDiffRec | blender_format_HDR | Script |
| 9 | NVDiffRecMC | blender_format_HDR | Script |
Adapt the scripts listed above. You need to modify `DATA_ROOT` to be your data path, `CODE_ROOT` to be the path to this codebase, `SCENE` to be the folder name of the scene you want to train, and `EXP_ID` to be a custom experiment ID that will be used to identify experiments during evaluation. The required data format is listed here.
Assume the data are stored in `my/data/path`; this is the same path as the `DATA_ROOT` used in the training scripts above.
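As a sketch, the variables to adapt at the top of a training script might look like the fragment below. All values are placeholders, and the exact variable layout in each script may differ:

```shell
# Hypothetical header of a training script; adjust to the actual script you use.
DATA_ROOT="my/data/path"          # your data path
CODE_ROOT="$HOME/code/this-repo"  # path to this codebase
SCENE="scene0001"                 # folder name of the scene to train on
EXP_ID="baseline_run_01"          # custom ID used to identify this experiment during evaluation
```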
Data paths are configured in `constant.py`. Within this file, do the following:

- Change `EXTENSION_SCENES` to the scenes you want to test.
- Change `DEFAULT_SCENE_DATA_DIR` under the if clause `if VERSION == "extension"` to `"my/data/path"`.
- Change `PROCESSED_SCENE_DATA_DIR` (the output path of preprocessing) to your desired path.
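A sketch of what the relevant portion of `constant.py` might look like after these edits. The exact layout of the real file may differ; only the variable names come from the list above, and all paths are placeholders:

```python
# Hypothetical excerpt of constant.py after editing; adjust to the real file.
VERSION = "extension"

# Scenes you want to test.
EXTENSION_SCENES = ["scene0001", "scene0002"]

if VERSION == "extension":
    # Same path as DATA_ROOT in the training scripts.
    DEFAULT_SCENE_DATA_DIR = "my/data/path"

# Output location of preprocessing.
PROCESSED_SCENE_DATA_DIR = "my/processed/data/path"
```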
Adapt the following script for preprocessing: `preprocess_slurm.sh`.

Adapt the following script for testing: `test.sh`. Results will be saved under `imageint/logs/leaderboard/baselines/`.
Each testing script used in `test.sh` corresponds to one baseline method. It invokes a pipeline file, e.g., `mymethod.py`. Running it does the following:

- Test view synthesis with `test_new_view`.
- Test relighting with `test_new_light`.
- Test depth and normal with `test_geometry`.
- Test material with `test_material`.
- Test mesh output with `test_shape`.
Each of these pipeline class methods 1) computes and saves the baseline outputs (e.g., predicted RGB images under test views) as local files, and 2) returns the local file paths. Step 1) is skipped when you run the test script again, unless you turn on the `OVERWRITE*` flags in this file.
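The contract above can be sketched roughly as follows. This is an illustrative assumption, not the repository's actual code: only the method name `test_new_view` and the compute-then-return-paths behavior come from the text, and the `OVERWRITE_NEW_VIEW` flag stands in for the `OVERWRITE*` flags mentioned above.

```python
import os

# Illustrative stand-in for one of the OVERWRITE* flags.
OVERWRITE_NEW_VIEW = False


class MyMethodPipeline:
    """Hypothetical sketch of a test-time pipeline class."""

    def __init__(self, out_dir):
        self.out_dir = out_dir

    def test_new_view(self, scene):
        # 1) Compute and save outputs, unless they already exist
        #    (re-runs skip this step when the OVERWRITE* flag is off).
        path = os.path.join(self.out_dir, scene, "new_view.npz")
        if OVERWRITE_NEW_VIEW or not os.path.exists(path):
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "wb") as f:
                f.write(b"")  # placeholder for predicted RGB images
        # 2) Return the local file paths.
        return [path]
```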
To evaluate your method, follow the steps below:

- Adapt this class, which handles the test-time pipeline, to your method. Example implementations can be found here.
- Run:

  ```shell
  python scripts/test/my_method.py
  ```

- Outputs will be saved to `imageint/logs/leaderboard/baselines/my_method.json`, where `score_stats` contains the mean and standard deviation for all metrics averaged across scenes. If you are running multiple methods, results for all methods will be aggregated to `imageint/logs/leaderboard/baselines/latest.json`.
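Assuming the output JSON holds a top-level `score_stats` entry with per-metric mean and standard deviation (the exact nesting in the real files may differ), the results could be inspected with a short script like this:

```python
import json


def summarize(path):
    """Load a leaderboard JSON and return 'metric: mean +/- std' lines from score_stats."""
    with open(path) as f:
        results = json.load(f)
    stats = results["score_stats"]
    return [f"{m}: {s['mean']:.3f} +/- {s['std']:.3f}" for m, s in stats.items()]


if __name__ == "__main__":
    # Synthetic file standing in for imageint/logs/leaderboard/baselines/my_method.json.
    demo = {"score_stats": {"psnr": {"mean": 30.1, "std": 1.2}}}
    with open("/tmp/demo_leaderboard.json", "w") as f:
        json.dump(demo, f)
    for line in summarize("/tmp/demo_leaderboard.json"):
        print(line)
```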