# Benchmarks

This page describes how to preprocess, train, and test the baseline methods from the paper.

## Installation

Training each baseline requires one (and only one) of the two environments:

Environment `dmodel`, created following NVDiffRec:

```
conda env create -f envs/dmodel.yml
```

Environment `neuralpil`, created following Neural-PIL:

```
conda env create -f envs/neuralpil.yml
```

Testing NVDiffRec and NVDiffRecMC requires the following environment:

Environment `dmodel3`, created following NVDiffRec, with pytorch3d installed:

```
conda env create -f envs/dmodel3.yml
```

In each environment, also install a utility package:

```
pip install git+https://github.com/zzyunzhi/tu2
```

## Training Scripts

| No. | Method | Format | Script |
|-----|--------|--------|--------|
| 1 | IDR | llff_format_LDR | Script |
| 2 | PhySG | llff_format_HDR | Script |
| 3 | InvRender | blender_format_HDR | Script |
| 4 | NeRD | blender_format_LDR | Script |
| 5 | Neural-PIL | blender_format_LDR | Script |
| 6 | NeRF | blender_format_LDR | Script |
| 7 | NeRFactor | blender_format_LDR | Script |
| 8 | NVDiffRec | blender_format_HDR | Script |
| 9 | NVDiffRecMC | blender_format_HDR | Script |

Adapt the scripts listed above: set `DATA_ROOT` to your data path, `CODE_ROOT` to the path to this codebase, `SCENE` to the folder name of the scene you want to train, and `EXP_ID` to a custom experiment ID that will be used to identify experiments during evaluation. The required data format is listed here.

## Testing Scripts

### Data Preparation

Assume data are stored in `"my/data/path"`. This is the same as `DATA_ROOT` used in the training scripts above. Data paths are configured in `constant.py`. Within this file, do the following:

  1. Change `EXTENSION_SCENES` to the scenes you want to test.
  2. Change `DEFAULT_SCENE_DATA_DIR` under the `if VERSION == "extension"` clause to `"my/data/path"`.
  3. Change `PROCESSED_SCENE_DATA_DIR` (the output directory of preprocessing) to your desired path.
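
As an illustration, the three edits above might look like this in `constant.py` (a hypothetical sketch: the surrounding code and the scene names are placeholders, only the variable names come from the steps above):

```python
VERSION = "extension"

# 1. Scenes to test (placeholder scene names).
EXTENSION_SCENES = ["scene_001", "scene_002"]

if VERSION == "extension":
    # 2. Where the raw data lives (same as DATA_ROOT in the training scripts).
    DEFAULT_SCENE_DATA_DIR = "my/data/path"

# 3. Where preprocessing outputs should be written.
PROCESSED_SCENE_DATA_DIR = "my/processed/data/path"
```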

Adapt the following script for preprocessing: `preprocess_slurm.sh`.

### Testing

Adapt the following script for testing: `test.sh`. Results will be saved under `imageint/logs/leaderboard/baselines/`.

Each testing script used in `test.sh` corresponds to a baseline method. It invokes a pipeline file, e.g., `mymethod.py`. Running it does the following:

  1. Test view synthesis with `test_new_view`.
  2. Test relighting with `test_new_light`.
  3. Test depth and normal with `test_geometry`.
  4. Test material with `test_material`.
  5. Test mesh output with `test_shape`.

Each of these pipeline class methods 1) computes and saves the baseline outputs (e.g., predicted RGB images under test views) as local files, and 2) returns the local file paths. Step 1) is skipped if you run the test script again, unless you turn on the `OVERWRITE*` flags from this file.

## Custom Method

To evaluate your method, follow the steps below:

  1. Adapt this class, which handles the test-time pipeline, to your method. Example implementations can be found here.
  2. Run `python scripts/test/my_method.py`.
  3. Outputs will be saved to `imageint/logs/leaderboard/baselines/my_method.json`, where `score_stats` contains the mean and standard deviation of all metrics averaged across scenes. If you run multiple methods, results for all methods are aggregated into `imageint/logs/leaderboard/baselines/latest.json`.
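
For example, the per-metric statistics could be read back with a small helper like the one below (the `score_stats` JSON layout shown is an assumption based on the description above, not a documented schema):

```python
import json


def summarize_score_stats(leaderboard: dict) -> dict:
    """Return {metric: (mean, std)} from a loaded leaderboard JSON dict.

    Assumes a hypothetical layout of the form
    {"score_stats": {"psnr": {"mean": ..., "std": ...}, ...}}.
    """
    return {
        metric: (stats["mean"], stats["std"])
        for metric, stats in leaderboard["score_stats"].items()
    }


# Usage with the output path from step 3 above:
# with open("imageint/logs/leaderboard/baselines/my_method.json") as f:
#     print(summarize_score_stats(json.load(f)))
```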