# Configuration files
Jonas Schult edited this page Oct 5, 2018
The core of our system is the configuration file that is passed to the training/testing script:

```shell
python run.py --config path/to/config/file
```
Here, all required parameters are specified: the location of the dataset, the model definition, the batch generator for generating blocks, and the optimizer and training parameters.
Datasets, models, batch generators, and optimizers are referenced by their module names and loaded dynamically. This makes adding a new model or batch generator straightforward: simply derive it from the predefined base classes.
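The name-based dynamic loading can be sketched as below. The helper and its class-naming convention are hypothetical, intended only to illustrate the mechanism; the repository's actual loader may differ.

```python
import importlib


def snake_to_camel(name):
    """Convert a module name such as 'general_dataset' to 'GeneralDataset'."""
    return "".join(part.capitalize() for part in name.split("_"))


def load_component(package, module_name, **params):
    """Import `package.module_name` and instantiate the class it exposes.

    Assumption (hypothetical): each module defines a class whose name is the
    CamelCase form of the module name, e.g. datasets.general_dataset
    contains a class GeneralDataset.
    """
    module = importlib.import_module(f"{package}.{module_name}")
    cls = getattr(module, snake_to_camel(module_name))
    return cls(**params)
```

Under this assumption, `load_component("batch_generators", "multi_scale_batch_generator", batch_size=24)` would construct the batch generator directly from the `batch_generator` section of the config.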
```yaml
modus: TRAIN_VAL  # {TRAIN_VAL, TEST}

dataset:
  name: general_dataset  # name of the module in the package datasets
  num_classes: 13
  data_path: dataset/stanford_indoor/  # path to the dataset with the expected structure
  test_sets: ['Area_5']  # specify which areas are used only for validation
  downsample_prefix: sample_0.03  # folder name of the downsampled point cloud version
  colors: True
  laser: False

model:
  name: multi_scale_cu_model  # name of the module in the package models

batch_generator:
  name: multi_scale_batch_generator  # name of the module in the package batch_generators
  params:  # parameters passed directly to the batch generator
    batch_size: 24
    num_points: 4096
    grid_spacing: 0.5
    metric: chebyshev
    augmentation: True
    radii: [0.25, 0.5, 1.0]

optimizer:
  name: exponential_decay_adam  # name of the module in the package optimizers
  params:  # parameters passed directly to the optimizer
    initial_lr: 0.001
    decay_step: 300000
    decay_rate: 0.5

train:  # training parameters
  epochs: 150  # maximal number of epochs to run
```
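The `exponential_decay_adam` parameters suggest a schedule in which the learning rate starts at `initial_lr` and is multiplied by `decay_rate` every `decay_step` optimizer steps. A minimal sketch, assuming a staircase decay (the repository's optimizer may instead decay continuously):

```python
def decayed_lr(step, initial_lr=0.001, decay_rate=0.5, decay_step=300000):
    """Staircase exponential decay: multiply the learning rate by
    `decay_rate` once per completed `decay_step` steps."""
    return initial_lr * decay_rate ** (step // decay_step)
```

With the values from the config above, the learning rate halves every 300000 steps: 0.001 for the first interval, then 0.0005, then 0.00025, and so on.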