Note: this repo is not actively maintained; I've been building a new full-stack satellite stereo pipeline in the computer-vision style: SatelliteSfM.
This is the Python interface for VISion-based SATellite stereo (VisSat), backed by our adapted COLMAP. You can run both SfM and MVS on a set of satellite images.
Project page: https://kai-46.github.io/VisSat/
- Install our adapted COLMAP first.
- Install GDAL and its Python bindings on your machine according to this page.
- Use python3 instead of python2.
- All the required Python packages can be installed via:
```
pip3 install -r requirements.txt
```
- Download the MVS3DM satellite stereo dataset.
- The file "aoi_config/MVS3DM_Explorer.json" is a template configuration for the site 'Explorer' in the MVS3DM dataset. Basically, you only need to set two fields, i.e., "dataset_dir" and "work_dir", in order to get started for this site.
- Launch our pipeline with:
```
python3 stereo_pipeline.py --config_file aoi_config/MVS3DM_Explorer.json
```
- If you enable "aggregate_3d", the output point cloud and DSM will be inside "{work_dir}/mvs_results/aggregate_3d/"; alternatively, if you enable "aggregate_2p5d", the output will be inside "{work_dir}/mvs_results/aggregate_2p5d/".
- Our pipeline is written in a modular way; you can run it step by step by choosing which steps to execute in the configuration file.
- You can navigate inside {work_dir} to inspect intermediate results.
We use a specific directory structure to help organize the program logic. The base directory is called {work_dir}. To help you understand how the system works, we point out the directories and files to pay attention to at each stage of the program.
SfM stage
You need to enable {"clean_data", "crop_image", "derive_approx", "choose_subset", "colmap_sfm_perspective"} in the configuration. Then note the following files:
- (.ntf, .tar) pairs inside {dataset_dir}
- (.ntf, .xml) pairs inside {work_dir}/cleaned_data
- {work_dir}/aoi.json
- .png inside {work_dir}/images, and .json inside {work_dir}/metas
- .json inside {work_dir}/approx_camera, especially perspective_enu.json
- {work_dir}/colmap/subset_for_sfm/{images, perspective_dict.json}
- {work_dir}/colmap/sfm_perspective/init_ba_camera_dict.json
Steps 1-4 transform the (.ntf, .tar) data into more accessible, conventional formats. Step 5 approximates the RPC cameras with perspective cameras. Steps 6-7 select a subset of images (by default, all of the images), perform bundle adjustment, and write the bundle-adjusted camera parameters to {work_dir}/colmap/sfm_perspective/init_ba_camera_dict.json. For the perspective cameras in the .json files mentioned in steps 5-7, the camera parameters are organized as:
w, h, f_x, f_y, c_x, c_y, s, q_w, q_x, q_y, q_z, t_x, t_y, t_z
where (w, h) is the image size, (f_x, f_y, c_x, c_y, s) are the camera intrinsics, (q_w, q_x, q_y, q_z) is the quaternion representation of the rotation matrix, and (t_x, t_y, t_z) is the translation vector.
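As an illustration, the following sketch (our own helper, not a function from this repo; it assumes only the parameter order above) unpacks such a list into an intrinsic matrix K and a pose (R, t):

```python
import numpy as np

def parse_camera(params):
    """Unpack [w, h, f_x, f_y, c_x, c_y, s, q_w, q_x, q_y, q_z, t_x, t_y, t_z]
    into image size, intrinsics K, rotation R, and translation t.
    NOTE: illustrative helper; not part of the VisSat codebase."""
    w, h, fx, fy, cx, cy, s, qw, qx, qy, qz, tx, ty, tz = params

    # Intrinsic matrix with skew s.
    K = np.array([[fx, s,  cx],
                  [0., fy, cy],
                  [0., 0., 1.]])

    # Normalize the quaternion, then convert (q_w, q_x, q_y, q_z) to a rotation matrix.
    qw, qx, qy, qz = np.array([qw, qx, qy, qz]) / np.linalg.norm([qw, qx, qy, qz])
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qw*qz),     2*(qx*qz + qw*qy)],
        [2*(qx*qy + qw*qz),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qw*qx)],
        [2*(qx*qz - qw*qy),     2*(qy*qz + qw*qx),     1 - 2*(qx*qx + qy*qy)],
    ])

    t = np.array([tx, ty, tz])
    return (int(w), int(h)), K, R, t
```

If the parameters follow the usual COLMAP-style world-to-camera convention (an assumption on our part; verify against the repo's code), a point X in local ENU coordinates projects to the image as x ~ K (R X + t).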
Coordinate system
Our perspective cameras use a local ENU coordinate system instead of global (lat, lon, alt) or (UTM east, UTM north, alt) coordinates.
For conversion between (lat, lon, alt) and local ENU, please refer to: coordinate_system.py and latlonalt_enu_converter.py
For conversion between (lat, lon) and (utm east, utm north), please refer to: lib/latlon_utm_converter.py
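For intuition, here is a self-contained sketch of the standard WGS84 geodetic-to-ENU math (our own illustration; the scripts above are the authoritative implementations used by the pipeline):

```python
import numpy as np

# WGS84 ellipsoid constants
A = 6378137.0            # semi-major axis (m)
E2 = 6.69437999014e-3    # first eccentricity squared

def geodetic_to_ecef(lat, lon, alt):
    """(lat, lon) in degrees, alt in meters -> ECEF (x, y, z) in meters."""
    lat, lon = np.radians(lat), np.radians(lon)
    n = A / np.sqrt(1.0 - E2 * np.sin(lat) ** 2)  # prime vertical radius
    x = (n + alt) * np.cos(lat) * np.cos(lon)
    y = (n + alt) * np.cos(lat) * np.sin(lon)
    z = (n * (1.0 - E2) + alt) * np.sin(lat)
    return np.array([x, y, z])

def geodetic_to_enu(lat, lon, alt, lat0, lon0, alt0):
    """Convert a geodetic point to local ENU relative to the origin (lat0, lon0, alt0)."""
    dx = geodetic_to_ecef(lat, lon, alt) - geodetic_to_ecef(lat0, lon0, alt0)
    lat0, lon0 = np.radians(lat0), np.radians(lon0)
    # Rotation taking ECEF deltas into the local East-North-Up frame.
    rot = np.array([
        [-np.sin(lon0),                np.cos(lon0),                0.0],
        [-np.sin(lat0) * np.cos(lon0), -np.sin(lat0) * np.sin(lon0), np.cos(lat0)],
        [ np.cos(lat0) * np.cos(lon0),  np.cos(lat0) * np.sin(lon0), np.sin(lat0)],
    ])
    return rot @ dx
```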
MVS stage
To run MVS after the SfM stage is done, you need to enable {"reparam_depth", "colmap_mvs", "aggregate_3d"} or {"reparam_depth", "colmap_mvs", "aggregate_2p5d"}.
If you enable "aggregate_2p5d", you will be able to see the per-view DSM in {work_dir}/colmap/mvs/dsm.
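Putting the two stages together, the step toggles for a run ending in 2.5D aggregation might look like the sketch below. The flag names come from the lists above; the surrounding JSON layout (e.g., a "steps_to_run" group) is an assumption and should be checked against "aoi_config/MVS3DM_Explorer.json":

```json
{
    "steps_to_run": {
        "clean_data": true,
        "crop_image": true,
        "derive_approx": true,
        "choose_subset": true,
        "colmap_sfm_perspective": true,
        "reparam_depth": true,
        "colmap_mvs": true,
        "aggregate_3d": false,
        "aggregate_2p5d": true
    }
}
```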
If you find our work useful, please consider citing:

```
@inproceedings{VisSat-2019,
  title={Leveraging Vision Reconstruction Pipelines for Satellite Imagery},
  author={Zhang, Kai and Sun, Jin and Snavely, Noah},
  booktitle={IEEE International Conference on Computer Vision Workshops},
  year={2019}
}
```
This software is released under the 3-clause BSD license.