# Terrain-Dreamer: Procedural Terrain Generation

Terrain-Dreamer is a novel approach to procedural terrain generation that places key tiles on a tilemap and fills in the gaps using inpainting models. This allows for smooth transitions between different biomes.
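
The idea can be illustrated with a small toy sketch, which is not the project's actual code: the grid size, tile size, and the "inpainting" function below are stand-ins. A few key tiles are placed on a grid, and every remaining cell is then filled from its already-known neighbours.

```python
import numpy as np

# Toy illustration of the key-tile + gap-filling idea (not Terrain-Dreamer's API).
H, W, TILE = 4, 6, 64
grid = np.full((H, W, TILE, TILE, 3), np.nan)  # unknown cells start as NaN

def random_biome_tile(rng):
    # Stand-in for a generated or real satellite tile.
    return rng.random((TILE, TILE, 3))

def inpaint(neighbours):
    # Stand-in for the inpainting model: average the known neighbouring tiles.
    known = [t for t in neighbours if not np.isnan(t).any()]
    return np.mean(known, axis=0) if known else np.zeros((TILE, TILE, 3))

rng = np.random.default_rng(0)
for r, c in [(0, 0), (1, 3), (3, 5)]:          # key tiles anchor the biomes
    grid[r, c] = random_biome_tile(rng)

for r in range(H):                             # fill the gaps cell by cell
    for c in range(W):
        if np.isnan(grid[r, c]).any():
            neighbours = [grid[rr, cc]
                          for rr, cc in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                          if 0 <= rr < H and 0 <= cc < W]
            grid[r, c] = inpaint(neighbours)
```

In the real pipeline the averaging is replaced by a learned inpainting model, which is what produces the smooth transitions between biomes.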

## 🛠️ Requirements

- Clone the repository and install it as a package:

  ```bash
  git clone https://github.com/ruipreis/terrain-dreamer.git
  cd terrain-dreamer
  pip install .
  ```

- Download the available checkpoints if you want to experiment without training the models from scratch.
- To download and process the original dataset manually, make sure you have a service account with Google Earth API access (see the sketch below).
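
The exact Google API the download scripts call is not spelled out here; if it is the Earth Engine Python client (an assumption on our part), authenticating with a service account typically looks like the sketch below, with the e-mail address and key path replaced by your own.

```python
import ee  # earthengine-api package (assumed client; the repo may use a different one)

# Hypothetical service-account e-mail and JSON key file.
SERVICE_ACCOUNT = "terrain-dreamer@my-project.iam.gserviceaccount.com"
KEY_FILE = "service-account-key.json"

credentials = ee.ServiceAccountCredentials(SERVICE_ACCOUNT, KEY_FILE)
ee.Initialize(credentials)

# A trivial round trip to confirm the credentials work.
print(ee.String("Earth Engine initialised").getInfo())
```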

## 🌍 Dataset

The ALOS World 3D - 30m (AW3D30) is a global Digital Elevation Model (DEM) produced by the Japan Aerospace Exploration Agency (JAXA), offering 30-meter resolution digital surface and terrain models worldwide.

### Downloading the Dataset

You can download the dataset from the original source or use our prepared dataset with satellite and depth data in NPZ files, split into training and testing sets. The prepared dataset is available here.
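
The internal layout of the NPZ files is not documented in this README, so the snippet below is just a hedged way to inspect one sample; the path is a placeholder.

```python
import numpy as np

# Open a single sample from the prepared dataset (placeholder file name).
with np.load("aw3d30/train/sample_0000.npz") as sample:
    # List every array stored in the archive with its shape and dtype,
    # e.g. the satellite image and the matching depth/DEM data.
    for name in sample.files:
        print(name, sample[name].shape, sample[name].dtype)
```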

### Preparing the Dataset

To prepare the dataset yourself, use the following scripts to download the AW3D30 data from the official source and obtain the corresponding satellite imagery from the Google Earth API:

1. Download the AW3D30 data:

   ```bash
   python terrdreamer/dataset/aw3d30.py
   ```

2. Obtain matching satellite imagery:

   ```bash
   python terrdreamer/dataset/gearth.py
   ```

### Documentation

For more information about the AW3D30 dataset, refer to the official documentation.

## 🧠 Models

We use four models to achieve our results:

- image-to-dem: a pix2pix conditional GAN that converts RGB data to a DEM.
- dem-to-image: a pix2pix model that converts DEM data to RGB data.
- image generation: a ProGAN that generates novel satellite imagery.
- inpainting: an inpainting model based on Generative Image Inpainting with Contextual Attention, used to predict masked parts of satellite data (a minimal usage sketch follows this list).
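
As a rough illustration of how an inpainting model of this kind is driven (this is not the repository's actual interface; `InpaintingGenerator` below is a hypothetical stand-in), the unknown region of a tile is masked out and the network is asked to reconstruct only that region:

```python
import torch
import torch.nn as nn

class InpaintingGenerator(nn.Module):
    """Tiny placeholder network; a real contextual-attention model is far deeper."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, image, mask):
        # The mask (1 = missing) is concatenated as an extra input channel.
        x = torch.cat([image * (1 - mask), mask], dim=1)
        return self.net(x)

tile = torch.rand(1, 3, 256, 256)       # an RGB satellite tile in [0, 1]
mask = torch.zeros(1, 1, 256, 256)
mask[..., 64:192, 64:192] = 1           # hide the centre region

completed = InpaintingGenerator()(tile, mask)
# Keep the known pixels and take the prediction only where the mask is set.
result = tile * (1 - mask) + completed * mask
```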

## 🎓 Training

Training scripts use wandb for experiment tracking. Ensure wandb is installed and configured.
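
If wandb is new to you, a quick way to confirm it is configured before launching a long run is to log a dummy metric; the project and run names below are only examples.

```python
import wandb

# Creates a throwaway run and logs one metric; if this succeeds, the
# training scripts should be able to report to wandb as well.
run = wandb.init(project="inpainting", name="smoke-test")
wandb.log({"loss": 0.0})
run.finish()
```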

### Inpainting Model Training

To train the inpainting model:

```bash
python terrdreamer/models/infinity_grid/train.py \
    --train-dataset aw3d30/train \
    --test-dataset aw3d30/test \
    --limit 10000 \
    --save-model-path checkpoints/inpainting \
    --wandb-project inpainting
```

### Checkpoints

Download checkpoints for all models:

```bash
mkdir -p checkpoints
wget https://storage.googleapis.com/terrain-generation-models/checkpoints.zip -P checkpoints
unzip checkpoints/checkpoints.zip -d checkpoints
```
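
The exact contents of the archive are not described here, but assuming the usual PyTorch checkpoint format, an individual file can be inspected roughly as follows (the file name is a placeholder):

```python
import torch

# Load onto the CPU so no GPU is needed just to look at the weights.
state = torch.load("checkpoints/inpainting.pth", map_location="cpu")

# A state_dict maps parameter names to tensors; print a few keys to see the layout.
if isinstance(state, dict):
    for key in list(state)[:5]:
        print(key, getattr(state[key], "shape", type(state[key])))
```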

## 🖼️ Inference

To generate a tile map:

1. Initialize a Qdrant container (a minimal connection check is sketched after this list):

   ```bash
   docker pull qdrant/qdrant
   docker run -p 6333:6333 qdrant/qdrant
   ```

2. Populate the Qdrant vector database with randomly generated satellite imagery:

   ```bash
   python terrdreamer/grid/__init__.py
   ```

3. Create real and fake tiles:

   ```bash
   python terrdreamer/grid/generate.py --height 10 --width 50
   ```

4. Interpolate between placed tiles:

   ```bash
   python terrdreamer/grid/interpolation.py --height 10 --width 50
   ```

5. Fill gaps with inpainting:

   ```bash
   python terrdreamer/grid/filling.py --height 10 --width 50
   ```

6. Estimate depth for image tiles:

   ```bash
   python terrdreamer/grid/depth.py
   ```

7. Convert to image format:

   ```bash
   python terrdreamer/grid/to_image.py
   ```
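
Before moving past step 2, it can help to confirm that the Qdrant container is actually reachable. Assuming the `qdrant-client` Python package is installed (the container from step 1 listens on port 6333), a minimal check is:

```python
from qdrant_client import QdrantClient

# Connect to the local Qdrant container started in step 1 (port 6333).
client = QdrantClient(host="localhost", port=6333)

# Listing collections is a cheap reachability check; after step 2 the
# collection holding the generated satellite imagery should show up here.
print(client.get_collections())
```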

## 🌟 Inspiration

This project is inspired by the following works by Emmanouil Panagiotou: