
Driving Benchmarks

The benchmark tools module evaluates a driving controller (agent) and collects metrics about its performance.

This module is mainly designed for:

  • Users who develop autonomous driving agents and want to see how they perform in CARLA.

In this section you will learn how to get started with the benchmarks and which experiment suites are available.

Getting Started

To familiarize yourself with the system, we provide a trivial agent that performs a small set of experiments (Basic). To execute it, simply run:

$ ./benchmarks_084.py

Keep in mind that, to run the command above, you need a CARLA 0.8.4 simulator running at localhost on port 2000.
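
The script is a thin wrapper around the Python benchmarking API. Below is a minimal sketch of what it roughly does, assuming the CARLA 0.8.4 Python client layout; the names `run_driving_benchmark`, `ForwardAgent`, and `BasicExperimentSuite` come from that client and may differ slightly in this repository:

```python
# Minimal sketch, assuming the CARLA 0.8.4 Python client's
# driving_benchmark module; benchmarks_084.py wraps a call like this.
from carla.agent import ForwardAgent
from carla.driving_benchmark import run_driving_benchmark
from carla.driving_benchmark.experiment_suites import BasicExperimentSuite

city_name = 'Town01'
agent = ForwardAgent()                              # trivial agent: just drives forward
experiment_suite = BasicExperimentSuite(city_name)  # the small "Basic" set of experiments

# Requires a CARLA 0.8.4 simulator listening on localhost:2000.
run_driving_benchmark(agent, experiment_suite,
                      city_name=city_name,
                      log_name='driving_benchmark_test',
                      continue_experiment=False,
                      host='127.0.0.1',
                      port=2000)
```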

This benchmark example can be further configured. Run the help command to see the available options.

$ ./benchmarks_084.py --help

One of the available options is continuing from a previous benchmark execution. For example, to continue an experiment in the basic benchmark with the log name "driving_benchmark_test", run:

$ ./benchmarks_084.py --continue-experiment -n driving_benchmark_test

!!! note
    If the log name already exists and you do not set the benchmark to continue, another log will be created under a different name.
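
Programmatically, this corresponds to the `continue_experiment` argument of the same hedged API sketched above:

```python
# Resume the run recorded under this log name instead of starting over.
run_driving_benchmark(agent, experiment_suite,
                      city_name='Town01',
                      log_name='driving_benchmark_test',
                      continue_experiment=True)
```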

Available Benchmarks

CoRL 2017

As described in the legacy CARLA paper (CoRL 2017), the CoRL 2017 experiment suite can be run with the trivial agent by running:

$ ./benchmarks_084.py --corl-2017

When running the driving benchmark with the basic configuration, you should expect these results.
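
In the Python sketch above, this amounts to swapping the experiment suite (again assuming the CARLA 0.8.4 client's names):

```python
from carla.driving_benchmark.experiment_suites import CoRL2017

# The CoRL 2017 suite covers far more episodes than the Basic one,
# so expect a much longer run.
experiment_suite = CoRL2017('Town01')
```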

CARLA 100

The CARLA100 experiment suite can be run with the trivial agent by running:

$ ./benchmarks_084.py --carla100
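
If the suite is exposed the same way, the equivalent swap would look as follows; note that the `CARLA100` class name and import path are assumptions based on the flag above:

```python
# Hypothetical import: the exact module path for the CARLA100 suite
# may differ in this repository.
from carla.driving_benchmark.experiment_suites import CARLA100

experiment_suite = CARLA100('Town01')
```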