Welcome. This repository contains the data and scripts that make up the Numenta Anomaly Benchmark (NAB). NAB is a novel benchmark for evaluating anomaly detection algorithms in streaming, real-time applications. It comprises over 50 labeled real-world and artificial timeseries data files plus a novel scoring mechanism designed for real-time applications.
Included are the tools to allow you to easily run NAB on your own anomaly detection algorithms; see the NAB entry points info. Competitive results tied to open source code will be posted in the wiki on the Scoreboard. Let us know about your work by emailing us at nab@numenta.org or submitting a pull request.
This readme is a brief overview and contains details for setting up NAB. Please refer to the following for more details about NAB scoring, data, and motivation:
- Unsupervised real-time anomaly detection for streaming data - The main paper, covering NAB and Numenta's HTM-based anomaly detection algorithm
- NAB Whitepaper
- Evaluating Real-time Anomaly Detection Algorithms - Original publication of NAB
We encourage you to publish your results on running NAB, and share them with us at nab@numenta.org. Please cite the following publication when referring to NAB:
Ahmad, S., Lavin, A., Purdy, S., & Agha, Z. (2017). Unsupervised real-time anomaly detection for streaming data. Neurocomputing, Available online 2 June 2017, ISSN 0925-2312, https://doi.org/10.1016/j.neucom.2017.04.070
This repo is the NAB community edition, a fork of the original Numenta NAB. One of the reasons for forking was a lack of developer activity in the upstream repo.
- Identical algorithms and datasets as Numenta's NAB, so the results are reproducible.
- Python 3 codebase (as Python 2 reached end-of-life on 1/1/2020 and Numenta's NAB has not yet been ported).
- Additional community-provided detectors:
  - `htmcore`: currently the only HTM implementation able to run in NAB natively in Python 3 (with many improvements from the Community HTM implementation, the successor of nupic.core).
  - `numenta`, `numenta_TM` detectors (original from Numenta) made compatible with the Py3 codebase (only requires Py2 installed).
- The HTM visualization tool HTMPandaVis can be used with the htmcore detector (set the `PANDA_VIS_ENABLED` flag to `True`).
- Additional datasets:
  - TBD, none so far.
Statement: We will try to upstream any changes, new detectors, and datasets to Numenta's NAB when its developers have time to apply them.
The NAB scores are normalized such that the maximum possible is 100.0 (i.e. the perfect detector), and a baseline of 0.0 is determined by the "null" detector (which makes no detections).
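As a rough illustration of that normalization (a sketch of the idea, not the scorer's actual code), each raw profile score is rescaled so that the null detector maps to 0 and the perfect detector maps to 100:

```python
# Sketch of NAB-style score normalization: the null detector's raw score maps
# to 0 and the perfect detector's raw score maps to 100 (see the NAB whitepaper
# for the authoritative definition).
def normalize_score(raw, raw_null, raw_perfect):
    return 100.0 * (raw - raw_null) / (raw_perfect - raw_null)


# Example: a detector whose raw score sits 70% of the way from null to perfect
# ends up with a normalized score of 70.0.
print(normalize_score(raw=3.0, raw_null=-4.0, raw_perfect=6.0))  # 70.0
```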
| Detector | Standard Profile | Reward Low FP | Reward Low FN | Detector name | Time (s) |
|---|---|---|---|---|---|
| Perfect | 100.0 | 100.0 | 100.0 | | |
| Numenta HTM* | 70.5-69.7 | 62.6-61.7 | 75.2-74.2 | numenta | |
| CAD OSE† | 69.9 | 67.0 | 73.2 | | |
| htm.core | 63.1 | 58.8 | 66.2 | htmcore | |
| earthgecko Skyline | 58.2 | 46.2 | 63.9 | | |
| KNN CAD† | 58.0 | 43.4 | 64.8 | | |
| Relative Entropy | 54.6 | 47.6 | 58.8 | | |
| Random Cut Forest **** | 51.7 | 38.4 | 59.7 | | |
| Threshold ***** | 50.83 | 33.61 | 53.5 | | |
| Twitter ADVec v1.0.0 | 47.1 | 33.6 | 53.5 | | |
| Windowed Gaussian | 39.6 | 20.9 | 47.4 | | |
| Etsy Skyline | 35.7 | 27.1 | 44.5 | | |
| Bayesian Changepoint** | 17.7 | 3.2 | 32.2 | | |
| EXPoSE | 16.4 | 3.2 | 26.9 | | |
| Random*** | 11.0 | 1.2 | 19.5 | | |
| Null | 0.0 | 0.0 | 0.0 | | |
As of NAB v1.0
* From NuPIC version 1.0 (available on PyPI); the range in scores represents runs using different random seeds.
** The original algorithm was modified for anomaly detection. Implementation details are in the detector's code.
*** Scores reflect the mean across a range of random seeds. The spread of scores is 7.95 to 16.83 for the Standard profile, -1.56 to 2.14 for Reward Low FP, and 11.34 to 23.68 for Reward Low FN.
**** We have included the results for RCF using an AWS proprietary implementation; even though the algorithm code is not open source, the algorithm description is public and the code we used to run NAB on RCF is open source.
***** This is the same simple threshold detector that Numenta uses in their NuPIC-based detector implementation. It was added to demonstrate how "powerful" it is on NAB.
† Algorithm was an entry to the 2016 NAB Competition.
Please see the wiki section on contributing algorithms for discussion on posting algorithms to the scoreboard.
For comparison, here are the NAB V1.0 scores for some additional flavors of HTM.
- Numenta HTM using NuPIC v.0.5.6: This version of NuPIC was used to generate the data for the paper mentioned above (Unsupervised real-time anomaly detection for streaming data. Neurocomputing, ISSN 0925-2312, https://doi.org/10.1016/j.neucom.2017.04.070). If you are interested in replicating the results shown in the paper, use this version.
- HTM Java is a Community-Driven Java port of HTM.
- nab-comportex is a twist on HTM anomaly detection using Comportex, a community-driven HTM implementation in Clojure. Please see Felix Andrew's blog post on experiments with this algorithm.
- NumentaTM HTM detector uses the implementation of temporal memory found here.
- Numenta HTM detector with no likelihood uses the raw anomaly scores directly. To run without likelihood, set the variable `self.useLikelihood` in numenta_detector.py to `False`.
| Detector | Standard Profile | Reward Low FP | Reward Low FN |
|---|---|---|---|
| Numenta HTM using NuPIC v0.5.6* | 70.1 | 63.1 | 74.3 |
| nab-comportex† | 64.6 | 58.8 | 69.6 |
| NumentaTM HTM* | 64.6 | 56.7 | 69.2 |
| htm.core | 63.1 | 58.8 | 66.2 |
| HTM Java | 56.8 | 50.7 | 61.4 |
| Numenta HTM*, no likelihood | 53.62 | 34.15 | 61.89 |
* From NuPIC version 0.5.6 (available on PyPI). † Algorithm was an entry to the 2016 NAB Competition.
The NAB corpus of 58 timeseries data files is designed to provide data for research in streaming anomaly detection. It comprises both real-world and artificial timeseries data containing labeled anomalous periods of behavior.
The majority of the data is real-world from a variety of sources such as AWS server metrics, Twitter volume, advertisement clicking metrics, traffic data, and more. All data is included in the repository, with more details in the data readme. We are in the process of adding more data and are actively searching for additional datasets. Please contact us at nab@numenta.org if you have similar data (ideally with known anomalies) that you would like to see incorporated into NAB.
The NAB version will be updated whenever new data (and corresponding labels) is added to the corpus; NAB is currently in v1.0.
- OSX 10.9 and higher
- Linux
Other platforms may work but have not been tested.
You need to manually install the following:
Use the GitHub download links provided in the right sidebar, or run `git clone https://github.com/htm-community/NAB`.
Recommended:

```
cd NAB
pip install . --user --extra-index-url https://test.pypi.org/simple/
```

If you want to manage dependency versions yourself, you can skip dependencies with:

```
pip install . --user --no-deps
```

If you are actively working on the code and are familiar with manual PYTHONPATH setup:

```
pip install -e . --install-option="--prefix=/some/other/path/" --extra-index-url https://test.pypi.org/simple/
```
Note: the `--extra-index-url https://test.pypi.org/simple/` option allows `htm.core` to be installed from our testing PyPI repository.
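As a quick post-install sanity check (a minimal sketch; the importable package names `nab` and `htm` are assumptions based on the standard install), you can verify that NAB and the htm.core dependency import cleanly:

```python
# Minimal post-install sanity check (assumes the packages install under the
# names "nab" and "htm"; adjust if your environment differs).
import nab
import htm  # provided by the htm.core wheel pulled from the test PyPI index

print("nab package location:", nab.__file__)
print("htm.core package location:", htm.__file__)
```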
There are several different use cases for NAB:
- If you just want to look at all the results we reported in the paper, there is no need to run anything. All the data files are in the data subdirectory and all individual detections for reported algorithms are checked in to the results subdirectory. Please see the README files in those locations.
- If you want to plot some of the results, please see the README in the `scripts` directory for `scripts/plot.py`. For example, `python scripts/plot.py` will open the default data plots in the browser.
- If you have your own algorithm and want to run the NAB benchmark, please see the NAB Entry Points section in the wiki. (The easiest option is often to simply run your algorithm on the data and output results in the CSV format we specify, then run the NAB scoring algorithm to compute the final scores; a minimal sketch of this workflow follows this list. This is how we scored the Twitter algorithm, which is written in R.)
- If you are a NuPIC user and just want to run the Numenta HTM detector, follow the directions below to "Run HTM with NAB".
- If you want to run everything including the bundled Skyline detector, follow the directions below to "Run full NAB". Note that this will take hours as the Skyline code is quite slow.
- If you just want to run NAB on one or more data files (e.g. for debugging), follow the directions below to "Run a subset of NAB".
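For the "run your own algorithm" option above, the workflow could look like the sketch below. The detector name, output path, and column names are assumptions for illustration; the results README documents the exact schema and directory layout NAB expects.

```python
# Hypothetical sketch of the "bring your own algorithm" workflow: write the
# detector's output as a NAB-style results CSV, then let NAB's scorer compute
# the final scores. The detector name ("mydetector"), the file path, and the
# column set are illustrative -- check the results README for the real schema.
import csv
import os

detections = [
    # (timestamp, value, anomaly_score in [0, 1]) -- illustrative values only
    ("2014-04-01 00:00:00", 20.5, 0.01),
    ("2014-04-01 00:05:00", 98.7, 0.97),
]

out_path = "results/mydetector/realAWSCloudwatch/mydetector_ec2_cpu_utilization_example.csv"
os.makedirs(os.path.dirname(out_path), exist_ok=True)

with open(out_path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "value", "anomaly_score"])
    writer.writerows(detections)

# Scoring would then be run through NAB's CLI, e.g. (assuming "mydetector" has
# been registered with run.py):
#   python run.py -d mydetector --optimize --score --normalize
```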
```
cd /path/to/nab
python run.py -d htmcore --detect --optimize --score --normalize
```
This will run the community HTM detector `htmcore` (to run Numenta's detector use `-d numenta`) and produce normalized scores.
Note that by default it tries to use all the cores on your machine. The above command
should take about 20-30 minutes on a current powerful laptop with 4-8 cores.
For debugging you can run subsets of the data files by modifying and specifying
specific label files (see section below). Please type:
`python run.py --help` to see all the options.
Note that to replicate results exactly as in the paper you may need to checkout the specific version of NuPIC (and associated nupic.core) that is noted in the Scoreboard:
```
cd /path/to/nupic/
git checkout -b nab {TAG NAME}
cd /path/to/nupic.core/
git checkout -b nab {TAG NAME}
```
```
cd /path/to/nab
python run.py
```
This will run everything and produce results files for all anomaly detection methods. Several algorithms are included in the repo, such as the Numenta HTM anomaly detection method, as well as methods from the Etsy Skyline anomaly detection library, a sliding window detector, Bayes Changepoint, and so on. This will also pass those results files to the scoring script to generate final NAB scores. Note: this option will take many many hours to run.
For debugging it is sometimes useful to be able to run your algorithm on a subset of the NAB data files or on your own set of data files. You can do that by creating a custom `combined_windows.json` file that only contains labels for the files you want to run. This new file should be in exactly the same format as `combined_windows.json`, except it would only contain windows for the files you are interested in.
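For illustration, a custom windows file maps each relative data file path to a list of [start, end] anomaly windows. The sketch below generates such a file; the data file path and timestamps are made-up placeholders, so copy real entries from `labels/combined_windows.json` for the files you actually want to run.

```python
# Hypothetical sketch: build a tiny custom windows file for a single data file.
# The data file path and window timestamps below are illustrative placeholders.
import json

windows = {
    "realAWSCloudwatch/ec2_cpu_utilization_example.csv": [
        # one anomaly window: [window start, window end]
        ["2014-02-26 22:05:00.000000", "2014-02-28 01:00:00.000000"]
    ]
}

with open("labels/my_combined_windows.json", "w") as f:
    json.dump(windows, f, indent=4)
```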
For example, `labels/combined_windows_tiny.json` contains windows for two data files. The following command shows you how to run NAB on a subset of labels:
```
cd /path/to/nab
python run.py -d numenta --detect --windowsFile labels/combined_windows_tiny.json
```
This will run the `detect` phase of NAB on the data files specified in the above
phase of NAB on the data files specified in the above
JSON file. Note that scoring and normalization are not supported with this
option. Note also that you may see warning messages regarding the lack of labels
for other files. You can ignore these warnings.
You can run parameter optimization using your own framework or the framework provided by htm.core. As of now, this is only enabled for the htm.core detector, but the same can be done for any detector with low effort (see #792 for details).

Requirements:

- Docker Desktop
- `pip install docker`

Usage:
- Set `use_optimization = True` in the htm.core detector settings.
- Build a docker image from the Dockerfile provided in this repo with `docker build -t optimize-htmcore-nab:latest . -f htmcore.Dockerfile`
- Then:
  - Option A: Check `optimize_bayesopt.py` for an example of how to run with Bayesian Optimization (a minimal sketch follows this list). Note: the script requires `pip install bayesian-optimization`.
  - Option B: Check `optimize_swarm.py` for an example of how to run the htm.core optimization framework. You can execute the script using the optimization framework with e.g. `python -m htm.optimization.ae -n 3 --memory_limit 4 -v --swarming 100 optimize_anomaly_swarm.py`. Note for macOS users: you need to `export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES` before running the script.
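For orientation, the sketch below shows the general shape of a Bayesian-optimization loop using the `bayesian-optimization` package mentioned in Option A. The parameter names, bounds, and the `run_nab_and_get_score` objective are hypothetical placeholders rather than the actual contents of `optimize_bayesopt.py`.

```python
# Hypothetical sketch of Bayesian optimization over detector parameters using
# the bayesian-optimization package (pip install bayesian-optimization).
from bayes_opt import BayesianOptimization


def run_nab_and_get_score(activation_threshold, min_threshold):
    # Placeholder objective: in a real setup this would launch a NAB run with
    # the given parameters (e.g. inside the optimize-htmcore-nab Docker image)
    # and return the normalized score to maximize. A dummy surface is returned
    # here so the sketch runs end to end.
    return -((activation_threshold - 14.0) ** 2 + (min_threshold - 12.0) ** 2)


optimizer = BayesianOptimization(
    f=run_nab_and_get_score,
    pbounds={
        "activation_threshold": (8, 20),  # illustrative parameter ranges
        "min_threshold": (8, 20),
    },
    random_state=42,
)
optimizer.maximize(init_points=5, n_iter=25)
print("Best parameters found:", optimizer.max)
```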