diff --git a/CHANGELOG.md b/CHANGELOG.md
index 468ed13..5d6f5b5 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -15,6 +15,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Add Feature to Use Config Files by @AjinkyaIndulkar in https://github.com/sensity-ai/dot/pull/17
- Add Github Templates by @AjinkyaIndulkar in https://github.com/sensity-ai/dot/pull/16
- Add contributors list by @AjinkyaIndulkar in https://github.com/sensity-ai/dot/pull/31
+- Add Google Colab demo notebook by @AjinkyaIndulkar in https://github.com/sensity-ai/dot/pull/33
#### Updated
diff --git a/README.md b/README.md
index d8ebf4e..141c5ee 100644
--- a/README.md
+++ b/README.md
@@ -8,6 +8,10 @@
[](https://github.com/sensity-ai/dot/actions/workflows/build_dot.yaml)
[](https://github.com/sensity-ai/dot/actions/workflows/code_check.yaml)
+
+
+
+
*dot* (aka Deepfake Offensive Toolkit) makes real-time, controllable deepfakes ready for virtual camera injection. *dot* is created for performing penetration testing against e.g. identity verification and video conferencing systems, for use by security analysts, Red Team members, and biometrics researchers.
diff --git a/data/test_video.mp4 b/data/test_video.mp4
new file mode 100644
index 0000000..09b74d0
Binary files /dev/null and b/data/test_video.mp4 differ
diff --git a/docs/run_without_camera.md b/docs/run_without_camera.md
index 7965be1..87313dc 100644
--- a/docs/run_without_camera.md
+++ b/docs/run_without_camera.md
@@ -23,12 +23,12 @@ dot \
--use_gpu
```
-## Faceswap images from directory(Simswap)
+## Faceswap images from directory (SimSwap)
You can pass a `--source` folder with images and one or more `--target` images. Face-swapped images will be generated at `--save_folder`, along with a metadata JSON file.
```bash
-python image_swap.py \
+python scripts/image_swap.py \
--config \
--source \
--target \
@@ -36,10 +36,10 @@ python image_swap.py \
--limit 100
```
-## Faceswap images from metadata
+## Faceswap images from metadata (SimSwap)
```bash
-python metadata_swap.py \
+python scripts/metadata_swap.py \
--config \
--local_root_path \
--metadata \
@@ -48,11 +48,11 @@ python metadata_swap.py \
--limit 100
```
-## Faceswap on video files
+## Faceswap on video files (SimSwap)
```bash
-python video_swap.py \
--c \
+python scripts/video_swap.py \
+-c \
-s \
-t \
-o \
diff --git a/notebooks/colab_demo.ipynb b/notebooks/colab_demo.ipynb
new file mode 100644
index 0000000..beab48c
--- /dev/null
+++ b/notebooks/colab_demo.ipynb
@@ -0,0 +1,212 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Deepfake Offensive Toolkit\n",
+ "\n",
+ "> **Disclaimer**: This notebook is primarily used for demo purposes on Google Colab.\n",
+ "\n",
+ "**Note**: We recommend running this notebook on Google Colab with GPU enabled.\n",
+ "\n",
+ "To enable GPU, do the following: \n",
+ "\n",
+ "`Click \"Runtime\" tab > select \"Change runtime type\" option > set \"Hardware accelerator\" to \"GPU\"`\n",
+ "\n",
+ "### Install Notebook Pre-requisites:\n",
+ "\n",
+ "We install the following pre-requisities:\n",
+ "- `ffmpeg`\n",
+ "- `conda` (via [condacolab](https://github.com/conda-incubator/condacolab))\n",
+ "\n",
+ "Note: The notebook session will restart after installing the pre-requisites. \n",
+ "\n",
+ "**RUN THE BELOW CELL ONLY ONCE.**\n",
+ "\n",
+ "**ONCE THE NOTEBOOK SESSION RESTARTS, SKIP THIS CELL MOVE TO \"STEP 1\" SECTION OF THIS NOTEBOOK**"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# install linux pre-requisites\n",
+ "!sudo apt install ffmpeg\n",
+ "\n",
+ "# install miniconda3\n",
+ "!pip install -q condacolab\n",
+ "import condacolab\n",
+ "condacolab.install_miniconda()\n"
+ ]
+ },
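+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Optional sanity check (an addition to the original flow): once the runtime has restarted, the cell below uses condacolab's built-in `check()` to confirm that conda was installed correctly before continuing."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# optional: verify the condacolab installation after the runtime restart\n",
+ "import condacolab\n",
+ "condacolab.check()\n"
+ ]
+ },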
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Step 1 - Clone Repository"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "os.chdir('/content')\n",
+ "CODE_DIR = 'dot'\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!git clone https://github.com/sensity-ai/dot.git $CODE_DIR\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "os.chdir(f'./{CODE_DIR}')\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Step 2 - Setup Conda Environment"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# update base conda environment: install python=3.8 + cudatoolkit=11.3\n",
+ "!conda install python=3.8 cudatoolkit=11.3\n",
+ "\n",
+ "# install pip requirements\n",
+ "!pip install llvmlite==0.36.0 onnxruntime-gpu==1.9.0\n",
+ "!pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113\n",
+ "!pip install -r requirements.txt\n",
+ "\n",
+ "# install dot\n",
+ "!pip install -e .\n"
+ ]
+ },
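+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Optional sanity check (an addition to the original flow): the cell below confirms that the CUDA-enabled PyTorch build installed above can see the Colab GPU. If it prints `False`, re-check the \"Hardware accelerator\" setting described at the top of this notebook."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# optional: confirm that PyTorch sees the Colab GPU\n",
+ "import torch\n",
+ "\n",
+ "print(\"torch version:\", torch.__version__)\n",
+ "print(\"CUDA available:\", torch.cuda.is_available())\n"
+ ]
+ },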
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Step 2 - Download Pretrained models"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# download binaries\n",
+ "! wget https://github.com/sensity-ai/dot/releases/download/1.0.0/dot_model_checkpoints.z01 \\\n",
+ "&& wget https://github.com/sensity-ai/dot/releases/download/1.0.0/dot_model_checkpoints.z02 \\\n",
+ "&& wget https://github.com/sensity-ai/dot/releases/download/1.0.0/dot_model_checkpoints.zip\n",
+ "\n",
+ "# unzip binaries\n",
+ "! zip -s 0 dot_model_checkpoints.zip --out saved_models.zip \\\n",
+ "&& unzip saved_models.zip\n",
+ "\n",
+ "# clean-up\n",
+ "!rm -rf *.z*\n"
+ ]
+ },
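+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Optional sanity check (an addition to the original flow): the cell below lists the extracted checkpoints. The `saved_models/` folder name is an assumption inferred from the archive name above; adjust it if the zip unpacks elsewhere."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# optional: list the extracted model checkpoints\n",
+ "# (saved_models/ is inferred from the archive name above)\n",
+ "!ls -lh saved_models\n"
+ ]
+ },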
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Step 3: Run dot on image and video files instead of camera feed\n",
+ "\n",
+ "### Using SimSwap on Images\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!dot \\\n",
+ "-c ./configs/simswap.yaml \\\n",
+ "--target \"data/\" \\\n",
+ "--source \"data/\" \\\n",
+ "--save_folder \"image_simswap_output/\" \\\n",
+ "--use_image \\\n",
+ "--use_gpu\n"
+ ]
+ },
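+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Optional (an addition to the original flow): the cell below previews a few of the swapped images inline. It assumes the outputs were written to `image_simswap_output/`, the `--save_folder` used above; exact filenames depend on the input images."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# optional: preview a few swapped images inline\n",
+ "# (image_simswap_output/ is the --save_folder used above)\n",
+ "import glob\n",
+ "from IPython.display import Image, display\n",
+ "\n",
+ "outputs = sorted(glob.glob(\"image_simswap_output/**/*.jpg\", recursive=True)\n",
+ "                 + glob.glob(\"image_simswap_output/**/*.png\", recursive=True))\n",
+ "for path in outputs[:3]:\n",
+ "    print(path)\n",
+ "    display(Image(filename=path))\n"
+ ]
+ },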
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Using SimSwap on Videos"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "!python scripts/video_swap.py \\\n",
+ "-s \"data/\" \\\n",
+ "-t \"data/\" \\\n",
+ "-o \"video_simswap_output/\" \\\n",
+ "-d 5 \\\n",
+ "-l 5\n"
+ ]
+ },
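+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Optional (an addition to the original flow): the cell below lists the generated videos in `video_simswap_output/`, the `-o` folder used above, so they can be downloaded from the Colab file browser."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# optional: list the generated face-swapped videos\n",
+ "# (video_simswap_output/ is the -o folder used above)\n",
+ "!ls -lh video_simswap_output\n"
+ ]
+ },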
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "accelerator": "GPU",
+ "colab": {
+ "collapsed_sections": [],
+ "name": "colab_demo.ipynb",
+ "provenance": []
+ },
+ "gpuClass": "standard",
+ "kernelspec": {
+ "display_name": "Python 3 (ipykernel)",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 0
+}