Patch-based Privacy Preserving Neural Network for Vision Tasks
This paper was presented at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023 and is available in the WACV 2023 open access proceedings.
- `h5py`: 3.7.0
- `hydra-core`: 1.2.0
- `lightning-bolts`: 0.4.0
- `pytorch-lightning`: 1.6.1
- `torch`: 1.12.0
- `torchvision`: 0.13.0 (>= 0.12.0 supports the `PCAM` dataset)
- `thop`: 0.1.1

For details, see `requirements.txt`.
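The pinned versions above can be installed in one step; this is a minimal sketch assuming a standard pip-based environment (the exact command may differ in your setup).

# Install the pinned dependencies listed in requirements.txt
$ pip install -r requirements.txt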
- Move to the `cifar` folder.
$ cd cifar
- ResNet (Reference model) training.
$ python resnet_cifar_main.py
- Patch SplitNN training.
$ python patch_splitnn_cifar_main.py adapt_net=False
- Patch SplitNN+ training.
$ python patch_splitnn_cifar_main.py adapt_net=True
- Before training, the PCAM dataset should be downloaded. Then set the dataset path in the YAML configuration file (an illustrative override is sketched after the commands below).
- Move to the `pcam` folder.
$ cd pcam
- ResNet (Reference model) training.
$ python resnet_pcam_main.py
- Patch SplitNN training.
$ python patch_splitnn_main.py adapt_net=False
- Patch SplitNN+ training.
$ python patch_splitnn_main.py adapt_net=True
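The dataset path is normally set via the `data_dir` key in the YAML configuration (described in the configuration options below). Since the scripts already accept Hydra-style overrides such as `adapt_net=False`, the same key can presumably be overridden on the command line as well; the path below is only a placeholder.

# Illustrative only: assumes data_dir is exposed as a top-level Hydra override
$ python patch_splitnn_main.py adapt_net=False data_dir=/path/to/pcam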
Configuration files are stored in each folder's `config/` directory.
- `data_dir`: path to the dataset directory.
- `logger`: `tensorboard` or `wandb` can be used.
- `dataset`: `CIFAR10`, `CIFAR100`, `PatchCIFAR10`, `PatchCIFAR100`, or `PCAM`.
- `base_model`: `resnet18` or `resnet34`.
- `patch_size`: the size of the patch images.
- `patch_stride`: the stride used when splitting an image into patches.
- `num_uppmodels`: the number of upper models for Patch SplitNN.
- `adapt_net`: if `True`, the Adaptation Net is used.
- `upp_loss_ratio`: coefficient for the upper-model loss term in the loss function.
- `drop_rate`: the ratio of patches to drop.
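For reference, the options above can presumably be combined as Hydra command-line overrides, in the same way `adapt_net` is overridden in the usage examples; the values below are illustrative placeholders, not recommended settings.

# Illustrative only: combining several configuration overrides for a CIFAR run
$ python patch_splitnn_cifar_main.py \
    dataset=PatchCIFAR10 \
    base_model=resnet18 \
    patch_size=16 \
    patch_stride=8 \
    num_uppmodels=4 \
    adapt_net=True \
    upp_loss_ratio=0.5 \
    drop_rate=0.1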
The NVIDIA Container Toolkit is required to run the container with GPU support.
- Build docker image
$ docker build -f docker/Docker . -t <image_name>
- Run docker image
$ docker run --rm \
--gpus all \
-v /path/to/patch-splitnn:/opt/patch-splitnn \
-it <image_name> \
/bin/bash
$ cd /opt/patch-splitnn
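Inside the container, training can then be launched as in the earlier sections; the example below assumes the bind mount shown above and uses the CIFAR reference model for illustration.

# Run reference-model training inside the container
$ cd /opt/patch-splitnn/cifar
$ python resnet_cifar_main.py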
@InProceedings{Mabuchi_2023_WACV,
author = {Mabuchi, Mitsuhiro and Ishikawa, Tetsuya},
title = {Patch-Based Privacy Preserving Neural Network for Vision Tasks},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2023},
pages = {1550-1559}
}