Commit 1ca1484

committed Aug 23, 2021

feat(//examples/int8/qat): Install pytorch-quantization with requirements.txt

Signed-off-by: Naren Dasan <naren@narendasan.com>
Signed-off-by: Naren Dasan <narens@nvidia.com>

1 parent 68ba63c commit 1ca1484

File tree: 2 files changed, +7 -4 lines changed

examples/int8/training/vgg16/README.md (+4, -2)
@@ -22,7 +22,7 @@ python3 main.py --lr 0.01 --batch-size 128 --drop-ratio 0.15 --ckpt-dir $(pwd)/v

 You can monitor training with tensorboard, logs are stored by default at `/tmp/vgg16_logs`

-### Quantization
+### Quantization Aware Fine Tuning (for trying out QAT Workflows)

 To perform quantization aware training, it is recommended that you finetune your model obtained from previous step with quantization layers.

@@ -51,12 +51,14 @@ After QAT is completed, you should see the checkpoint of QAT model in the `$pwd/

 Use the exporter script to create a torchscript module you can compile with TRTorch

+### For PTQ
 ```
 python3 export_ckpt.py <path-to-checkpoint>
 ```

-It should produce a file called `trained_vgg16.jit.pt`
+The checkpoint file should be from the original training and not quantization aware fine tuning. The script should produce a file called `trained_vgg16.jit.pt`

+### For QAT
 To export a QAT model, you can run

 ```
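
As a rough sketch of the QAT fine-tuning and export flow the README section above describes (not the repo's actual finetune/export scripts), the snippet below uses NVIDIA's pytorch-quantization package added by this commit's requirements change. The checkpoint path and layout, the torchvision VGG16 constructor, and the input size are assumptions for illustration; calibration of quantizer ranges and the fine-tuning loop itself are left to the repo's scripts.

```python
# Illustrative sketch only -- not the repo's finetune/export script.
# Checkpoint path/layout, input shape, and the VGG16 constructor are assumed.
import torch
import torchvision

from pytorch_quantization import quant_modules
from pytorch_quantization import nn as quant_nn

# Replace torch.nn layers with quantized equivalents so fake-quant nodes are
# inserted automatically when the model is constructed.
quant_modules.initialize()

model = torchvision.models.vgg16(num_classes=10).cuda()  # stand-in for the example's VGG16

# Load the baseline FP32 weights; strict=False because the quantized modules
# carry extra quantizer state that is not in the original checkpoint.
ckpt = torch.load("vgg16_base_ckpt.pth", map_location="cuda")   # hypothetical path
model.load_state_dict(ckpt["model_state_dict"], strict=False)   # hypothetical key

# ... calibrate quantizer ranges and fine-tune for a few epochs at a low
# learning rate here (handled by the repo's training scripts, omitted) ...

# For export, emit explicit fake-quantize ops in the graph, then trace to
# TorchScript so the module can be compiled with TRTorch.
quant_nn.TensorQuantizer.use_fb_fake_quant = True
model.eval()
example = torch.rand(1, 3, 32, 32).cuda()  # CIFAR10-sized input, assumed
traced = torch.jit.trace(model, example)
torch.jit.save(traced, "trained_vgg16_qat.jit.pt")
```
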
requirements.txt (+3, -2)
@@ -1,2 +1,3 @@
-torch>=1.4.0
-tensorboard>=1.14.0
+torch>=1.9.0
+tensorboard>=1.14.0
+pytorch-quantization --extra-index-url https://pypi.ngc.nvidia.com
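
With this change, running `pip install -r requirements.txt` for the example should also pull in `pytorch-quantization`: the `--extra-index-url https://pypi.ngc.nvidia.com` option tells pip to additionally search NVIDIA's NGC Python package index, where that package is published, while `torch` and `tensorboard` continue to resolve from the default index.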
