
README_en.md

Introduction

Generally, a more complex model achieves better performance on a task, but it also introduces redundancy. Quantization is a technique that reduces this redundancy by converting full-precision parameters to low-bit fixed-point numbers, which lowers the computational cost of the model and improves inference performance.

This example uses the quantization APIs provided by PaddleSlim to compress the OCR model.

It is recommended that you read the PaddleSlim quantization documentation before working through this example.

Quick Start

Quantization is most suitable for deploying lightweight models on mobile devices. After training, if you want to further compress the model size and accelerate prediction, you can quantize the model with the following steps.

  1. Install PaddleSlim
  2. Prepare trained model
  3. Quantization-Aware Training
  4. Export inference model
  5. Deploy quantization inference model

1. Install PaddleSlim

git clone https://github.com/PaddlePaddle/PaddleSlim.git
cd PaddleSlim
python setup.py install
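
If a PaddleSlim release compatible with your installed PaddlePaddle version is published on PyPI, installing it with pip is an alternative to building from source (this is an assumption; check the version compatibility of both packages):

pip3 install paddleslim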

2. Download Pretrained Model

PaddleOCR provides a series of trained models. If the model to be quantized is not in the list, you need to follow the Regular Training method to obtain a trained model.

3. Quant-Aware Training

Quantization training includes offline quantization training and online quantization training; online quantization training is more effective. It requires loading a pretrained model and, once the quantization strategy is defined, quantizing the model during finetuning.

The code for quantization training is located in deploy/slim/quantization/quant.py. For example, to train a detection model, the training command is as follows:

python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights='your trained model'   Global.save_model_dir=./output/quant_model

# download provided model
wget https://paddleocr.bj.bcebos.com/20-09-22/mobile/det/ch_ppocr_mobile_v1.1_det_train.tar
tar xf ch_ppocr_mobile_v1.1_det_train.tar
python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./ch_ppocr_mobile_v1.1_det_train/best_accuracy   Global.save_model_dir=./output/quant_model

4. Export inference model

After quantization training and finetuning, the model can be exported as an inference model for deployment:

python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_model_dir=./output/quant_inference_model
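
As a sanity check, the exported quantized inference model can be used with PaddleOCR's standard detection inference script, assuming it is consumed the same way as a regular inference model (the image path below is a placeholder):

# run detection inference with the exported quantized model (illustrative paths)
python tools/infer/predict_det.py --image_dir="./doc/imgs/" --det_model_dir="./output/quant_inference_model/"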

5. Deploy

The parameters of the model exported in the above steps are still stored as FP32, but their value range has been quantized to int8. The exported model can be converted with the opt tool of PaddleLite.
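
The following is an illustrative conversion command, assuming the opt tool built or downloaded from PaddleLite is available in the current directory; output paths and target platforms are placeholders:

# convert the quantized inference model into a PaddleLite model for ARM deployment;
# if the model is saved in combined format, use --model_file/--param_file instead of --model_dir
./opt --model_dir=./output/quant_inference_model --valid_targets=arm --optimize_out_type=naive_buffer --optimize_out=./output/quant_inference_model_opt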

For deployment of the quantized model, please refer to Mobile terminal model deployment.