
> PaddleSlim develop version should be installed before running this example.

# Model Compression Tutorial (Pruning)

## Compression results

| ID | Task | Model | Compress Strategy[3][4] | Criterion (Chinese dataset) | Inference Time[1] (ms) | Inference Time (Total model)[2] (ms) | Acceleration Ratio | Model Size (MB) | Compress Ratio | Download Link |
|----|------|-------|-------------------------|-----------------------------|------------------------|--------------------------------------|--------------------|-----------------|----------------|---------------|
| 0 | Detection | MobileNetV3_DB | None | 61.7 | 224 | 375 | - | 8.6 | - | |
|   | Recognition | MobileNetV3_CRNN | None | 62.0 | | | | 9.52 | | |
| 1 | Detection | SlimTextDet | PACT Quant Aware Training | 62.1 | 195 | 348 | 8% | 2.8 | 67.82% | |
|   | Recognition | SlimTextRec | PACT Quant Aware Training | 61.48 | | | | 8.6 | | |
| 2 | Detection | SlimTextDet_quat_pruning | Pruning + PACT Quant Aware Training | 60.86 | 142 | 288 | 30% | 2.8 | 67.82% | |
|   | Recognition | SlimTextRec | PACT Quant Aware Training | 61.48 | | | | 8.6 | | |
| 3 | Detection | SlimTextDet_pruning | Pruning | 61.57 | 138 | 295 | 27% | 2.9 | 66.28% | |
|   | Recognition | SlimTextRec | PACT Quant Aware Training | 61.48 | | | | 8.6 | | |

## Overview

Generally, a more complex model achieves better performance on a task, but it also carries some redundancy. Model pruning is a technique that reduces this redundancy by removing sub-networks (for example, convolution channels) from the model, thereby reducing its computational complexity and improving its inference performance.
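
As a toy illustration of the idea (not PaddleSlim's actual implementation), channel pruning commonly ranks convolution filters by their L1 norm and drops the smallest ones:

```python
import numpy as np

# Toy conv weight tensor: (out_channels, in_channels, kh, kw).
weights = np.random.rand(4, 3, 3, 3)

# Rank the 4 filters by L1 norm and keep the larger half.
l1 = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
keep = sorted(np.argsort(l1)[weights.shape[0] // 2:])
pruned = weights[keep]
print(pruned.shape)  # (2, 3, 3, 3): half the filters removed
```

PaddleSlim's `Pruner` uses this `l1_norm` criterion by default.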

This example uses the pruning APIs provided by PaddleSlim to compress the OCR model.

It is recommended that you understand the following pages before reading this example:

- The training strategy of the OCR model
- PaddleSlim Documentation

## Install PaddleSlim


```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
cd PaddleSlim
python setup.py install
```

## Download pretrained model

Download link of the detection pretrained model

## Pruning sensitivity analysis

After the pretrained model is loaded, sensitivity analysis is performed on each layer of the network to measure its redundancy, which in turn determines the pruning ratio for each layer. For details of sensitivity analysis, see Sensitivity analysis.

Enter the PaddleOCR root directory and perform sensitivity analysis on the model with the following command:


```bash
python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
```
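
Under the hood, this script relies on PaddleSlim's static-graph `sensitivity` API. The following is a minimal, self-contained sketch of that API on a toy network; the network, the parameter name `conv1_weights`, and the dummy metric are stand-ins for the real detection model and its evaluation, not the script's actual internals:

```python
import numpy as np
import paddle.fluid as fluid
from paddleslim.prune import sensitivity

# Toy static-graph conv net standing in for the real DB detector.
main_prog, startup_prog = fluid.Program(), fluid.Program()
with fluid.program_guard(main_prog, startup_prog):
    x = fluid.data(name="x", shape=[None, 3, 32, 32], dtype="float32")
    conv = fluid.layers.conv2d(
        x, num_filters=8, filter_size=3,
        param_attr=fluid.ParamAttr(name="conv1_weights"))
    out = fluid.layers.fc(conv, size=10)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(startup_prog)
eval_prog = main_prog.clone(for_test=True)

def eval_func(program):
    # Stand-in metric: the real script runs validation and returns the
    # detector's accuracy; here a forward pass yields a dummy score.
    data = np.random.rand(4, 3, 32, 32).astype("float32")
    result = exe.run(program, feed={"x": data}, fetch_list=[out.name])[0]
    return float(result.mean())

# Measure the metric change when "conv1_weights" is pruned at each
# ratio; results are cached in sensitivities.data and can be resumed.
sens = sensitivity(
    eval_prog, place, ["conv1_weights"], eval_func,
    sensitivities_file="sensitivities.data",
    pruned_ratios=[0.1, 0.2, 0.3])
print(sens)
```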

## Model pruning and fine-tuning

During pruning, the sensitivity file produced by the previous analysis determines the pruning ratio of each network layer. In the specific implementation, in order to retain as many of the low-level features extracted from the image as possible, we skip the 4 convolutional layers closest to the input in the backbone. Similarly, to reduce the performance loss caused by pruning, we use the sensitivity table obtained from the previous analysis to identify the less redundant but more sensitive layers, and skip them in the subsequent pruning process. After pruning, the model needs a fine-tuning step to recover its performance; the fine-tuning strategy is similar to that used for training the original OCR detection model.


```bash
python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
```
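
For reference, per-layer ratios can also be derived and applied directly with PaddleSlim's pruning APIs. This is a minimal sketch, assuming a loaded static-graph `train_program` and `place` from the detection model; the skipped layer names and the 3% metric-loss budget below are illustrative, not the script's actual settings:

```python
import paddle.fluid as fluid
from paddleslim.prune import Pruner, load_sensitivities, get_ratios_by_loss

# Load the sensitivity table produced by the analysis step.
sens = load_sensitivities("sensitivities.data")

# Skip layers we want to keep intact: the convolutions close to the
# input and the highly sensitive layers (placeholder names below).
for name in ["conv1_weights", "conv2_weights"]:
    sens.pop(name, None)

# Pick a pruning ratio per layer so each layer's estimated metric
# drop stays under the budget (3% here, purely illustrative).
ratios = get_ratios_by_loss(sens, 0.03)

# Apply the ratios with the default l1_norm criterion; `train_program`
# and `place` are assumed to come from the loaded detection model.
pruner = Pruner(criterion="l1_norm")
pruned_program, _, _ = pruner.prune(
    train_program,
    fluid.global_scope(),
    params=list(ratios.keys()),
    ratios=list(ratios.values()),
    place=place,
    only_graph=False)
```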

## Export inference model

After pruning and fine-tuning, the model can be exported as an inference model for predictive deployment:


```bash
python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
```
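
To sanity-check the exported model, it can be loaded back with the standard static-graph loader; a minimal sketch, assuming the `inference_model` directory produced by the command above:

```python
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)

# Load the exported program together with its feed/fetch interface.
program, feed_names, fetch_targets = fluid.io.load_inference_model(
    dirname="inference_model", executor=exe)
print(feed_names, [t.name for t in fetch_targets])
```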