refine slim prune doc

This commit is contained in:
yukavio 2020-09-19 16:19:59 +00:00
parent 9958fdde66
commit dc0aea4ec3
2 changed files with 16 additions and 17 deletions


@ -128,7 +128,7 @@
## Install PaddleSlim
\```bash
```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
@ -136,7 +136,7 @@ cd Paddleslim
python setup.py install
\```
```
## Download the Pretrained Model
@ -148,22 +148,22 @@ python setup.py install
Enter the PaddleOCR root directory and run the following command to perform sensitivity analysis on the model:
\```bash
```bash
python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
\```
```
## Prune the Model and Fine-tune
During pruning, the pruning ratio of each network layer is determined by the sensitivity analysis file produced earlier. In the implementation, to retain as many of the low-level features extracted from the image as possible, we skip the 4 convolutional layers in the backbone closest to the input. Likewise, to reduce the performance loss caused by pruning, we use the sensitivity table obtained from the previous analysis to pick out [network layers](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/prune/pruning_and_finetune.py#L41) that have little redundancy and are highly sensitive to pruning, and skip them during the subsequent pruning. After pruning, the fine-tuning process follows the original training strategy of the OCR detection model.
\```bash
```bash
python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
\```
```
@ -173,8 +173,8 @@ python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -
After obtaining the model saved by prune training, we can export it as an inference_model for inference deployment:
\```bash
```bash
python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
\```
```


@ -128,7 +128,7 @@ It is recommended that you could understand following pages before reading this
## Install PaddleSlim
\```bash
```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
@ -136,7 +136,7 @@ cd Paddleslim
python setup.py install
\```
```
## Download the Pretrained Model
@ -150,11 +150,11 @@ python setup.py install
Enter the PaddleOCR root directory and perform sensitivity analysis on the model with the following command:
\```bash
```bash
python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
\```
```
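The analysis writes its results to a sensitivity file, which PaddleSlim stores as a pickled Python dictionary. As a rough illustration (not PaddleSlim's actual code; the layer names, ratios, and file name below are made up), the table maps each prunable parameter to the accuracy loss measured at each trial pruning ratio:

```python
import os
import pickle
import tempfile

# Hypothetical sensitivity table: parameter name -> {pruning ratio: accuracy loss}.
# PaddleSlim persists a structure of this shape after sensitivity analysis.
sensitivities = {
    "conv10_weights": {0.1: 0.002, 0.2: 0.010, 0.3: 0.050},
    "conv11_weights": {0.1: 0.030, 0.2: 0.120, 0.3: 0.300},
}

path = os.path.join(tempfile.mkdtemp(), "sensitivities.data")
with open(path, "wb") as f:
    pickle.dump(sensitivities, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)

# A layer whose loss grows quickly with the ratio ("conv11_weights" here)
# is highly sensitive and is a candidate to skip during pruning.
print(loaded["conv11_weights"][0.2])
```

Layers whose loss curve rises steeply are the ones the next step treats as too sensitive to prune.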
@ -162,11 +162,11 @@ python deploy/slim/prune/sensitivity_anal.py -c configs/det/det_mv3_db.yml -o Gl
When pruning, the pruning ratio of each network layer is determined by the sensitivity analysis file produced earlier. In the implementation, to retain as many of the low-level features extracted from the image as possible, we skip the 4 convolutional layers in the backbone closest to the input. Likewise, to reduce the model performance loss caused by pruning, we use the sensitivity table obtained from the previous analysis to pick out [network layers](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/deploy/slim/prune/pruning_and_finetune.py#L41) that have little redundancy and are highly sensitive to pruning, and skip them during the subsequent pruning. After pruning, the model needs a fine-tuning process to recover its performance; the fine-tuning strategy follows the original training strategy of the OCR detection model.
\```bash
```bash
python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./deploy/slim/prune/pretrain_models/det_mv3_db/best_accuracy Global.test_batch_size_per_card=1
\```
```
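The per-layer ratio selection described above can be sketched as follows. This is a minimal illustration, not PaddleSlim's actual implementation: for each layer, take the largest trial ratio whose measured accuracy loss stays within a budget, and drop the layers marked as skipped (the table contents and layer names are hypothetical):

```python
def ratios_under_loss(sensitivities, max_loss, skip_layers=()):
    """Pick, per layer, the largest pruning ratio whose measured
    accuracy loss stays within max_loss; protected layers are skipped."""
    ratios = {}
    for name, trials in sensitivities.items():
        if name in skip_layers:
            continue  # e.g. layers close to the input, or highly sensitive ones
        acceptable = [r for r, loss in trials.items() if loss <= max_loss]
        if acceptable:
            ratios[name] = max(acceptable)
    return ratios

# Hypothetical sensitivity table: parameter name -> {pruning ratio: accuracy loss}.
table = {
    "conv2_weights": {0.1: 0.001, 0.2: 0.004, 0.3: 0.020},
    "conv3_weights": {0.1: 0.050, 0.2: 0.200},
}
print(ratios_under_loss(table, max_loss=0.01, skip_layers={"conv3_weights"}))
# {'conv2_weights': 0.2}
```

With a loss budget of 0.01, `conv2_weights` can be pruned by 20% while `conv3_weights`, being both sensitive and explicitly skipped, keeps all of its channels.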
@ -175,9 +175,8 @@ python deploy/slim/prune/pruning_and_finetune.py -c configs/det/det_mv3_db.yml -
## Export inference model
After getting the pruned and fine-tuned model, we can export it as an inference_model for inference deployment:
\```bash
```bash
python deploy/slim/prune/export_prune_model.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./output/det_db/best_accuracy Global.test_batch_size_per_card=1 Global.save_inference_dir=inference_model
\```
```