fix doc of slim

LDOUBLEV 2021-05-21 09:47:17 +08:00
parent 5e35b62512
commit 7cb0b5499e
4 changed files with 13 additions and 13 deletions

View File

@@ -24,12 +24,12 @@
```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
-cd Paddleslim
+cd PaddleSlim
git checkout develop
python3 setup.py install
```
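To confirm the installation, a minimal check (assuming a standard Python 3 environment) is to import the package; no error means PaddleSlim is on the path:
```bash
# Smoke-test the install; exits silently on success
python3 -c "import paddleslim"
```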
### 2. Get the Pre-trained Model
-Model pruning requires loading a pre-trained model. PaddleOCR also provides a series of (models)[../../../doc/doc_ch/models_list.md]; developers can pick a suitable model as needed or use their own.
+Model pruning requires loading a pre-trained model. PaddleOCR also provides a series of [models](../../../doc/doc_ch/models_list.md); developers can pick a suitable model as needed or use their own.
### 3. Sensitivity Analysis Training
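As a hedged sketch of this step (the `deploy/slim/prune/sensitivity_anal.py` entry point and option names are assumptions based on PaddleOCR's prune demo; adjust to your version):
```bash
# Assumed entry point: analyze per-layer pruning sensitivity of a trained detector
python3 deploy/slim/prune/sensitivity_anal.py \
    -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml \
    -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy \
       Global.save_model_dir=./output/prune_model
```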

View File

@@ -23,14 +23,14 @@ Five steps for OCR model pruning:
```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
-cd Paddleslim
+cd PaddleSlim
git checkout develop
python3 setup.py install
```
### 2. Download Pretrained Model
Model pruning requires loading pre-trained models.
-PaddleOCR also provides a series of (models)[../../../doc/doc_en/models_list_en.md]. Developers can choose a provided model or use their own model according to their needs.
+PaddleOCR also provides a series of [models](../../../doc/doc_en/models_list_en.md). Developers can choose a provided model or use their own model according to their needs.
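For example, to grab the provided mobile detection model (the same download used in the quantization docs later in this commit):
```bash
# Download and unpack a provided pre-trained detection model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
tar -xf ch_ppocr_mobile_v2.0_det_train.tar
```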
### 3. Pruning sensitivity analysis

View File

@@ -23,7 +23,7 @@
```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
-cd Paddleslim
+cd PaddleSlim
python setup.py install
```
@@ -37,12 +37,12 @@ PaddleOCR provides a series of trained [models](../../../doc/doc_ch/models_list.
The quantization training code lives in slim/quantization/quant.py. For example, to train a detection model, the training command is as follows:
```bash
-python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
+python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
# e.g. download the provided trained model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
tar -xf ch_ppocr_mobile_v2.0_det_train.tar
-python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_inference_dir=./output/quant_inference_model
+python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_model_dir=./output/quant_inference_model
```
To quantize a recognition model instead, simply change the config file and the loaded model parameters, as sketched below.
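For instance (the recognition config path below is an assumption; substitute your own config and weights):
```bash
# Assumed recognition config; otherwise identical to the detection command above
python deploy/slim/quantization/quant.py \
    -c configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml \
    -o Global.pretrained_model='your trained rec model' \
       Global.save_model_dir=./output/quant_rec_model
```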
@@ -52,10 +52,10 @@ python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global
After obtaining the model saved by quantization training, we can export it as an inference_model for inference deployment:
```bash
-python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_model_dir=./output/quant_inference_model
+python deploy/slim/quantization/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_inference_dir=./output/quant_inference_model
```
### 5. Deploy the Quantized Model
-The quantized model exported above still stores its parameters as FP32 (so the model file size is unchanged by quantization), but their numerical range is int8; the exported model can be converted with PaddleLite's opt model conversion tool.
+The quantized model exported above still stores its parameters as FP32, but their numerical range is int8; the exported model can be converted with PaddleLite's opt model conversion tool.
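A rough sketch of that conversion (assuming `paddle_lite_opt` from `pip install paddlelite`, and the default `inference.pdmodel`/`inference.pdiparams` file names produced by the export step):
```bash
# Convert the exported quantized model into a Paddle-Lite model for ARM targets
paddle_lite_opt \
    --model_file=./output/quant_inference_model/inference.pdmodel \
    --param_file=./output/quant_inference_model/inference.pdiparams \
    --optimize_out=./output/quant_opt_model \
    --valid_targets=arm
```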
For quantized model deployment, refer to [mobile model deployment](../../lite/readme.md).

View File

@@ -26,7 +26,7 @@ After training, if you want to further compress the model size and accelerate th
```bash
git clone https://github.com/PaddlePaddle/PaddleSlim.git
-cd Paddleslim
+cd PaddleSlim
python setup.py install
```
@@ -43,12 +43,12 @@ After the quantization strategy is defined, the model can be quantized.
The code for quantization training is located in `slim/quantization/quant.py`. For example, to train a detection model, the training command is as follows:
```bash
-python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
+python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model='your trained model' Global.save_model_dir=./output/quant_model
# download provided model
wget https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_det_train.tar
tar -xf ch_ppocr_mobile_v2.0_det_train.tar
-python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_model_dir=./output/quant_model
+python deploy/slim/quantization/quant.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.pretrained_model=./ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.save_model_dir=./output/quant_model
```
@@ -57,7 +57,7 @@ python deploy/slim/quantization/quant.py -c configs/det/det_mv3_db.yml -o Global
After getting the model from quantization training and finetuning, we can export it as an inference_model for predictive deployment:
```bash
-python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_inference_dir=./output/quant_inference_model
+python deploy/slim/quantization/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.checkpoints=output/quant_model/best_accuracy Global.save_inference_dir=./output/quant_inference_model
```
### 5. Deploy