Merge remote-tracking branch 'origin/release/2.3' into 2.3

commit 8891e92c94
Author: Leif
Date: 2021-09-26 11:44:46 +08:00
10 changed files with 18 additions and 18 deletions

View File

@@ -13,7 +13,7 @@ def read_params():
     #params for text detector
     cfg.det_algorithm = "DB"
-    cfg.det_model_dir = "./inference/ch_ppocr_mobile_v2.0_det_infer/"
+    cfg.det_model_dir = "./inference/ch_PP-OCRv2_det_infer/"
     cfg.det_limit_side_len = 960
     cfg.det_limit_type = 'max'

View File

@@ -13,7 +13,7 @@ def read_params():
     #params for text recognizer
     cfg.rec_algorithm = "CRNN"
-    cfg.rec_model_dir = "./inference/ch_ppocr_mobile_v2.0_rec_infer/"
+    cfg.rec_model_dir = "./inference/ch_PP-OCRv2_rec_infer/"
     cfg.rec_image_shape = "3, 32, 320"
     cfg.rec_char_type = 'ch'

View File

@@ -13,7 +13,7 @@ def read_params():
     #params for text detector
     cfg.det_algorithm = "DB"
-    cfg.det_model_dir = "./inference/ch_ppocr_mobile_v2.0_det_infer/"
+    cfg.det_model_dir = "./inference/ch_PP-OCRv2_det_infer/"
     cfg.det_limit_side_len = 960
     cfg.det_limit_type = 'max'
@@ -31,7 +31,7 @@ def read_params():
     #params for text recognizer
     cfg.rec_algorithm = "CRNN"
-    cfg.rec_model_dir = "./inference/ch_ppocr_mobile_v2.0_rec_infer/"
+    cfg.rec_model_dir = "./inference/ch_PP-OCRv2_rec_infer/"
     cfg.rec_image_shape = "3, 32, 320"
     cfg.rec_char_type = 'ch'
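
Taken together, the hunks above only repoint the default model directories. For orientation, a minimal sketch of a `read_params()` helper reflecting the updated PP-OCRv2 paths is shown below; the plain `Config` attribute container is an assumption for illustration, not necessarily the exact class used in the repository.

```python
class Config(object):
    """Simple attribute container for inference settings (illustrative only)."""
    pass


def read_params():
    cfg = Config()

    # params for text detector (paths as updated in this commit)
    cfg.det_algorithm = "DB"
    cfg.det_model_dir = "./inference/ch_PP-OCRv2_det_infer/"
    cfg.det_limit_side_len = 960
    cfg.det_limit_type = 'max'

    # params for text recognizer
    cfg.rec_algorithm = "CRNN"
    cfg.rec_model_dir = "./inference/ch_PP-OCRv2_rec_infer/"
    cfg.rec_image_shape = "3, 32, 320"
    cfg.rec_char_type = 'ch'

    return cfg
```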

View File

@@ -34,10 +34,10 @@ pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/sim
 ```
 ### 2. Download the inference models
-Before installing the service module, you need to prepare the inference models and place them in the correct path. By default, the ultra-lightweight v2.0 models are used; the default model paths are:
+Before installing the service module, you need to prepare the inference models and place them in the correct path. By default, the PP-OCRv2 models are used; the default model paths are:
 ```
-Detection model: ./inference/ch_ppocr_mobile_v2.0_det_infer/
-Recognition model: ./inference/ch_ppocr_mobile_v2.0_rec_infer/
+Detection model: ./inference/ch_PP-OCRv2_det_infer/
+Recognition model: ./inference/ch_PP-OCRv2_rec_infer/
 Direction classifier: ./inference/ch_ppocr_mobile_v2.0_cls_infer/
 ```

View File

@@ -35,10 +35,10 @@ pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/sim
 ```
 ### 2. Download inference model
-Before installing the service module, you need to prepare the inference model and put it in the correct path. By default, the ultra lightweight model of v2.0 is used, and the default model path is:
+Before installing the service module, you need to prepare the inference model and put it in the correct path. By default, the PP-OCRv2 models are used, and the default model path is:
 ```
-detection model: ./inference/ch_ppocr_mobile_v2.0_det_infer/
-recognition model: ./inference/ch_ppocr_mobile_v2.0_rec_infer/
+detection model: ./inference/ch_PP-OCRv2_det_infer/
+recognition model: ./inference/ch_PP-OCRv2_rec_infer/
 text direction classifier: ./inference/ch_ppocr_mobile_v2.0_cls_infer/
 ```
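
If it helps, the required models can be fetched and unpacked into `./inference/` with a short script. This is only a sketch; the download URLs below are assumptions based on the usual PaddleOCR model release location, not links taken from this change.

```python
import os
import tarfile
import urllib.request

# Assumed download locations; adjust if the release URLs differ.
MODEL_URLS = [
    "https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_det_infer.tar",
    "https://paddleocr.bj.bcebos.com/PP-OCRv2/chinese/ch_PP-OCRv2_rec_infer.tar",
    "https://paddleocr.bj.bcebos.com/dygraph_v2.0/ch/ch_ppocr_mobile_v2.0_cls_infer.tar",
]

os.makedirs("./inference", exist_ok=True)
for url in MODEL_URLS:
    tar_path = os.path.join("./inference", os.path.basename(url))
    urllib.request.urlretrieve(url, tar_path)   # download the archive
    with tarfile.open(tar_path) as tar:
        tar.extractall("./inference")           # e.g. ./inference/ch_PP-OCRv2_det_infer/
```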

View File

@@ -35,4 +35,4 @@
 | PP-OCR mobile | 356 | 116 |
 | PP-OCR server | 1056 | 200 |
-For more prediction metrics of the PP-OCR series models, see [PP-OCR Benchamrk](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_ch/benchmark.md)
+For more prediction metrics of the PP-OCR series models, see [PP-OCR Benchmark](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_ch/benchmark.md)

View File

@@ -47,10 +47,10 @@ cd /path/to/ppocr_img
 <a name="211"></a>
 #### 2.1.1 Chinese and English models
-* Full pipeline of detection + direction classification + recognition: set the direction classifier parameter `--use_angle_cls true` to recognize vertical text.
+* Full pipeline of detection + direction classification + recognition: `--use_angle_cls true` turns on the direction classifier to recognize text rotated by 180 degrees, and `--use_gpu false` disables the GPU.
 ```bash
-paddleocr --image_dir ./imgs/11.jpg --use_angle_cls true
+paddleocr --image_dir ./imgs/11.jpg --use_angle_cls true --use_gpu false
 ```
 The result is a list; each item contains the bounding box, the text, and the recognition confidence.

View File

@@ -38,4 +38,4 @@ Compares the time-consuming on CPU and T4 GPU (ms):
 | PP-OCR mobile | 356 | 116 |
 | PP-OCR server | 1056 | 200 |
-More indicators of PP-OCR series models can be referred to [PP-OCR Benchamrk](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/benchmark_en.md)
+More indicators of PP-OCR series models can be referred to [PP-OCR Benchmark](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.2/doc/doc_en/benchmark_en.md)

View File

@@ -53,10 +53,10 @@ If you do not use the provided test image, you can replace the following `--imag
 #### 2.1.1 Chinese and English Model
-* Detection, direction classification and recognition: set the direction classifier parameter `--use_angle_cls true` to recognize vertical text.
+* Detection, direction classification and recognition: set the parameter `--use_gpu false` to disable the GPU device.
 ```bash
-paddleocr --image_dir ./imgs_en/img_12.jpg --use_angle_cls true --lang en
+paddleocr --image_dir ./imgs_en/img_12.jpg --use_angle_cls true --lang en --use_gpu false
 ```
 Output will be a list, each item contains bounding box, text and recognition confidence
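
The same call can be made from Python. Below is a minimal sketch using the `paddleocr` package; the keyword arguments and the result layout follow the 2.x releases and should be treated as assumptions if your installed version differs.

```python
from paddleocr import PaddleOCR

# Rough Python equivalent of:
#   paddleocr --image_dir ./imgs_en/img_12.jpg --use_angle_cls true --lang en --use_gpu false
ocr = PaddleOCR(use_angle_cls=True, lang="en", use_gpu=False)
result = ocr.ocr("./imgs_en/img_12.jpg", cls=True)

# Each item carries the bounding-box points plus (text, confidence).
for box, (text, confidence) in result:
    print(box, text, confidence)
```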

View File

@@ -26,7 +26,7 @@ from paddle.jit import to_static
 from ppocr.modeling.architectures import build_model
 from ppocr.postprocess import build_post_process
-from ppocr.utils.save_load import init_model
+from ppocr.utils.save_load import load_dygraph_params
 from ppocr.utils.logging import get_logger
 from tools.program import load_config, merge_config, ArgsParser
@@ -99,7 +99,7 @@ def main():
     else:  # base rec model
         config["Architecture"]["Head"]["out_channels"] = char_num
     model = build_model(config["Architecture"])
-    init_model(config, model)
+    _ = load_dygraph_params(config, model, logger, None)
     model.eval()
     save_path = config["Global"]["save_inference_dir"]
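
For context, the export step these lines feed into is the usual dygraph-to-static conversion followed by `paddle.jit.save`. The sketch below shows the mechanics with a toy layer standing in for the real architecture; the input shape and save path are illustrative only.

```python
import os
import paddle
from paddle.jit import to_static
from paddle.static import InputSpec


class TinyDetHead(paddle.nn.Layer):
    """Stand-in for the OCR model; illustrates the export mechanics only."""

    def __init__(self):
        super().__init__()
        self.conv = paddle.nn.Conv2D(3, 1, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)


model = TinyDetHead()
model.eval()

# Trace with a dynamic batch/spatial shape, then save the static program + weights,
# analogous to what export_model.py does with save_inference_dir.
model = to_static(
    model,
    input_spec=[InputSpec(shape=[None, 3, None, None], dtype="float32")])
paddle.jit.save(model, os.path.join("./inference_sketch", "inference"))
```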