Merge pull request #752 from MissPenguin/develop
remove det_mv3_db_v1.1.yml & add code annotation
commit 1fd763f89b
@@ -24,6 +24,7 @@ Backbone:
     function: ppocr.modeling.backbones.det_mobilenet_v3,MobileNetV3
     scale: 0.5
     model_name: large
+    disable_se: true
 
 Head:
     function: ppocr.modeling.heads.det_db_head,DBHead
@@ -1,55 +0,0 @@
-Global:
-    algorithm: DB
-    use_gpu: true
-    epoch_num: 1200
-    log_smooth_window: 20
-    print_batch_step: 2
-    save_model_dir: ./output/det_db/
-    save_epoch_step: 200
-    # evaluation is run every 5000 iterations after the 4000th iteration
-    eval_batch_step: [4000, 5000]
-    train_batch_size_per_card: 16
-    test_batch_size_per_card: 16
-    image_shape: [3, 640, 640]
-    reader_yml: ./configs/det/det_db_icdar15_reader.yml
-    pretrain_weights: ./pretrain_models/MobileNetV3_large_x0_5_pretrained/
-    checkpoints:
-    save_res_path: ./output/det_db/predicts_db.txt
-    save_inference_dir:
-
-Architecture:
-    function: ppocr.modeling.architectures.det_model,DetModel
-
-Backbone:
-    function: ppocr.modeling.backbones.det_mobilenet_v3,MobileNetV3
-    scale: 0.5
-    model_name: large
-    disable_se: true
-
-Head:
-    function: ppocr.modeling.heads.det_db_head,DBHead
-    model_name: large
-    k: 50
-    inner_channels: 96
-    out_channels: 2
-
-Loss:
-    function: ppocr.modeling.losses.det_db_loss,DBLoss
-    balance_loss: true
-    main_loss_type: DiceLoss
-    alpha: 5
-    beta: 10
-    ohem_ratio: 3
-
-Optimizer:
-    function: ppocr.optimizer,AdamDecay
-    base_lr: 0.001
-    beta1: 0.9
-    beta2: 0.999
-
-PostProcess:
-    function: ppocr.postprocess.db_postprocess,DBPostProcess
-    thresh: 0.3
-    box_thresh: 0.6
-    max_candidates: 1000
-    unclip_ratio: 1.5
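The deleted file above illustrates the `function: module,Class` convention these configs use to wire the model together: each section names a Python module and a class inside it, separated by a comma. A minimal sketch of how such an entry can be resolved at runtime (the helper name and the commented usage are illustrative assumptions, not necessarily PaddleOCR's exact code):

```python
import importlib

def create_module(function_str):
    # "ppocr.modeling.backbones.det_mobilenet_v3,MobileNetV3" is a
    # "module_path,ClassName" pair: import the module, then fetch the
    # class (or function) from it by name.
    module_path, class_name = function_str.split(",")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)

# Hypothetical usage with the Backbone entry above:
# BackboneClass = create_module(
#     "ppocr.modeling.backbones.det_mobilenet_v3,MobileNetV3")
# backbone = BackboneClass(params=config["Backbone"])
```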
@@ -13,7 +13,7 @@ def read_params():
 
     #params for text detector
     cfg.det_algorithm = "DB"
-    cfg.det_model_dir = "./inference/ch_det_mv3_db/"
+    cfg.det_model_dir = "./inference/ch_ppocr_mobile_v1.1_det_infer/"
     cfg.det_max_side_len = 960
 
     #DB parmas
@@ -28,7 +28,7 @@ def read_params():
 
     #params for text recognizer
     cfg.rec_algorithm = "CRNN"
-    cfg.rec_model_dir = "./inference/ch_rec_mv3_crnn/"
+    cfg.rec_model_dir = "./inference/ch_ppocr_mobile_v1.1_rec_infer/"
 
     cfg.rec_image_shape = "3, 32, 320"
     cfg.rec_char_type = 'ch'
@@ -13,7 +13,7 @@ def read_params():
 
     #params for text detector
    cfg.det_algorithm = "DB"
-    cfg.det_model_dir = "./inference/ch_det_mv3_db/"
+    cfg.det_model_dir = "./inference/ch_ppocr_mobile_v1.1_det_infer/"
     cfg.det_max_side_len = 960
 
     #DB parmas
@@ -28,7 +28,7 @@ def read_params():
 
     #params for text recognizer
     cfg.rec_algorithm = "CRNN"
-    cfg.rec_model_dir = "./inference/ch_rec_mv3_crnn/"
+    cfg.rec_model_dir = "./inference/ch_ppocr_mobile_v1.1_rec_infer/"
 
     cfg.rec_image_shape = "3, 32, 320"
     cfg.rec_char_type = 'ch'
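Both pairs of hunks make the same substitution: the default detection and recognition model directories move from the old `ch_det_mv3_db`/`ch_rec_mv3_crnn` layouts to the v1.1 inference models. A self-contained sketch of the resulting `read_params()` shape (the `Config` container below is an assumption; the real `params.py` defines its own lightweight holder):

```python
class Config(object):
    """Bare attribute container, in the style params.py uses."""
    pass

def read_params():
    cfg = Config()

    # params for text detector
    cfg.det_algorithm = "DB"
    cfg.det_model_dir = "./inference/ch_ppocr_mobile_v1.1_det_infer/"
    cfg.det_max_side_len = 960

    # params for text recognizer
    cfg.rec_algorithm = "CRNN"
    cfg.rec_model_dir = "./inference/ch_ppocr_mobile_v1.1_rec_infer/"
    cfg.rec_image_shape = "3, 32, 320"
    cfg.rec_char_type = 'ch'
    return cfg

if __name__ == "__main__":
    cfg = read_params()
    print(cfg.det_model_dir, cfg.rec_model_dir)
```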
@@ -1,10 +1,8 @@
-# Service Deployment
-
 [English](README_en.md) | 简体中文
 
 PaddleOCR offers 2 ways to deploy the service:
 - Deployment based on HubServing: already integrated into PaddleOCR ([code](https://github.com/PaddlePaddle/PaddleOCR/tree/develop/deploy/hubserving)); follow this tutorial.
 - Deployment based on PaddleServing: see the official PaddleServing [demo](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/ocr); it will also be integrated into PaddleOCR later.
 # Service deployment based on PaddleHub Serving
 
-The service deployment directory contains three service packages: detection, recognition, and the two-stage pipeline. Select, install, and start the package you need. The directory is as follows:
+The hubserving deployment directory contains three service packages: detection, recognition, and the two-stage pipeline. Please select, install, and start the package you need. The directory structure is as follows:
 ```
 deploy/hubserving/
   └─ ocr_det    detection module service package
@@ -30,11 +28,18 @@ pip3 install paddlehub --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
 
 # Set the environment variable on Linux
 export PYTHONPATH=.
-# Set the environment variable on Windows
+
+# Or, set the environment variable on Windows
 SET PYTHONPATH=.
 ```
 
-### 2. Install the service module
+### 2. Download the inference model
+Before installing a service module, prepare the inference model and put it in the correct path. The ultra-lightweight v1.1 models are used by default; the default detection model path is
+`./inference/ch_ppocr_mobile_v1.1_det_infer/`, and the recognition model path is `./inference/ch_ppocr_mobile_v1.1_rec_infer/`.
+
+**The model paths can be viewed and modified in `params.py`.** More models can be downloaded from the PaddleOCR [model list](../../doc/doc_ch/models_list.md); you can also substitute models you have trained and converted yourself.
+
+### 3. Install the service module
 PaddleOCR provides 3 service modules; install the ones you need.
 
 * On Linux, the installation examples are as follows:
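Before running `hub install`, it can help to verify that the model directories are actually in place. A tiny check, assuming the default v1.1 paths from `params.py`:

```python
import os

# Default v1.1 model paths; adjust if params.py was modified.
for model_dir in ("./inference/ch_ppocr_mobile_v1.1_det_infer/",
                  "./inference/ch_ppocr_mobile_v1.1_rec_infer/"):
    assert os.path.isdir(model_dir), "missing inference model: " + model_dir
```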
@@ -61,15 +66,7 @@ hub install deploy\hubserving\ocr_rec\
 hub install deploy\hubserving\ocr_system\
 ```
 
-#### Install the models
-Before installing a service module, put the trained models into the corresponding folders. The defaults are:
-./inference/ch_det_mv3_db/
-and
-./inference/ch_rec_mv3_crnn/
-Both models can be downloaded from https://github.com/PaddlePaddle/PaddleOCR
-They can be changed to your own models in ./deploy/hubserving/ocr_system/params.py
-
-### 3. Start the service
+### 4. Start the service
 #### Method 1. Start from the command line (CPU only)
 **Start command:**
 ```shell
@@ -172,7 +169,7 @@ hub serving start -c deploy/hubserving/ocr_system/config.json
 ```hub serving stop --port/-p XXXX```
 
 - 2. Modify the code in the corresponding `module.py`, `params.py`, and other files as needed.
-For example, to replace the models used by the deployed service, modify the model path parameters `det_model_dir` and `rec_model_dir` in `params.py`. Other related parameters may also need to be modified; please adjust and debug them according to your actual situation. It is recommended to run `module.py` directly to debug after the modification, and to start the service for testing only once prediction runs correctly.
+For example, to replace the models used by the deployed service, modify the model path parameters `det_model_dir` and `rec_model_dir` in `params.py`. Other related parameters may also need to be modified; please adjust and debug them according to your actual situation. **It is strongly recommended to run `module.py` directly to debug after the modification, and to start the service for testing only once prediction runs correctly.**
 
 - 3. Uninstall the old service package
 ```hub uninstall ocr_system```
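Once a modified `module.py` predicts correctly and the service has been restarted, the deployment can be smoke-tested with a small client. A hedged sketch, assuming the default hubserving port (8868) and the `/predict/ocr_system` route; adjust both to match your `config.json`:

```python
import base64
import json

import requests  # third-party HTTP client: pip install requests

# Assumed defaults; change to match the started service.
url = "http://127.0.0.1:8868/predict/ocr_system"

# Encode a test image as base64, the payload format hubserving expects.
with open("test.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf8")

headers = {"Content-Type": "application/json"}
resp = requests.post(url, headers=headers,
                     data=json.dumps({"images": [image_b64]}))
print(resp.json())
```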
@@ -1,10 +1,8 @@
-# Service deployment
-
 English | [简体中文](README.md)
 
 PaddleOCR provides 2 service deployment methods:
 - Based on **HubServing**: already integrated into PaddleOCR ([code](https://github.com/PaddlePaddle/PaddleOCR/tree/develop/deploy/hubserving)). Please follow this tutorial.
 - Based on **PaddleServing**: see the PaddleServing official website for details ([demo](https://github.com/PaddlePaddle/Serving/tree/develop/python/examples/ocr)). It will be integrated into PaddleOCR later as well.
 # Service deployment based on PaddleHub Serving
 
-The service deployment directory includes three service packages: detection, recognition, and the two-stage pipeline. Please select the corresponding service package to install and start according to your needs. The directory is as follows:
+The hubserving deployment directory includes three service packages: detection, recognition, and the two-stage pipeline. Please select the corresponding service package to install and start according to your needs. The directory is as follows:
 ```
 deploy/hubserving/
   └─ ocr_det    detection module service package
@@ -31,11 +29,17 @@ pip3 install paddlehub --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
 
 # Set the environment variable on Linux
 export PYTHONPATH=.
+
 # Set the environment variable on Windows
 SET PYTHONPATH=.
 ```
 
-### 2. Install Service Module
+### 2. Download Inference Model
+Before installing a service module, you need to prepare the inference model and put it in the correct path. By default the ultra-lightweight v1.1 models are used; the default detection model path is `./inference/ch_ppocr_mobile_v1.1_det_infer/`, and the default recognition model path is `./inference/ch_ppocr_mobile_v1.1_rec_infer/`.
+
+**The model paths can be viewed and modified in `params.py`.** More models provided by PaddleOCR can be obtained from the [model list](../../doc/doc_en/models_list.md); you can also use models you have trained yourself.
+
+### 3. Install Service Module
 PaddleOCR provides 3 kinds of service modules; install the required modules according to your needs.
 
 * On Linux platform, the examples are as follows.
@@ -62,7 +66,7 @@ hub install deploy\hubserving\ocr_rec\
 hub install deploy\hubserving\ocr_system\
 ```
 
-### 3. Start service
+### 4. Start service
 #### Way 1. Start with command line parameters (CPU only)
 
 **Start command:**
@@ -204,6 +204,15 @@ def build(config, main_prog, startup_prog, mode):
 
 
 def build_export(config, main_prog, startup_prog):
+    """
+    Build input and output for exporting a checkpoints model to an inference model
+    Args:
+        config(dict): config
+        main_prog: main program
+        startup_prog: startup program
+    Returns:
+        feeded_var_names(list[str]): var names of input for exported inference model
+        target_vars(list[Variable]): output vars for exported inference model
+        fetches_var_name: dict of checkpoints model outputs (includes loss and metrics)
+    """
     with fluid.program_guard(main_prog, startup_prog):
         with fluid.unique_name.guard():
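For context on where these return values go: in fluid 1.x, an exporter typically hands them to `fluid.io.save_inference_model`. A hedged sketch of that step (the wrapper name and save path below are assumptions, not code from this commit):

```python
import paddle.fluid as fluid

def export_inference_model(exe, main_prog, feeded_var_names, target_vars,
                           save_dir="./inference/det_db/"):
    """Persist an inference model from the values build_export() returns."""
    fluid.io.save_inference_model(
        dirname=save_dir,                   # assumed output directory
        feeded_var_names=feeded_var_names,  # e.g. ["image"]
        target_vars=target_vars,            # output Variables of the network
        executor=exe,
        main_program=main_prog)
```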
@@ -246,6 +255,9 @@ def train_eval_det_run(config,
                        train_info_dict,
                        eval_info_dict,
                        is_pruning=False):
+    '''
+    main program of evaluation for detection
+    '''
     train_batch_id = 0
     log_smooth_window = config['Global']['log_smooth_window']
     epoch_num = config['Global']['epoch_num']
@@ -337,6 +349,9 @@ def train_eval_det_run(config,
 
 
 def train_eval_rec_run(config, exe, train_info_dict, eval_info_dict):
+    '''
+    main program of evaluation for recognition
+    '''
     train_batch_id = 0
     log_smooth_window = config['Global']['log_smooth_window']
     epoch_num = config['Global']['epoch_num']
@@ -513,6 +528,7 @@ def train_eval_cls_run(config, exe, train_info_dict, eval_info_dict):
 
 
 def preprocess():
+    # load config from yml file
     FLAGS = ArgsParser().parse_args()
     config = load_config(FLAGS.config)
     merge_config(FLAGS.opt)
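The `--opt` overrides that `merge_config(FLAGS.opt)` applies are dotted `Section.key=value` pairs merged into the loaded YAML config. A minimal sketch of that merge, assuming the dotted-key convention used throughout these configs (illustrative, not PaddleOCR's exact `merge_config`):

```python
def merge_overrides(config, opts):
    """Apply dotted-key overrides, e.g. {"Global.epoch_num": 500}, in place."""
    for dotted_key, value in opts.items():
        node = config
        *parents, leaf = dotted_key.split(".")
        for key in parents:
            node = node.setdefault(key, {})  # descend, creating dicts as needed
        node[leaf] = value
    return config

cfg = {"Global": {"use_gpu": True, "epoch_num": 1200}}
merge_overrides(cfg, {"Global.epoch_num": 500})
assert cfg["Global"]["epoch_num"] == 500
```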
@@ -522,6 +538,7 @@ def preprocess():
     use_gpu = config['Global']['use_gpu']
     check_gpu(use_gpu)
 
+    # check whether the set algorithm belongs to the supported algorithm list
     alg = config['Global']['algorithm']
     assert alg in [
         'EAST', 'DB', 'SAST', 'Rosetta', 'CRNN', 'STARNet', 'RARE', 'SRN', 'CLS'
|
@ -46,6 +46,7 @@ from paddle.fluid.contrib.model_stat import summary
|
|||
|
||||
|
||||
def main():
|
||||
# build train program
|
||||
train_build_outputs = program.build(
|
||||
config, train_program, startup_program, mode='train')
|
||||
train_loader = train_build_outputs[0]
|
||||
|
@@ -54,6 +55,7 @@ def main():
     train_opt_loss_name = train_build_outputs[3]
     model_average = train_build_outputs[-1]
 
+    # build eval program
     eval_program = fluid.Program()
     eval_build_outputs = program.build(
         config, eval_program, startup_program, mode='eval')
@@ -61,9 +63,11 @@ def main():
     eval_fetch_varname_list = eval_build_outputs[2]
     eval_program = eval_program.clone(for_test=True)
 
+    # initialize train reader
     train_reader = reader_main(config=config, mode="train")
     train_loader.set_sample_list_generator(train_reader, places=place)
 
+    # initialize eval reader
     eval_reader = reader_main(config=config, mode="eval")
 
     exe = fluid.Executor(place)
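The annotated code follows fluid 1.x's build-then-clone pattern: train and eval share a startup program, and the eval program is cloned with `for_test=True` so layers such as dropout and batch norm switch to inference behavior. A minimal standalone sketch of the pattern (toy network, not this repo's model):

```python
import paddle.fluid as fluid

startup_program = fluid.Program()
eval_program = fluid.Program()

# Build the eval network inside its own program guard.
with fluid.program_guard(eval_program, startup_program):
    image = fluid.data(name="image", shape=[None, 3, 640, 640], dtype="float32")
    feat = fluid.layers.conv2d(image, num_filters=8, filter_size=3)

# Clone for inference so train-only ops behave correctly.
eval_program = eval_program.clone(for_test=True)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(startup_program)
```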