Merge branch 'release/2.0' into doc_cp
This commit is contained in: commit 9deee44a7a

@ -1,6 +1,8 @@
# Server-side C++ inference

In this tutorial, we will introduce the detailed steps of deploying the PaddleOCR ultra-lightweight Chinese detection and recognition models on the server side.
This chapter introduces the C++ deployment method for PaddleOCR models; the corresponding Python inference deployment method is described in this [document](../../doc/doc_ch/inference.md).
C++ outperforms Python in computational performance, so C++ deployment is used in most CPU and GPU deployment scenarios. This section introduces how to set up the C++ environment under Linux\Windows (CPU\GPU) and complete
PaddleOCR model deployment.

## 1. Prepare the environment
@ -1,7 +1,9 @@

# Server-side C++ inference

In this tutorial, we will introduce the detailed steps of deploying PaddleOCR ultra-lightweight Chinese detection and recognition models on the server side.
This chapter introduces the C++ deployment method of the PaddleOCR model; the corresponding Python inference deployment method is described in this [document](../../doc/doc_ch/inference.md).
C++ outperforms Python in computational performance. Therefore, C++ deployment is used in most CPU and GPU deployment scenarios.
This section will introduce how to configure the C++ environment in a Linux\Windows (CPU\GPU) environment and complete
PaddleOCR model deployment.

## 1. Prepare the environment
@ -2,10 +2,11 @@

# Inference based on the Python prediction engine

The inference model (the model saved by `paddle.jit.save`)
is generally the frozen model saved after training, mostly used for inference deployment. The model saved during training is the checkpoints model, which stores the model parameters and is mostly used for resuming training.
Compared with the checkpoints model, the inference model additionally stores the structural information of the model. It performs better in inference deployment and accelerated inference, is flexible and convenient, and is suitable for integration with real systems.
is generally a frozen model that stores both the model structure and the model parameters in files, and is mostly used in inference deployment scenarios.
The model saved during training is the checkpoints model, which stores only the model parameters and is mostly used for resuming training.
Compared with the checkpoints model, the inference model additionally stores the structural information of the model. It performs better in inference deployment and accelerated inference, is flexible and convenient, and is suitable for integration with real systems.

Next, we first introduce how to convert a trained model into an inference model, and then introduce text detection, the text angle classifier, text recognition, and the three in series based on the prediction engine.
Next, we first introduce how to convert a trained model into an inference model, and then introduce the inference methods for text detection, the text angle classifier, text recognition, and the three in series on CPU and GPU.

- [1. Convert the training model to an inference model](#训练模型转inference模型)
@ -394,6 +394,7 @@ def preprocess(is_train=False):

    logger = get_logger(name='root', log_file=log_file)
    if config['Global']['use_visualdl']:
        from visualdl import LogWriter
        save_model_dir = config['Global']['save_model_dir']
        vdl_writer_path = '{}/vdl/'.format(save_model_dir)
        os.makedirs(vdl_writer_path, exist_ok=True)
        vdl_writer = LogWriter(logdir=vdl_writer_path)
train.sh

@ -1,2 +1,2 @@
# recommended paddle.__version__ == 2.0.0
python3 -m paddle.distributed.launch --gpus '0,1,2,3,4,5,6,7' tools/train.py -c configs/rec/rec_mv3_none_bilstm_ctc.yml
python3 -m paddle.distributed.launch --log_dir=./debug/ --gpus '0,1,2,3,4,5,6,7' tools/train.py -c configs/rec/rec_mv3_none_bilstm_ctc.yml