Merge pull request #616 from grasswolfs/add_faq_0826

update the FAQ and revise the readme doc

commit 75f3fe8def

README.md (298 lines changed)

@@ -1,231 +1,209 @@
[English](README_en.md) | Simplified Chinese
## Introduction

PaddleOCR aims to create rich, leading, and practical OCR tools that help users train better models and apply them in practice.
**Recent updates**

- 2020.8.26 Added 83 frequently asked OCR questions with answers; see the [FAQ](./doc/doc_ch/FAQ.md)
- 2020.8.24 Support installing and using PaddleOCR via a whl package; see the [PaddleOCR package usage guide](./doc/doc_ch/whl.md)
- 2020.8.21 Posted the replay and PPT of the August 18 Bilibili live lecture (Lesson 2, an easy-to-learn, easy-to-use OCR toolkit); [link](https://aistudio.baidu.com/aistudio/education/group/info/1519)
- 2020.8.16 Open-sourced the text detection algorithm [SAST](https://arxiv.org/abs/1908.05498) and the text recognition algorithm [SRN](https://arxiv.org/abs/2003.12294)
- 2020.7.23 Posted the replay and PPT of the July 21 Bilibili live lecture (Lesson 1, a full walkthrough of the PaddleOCR open-source toolkit); [link](https://aistudio.baidu.com/aistudio/course/introduce/1519)
- 2020.7.15 Added mobile demos based on EasyEdge and Paddle-Lite, supporting iOS and Android
- [more](./doc/doc_ch/update.md)
## Features

- Ultra-lightweight Chinese OCR model, total model size only 8.6M
- A single model supports recognition of mixed Chinese, English, and digits, vertical text, and long text
- Detection model DB (4.1M) + recognition model CRNN (4.5M)
- Practical general-purpose Chinese OCR model
- Multiple inference and deployment options, including serving and on-device deployment
- Multiple text detection training algorithms: EAST, DB, SAST
- Multiple text recognition training algorithms: Rosetta, CRNN, STAR-Net, RARE, SRN
- Runs on Linux, Windows, macOS, and other systems
## Quick Experience

<div align="center">
<img src="doc/imgs_results/11.jpg" width="800">
</div>

The image above shows the output of the ultra-lightweight Chinese OCR model; see the [visualization page](./doc/doc_ch/visualization.md) for more examples.

- Online demo of the ultra-lightweight Chinese OCR: https://www.paddlepaddle.org.cn/hub/scene/ocr
- Mobile demo (based on EasyEdge and Paddle-Lite, supports iOS and Android): [QR codes for the installer packages](https://ai.baidu.com/easyedge/app/openSource?from=paddlelite)

Android users can also scan the QR code below to install the demo app.

<div align="center">
<img src="./doc/ocr-android-easyedge.png" width = "200" height = "200" />
</div>

- [**Quick start with the Chinese OCR models**](./doc/doc_ch/quickstart.md)
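Complementing the demo links, here is a minimal Python sketch of running the models once the whl package mentioned in the recent updates is installed. The `PaddleOCR()` call and the nested result layout `[[box, (text, score)], ...]` reflect the whl interface of this era and should be checked against the linked usage guide; the image path and the `extract_text` helper are our own illustrations.

```python
# Sketch: install with `pip install paddleocr` (see doc/doc_ch/whl.md),
# then run detection + recognition on one image. The import is guarded so
# the parsing helper below works even without the package installed.
try:
    from paddleocr import PaddleOCR
except ImportError:
    PaddleOCR = None

def extract_text(result):
    """Flatten PaddleOCR's [[box, (text, score)], ...] output into (text, score) pairs."""
    return [(text, score) for _box, (text, score) in result]

if PaddleOCR is not None:
    ocr = PaddleOCR()                    # downloads the ultra-lightweight models on first use
    result = ocr.ocr("doc/imgs/11.jpg")  # hypothetical sample image path
    for text, score in extract_text(result):
        print(f"{score:.3f}  {text}")

# Offline demonstration of the helper on a result-shaped sample:
sample = [[[[24, 36], [304, 36], [304, 72], [24, 72]], ("纯臻营养护发素", 0.964)]]
print(extract_text(sample))
```

The helper simply discards the quadrilateral boxes and keeps the recognized strings with their confidences.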
<a name="Supported-Chinese-model-list"></a>
## Chinese OCR model list

|Model name|Description|Detection model|Recognition model|Recognition model with space support|
|-|-|-|-|-|
|chinese_db_crnn_mobile|Ultra-lightweight Chinese OCR model|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance.tar)|
|chinese_db_crnn_server|General Chinese OCR model|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance.tar)|
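The model links above are plain tarballs. A minimal sketch of fetching and unpacking one (the URL comes from the table; `fetch_and_unpack` is an illustrative helper of our own, not an official PaddleOCR API, and assumes network access):

```python
import os
import tarfile
import urllib.request

MODEL_URL = "https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar"

def archive_name(url: str) -> str:
    # ".../ch_det_mv3_db_infer.tar" -> "ch_det_mv3_db_infer"
    base = os.path.basename(url)
    return base[:-4] if base.endswith(".tar") else base

def fetch_and_unpack(url: str, dest_dir: str = "inference") -> str:
    """Download a model tarball and extract it under dest_dir."""
    os.makedirs(dest_dir, exist_ok=True)
    local_tar = os.path.join(dest_dir, os.path.basename(url))
    urllib.request.urlretrieve(url, local_tar)
    with tarfile.open(local_tar) as tf:
        tf.extractall(dest_dir)
    return os.path.join(dest_dir, archive_name(url))

print(archive_name(MODEL_URL))
```

The extracted directory name matches the tarball name, which is what the inference scripts expect to point at.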
## Tutorials

- [Quick installation](./doc/doc_ch/installation.md)
- [Quick start with the Chinese OCR models](./doc/doc_ch/quickstart.md)
- Algorithms
    - [Text detection](#文本检测算法)
    - [Text recognition](#文本识别算法)
- Model training/evaluation
    - [Text detection](./doc/doc_ch/detection.md)
    - [Text recognition](./doc/doc_ch/recognition.md)
    - [yml configuration files](./doc/doc_ch/config.md)
    - [Tips for Chinese OCR training and prediction](./doc/doc_ch/tricks.md)
- Inference and deployment
    - [Python inference engine](./doc/doc_ch/inference.md)
    - [C++ inference engine](./deploy/cpp_infer/readme.md)
    - [Serving deployment](./doc/doc_ch/serving.md)
    - [On-device deployment](./deploy/lite/readme.md)
    - Model quantization and compression (coming soon)
    - [Benchmark](./doc/doc_ch/benchmark.md)
- Datasets
    - [General Chinese/English OCR datasets](./doc/doc_ch/datasets.md)
    - [Handwritten Chinese OCR datasets](./doc/doc_ch/handwritten_datasets.md)
    - [Vertical-domain and multilingual OCR datasets](./doc/doc_ch/vertical_and_multilingual_datasets.md)
    - [Common data annotation tools](./doc/doc_ch/data_annotation.md)
    - [Common data synthesis tools](./doc/doc_ch/data_synthesis.md)
- Visualization
    - [Ultra-lightweight Chinese OCR visualization](#超轻量级中文OCR效果展示)
    - [General Chinese OCR visualization](#通用中文OCR效果展示)
    - [Chinese OCR visualization with space support](#支持空格的中文OCR效果展示)
- FAQ
    - [[Selected] 10 essential OCR questions](./doc/doc_ch/FAQ.md)
    - [[Theory] 21 general OCR questions](./doc/doc_ch/FAQ.md)
    - [[Practice] 53 PaddleOCR questions](./doc/doc_ch/FAQ.md)
- [Technical discussion group](#欢迎加入PaddleOCR技术交流群)
- [References](./doc/doc_ch/reference.md)
- [License](#许可证书)
- [Contribution](#贡献代码)
<a name="算法介绍"></a>
## Algorithms

<a name="文本检测算法"></a>
### 1. Text detection algorithms

PaddleOCR's open-source text detection algorithms:
- [x] EAST([paper](https://arxiv.org/abs/1704.03155))
- [x] DB([paper](https://arxiv.org/abs/1911.08947))
- [x] SAST([paper](https://arxiv.org/abs/1908.05498)) (developed in-house at Baidu)
On the ICDAR2015 public text detection dataset, the results are as follows:

|Model|Backbone|Precision|Recall|Hmean|Download link|
|-|-|-|-|-|-|
|EAST|ResNet50_vd|88.18%|85.51%|86.82%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_east.tar)|
|EAST|MobileNetV3|81.67%|79.83%|80.74%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_east.tar)|
|DB|ResNet50_vd|83.79%|80.65%|82.19%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)|
|DB|MobileNetV3|75.92%|73.18%|74.53%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_db.tar)|
|SAST|ResNet50_vd|92.18%|82.96%|87.33%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)|
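The Hmean column is the harmonic mean (F-measure) of precision and recall, which is why SAST leads despite DB's higher recall on some rows. A quick check against the table:

```python
def hmean(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the Hmean / F1 column)."""
    return 2 * precision * recall / (precision + recall)

# Reproduce the EAST ResNet50_vd row: P=88.18%, R=85.51% -> Hmean 86.82%
print(round(100 * hmean(0.8818, 0.8551), 2))
# And the SAST ResNet50_vd row: P=92.18%, R=82.96% -> Hmean 87.33%
print(round(100 * hmean(0.9218, 0.8296), 2))
```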
On the Total-Text public text detection dataset, the results are as follows:

|Model|Backbone|Precision|Recall|Hmean|Download link|
|-|-|-|-|-|-|
|SAST|ResNet50_vd|88.74%|79.80%|84.03%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)|
**Note:** Additional public datasets such as ICDAR2013, ICDAR2017, COCO-Text, and ArT were added when training the SAST model. The English public datasets, organized into the format used by PaddleOCR, can be downloaded from [Baidu Drive](https://pan.baidu.com/s/12cPnZcVuV1zn5DOd4mqjVw) (extraction code: 2bpi).
The Chinese detection models below were trained on 30k images from the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/datasets.md#1icdar2019-lsvt) street-view dataset; the related configurations and pre-trained models are as follows:

|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|Ultra-lightweight Chinese model|MobileNetV3|det_mv3_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|
|General Chinese OCR model|ResNet50_vd|det_r50_vd_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|
* Note: training and evaluating the DB models above requires setting the post-processing parameters box_thresh=0.6 and unclip_ratio=1.5; when training on other datasets or with other models, these two parameters can be tuned for better results.
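For context on what unclip_ratio controls: DB's post-processing expands each shrunk text region outward by an offset distance D = A × r / L, where A is the polygon area, L its perimeter, and r the unclip ratio (as described in the DB paper). A minimal sketch of that offset computation, with an illustrative box:

```python
def unclip_distance(area: float, perimeter: float, unclip_ratio: float = 1.5) -> float:
    """Offset distance D = A * r / L used by DB post-processing to expand
    a detected text polygon back to full size."""
    return area * unclip_ratio / perimeter

# A 20x4 text box: area 80, perimeter 48 -> each side is pushed out by 2.5 px
print(unclip_distance(80, 48, 1.5))
```

Raising unclip_ratio grows D, producing looser boxes; lowering box_thresh instead admits weaker detection candidates, so the two parameters trade off differently.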
For training and using PaddleOCR's text detection algorithms, please refer to the [text detection section of model training/evaluation](./doc/doc_ch/detection.md).
<a name="文本识别算法"></a>
### 2. Text recognition algorithms

PaddleOCR's open-source text recognition algorithms:
- [x] CRNN([paper](https://arxiv.org/abs/1507.05717))
- [x] Rosetta([paper](https://arxiv.org/abs/1910.05085))
- [x] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html))
- [x] RARE([paper](https://arxiv.org/abs/1603.03915v1))
- [x] SRN([paper](https://arxiv.org/abs/2003.12294)) (developed in-house at Baidu)
Following the [DTRB](https://arxiv.org/abs/1904.01906) training and evaluation protocol (training on MJSynth and SynthText, evaluating on IIIT, SVT, IC03, IC13, IC15, SVTP, and CUTE), the results are as follows:

|Model|Backbone|Avg Accuracy|Module combination|Download link|
|-|-|-|-|-|
|Rosetta|Resnet34_vd|80.24%|rec_r34_vd_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_none_ctc.tar)|
|Rosetta|MobileNetV3|78.16%|rec_mv3_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_none_ctc.tar)|
|CRNN|Resnet34_vd|82.20%|rec_r34_vd_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_bilstm_ctc.tar)|
|CRNN|MobileNetV3|79.37%|rec_mv3_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_bilstm_ctc.tar)|
|STAR-Net|Resnet34_vd|83.93%|rec_r34_vd_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_ctc.tar)|
|STAR-Net|MobileNetV3|81.56%|rec_mv3_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_ctc.tar)|
|RARE|Resnet34_vd|84.90%|rec_r34_vd_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_attn.tar)|
|RARE|MobileNetV3|83.32%|rec_mv3_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_attn.tar)|
|SRN|Resnet50_vd_fpn|88.33%|rec_r50fpn_vd_none_srn|[Download link](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar)|
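The "Module combination" column encodes each model along DTRB's stages: backbone, spatial transform, sequence module, and prediction head. One way to read those names (an illustrative helper of our own; the backbone list is inferred from the table above, not an official registry):

```python
# Backbone abbreviations seen in the table above (assumed mapping, not an API)
KNOWN_BACKBONES = ["r34_vd", "mv3", "r50fpn_vd"]

def parse_model_name(name: str) -> dict:
    """Split e.g. 'rec_r34_vd_tps_bilstm_ctc' into its DTRB-style stages."""
    assert name.startswith("rec_")
    rest = name[len("rec_"):]
    backbone = next(b for b in KNOWN_BACKBONES if rest.startswith(b + "_"))
    stages = rest[len(backbone) + 1:].split("_")
    if len(stages) == 3:
        transform, sequence, prediction = stages
    else:  # SRN-style names omit the sequence stage
        transform, prediction = stages
        sequence = None
    return {"backbone": backbone, "transform": transform,
            "sequence": sequence, "prediction": prediction}

print(parse_model_name("rec_r34_vd_tps_bilstm_ctc"))
```

So `rec_r34_vd_tps_bilstm_ctc` is a Resnet34_vd backbone with TPS rectification, a BiLSTM sequence module, and a CTC head, i.e. the STAR-Net row.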
**Note:** The SRN model was trained with a data-perturbation method to augment the two training sets mentioned above; the augmented data can be downloaded from [Baidu Drive](https://pan.baidu.com/s/1-HSZ-ZVdqBF2HaBZ5pRAKA) (extraction code: y3ry).

The original paper reports an average accuracy of 89.74% with two-stage training; PaddleOCR uses one-stage training and reaches 88.33%. Both sets of pre-trained weights are included in the [download link](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar).
The Chinese recognition models were trained on 300k images cropped from the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/datasets.md#1icdar2019-lsvt) street-view dataset according to the ground truth, with position calibration applied. In addition, 5 million synthetic images were generated from the LSVT corpus. The related configurations and pre-trained models are as follows:

|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|Ultra-lightweight Chinese model|MobileNetV3|rec_chinese_lite_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|
|General Chinese OCR model|Resnet34_vd|rec_chinese_common_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|
For training and using PaddleOCR's text recognition algorithms, please refer to the [text recognition section of model training/evaluation](./doc/doc_ch/recognition.md).
## Visualization

<a name="超轻量级中文OCR效果展示"></a>
### 1. Ultra-lightweight Chinese OCR visualization [more](./doc/doc_ch/visualization.md)
<div align="center">
<img src="doc/imgs_results/1.jpg" width="800">
</div>
<a name="通用中文OCR效果展示"></a>
### 2. General Chinese OCR visualization [more](./doc/doc_ch/visualization.md)
<div align="center">
<img src="doc/imgs_results/chinese_db_crnn_server/11.jpg" width="800">
</div>
<a name="支持空格的中文OCR效果展示"></a>
### 3. Chinese OCR visualization with space support [more](./doc/doc_ch/visualization.md)
<div align="center">
<img src="doc/imgs_results/chinese_db_crnn_server/en_paper.jpg" width="800">
</div>
<a name="欢迎加入PaddleOCR技术交流群"></a>
## Join the PaddleOCR technical discussion group

Scan the QR code below and fill out the questionnaire to receive the QR code for joining the group, along with a collection of OCR training tips.
<div align="center">
<img src="./doc/joinus.jpg" width = "200" height = "200" />
</div>
<a name="许可证书"></a>
## License

This project is released under the <a href="https://github.com/PaddlePaddle/PaddleOCR/blob/master/LICENSE">Apache 2.0 license</a>.
<a name="贡献代码"></a>
## Contribution

We welcome all contributions to PaddleOCR and greatly appreciate your feedback.
- Many thanks to [Khanh Tran](https://github.com/xxxpsyduck) and [Karl Horky](https://github.com/karlhorky) for contributing and revising the English documentation.
- Many thanks to [zhangxin](https://github.com/ZhangXinNan) ([Blog](https://blog.csdn.net/sdlypyzq)) for contributing a new visualization method, adding .gitignore, and removing the need to set PYTHONPATH manually.
- Many thanks to [lyl120117](https://github.com/lyl120117) for contributing the code that prints the network structure.
- Thanks to [xiangyubo](https://github.com/xiangyubo) for contributing the handwritten Chinese OCR dataset.
- Thanks to [authorfu](https://github.com/authorfu) and [xiadeye](https://github.com/xiadeye) for contributing the Android and iOS demo code, respectively.
- Thanks to [BeyondYourself](https://github.com/BeyondYourself) for many great suggestions and for simplifying parts of the code style.
- Thanks to [tangmq](https://gitee.com/tangmq) for adding Dockerized deployment to PaddleOCR, supporting quick release of callable RESTful API services.
README_cn.md (228 lines deleted)

@@ -1,228 +0,0 @@
[English](README.md) | 简体中文
|
|
||||||
|
|
||||||
## 简介
|
|
||||||
PaddleOCR旨在打造一套丰富、领先、且实用的OCR工具库,助力使用者训练出更好的模型,并应用落地。
|
|
||||||
|
|
||||||
**近期更新**
|
|
||||||
- 2020.8.24 支持通过whl包安装使用PaddleOCR,具体参考[Paddleocr Package使用说明](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/whl.md)
|
|
||||||
- 2020.8.21 更新8月18日B站直播课回放和PPT,课节2,易学易用的OCR工具大礼包,[获取地址](https://aistudio.baidu.com/aistudio/education/group/info/1519)
|
|
||||||
- 2020.8.16 开源文本检测算法[SAST](https://arxiv.org/abs/1908.05498)和文本识别算法[SRN](https://arxiv.org/abs/2003.12294)
|
|
||||||
- 2020.7.23 发布7月21日B站直播课回放和PPT,课节1,PaddleOCR开源大礼包全面解读,[获取地址](https://aistudio.baidu.com/aistudio/course/introduce/1519)
|
|
||||||
- 2020.7.15 添加基于EasyEdge和Paddle-Lite的移动端DEMO,支持iOS和Android系统
|
|
||||||
- [more](./doc/doc_ch/update.md)
|
|
||||||
|
|
||||||
|
|
||||||
## 特性
|
|
||||||
- 超轻量级中文OCR模型,总模型仅8.6M
|
|
||||||
- 单模型支持中英文数字组合识别、竖排文本识别、长文本识别
|
|
||||||
- 检测模型DB(4.1M)+识别模型CRNN(4.5M)
|
|
||||||
- 实用通用中文OCR模型
|
|
||||||
- 多种预测推理部署方案,包括服务部署和端侧部署
|
|
||||||
- 多种文本检测训练算法,EAST、DB
|
|
||||||
- 多种文本识别训练算法,Rosetta、CRNN、STAR-Net、RARE
|
|
||||||
- 可运行于Linux、Windows、MacOS等多种系统
|
|
||||||
|
|
||||||
## 快速体验
|
|
||||||
|
|
||||||
<div align="center">
|
|
||||||
<img src="doc/imgs_results/11.jpg" width="800">
|
|
||||||
</div>
|
|
||||||
|
|
||||||
上图是超轻量级中文OCR模型效果展示,更多效果图请见[效果展示页面](./doc/doc_ch/visualization.md)。
|
|
||||||
|
|
||||||
- 超轻量级中文OCR在线体验地址:https://www.paddlepaddle.org.cn/hub/scene/ocr
|
|
||||||
- 移动端DEMO体验(基于EasyEdge和Paddle-Lite, 支持iOS和Android系统):[安装包二维码获取地址](https://ai.baidu.com/easyedge/app/openSource?from=paddlelite)
|
|
||||||
|
|
||||||
Android手机也可以扫描下面二维码安装体验。
|
|
||||||
|
|
||||||
<div align="center">
|
|
||||||
<img src="./doc/ocr-android-easyedge.png" width = "200" height = "200" />
|
|
||||||
</div>
|
|
||||||
|
|
||||||
- [**中文OCR模型快速使用**](./doc/doc_ch/quickstart.md)
|
|
||||||
|
|
||||||
|
|
||||||
## 中文OCR模型列表
|
|
||||||
|
|
||||||
|模型名称|模型简介|检测模型地址|识别模型地址|支持空格的识别模型地址|
|
|
||||||
|-|-|-|-|-|
|
|
||||||
|chinese_db_crnn_mobile|超轻量级中文OCR模型|[inference模型](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|[inference模型](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|[inference模型](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance.tar)
|
|
||||||
|chinese_db_crnn_server|通用中文OCR模型|[inference模型](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|[inference模型](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|[inference模型](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance_infer.tar) / [预训练模型](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance.tar)
|
|
||||||
|
|
||||||
## Tutorials
- [Quick installation](./doc/doc_ch/installation.md)
- [Quick start with the Chinese OCR models](./doc/doc_ch/quickstart.md)
- Algorithm introduction
    - [Text detection](#文本检测算法)
    - [Text recognition](#文本识别算法)
    - [End-to-end OCR](#端到端OCR算法)
- Model training/evaluation
    - [Text detection](./doc/doc_ch/detection.md)
    - [Text recognition](./doc/doc_ch/recognition.md)
    - [Yml configuration](./doc/doc_ch/config.md)
    - [Tricks for Chinese OCR training and prediction](./doc/doc_ch/tricks.md)
- Deployment
    - [Python inference](./doc/doc_ch/inference.md)
    - [C++ inference](./deploy/cpp_infer/readme.md)
    - [Serving](./doc/doc_ch/serving.md)
    - [Mobile](./deploy/lite/readme.md)
    - Model quantization and compression (coming soon)
    - [Benchmark](./doc/doc_ch/benchmark.md)
- Datasets
    - [General Chinese/English OCR datasets](./doc/doc_ch/datasets.md)
    - [Handwritten Chinese OCR datasets](./doc/doc_ch/handwritten_datasets.md)
    - [Vertical and multilingual OCR datasets](./doc/doc_ch/vertical_and_multilingual_datasets.md)
    - [Data annotation tools](./doc/doc_ch/data_annotation.md)
    - [Data synthesis tools](./doc/doc_ch/data_synthesis.md)
- [FAQ](#FAQ)
- Visualization
    - [Ultra-lightweight Chinese OCR visualization](#超轻量级中文OCR效果展示)
    - [General Chinese OCR visualization](#通用中文OCR效果展示)
    - [Chinese OCR visualization with space recognition](#支持空格的中文OCR效果展示)
- [Community](#欢迎加入PaddleOCR技术交流群)
- [References](./doc/doc_ch/reference.md)
- [License](#许可证书)
- [Contribution](#贡献代码)

<a name="算法介绍"></a>
## Algorithm Introduction

<a name="文本检测算法"></a>
### 1. Text Detection Algorithms

PaddleOCR open-sources the following text detection algorithms:
- [x] EAST([paper](https://arxiv.org/abs/1704.03155))
- [x] DB([paper](https://arxiv.org/abs/1911.08947))
- [x] SAST([paper](https://arxiv.org/abs/1908.05498)) (developed by Baidu)

On the ICDAR2015 public text detection dataset, the results are as follows:

|Model|Backbone|Precision|Recall|Hmean|Download link|
|-|-|-|-|-|-|
|EAST|ResNet50_vd|88.18%|85.51%|86.82%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_east.tar)|
|EAST|MobileNetV3|81.67%|79.83%|80.74%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_east.tar)|
|DB|ResNet50_vd|83.79%|80.65%|82.19%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)|
|DB|MobileNetV3|75.92%|73.18%|74.53%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_db.tar)|
|SAST|ResNet50_vd|92.18%|82.96%|87.33%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)|

On the Total-Text public text detection dataset, the results are as follows:

|Model|Backbone|Precision|Recall|Hmean|Download link|
|-|-|-|-|-|-|
|SAST|ResNet50_vd|88.74%|79.80%|84.03%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)|

**Note:** The SAST model was additionally tuned with public datasets such as icdar2013, icdar2017, COCO-Text, and ArT. The English public datasets, organized in the format used by PaddleOCR, can be downloaded from [Baidu Drive](https://pan.baidu.com/s/12cPnZcVuV1zn5DOd4mqjVw) (extraction code: 2bpi).

Using 30k images from the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/datasets.md#1icdar2019-lsvt) street view dataset, the configuration files and pre-trained models for the Chinese detection models are as follows:

|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|Ultra-lightweight Chinese model|MobileNetV3|det_mv3_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|
|General Chinese OCR model|ResNet50_vd|det_r50_vd_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|

* Note: for training and evaluation of the above DB models, the post-processing parameters box_thresh=0.6 and unclip_ratio=1.5 need to be set. When training on different datasets or with different models, these two parameters can be tuned for better results.

For the training and use of PaddleOCR text detection algorithms, please refer to [the text detection part of Model training/evaluation](./doc/doc_ch/detection.md).

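The DB post-processing thresholds mentioned in the note above live in the post-processing section of the detection yml config (e.g. `det_mv3_db.yml`). The fragment below is only an illustrative sketch; the exact key names and defaults may differ between PaddleOCR versions, so check the shipped config:

```yaml
PostProcess:
    type: DBPostProcess
    thresh: 0.3          # binarization threshold on the probability map
    box_thresh: 0.6      # boxes with mean score below this are discarded
    unclip_ratio: 1.5    # expansion ratio when unclipping the shrunk polygon
```

At prediction time, the same two values can usually also be overridden on the command line of the Python inference scripts (e.g. `--det_db_box_thresh=0.6 --det_db_unclip_ratio=1.5`).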
<a name="文本识别算法"></a>
### 2. Text Recognition Algorithms

PaddleOCR open-sources the following text recognition algorithms:
- [x] CRNN([paper](https://arxiv.org/abs/1507.05717))
- [x] Rosetta([paper](https://arxiv.org/abs/1910.05085))
- [x] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html))
- [x] RARE([paper](https://arxiv.org/abs/1603.03915v1))
- [x] SRN([paper](https://arxiv.org/abs/2003.12294)) (developed by Baidu)

Following the training and evaluation protocol of [DTRB](https://arxiv.org/abs/1904.01906), the models are trained on the MJSynth and SynthText datasets and evaluated on IIIT, SVT, IC03, IC13, IC15, SVTP, and CUTE. The results are as follows:

|Model|Backbone|Avg Accuracy|Saved model name|Download link|
|-|-|-|-|-|
|Rosetta|Resnet34_vd|80.24%|rec_r34_vd_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_none_ctc.tar)|
|Rosetta|MobileNetV3|78.16%|rec_mv3_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_none_ctc.tar)|
|CRNN|Resnet34_vd|82.20%|rec_r34_vd_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_bilstm_ctc.tar)|
|CRNN|MobileNetV3|79.37%|rec_mv3_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_bilstm_ctc.tar)|
|STAR-Net|Resnet34_vd|83.93%|rec_r34_vd_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_ctc.tar)|
|STAR-Net|MobileNetV3|81.56%|rec_mv3_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_ctc.tar)|
|RARE|Resnet34_vd|84.90%|rec_r34_vd_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_attn.tar)|
|RARE|MobileNetV3|83.32%|rec_mv3_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_attn.tar)|
|SRN|Resnet50_vd_fpn|88.33%|rec_r50fpn_vd_none_srn|[Download link](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar)|

**Note:** The SRN model applies data augmentation to the two training sets mentioned above; the augmented data can be downloaded from [Baidu Drive](https://pan.baidu.com/s/1-HSZ-ZVdqBF2HaBZ5pRAKA) (extraction code: y3ry).

The original paper reports an average accuracy of 89.74% with two-stage training; PaddleOCR uses one-stage training and reaches 88.33%. Both pre-trained weights are included in the [download link](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar).

Using the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/datasets.md#1icdar2019-lsvt) street view dataset, 300k training images are cropped from the original photos according to the ground truth and position-calibrated. In addition, 5 million synthetic images are generated from the LSVT corpus to train the Chinese models. The configuration files and pre-trained models are as follows:

|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|Ultra-lightweight Chinese model|MobileNetV3|rec_chinese_lite_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|
|General Chinese OCR model|Resnet34_vd|rec_chinese_common_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|

For the training and use of PaddleOCR text recognition algorithms, please refer to [the text recognition part of Model training/evaluation](./doc/doc_ch/recognition.md).

<a name="端到端OCR算法"></a>
### 3. End-to-End OCR Algorithms
- [ ] [End2End-PSL](https://arxiv.org/abs/1909.07808) (developed by Baidu, coming soon)

## Visualization

<a name="超轻量级中文OCR效果展示"></a>
### 1. Ultra-lightweight Chinese OCR visualization [more](./doc/doc_ch/visualization.md)

<div align="center">
<img src="doc/imgs_results/1.jpg" width="800">
</div>

<a name="通用中文OCR效果展示"></a>
### 2. General Chinese OCR visualization [more](./doc/doc_ch/visualization.md)

<div align="center">
<img src="doc/imgs_results/chinese_db_crnn_server/11.jpg" width="800">
</div>

<a name="支持空格的中文OCR效果展示"></a>
### 3. Chinese OCR visualization with space recognition [more](./doc/doc_ch/visualization.md)

<div align="center">
<img src="doc/imgs_results/chinese_db_crnn_server/en_paper.jpg" width="800">
</div>

<a name="FAQ"></a>
## FAQ

1. **Error when converting the attention recognition model: KeyError: 'predict'**

    This issue has been fixed; please update to the latest code.

2. **About inference speed**

    When an image contains a lot of text, prediction time increases. You can use `--rec_batch_num` to set a smaller recognition batch size. The default value is 30, which can be lowered to 10 or another value.

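The effect of `--rec_batch_num` is simply to control how many detected text crops are fed to the recognition model per forward pass. A minimal, self-contained sketch of this batching (with plain integers standing in for cropped text-line images, and no PaddleOCR code involved):

```python
def batch_crops(crops, rec_batch_num=30):
    """Yield successive mini-batches of detected text crops."""
    for i in range(0, len(crops), rec_batch_num):
        yield crops[i:i + rec_batch_num]

# With 65 detected text lines and rec_batch_num=10, the recognizer
# runs 7 smaller forward passes instead of a few very large ones.
batches = list(batch_crops(list(range(65)), rec_batch_num=10))
print([len(b) for b in batches])  # [10, 10, 10, 10, 10, 10, 5]
```

A smaller batch lowers peak memory per pass, at the cost of more passes.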
3. **Service deployment and mobile deployment**

    The Serving-based service deployment solution and the Paddle Lite-based mobile deployment solution are expected to be released in mid-to-late June. Stay tuned.

4. **Release time of self-developed algorithms**

    The self-developed algorithms SAST, SRN, and End2End-PSL will be released between July and August. Stay tuned.

[more](./doc/doc_ch/FAQ.md)

<a name="欢迎加入PaddleOCR技术交流群"></a>
## Join the PaddleOCR Technical Discussion Group

Please scan the QR code below and fill in the questionnaire to get the group QR code and a collection of OCR tuning tips.

<div align="center">
<img src="./doc/joinus.jpg" width = "200" height = "200" />
</div>

<a name="许可证书"></a>
## License

This project is released under the <a href="https://github.com/PaddlePaddle/PaddleOCR/blob/master/LICENSE">Apache 2.0 license</a>.

<a name="贡献代码"></a>
## Contribution

We welcome all contributions to PaddleOCR and greatly appreciate your feedback.

- Many thanks to [Khanh Tran](https://github.com/xxxpsyduck) and [Karl Horky](https://github.com/karlhorky) for contributing and revising the English documentation.
- Many thanks to [zhangxin](https://github.com/ZhangXinNan) ([Blog](https://blog.csdn.net/sdlypyzq)) for contributing a new visualization method, adding .gitignore, and removing the need to set the PYTHONPATH environment variable manually.
- Many thanks to [lyl120117](https://github.com/lyl120117) for contributing the code for printing the network structure.
- Many thanks to [xiangyubo](https://github.com/xiangyubo) for contributing the handwritten Chinese OCR dataset.
- Many thanks to [authorfu](https://github.com/authorfu) for contributing the Android demo and [xiadeye](https://github.com/xiadeye) for contributing the iOS demo.
- Many thanks to [BeyondYourself](https://github.com/BeyondYourself) for the many great suggestions and for simplifying part of the PaddleOCR code style.
- Many thanks to [tangmq](https://gitee.com/tangmq) for adding Dockerized deployment services to PaddleOCR, supporting the rapid release of callable RESTful API services.

@ -0,0 +1,231 @@
English | [简体中文](README.md)

## Introduction
PaddleOCR aims to create rich, leading, and practical OCR tools that help users train better models and apply them in practice.

**Recent updates**
- 2020.8.24 Support using PaddleOCR through whl package installation, please refer to [PaddleOCR Package](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/whl_en.md)
- 2020.8.16 Release text detection algorithm [SAST](https://arxiv.org/abs/1908.05498) and text recognition algorithm [SRN](https://arxiv.org/abs/2003.12294)
- 2020.7.23 Release the playback and PPT of the PaddleOCR introduction live class on Bilibili, [address](https://aistudio.baidu.com/aistudio/course/introduce/1519)
- 2020.7.15 Add mobile App demo, supporting both iOS and Android (based on EasyEdge and Paddle Lite)
- 2020.7.15 Improve deployment ability, add C++ inference and serving deployment. In addition, the benchmarks of the ultra-lightweight OCR model are provided.
- 2020.7.15 Add several related datasets, data annotation and synthesis tools.
- [more](./doc/doc_en/update_en.md)

## Features
- Ultra-lightweight OCR model, total model size is only 8.6M
- Single model supports Chinese/English/number combination recognition, vertical text recognition, and long text recognition
- Detection model DB (4.1M) + recognition model CRNN (4.5M)
- Various text detection algorithms: EAST, DB
- Various text recognition algorithms: Rosetta, CRNN, STAR-Net, RARE
- Support Linux, Windows, macOS and other systems

## Visualization

![](doc/imgs_results/11.jpg)

![](doc/imgs_results/img_10.jpg)

[More visualization](./doc/doc_en/visualization_en.md)

You can also quickly experience the ultra-lightweight OCR: [Online Experience](https://www.paddlepaddle.org.cn/hub/scene/ocr)

Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Android systems): [Sign in to the website to obtain the QR code for installing the App](https://ai.baidu.com/easyedge/app/openSource?from=paddlelite)

Also, you can scan the QR code below to install the App (**Android only**)

<div align="center">
<img src="./doc/ocr-android-easyedge.png" width = "200" height = "200" />
</div>

- [**OCR Quick Start**](./doc/doc_en/quickstart_en.md)

<a name="Supported-Chinese-model-list"></a>

### Supported Models:

|Model Name|Description|Detection Model link|Recognition Model link|Space Recognition Model link|
|-|-|-|-|-|
|db_crnn_mobile|ultra-lightweight OCR model|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance.tar)|
|db_crnn_server|General OCR model|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance_infer.tar) / [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_enhance.tar)|

## Tutorials
- [Installation](./doc/doc_en/installation_en.md)
- [Quick Start](./doc/doc_en/quickstart_en.md)
- Algorithm introduction
    - [Text Detection Algorithm](#TEXTDETECTIONALGORITHM)
    - [Text Recognition Algorithm](#TEXTRECOGNITIONALGORITHM)
    - [END-TO-END OCR Algorithm](#ENDENDOCRALGORITHM)
- Model training/evaluation
    - [Text Detection](./doc/doc_en/detection_en.md)
    - [Text Recognition](./doc/doc_en/recognition_en.md)
    - [Yml Configuration](./doc/doc_en/config_en.md)
    - [Tricks](./doc/doc_en/tricks_en.md)
- Deployment
    - [Python Inference](./doc/doc_en/inference_en.md)
    - [C++ Inference](./deploy/cpp_infer/readme_en.md)
    - [Serving](./doc/doc_en/serving_en.md)
    - [Mobile](./deploy/lite/readme_en.md)
    - Model Quantization and Compression (coming soon)
    - [Benchmark](./doc/doc_en/benchmark_en.md)
- Datasets
    - [General OCR Datasets (Chinese/English)](./doc/doc_en/datasets_en.md)
    - [Handwritten OCR Datasets (Chinese)](./doc/doc_en/handwritten_datasets_en.md)
    - [Various OCR Datasets (multilingual)](./doc/doc_en/vertical_and_multilingual_datasets_en.md)
    - [Data Annotation Tools](./doc/doc_en/data_annotation_en.md)
    - [Data Synthesis Tools](./doc/doc_en/data_synthesis_en.md)
- [FAQ](#FAQ)
- Visualization
    - [Ultra-lightweight Chinese/English OCR Visualization](#UCOCRVIS)
    - [General Chinese/English OCR Visualization](#GeOCRVIS)
    - [Chinese/English OCR Visualization (Support Space Recognition)](#SpaceOCRVIS)
- [Community](#Community)
- [References](./doc/doc_en/reference_en.md)
- [License](#LICENSE)
- [Contribution](#CONTRIBUTION)

<a name="TEXTDETECTIONALGORITHM"></a>
## Text Detection Algorithm

PaddleOCR open-sources the following text detection algorithms:
- [x] EAST([paper](https://arxiv.org/abs/1704.03155))
- [x] DB([paper](https://arxiv.org/abs/1911.08947))
- [x] SAST([paper](https://arxiv.org/abs/1908.05498)) (Baidu self-developed)

On the ICDAR2015 dataset, the text detection results are as follows:

|Model|Backbone|Precision|Recall|Hmean|Download link|
|-|-|-|-|-|-|
|EAST|ResNet50_vd|88.18%|85.51%|86.82%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_east.tar)|
|EAST|MobileNetV3|81.67%|79.83%|80.74%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_east.tar)|
|DB|ResNet50_vd|83.79%|80.65%|82.19%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)|
|DB|MobileNetV3|75.92%|73.18%|74.53%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_db.tar)|
|SAST|ResNet50_vd|92.18%|82.96%|87.33%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)|

On the Total-Text dataset, the text detection results are as follows:

|Model|Backbone|Precision|Recall|Hmean|Download link|
|-|-|-|-|-|-|
|SAST|ResNet50_vd|88.74%|79.80%|84.03%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)|

**Note:** Additional public datasets such as icdar2013, icdar2017, COCO-Text, and ArT were added to the SAST model training. The English public datasets, organized in the format used by PaddleOCR, can be downloaded from [Baidu Drive](https://pan.baidu.com/s/12cPnZcVuV1zn5DOd4mqjVw) (download code: 2bpi).

Using 30k images from the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) street view dataset, the related configuration files and pre-trained models for the text detection task are as follows:

|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|ultra-lightweight OCR model|MobileNetV3|det_mv3_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|
|General OCR model|ResNet50_vd|det_r50_vd_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_r50_vd_db.tar)|

* Note: For the training and evaluation of the above DB models, the post-processing parameters box_thresh=0.6 and unclip_ratio=1.5 need to be set. When training on different datasets or with different models, these two parameters can be adjusted for better results.

For the training guide and use of PaddleOCR text detection algorithms, please refer to the document [Text detection model training/evaluation/prediction](./doc/doc_en/detection_en.md)

<a name="TEXTRECOGNITIONALGORITHM"></a>
## Text Recognition Algorithm

PaddleOCR open-sources the following text recognition algorithms:
- [x] CRNN([paper](https://arxiv.org/abs/1507.05717))
- [x] Rosetta([paper](https://arxiv.org/abs/1910.05085))
- [x] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html))
- [x] RARE([paper](https://arxiv.org/abs/1603.03915v1))
- [x] SRN([paper](https://arxiv.org/abs/2003.12294)) (Baidu self-developed)

Following [DTRB](https://arxiv.org/abs/1904.01906), the above text recognition algorithms are trained on MJSynth and SynthText and evaluated on IIIT, SVT, IC03, IC13, IC15, SVTP, and CUTE. The results are as follows:

|Model|Backbone|Avg Accuracy|Module combination|Download link|
|-|-|-|-|-|
|Rosetta|Resnet34_vd|80.24%|rec_r34_vd_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_none_ctc.tar)|
|Rosetta|MobileNetV3|78.16%|rec_mv3_none_none_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_none_ctc.tar)|
|CRNN|Resnet34_vd|82.20%|rec_r34_vd_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_none_bilstm_ctc.tar)|
|CRNN|MobileNetV3|79.37%|rec_mv3_none_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_none_bilstm_ctc.tar)|
|STAR-Net|Resnet34_vd|83.93%|rec_r34_vd_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_ctc.tar)|
|STAR-Net|MobileNetV3|81.56%|rec_mv3_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_ctc.tar)|
|RARE|Resnet34_vd|84.90%|rec_r34_vd_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_attn.tar)|
|RARE|MobileNetV3|83.32%|rec_mv3_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_attn.tar)|
|SRN|Resnet50_vd_fpn|88.33%|rec_r50fpn_vd_none_srn|[Download link](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar)|

**Note:** The SRN model uses data augmentation to expand the two training sets mentioned above; the expanded data can be downloaded from [Baidu Drive](https://pan.baidu.com/s/1-HSZ-ZVdqBF2HaBZ5pRAKA) (download code: y3ry).

The average accuracy of the two-stage training in the original paper is 89.74%, while one-stage training in PaddleOCR reaches 88.33%. Both pre-trained weights can be downloaded [here](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar).

We use the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) dataset, cropping 300k training images from the original photos using the position ground truth and applying some calibration. In addition, based on the LSVT corpus, 5 million synthetic images are generated to train the model. The related configuration files and pre-trained models are as follows:

|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|ultra-lightweight OCR model|MobileNetV3|rec_chinese_lite_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|
|General OCR model|Resnet34_vd|rec_chinese_common_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn.tar)|

For the training guide and use of PaddleOCR text recognition algorithms, please refer to the document [Text recognition model training/evaluation/prediction](./doc/doc_en/recognition_en.md)

<a name="ENDENDOCRALGORITHM"></a>
## END-TO-END OCR Algorithm
- [ ] [End2End-PSL](https://arxiv.org/abs/1909.07808) (Baidu self-developed, coming soon)

## Visualization

<a name="UCOCRVIS"></a>
### 1. Ultra-lightweight Chinese/English OCR Visualization [more](./doc/doc_en/visualization_en.md)

<div align="center">
<img src="doc/imgs_results/1.jpg" width="800">
</div>

<a name="GeOCRVIS"></a>
### 2. General Chinese/English OCR Visualization [more](./doc/doc_en/visualization_en.md)

<div align="center">
<img src="doc/imgs_results/chinese_db_crnn_server/11.jpg" width="800">
</div>

<a name="SpaceOCRVIS"></a>
### 3. Chinese/English OCR Visualization (Space Supported) [more](./doc/doc_en/visualization_en.md)

<div align="center">
<img src="doc/imgs_results/chinese_db_crnn_server/en_paper.jpg" width="800">
</div>

<a name="FAQ"></a>

## FAQ
1. Error when using the attention-based recognition model: KeyError: 'predict'

    The inference of the attention-loss-based recognition model is still being debugged. For Chinese text recognition, it is recommended to choose the CTC-loss-based recognition model first. In practice, we have also found that the attention-loss-based recognition model is not as effective as the CTC-loss-based one.

2. About inference speed

    When an image contains a lot of text, prediction time increases. You can use `--rec_batch_num` to set a smaller recognition batch size. The default value is 30, which can be changed to 10 or another value.

3. Service deployment and mobile deployment

    The Serving-based service deployment solution and the Paddle Lite-based mobile deployment solution are expected to be released in mid-to-late June. Stay tuned for more updates.

4. Release time of self-developed algorithms

    Baidu self-developed algorithms such as SAST, SRN and End2End-PSL will be released in June or July. Please be patient.

[more](./doc/doc_en/FAQ_en.md)

<a name="Community"></a>
## Community

Scan the QR code below with WeChat and complete the questionnaire to get access to the official technical exchange group.

<div align="center">
<img src="./doc/joinus.jpg" width = "200" height = "200" />
</div>

<a name="LICENSE"></a>
## License

This project is released under the <a href="https://github.com/PaddlePaddle/PaddleOCR/blob/master/LICENSE">Apache 2.0 license</a>

<a name="CONTRIBUTION"></a>
## Contribution

We welcome all contributions to PaddleOCR and appreciate your feedback very much.

- Many thanks to [Khanh Tran](https://github.com/xxxpsyduck) and [Karl Horky](https://github.com/karlhorky) for contributing and revising the English documentation.
- Many thanks to [zhangxin](https://github.com/ZhangXinNan) for contributing a new visualization function, adding .gitignore, and removing the need to set PYTHONPATH manually.
- Many thanks to [lyl120117](https://github.com/lyl120117) for contributing the code for printing the network structure.
- Thanks to [xiangyubo](https://github.com/xiangyubo) for contributing the handwritten Chinese OCR dataset.
- Thanks to [authorfu](https://github.com/authorfu) for contributing the Android demo and [xiadeye](https://github.com/xiadeye) for contributing the iOS demo.
- Thanks to [BeyondYourself](https://github.com/BeyondYourself) for contributing many great suggestions and simplifying part of the code style.
- Thanks to [tangmq](https://gitee.com/tangmq) for adding Dockerized deployment services to PaddleOCR, supporting the rapid release of callable RESTful API services.

@ -1,25 +1,287 @@
# FAQ

## Foreword

- We have collected and organized frequently asked questions and answers from the issues and user groups, and will keep updating them, hoping to provide a reference for OCR developers and help everyone avoid some detours.

- There are many experts in the OCR field; the answers in this document mainly rely on our limited project experience and are inevitably incomplete. If anything is missing or incorrect, **we sincerely hope knowledgeable readers will help supplement and correct it**. Many thanks.

## PaddleOCR FAQ (continuously updated)

* [[Selected] 10 selected OCR questions](#【精选】OCR精选10个问题)
* [[Theory] 21 general OCR questions](#【理论篇】OCR通用问题)
    * [Basics: 3 questions](#基础知识)
    * [Datasets: 4 questions](#数据集)
    * [Model training and tuning: 6 questions](#模型训练调优)
    * [Prediction and deployment: 8 questions](#预测部署)
* [[Practice] 53 PaddleOCR questions](#【实战篇】PaddleOCR实战问题)
    * [Usage: 16 questions](#使用咨询)
    * [Datasets: 9 questions](#数据集)
    * [Model training and tuning: 13 questions](#模型训练调优)
    * [Prediction and deployment: 15 questions](#预测部署)

## [Selected] 10 Selected OCR Questions

#### Q1.1.1: What deep-learning-based text detection methods are there, and what are their strengths and weaknesses?

**A**: Commonly used deep-learning-based text detection methods can generally be divided into regression-based and segmentation-based methods, plus some that combine the two.

(1) Regression-based methods split into box regression and pixel-value regression. a. Box-regression methods mainly include CTPN, the Textbox series, and EAST; they work well on regular-shaped text but cannot accurately detect irregular text. b. Pixel-value-regression methods mainly include CRAFT and SA-Text; they can detect curved text and perform well on small text, but are not fast enough for real-time use.

(2) Segmentation-based algorithms, such as PSENet, are not limited by text shape and achieve good results on text of all shapes, but their post-processing is often complex and time-consuming. Some algorithms specifically improve on this, such as DB, which approximates binarization to make it differentiable and folds it into training, obtaining more accurate boundaries and greatly reducing post-processing time.

#### Q1.1.2: For Chinese line text recognition, which is better, CTC or Attention?

**A**: (1) In terms of accuracy, CTC outperforms Attention in general OCR scenarios, because the recognition dictionary contains many characters (more than three thousand commonly used Chinese characters), and with insufficient training samples it is hard to mine the sequence relationships of these characters, so the advantage of Attention models cannot be realized in Chinese scenarios. Moreover, Attention is better suited to short phrases and performs poorly on long sentences.

(2) In terms of training and prediction speed, the serial decoding structure of Attention limits prediction speed, while the CTC network structure is more efficient and faster at prediction.

#### Q1.1.3: How should curved or deformed text be handled? What is TPS for, and does it work well?

**A**: (1) In most cases, if the deformation is not too severe, detecting the four corner points and rectifying the region with an affine transform before recognition is sufficient.

(2) If that is not enough, you can try TPS (Thin Plate Spline), an interpolation algorithm often used for image warping that can deform an image with a small number of control points. It is typically used for recognizing curved text: when an irregular or curved text region is detected (e.g. with a segmentation-based detection algorithm), TPS is first used to rectify the region into a rectangle before recognition; recognition algorithms such as STAR-Net and RARE include a TPS module.

**Warning**: TPS looks appealing, but in practice it is often not robust enough and adds latency; use it with caution.

#### Q1.1.4: For a simple OCR task without high accuracy requirements, how many images does the dataset need?

**A**: (1) The amount of training data depends on the complexity of the problem. The harder the task and the higher the accuracy requirement, the larger the dataset needed; in practice, more training data generally gives better results.

(2) For scenarios without high accuracy requirements, detection and recognition need different amounts of data. For detection, 500 images can guarantee a basic detection effect. For recognition, every character in the dictionary should appear in more than 200 line-text images from different scenes (for example, with a 5-character dictionary, each character needs to appear in over 200 images, so the minimum required number of images is roughly between 200 and 1000); this guarantees a basic recognition effect.

#### Q1.1.5: How can text with background interference (e.g. a seal stamped over a signature, where the signature or the seal text must be recognized) be recognized?

**A**: (1) For text with background interference that is still human-readable, first make sure the detection boxes are accurate enough; if they are not, consider preprocessing the images (e.g. color filtering) and adding more relevant training data. For the recognition part, add augmented images with background interference to the training data.

(2) If a MobileNet model cannot meet the requirements, try a larger ResNet-series model for better results.

#### Q1.1.6: What are the commonly used evaluation metrics in OCR?

**A**: For two-stage systems, the detection and recognition stages can be evaluated separately.

(1) Detection stage: predictions are first matched to labels by IoU; a detection box whose IoU with a labeled box exceeds a threshold counts as correct. Note that, unlike in general object detection, the detection and label boxes here are represented as polygons.

Detection precision: the proportion of correct detection boxes among all detection boxes, mainly measuring detection accuracy.

Detection recall: the proportion of correct detection boxes among all labeled boxes, mainly measuring missed detections.

(2) Recognition stage:
Character recognition accuracy, i.e. the proportion of correctly recognized text lines among all labeled text lines; a line counts as correct only if the entire line is recognized correctly.

(3) End-to-end statistics:
End-to-end precision: the proportion of text lines that are both accurately detected and correctly recognized among all detected text lines.
End-to-end recall: the proportion of text lines that are both accurately detected and correctly recognized among all labeled text lines. A detection is accurate if the IoU between the detection box and the labeled box exceeds a threshold, and an accurately detected box is correctly recognized if the text inside it matches the label.

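The detection-stage metrics above can be made concrete with a small sketch. IoU matching is abstracted away into three counts (a simplification, not PaddleOCR's actual evaluator):

```python
def det_metrics(num_det, num_gt, num_correct):
    """Detection precision/recall/hmean from box counts.

    num_det: total predicted boxes; num_gt: total labeled boxes;
    num_correct: predicted boxes whose IoU with a labeled box
    exceeds the matching threshold.
    """
    precision = num_correct / num_det if num_det else 0.0
    recall = num_correct / num_gt if num_gt else 0.0
    hmean = (2 * precision * recall / (precision + recall)
             if precision + recall else 0.0)
    return precision, recall, hmean

p, r, h = det_metrics(num_det=100, num_gt=90, num_correct=80)
print(round(p, 3), round(r, 3), round(h, 3))  # 0.8 0.889 0.842
```

Hmean is simply the harmonic mean (F1) of precision and recall, which is why a model cannot score well by trading one for the other.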
#### Q1.1.7: How should a single image containing multiple text types (e.g. printed and handwritten text in one image) be handled?

**A**: A single image containing multiple text types is common, typically in student exam papers where handwritten and printed text coexist. In such cases, you can try a "1 detection model + 1 N-way classifier + N recognition models" solution.

All text types share one detection model. The N-way classifier is an extra classifier trained to categorize each detected text region: handwritten + printed is binary classification, N languages is N-way classification. For recognition, a separate model is trained for each text type; in the handwritten + printed scenario, train one handwriting recognition model and one print recognition model. If a text box is classified as handwritten, it is passed to the handwriting recognition model, and so on.

#### Q1.1.8: Which datasets were used for the Chinese ultra-lightweight and general models in PaddleOCR? How many samples, what GPU configuration, how many epochs, and how long did training take?

**A**:
(1) For detection: the LSVT street view dataset with 30k images. Ultra-lightweight model: about 150 epochs on 2 V100 cards, just under 2 days. General model: 150 epochs on 2 V100 cards, just under 4 days.

(2) For recognition: about 5.2 million images (260k real + 5 million synthetic). Ultra-lightweight model: 4 V100 cards, about 5 days in total. General model: 4 V100 cards, 6 days in total.

The ultra-lightweight model was trained in 2 stages:
(1) Train on the full data for 50 epochs, taking 3 days.
(2) Fine-tune for 200 epochs with synthetic and real data sampled 1:1, taking 2 days.

General model training:
real + synthetic data with dynamic 1:1 sampling, 200 epochs, taking about 6 days.

#### Q1.1.9: How many inference modes does PaddleOCR support, and what are their pros and cons?

**A**: Two inference modes are currently supported: training-engine-based and prediction-engine-based.

(1) Training-engine-based inference needs no model conversion, but requires building the network before loading parameters, only supports Python, and is not suitable for system integration.

(2) Prediction-engine-based inference requires converting the model to the inference format first, after which inference runs without building the network; it supports C++ and Python and is suitable for system integration.

#### Q1.1.10: For speeding up model prediction in PaddleOCR, what options are there on CPU? What input requirements does TensorRT-based GPU acceleration have?

**A**: (1) On CPU, MKL-DNN can be used for acceleration. For Python inference, set enable_mkldnn to true ([reference code](https://github.com/PaddlePaddle/PaddleOCR/blob/549108fe0aa0d87c0a3b2d471f1c653e89daab80/tools/infer/utility.py#L73)); for C++ inference, set use_mkldnn 1 in the configuration file ([reference code](https://github.com/PaddlePaddle/PaddleOCR/blob/549108fe0aa0d87c0a3b2d471f1c653e89daab80/deploy/cpp_infer/tools/config.txt#L6)).

(2) On GPU, note the variable-length input issue: variable-length input is only supported from TensorRT 6 onward.

## [Theory] General OCR Questions

### Basics

#### Q2.1.1: Can CRNN recognize two lines of text, or only one line?

**A**: CRNN is a 1D-CTC-based algorithm; by design it cannot recognize two or more lines of text, only a single line.

#### Q2.1.2: How to tell whether a line-text image is upside down?

**A**: Two options: (1) run recognition on both the original image and the 180-degree-rotated image, and take the higher-scoring result;
(2) train an orientation classifier on normal and upside-down images to decide.

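Option (1) above can be sketched as a double pass with a score comparison. `toy_recognize` below is a hypothetical stand-in for any recognition model that returns a `(text, confidence)` pair; it is not PaddleOCR code:

```python
def rotate180(img):
    # img is a list of pixel rows; rotating 180 degrees reverses
    # both the row order and each row.
    return [row[::-1] for row in img[::-1]]

def read_with_flip(img, recognize):
    """Recognize the original and the 180-degree-rotated image,
    and keep the higher-confidence result."""
    candidates = [recognize(img), recognize(rotate180(img))]
    return max(candidates, key=lambda pair: pair[1])

# Toy recognizer: pretends upside-down input yields low confidence.
def toy_recognize(img):
    return ("text", 0.9) if img[0][0] == "top" else ("txet", 0.2)

img = [["top", "top"], ["bot", "bot"]]
upside_down = rotate180(img)
print(read_with_flip(upside_down, toy_recognize))  # ('text', 0.9)
```

The price of this approach is two recognition passes per line, which is why the classifier in option (2) is often preferred at scale.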
#### Q2.1.3: OCR is generally two-stage at present; how well do end-to-end solutions work in industry?

**A**: In scenarios with densely distributed text, end-to-end approaches offer better efficiency; accuracy depends on your data accumulation. If you have accumulated a lot of line-level recognition data, two-stage is better. Baidu's deployed scenarios, such as industrial meter reading and license plate recognition, use end-to-end solutions.

### Datasets

#### Q2.2.1: For a model that supports spaces, do spaces need to be labeled during annotation? Do multiple consecutive spaces all need to be labeled?

**A**: If the detection and recognition models should support spaces, spaces must be labeled during annotation, and the space character must be added to the dictionary. During annotation, multiple consecutive spaces can be labeled as a single space.

#### Q2.2.2: To support vertical text recognition, how should the corresponding datasets be synthesized?

**A**: Vertical text is synthesized the same way as horizontal text, just with a vertical font. Recommended synthesis tool: [text_renderer](https://github.com/Sanster/text_renderer)

|
#### Q2.2.3:训练文字识别模型,真实数据有30w,合成数据有500w,需要做样本均衡吗?
|
||||||
|
|
||||||
|
**A**:需要,一般需要保证一个batch中真实数据样本和合成数据样本的比例是1:1~1:3左右效果比较理想。如果合成数据过大,会过拟合到合成数据,预测效果往往不佳。还有一种**启发性**的尝试是可以先用大量合成数据训练一个base模型,然后再用真实数据微调,在一些简单场景效果也是会有提升的。
#### Q2.2.4: In vertical text recognition the glyph features change. Should rotated characters get a new class in the dataset and dictionary, or should the same character share one class across orientations?

**A**: Try both for your scenario. Sharing one class does converge and works reasonably well, but training separately gives better consistency within each class, easier convergence, and better recognition accuracy.

### Model training and tuning

#### Q2.3.1: How do I swap the backbone of text detection/recognition?

**A**: For both detection and recognition, backbone choice is a trade-off between accuracy and speed. A larger backbone such as ResNet101_vd detects or recognizes more accurately but costs more inference time; a smaller one such as MobileNetV3_small_x0_35 predicts faster but at a clear accuracy loss. Fortunately, detection and recognition quality correlate with a backbone's accuracy on ImageNet 1000-class classification. [**PaddleClas**](https://github.com/PaddlePaddle/PaddleClas), the PaddlePaddle image-classification suite, collects 23 series of classification architectures (ResNet_vd, Res2Net, HRNet, MobileNetV3, GhostNet, etc.) together with their top-1 ImageNet accuracy, inference latency on GPU (V100 and T4) and CPU (Snapdragon 855), and the corresponding [**117 pretrained model downloads**](https://paddleclas.readthedocs.io/zh_CN/latest/models/models_intro.html).

(1) For detection, swapping the backbone mainly means identifying the four ResNet-like stages so that an FPN-style detection head can be attached afterwards. Also, using ImageNet-pretrained classification weights speeds up convergence and improves accuracy for detection.

(2) For recognition, mind where the strides reduce width and height. Since text lines usually have a large width-to-height ratio, the height should be downsampled less often and the width more often. See the changes in PaddleOCR's [MobileNetV3 backbone](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/ppocr/modeling/backbones/rec_mobilenet_v3.py) for reference.

#### Q2.3.2: Can recognition training converge without the LSTM?

**A**: In theory, yes. The LSTM module is added mainly to capture the sequential relations between characters and improve accuracy; its benefit is most visible in scenes with clear contextual semantics.

#### Q2.3.3: LSTM or GRU for the recognition sequence module?

**A**: In our project experience, an LSTM sequence module gives better recognition accuracy than a GRU, but the LSTM costs somewhat more compute. Choose according to your constraints.

#### Q2.3.4: For a CRNN model, which backbone is better: DenseNet or ResNet_vd?

**A**: A backbone's effect inside CRNN tracks its accuracy and efficiency on ImageNet 1000-class classification. There, ResNet_vd (79%+) clearly outperforms DenseNet (77%+); moreover, NVIDIA has optimized the ResNet family on GPU, so it predicts faster. ResNet_vd is therefore the better choice overall. On mobile, consider the MobileNetV3 series first.

#### Q2.3.5: How do I choose a suitable network input shape for recognition training?

**A**: The height is usually 32. For the maximum width there are two methods:

(1) Collect the width-to-height ratio distribution of the training images and pick the maximum ratio that covers 80% of the samples.

(2) Collect the character counts of the training texts and pick the maximum length that covers 80% of the samples; then estimate a maximum width, treating a Chinese character's width-to-height ratio as roughly 1:1 and an English character's as roughly 1:3.

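Method (1) can be sketched as follows; the sample widths and heights are illustrative:

```python
import math

def max_width_for_coverage(widths, heights, target_h=32, coverage=0.8):
    """Pick an input width that, at height target_h, covers `coverage`
    of the training samples' width-to-height ratios."""
    ratios = sorted(w / h for w, h in zip(widths, heights))
    idx = min(len(ratios) - 1, math.ceil(coverage * len(ratios)) - 1)
    return math.ceil(target_h * ratios[idx])

# Usage: mostly 8:1 lines plus a couple of extreme outliers;
# the outliers do not inflate the chosen width.
widths = [256] * 8 + [640, 1280]
heights = [32] * 10
w = max_width_for_coverage(widths, heights)  # -> 256
```
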
#### Q2.3.6: How do I recognize very long text lines?

**A**: When training the Chinese recognition model, samples are not simply scaled to [3,32,320]. Instead, each image is resized proportionally to height 32; widths short of 320 are zero-padded, and samples with a width-to-height ratio above 10 are discarded. At prediction time, a single image is resized the same way but without the 320-width limit; for multiple images, batch prediction is used and each batch's width adapts dynamically to the longest image in that batch.

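The training-time resize logic above can be sketched in plain Python on a list-of-rows image (a nearest-neighbour resize for illustration; the real pipeline uses optimized image libraries):

```python
import math

def resize_norm_img(img, img_h=32, img_w=320, max_ratio=10):
    """Resize an image (list of pixel rows) to height img_h keeping the
    aspect ratio, then zero-pad the width up to img_w. Over-long
    samples are dropped, mirroring the training-time behaviour."""
    h, w = len(img), len(img[0])
    if w / h > max_ratio:
        return None  # training-time: discard the sample
    new_w = min(img_w, math.ceil(img_h * w / h))
    resized = [
        [img[y * h // img_h][x * w // new_w] for x in range(new_w)]
        for y in range(img_h)
    ]
    return [row + [0] * (img_w - new_w) for row in resized]

sample = [[1] * 400 for _ in range(64)]   # 64x400 image
out = resize_norm_img(sample)             # 32 rows, width padded to 320
```
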
### Inference and deployment

#### Q2.4.1: Any tips for handling images with dense text?

**A**: First test with a pretrained pipeline such as DB+CRNN to determine whether the problem in the dense-text image lies in detection or recognition, then improve accordingly. If the dense text is small, you can also try increasing the image resolution and stretching the image within a reasonable range to spread the text out, which can improve recognition.

#### Q2.4.2: Are there image-augmentation techniques for text that is slightly blurred at recognition time?

**A**: As long as a human eye can still read the text, you can try blur operators from classical image processing, such as mean, median, or Gaussian filtering, and strengthen the model's robustness through data augmentation and perturbation. Newer ideas worth borrowing include adversarial training and super-resolution. The industry has no widely agreed best solution yet, so the first priority should be constraining image quality at data-collection time.

#### Q2.4.3: For targeted extraction, e.g. only the name field on an ID card, is it better to detect just the target region, or to detect everything and filter afterwards?

**A**: Two reasons favour detecting everything and filtering:

(1) Target and non-target text are not strongly distinguishable by visual features alone, so detecting only the target region easily misses it.

(2) Product requirements change, and the model may later need to cover more (say, an extra field). Adjusting post-processing logic is much easier than retraining the model.

#### Q2.4.4: How can a beginner quickly get started with practical Chinese OCR?

**A**: Start with the basics of the OCR field to get a rough picture of standard detection and recognition algorithms, then browse OCR-related repos on GitHub. In terms of completeness, PaddleOCR's bilingual Chinese/English tutorials stand out, with detailed docs on datasets, model training, and inference deployment, plus a WeChat user group for Q&A, which makes it well suited to hands-on learning. Project: [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)

#### Q2.4.5: How do I recognize English text-line images containing spaces?

**A**: Two options for space recognition:

(1) Improve the text detection algorithm so detection breaks the text at spaces. With this scheme, annotation must split a space-containing line into several segments.

(2) Improve the text recognition algorithm: add the space character to the recognition dictionary, label spaces wherever they occur in the recognition training data, and, when synthesizing data, concatenate training texts to generate samples that contain spaces.

#### Q2.4.6: When recognizing Chinese and English together, can the space character also be added for training?

**A**: Yes, Chinese recognition can be trained with the space as a separator. How well it works cannot be judged in the abstract; evaluate it by training on your own business data.

#### Q2.4.7: Are there super-resolution methods for low-resolution or small-font text?

**A**: Super-resolution methods split into classical approaches and deep-learning approaches. Among the latter, SRCNN is a classic; a CVPR 2020 paper worth a look is "Unpaired Image Super-Resolution using Pseudo-Supervision". We have not validated it thoroughly, so check the effect on your actual scenario.

#### Q2.4.8: Any good models or papers to recommend for table recognition?

**A**: Academia does not yet have many mature solutions for tables; segmentation-based papers are worth trying.

## [Practice] PaddleOCR in practice

### Usage questions

#### Q3.1.1: OSError: [WinError 126] The specified module could not be found; a shapely import problem (reported on a MacBook Pro with Python 3.4)

**A**: This comes from a broken shapely installation; reinstall it following issue [#212](https://github.com/PaddlePaddle/PaddleOCR/issues/212)

#### Q3.1.2: I installed paddle-gpu, but at runtime it complains the GPU build of Paddle is not installed. What could be the cause?

**A**: This happens when both the CPU and GPU builds of Paddle are installed. Uninstall both, then reinstall only the GPU build.

#### Q3.1.3: Running fails with "Cannot load cudnn shared library". What is the cause?

**A**: The cuDNN lib directory needs to be added to LD_LIBRARY_PATH.

#### Q3.1.4: How do I pin PaddlePaddle to a specific GPU? Setting os.environ["CUDA_VISIBLE_DEVICES"] does not take effect.

**A**: Set the environment variable before launching, e.g. `export CUDA_VISIBLE_DEVICES='0'`

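If you do want to set it from Python, the variable must be set before the framework first touches the GPU, i.e. before the `import paddle` in the same process; setting it afterwards is the usual reason it "does not take effect":

```python
import os

# Must run BEFORE the framework initializes CUDA in this process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import paddle  # import only after the variable is set
```
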
#### Q3.1.5: Training works on Windows, but AI Studio complains that the data path is wrong

**A**: Change `\` to `/`. The folder separator differs between systems: `\` on Windows, `/` on Linux.

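A one-line fix for label files or paths (the sample label line is illustrative):

```python
def to_posix(path):
    """Convert Windows-style backslash separators to forward slashes."""
    return path.replace("\\", "/")

label_line = "train_data\\icdar2015\\img_1.jpg\tHello"
fixed = to_posix(label_line)  # -> "train_data/icdar2015/img_1.jpg\tHello"
```
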
#### Q3.1.6: The GPU build of Paddle can run on CPU, but it still requires a GPU device to be present

**A**: With `export CUDA_VISIBLE_DEVICES=''`, the CPU path runs fine.

#### Q3.1.7: Prediction fails with ImportError: dlopen: cannot load any more object with static TLS

**A**: This is a glibc version issue; running requires a glibc version greater than 2.23.

#### Q3.1.8: What is the difference between the provided inference models and the pretrained models?

**A**: An inference model is a frozen model whose files contain both the network structure and the parameters; it is mostly used for deployment. A pretrained model is a checkpoint saved during training, mostly used for fine-tuning or resuming training.

#### Q3.1.9: Does the model's decoding stage include post-processing?

**A**: Yes. Detection post-processing lives under ppocr/postprocess, and all recognition post-processing is in ppocr/utils/character.py.

#### Q3.1.10: Do the PaddleOCR Chinese models support digit recognition?

**A**: Yes. See ppocr/utils/ppocr_keys_v1.txt, the list of supported characters, which includes the digits.

#### Q3.1.11: How does PaddleOCR support horizontal and vertical text at the same time?

**A**: A batch of vertical text was synthesized, rotated 90° counter-clockwise, and added to the training set alongside the horizontal data. At prediction time, the aspect ratio of each cropped region decides whether it is vertical; if so, the crop is rotated 90° counter-clockwise before being fed into the recognition network.

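The prediction-time pre-rotation can be sketched as follows; the 1.5 threshold is illustrative, not PaddleOCR's actual value:

```python
def rot90_ccw(img):
    """Rotate an image (list of rows) 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def maybe_rotate_crop(crop, vertical_ratio=1.5):
    """Rotate a detected text crop counter-clockwise when it looks
    vertical, i.e. its height clearly exceeds its width."""
    h, w = len(crop), len(crop[0])
    if h / w >= vertical_ratio:
        return rot90_ccw(crop), True
    return crop, False

vertical_crop = [[0] * 30 for _ in range(120)]     # 120 tall, 30 wide
rotated, was_vertical = maybe_rotate_crop(vertical_crop)  # now 30x120
```
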
#### Q3.1.12: How do I get the coordinates of the detected text boxes?

**A**: Text detection returns both boxes and text information; see the [reference code](https://github.com/PaddlePaddle/PaddleOCR/blob/9d33e36df550762b204d5fbfd7977a25e31b2c44/tools/infer/predict_system.py#L13)

#### Q3.1.13: The detected boxes are too tight, cropping off edge characters and causing recognition errors

**A**: Add --det_db_unclip_ratio to the command ([parameter definition](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/tools/infer/utility.py#L49)). It controls the text-box size in detection post-processing; the default is 2.0, so try 2.5 or larger. Conversely, decrease it if the boxes feel too loose.

#### Q3.1.14: Are there plans to provide a pretrained model for English handwriting recognition?

**A**: We are currently surveying demand. If enough enterprise users need it, we will consider increasing the corresponding R&D investment and providing a matching pretrained model; if you need it, please reach us via an issue or the WeChat group.

#### Q3.1.15: Does PaddleOCR run on Windows or Mac systems?

**A**: PaddleOCR has been adapted for both Windows and Mac, and Python prediction can be installed via the pip package. Two notes for running it: 1. In the [quick installation](./installation.md), if you do not want to install Docker you can skip step one and start from step two, installing Paddle directly. 2. When downloading inference models, if wget is not installed, click the model link directly or paste it into a browser to download, then unpack it into the corresponding directory.

#### Q3.1.16: Can PaddleOCR's algorithms be used for handwriting detection and recognition? Are handwriting pretrained models planned?

**A**: In principle yes, given a suitable dataset. Handwriting does differ from print, though, so the training and tuning strategy may need adapting.

#### Q3.1.17: What is the difference between the ultra-lightweight and the general OCR models?

**A**: PaddleOCR currently open-sources two Chinese models: the 8.6M ultra-lightweight Chinese model and the general Chinese OCR model. Comparison:

- Same: both use the same **algorithms** and **training data**;
- Different: the **backbone network** and **channel parameters**. The ultra-lightweight model uses MobileNetV3 as its backbone; the general model uses Resnet50_vd as the detection backbone and Resnet34_vd as the recognition backbone. For the exact parameter differences, compare the two models' training config files:

|Model|Backbone|Detection config|Recognition config|
|-|-|-|-|
|8.6M ultra-lightweight Chinese OCR model|MobileNetV3+MobileNetV3|det_mv3_db.yml|rec_chinese_lite_train.yml|
|General Chinese OCR model|Resnet50_vd+Resnet34_vd|det_r50_vd_db.yml|rec_chinese_common_train.yml|

#### Q3.1.18: Are there plans to open-source models that recognize only digits, or only English plus digits?

**A**: For now we mainly open-source general-purpose OCR models and do not plan to release small vertical-domain models. PaddleOCR open-sources a range of detection and recognition algorithms for custom training, and the two Chinese models were themselves trained with this open algorithm library. If you have a vertical-domain need, prepare your data per the tutorials, pick a suitable config file, and train your own model; results should be good. For any training questions, open an issue or ask in the chat group and we will answer promptly.

### Datasets

#### Q3.2.1: How do I prepare data in the format PaddleOCR expects?

**A**: See the detection and recognition training docs, which describe the data format in detail: [detection doc](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/detection.md), [recognition doc](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/recognition.md)

#### Q3.2.2: I want to use a pretrained model, but my data contains characters missing from its character set. Should the new characters be added at the front of the dictionary or at the back?

**A**: At the back. Modifying the dict changes the structure of the model's final FC layer, so the parameters previously trained for it are no longer used; training effectively restarts there, which is why accuracy starts at 0.

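Appending (rather than inserting) keeps the indices of all existing characters stable; a small sketch, with the dictionary contents and file name purely illustrative:

```python
import os
import tempfile
from pathlib import Path

def append_new_chars(dict_path, new_chars):
    """Append characters to the end of a one-char-per-line recognition
    dictionary, skipping any that are already present."""
    path = Path(dict_path)
    existing = path.read_text(encoding="utf-8").splitlines()
    additions = [c for c in new_chars if c not in existing]
    with path.open("a", encoding="utf-8") as f:
        for c in additions:
            f.write(c + "\n")
    return existing + additions

# Usage with a throwaway dictionary file
fd, tmp = tempfile.mkstemp(suffix=".txt")
os.close(fd)
Path(tmp).write_text("a\nb\nc\n", encoding="utf-8")
chars = append_new_chars(tmp, ["c", "±", "€"])  # "c" already present
os.remove(tmp)
```
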
#### Q3.2.3: How do I debug the data-reading code?

**A**: tools/train.py has a test_reader() function for debugging data reading.

#### Q3.2.4: What training data do the open-source models use? Can it be released?

**A**: The currently released models use the following datasets and volumes:

- Detection:
    - English: ICDAR2015
    - Chinese: 30k training images from the LSVT street-view dataset

- Recognition:
    - English: MJSynth and SynthText synthetic data, tens of millions of samples
    - Chinese: crops cut from LSVT street-view images according to the ground truth and position-corrected, 300k images in total, plus 5M synthetic samples based on the LSVT corpus

The public datasets are all open; search for and download them yourself, or see the [Chinese dataset list](./datasets.md). The synthetic data will not be released, but you can generate your own with open-source tools such as [text_renderer](https://github.com/Sanster/text_renderer), [SynthText](https://github.com/ankush-me/SynthText), and [TextRecognitionDataGenerator](https://github.com/Belval/TextRecognitionDataGenerator).

#### Q3.2.5: How large is the Chinese character set? Are rare characters supported?

**A**: The Chinese character set has 6623 characters, and rare characters are supported. The training samples contain some rare characters, but not many; for special needs we recommend fine-tuning on your own dataset.

#### Q3.2.6: Roughly how much data is needed to build training sets for Chinese text detection and recognition?

**A**: Detection needs relatively little data: fine-tuning from a PaddleOCR model, about 500 images usually gives decent results.

Recognition differs between English and Chinese: English scenes generally need hundreds of thousands of samples for good results, while Chinese needs millions or more.

#### Q3.2.7: How do I choose between the Chinese recognition models?

**A**: There are two families of Chinese models, general and ultra-lightweight, with these respective strengths:

The ultra-lightweight models are smaller and predict faster, suited to on-device use.

The general models are more accurate, suited to scenarios where model size does not matter.

In addition, built on these models, PaddleOCR provides a space-aware model aimed mainly at English sentences within Chinese scenes.

Choose according to your actual needs.

#### Q3.2.8: After rotating an image 90°, detection still finds the text positions, but recognition accuracy drops sharply. Will a rotation pre-processing step be added?

**A**: The models currently support only two text orientations: horizontal and vertical. To keep the model small and prediction fast, PaddleOCR does not yet judge image orientation. We suggest deskewing images yourself before recognition; adding angle classification will be considered later.

#### Q3.2.9: On the same image, the general model detects 21 items and the lightweight one 26. Doesn't that make the lightweight model better?

**A**: Judge mainly by the visualized results. The general model tends to detect a whole line of text as one box, while the lightweight model may split one line into two detections. More boxes does not mean better results.

### Model training and tuning

#### Q3.3.1: What should I do with texts longer than 25 characters?

**A**: By default, texts longer than 25 characters are ignored and do not take part in training. If your training samples contain many long texts, raise the max\_text\_length field in the config file to a larger maximum; its location is [here](https://github.com/PaddlePaddle/PaddleOCR/blob/fb9e47b262529386983edc21b33abfa16bbf06ac/configs/rec/rec_chinese_lite_train.yml#L13).

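For example, in the recognition config (the field sits under Global per the linked file; the value 80 is illustrative):

```yml
Global:
  max_text_length: 80   # default is 25; longer texts are dropped from training
```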
#### Q3.3.2: Where are the detection thresholds set in the config?

**A**: The main detection-related parameters are the following:

```
max_side_len: long-side size the image is resized to at prediction time
thresh: threshold for binarizing the output map
box_thresh: threshold for filtering text boxes; boxes scoring below it are dropped
unclip_ratio: box expansion factor, controlling text-box size
```

Their default values are in the [code](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/tools/infer/utility.py#L40) and can be overridden by passing arguments on the command line.

#### Q3.3.3: When training recognition, how do you handle LSVT's non-rectangular boxes: ignore them, or take the minimum rotated rectangle?

**A**: They are currently ignored.

#### Q3.3.4: How do I stop training properly? (Killing it directly often leaves GPU memory occupied.)

**A**: You can terminate every process whose command line contains train.py with: `ps -axu | grep train.py | awk '{print $2}' | xargs kill -9`

#### Q3.3.5: With 4 to 8 data-reading processes, the workers go defunct after a while of training and GPU utilization sticks at 0

**A**: Fixed by raising the multiprocess queue size: change [this code](https://github.com/PaddlePaddle/PaddleOCR/blob/549108fe0aa0d87c0a3b2d471f1c653e89daab80/ppocr/data/reader_main.py#L75) to:

```
return paddle.reader.multiprocess_reader(readers, False, queue_size=320)
```

#### Q3.3.6: Can pretrain_weights be left empty? I want to train a model from scratch.

**A**: Yes. When training the general recognition model, pretrain_weights was indeed left empty, but reaching the same accuracy may then take more training iterations.

#### Q3.3.7: Doesn't PaddleOCR save a model every 200 steps by default? Why does nothing appear in the folder?

**A**: Change eval_batch_step from [4000, 5000] to [0, 5000]: starting from iteration 0, the model is then evaluated and saved every 5000 iterations.

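In config-file form (assuming the field sits under Global as in the shipped configs):

```yml
Global:
  eval_batch_step: [0, 5000]  # start at step 0, evaluate and save every 5000 steps
```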
#### Q3.3.8: How do I fine-tune a model?

**A**: Prepare a suitable matching dataset; then, for fine-tune training, load one of our provided pretrained models by setting Global.pretrain_weights in the config file to the path of the pretrained model to load.

#### Q3.3.9: Detection training fails on my own data. What do the "###" labels mean?

**A**: Your data format is wrong: "###" marks a text region to be ignored, so all of your samples were being skipped. Replace it with any other character, or simply leave the transcription empty.

#### Q3.3.10: About copy_from_cpu: if the input stays the same (t_data's size unchanged) across two consecutive copy_from_cpu() calls, does gpu_place re-malloc GPU memory, or is GPU memory only re-malloc'ed when ele_size changes?

**A**: No reallocation happens while the size is less than or equal to the previous allocation; memory is reallocated only when it grows.

#### Q3.3.11: Can a model I trained myself (not yet converted to inference format) be used as a pretrained model?

**A**: Yes, but if the amount of training data is small, the model may overfit to that small dataset and generalize poorly.

#### Q3.3.12: How do I swap the text detection/recognition backbone?

**A**: Simply change Backbone.function in the config file, in the format: network file path, network class name. If PaddleOCR does not provide the backbone you need, you can adapt a network structure from PaddleClas and experiment. For the guiding principles, see the answer to "How do I swap the backbone of text detection/recognition?" among the general OCR questions.

#### Q3.3.13: Prediction fails when using a recognition model with TPS

**A**: Error message: Input(X) dims[3] and Input(Grid) dims[2] should be equal, but received X dimension[3](320) != Grid dimension[2](100)

The TPS module cannot yet handle variable-length input; please set --rec_image_shape='3,32,100' --rec_char_type='en' to fix the input shape.

### Inference and deployment

#### Q3.4.1: How do I pip-install the opt model-conversion tool?

**A**: On-device OCR deployment needs certain operators that currently exist only in Paddle-Lite's latest develop branch, so you have to build the opt model-conversion tool yourself by compiling Paddle-Lite. For the build steps, see section 2.1 "model optimization" in https://github.com/PaddlePaddle/PaddleOCR/blob/0791714b91/deploy/lite/readme.md

#### Q3.4.2: How do I wrap the PaddleOCR inference models into an SDK?

**A**: In Python, wrap the TextSystem class from tools/infer/predict_system.py; in C++, build on DBDetector and CRNNRecognizer under deploy/cpp_infer/src.

#### Q3.4.3: Can serving deploy only the text recognition model (without the detection model)?

**A**: Yes. The default serving deployment chains detection and recognition, but publishing only the detection or only the recognition model is also supported. For example, with the PaddleHub PaddleOCR module, the deploy directory holds three folders:

ocr_det: detection-only prediction

ocr_rec: recognition-only prediction

ocr_system: chained detection and recognition

The modules are independent, so you can choose to publish only the recognition model. The same applies when deploying with PaddleServing.

#### Q3.4.4: Why does PaddleOCR detection predict only one image at a time, i.e. test_batch_size_per_card=1?

**A**: At test time, images are resized proportionally with the long side at 960. After proportional resizing, different images no longer share the same width and height and cannot be stacked into one batch, so test_batch_size is set to 1.

#### Q3.4.5: Why do C++ inference and Python inference give different results?

**A**: Most likely the exported inference model version does not match the prediction library version. For example, on Windows the prediction library on the Paddle site is version 1.8 while the inference model shipped with PaddleOCR is version 1.7, so the final predictions differ. Export the model under Paddle 1.8 and predict with that model.

Also make sure the prediction parameter configuration is exactly the same on both sides.

#### Q3.4.6: Why does the first image take so long to predict, while later images are faster?

**A**: The first image includes initialization, which dominates the time. Once the model is loaded, subsequent predictions are fast.

#### Q3.4.7: Can the opt tool directly convert an int8-quantized model to a .nb file?

**A**: Yes. Paddle-Lite provides a complete opt tool for this; see the [documentation](https://paddle-lite.readthedocs.io/zh/latest/user_guides/post_quant_with_data.html)

#### Q3.4.8: How do I set the parameter --det_db_unclip_ratio=3 on Android?

**A**: It cannot be set in the Android APK, as the interface is not exposed there. If you are using the demo under PaddleOCR/deploy/lite/, edit the corresponding parameter in config.txt.

#### Q3.4.9: Can PaddleOCR models be converted to ONNX models?

**A**: ONNX conversion is not supported at the moment.

#### Q3.4.10: Converting the detection model with the opt tool fails with "can not found op arguments for node conv2_b_attr"

**A**: This almost always means the opt tool was built from a Paddle-Lite branch other than develop; rebuild opt from Paddle-Lite's develop branch.

#### Q3.4.11: What does "libopenblas.so not found" mean?

**A**: The prediction libraries currently ship in mkl and openblas flavors, and we recommend the mkl one. If the library you downloaded is the mkl flavor, you also need to enable the `with_mkl` option at build time; for a Linux build, set `-DWITH_MKL=ON` here: [reference link](https://github.com/PaddlePaddle/PaddleOCR/blob/8a78af26df0dd8f15b734cc8db13e25d2a3656a2/deploy/cpp_infer/tools/build.sh#L12). Also, when working with the prediction libraries, we recommend developing on Linux or Windows rather than macOS.

#### Q3.4.12: I trained with a custom dictionary. What do I change for inference?

**A**: With a custom dictionary, pass --rec_char_dict_path at inference time to point at your dictionary. Details: [documentation](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/inference.md#%E8%87%AA%E5%AE%9A%E4%B9%89%E6%96%87%E6%9C%AC%E8%AF%86%E5%88%AB%E5%AD%97%E5%85%B8%E7%9A%84%E6%8E%A8%E7%90%86)

#### Q3.4.13: Can per-character positions be returned?

**A**: No. Training labels cover whole text lines, so prediction also returns line-level positions. To get per-character positions, count the characters in the predicted text and estimate each character's location from the overall position of the text line.

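A rough sketch of that estimate for a horizontal, axis-aligned line box; uniform character widths are an assumption, since real glyph widths vary:

```python
def estimate_char_boxes(line_box, text):
    """Split an axis-aligned line box (x0, y0, x1, y1) into equal-width
    character boxes, one per character of the recognized text."""
    x0, y0, x1, y1 = line_box
    n = len(text)
    step = (x1 - x0) / n
    return [(x0 + i * step, y0, x0 + (i + 1) * step, y1) for i in range(n)]

boxes = estimate_char_boxes((10, 0, 110, 32), "OCR")  # three equal slices
```
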
#### Q3.4.16: What deployment options does PaddleOCR offer?

**A**: Currently there are inference deployment, serving deployment, and on-device Paddle Lite deployment; choose flexibly per scenario: inference deployment suits local offline use, serving deployment suits the cloud, and Paddle Lite deployment suits mobile integration.
