Merge remote-tracking branch 'upstream/develop' into zxdev
This commit is contained in: commit 9136cba8cb
@@ -43,10 +43,8 @@ PaddleOCR has been adapted to Windows and Mac. Note two points when running: 1. In [
Among them, the public datasets are all open source; users can search for and download them on their own, or refer to [Chinese datasets](./datasets.md). The synthetic data is not open-sourced for now; users can synthesize it themselves with open-source synthesis tools such as [text_renderer](https://github.com/Sanster/text_renderer), [SynthText](https://github.com/ankush-me/SynthText), and [TextRecognitionDataGenerator](https://github.com/Belval/TextRecognitionDataGenerator).
10. **Prediction error when using a recognition model with the TPS module**
Error message: Input(X) dims[3] and Input(Grid) dims[2] should be equal, but received X dimension[3](320) != Grid dimension[2](100)
Cause: the TPS module does not yet support variable-length input. Please set --rec_image_shape='3,32,100' --rec_char_type='en' to fix the input shape.
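The constraint behind this error can be sketched as follows. This is a hypothetical illustration (the function name and shapes are illustrative, not PaddleOCR's internals): the TPS sampling grid is built for a fixed image width, so an input batch with a different width no longer matches it.

```python
import numpy as np

# Hypothetical illustration of the constraint: Input(X) dims[3] (image width)
# must equal Input(Grid) dims[2] (the width the TPS grid was built for).
def check_tps_shapes(x, grid):
    if x.shape[3] != grid.shape[2]:
        raise ValueError(
            "Input(X) dims[3] and Input(Grid) dims[2] should be equal, "
            "but received X dimension[3](%d) != Grid dimension[2](%d)"
            % (x.shape[3], grid.shape[2]))

x = np.zeros((1, 3, 32, 320))     # variable-width input image batch (NCHW)
grid = np.zeros((1, 32, 100, 2))  # TPS grid built for the fixed width 100

try:
    check_tps_shapes(x, grid)
except ValueError as e:
    print(e)  # reproduces the mismatch message above
```

Fixing --rec_image_shape to '3,32,100' makes every input width 100, which matches the grid.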
11. **A model trained with a custom dictionary outputs characters that are not in the dictionary**
The path to the custom dictionary was not set at prediction time. Set it by adding the input parameter rec_char_dict_path when predicting.
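The dependence on the dictionary can be sketched with a toy decoder (hypothetical, not PaddleOCR's implementation): the model outputs indices that are only meaningful relative to the character dictionary used in training, so prediction must load the same file via rec_char_dict_path.

```python
# Toy illustration: model outputs are indices into the character dictionary,
# so decoding with the wrong dictionary file yields characters the training
# dictionary never contained.
custom_dict = ["中", "国", "1", "2"]   # stands in for the rec_char_dict_path file
default_dict = ["a", "b", "c", "d"]    # stands in for the built-in default dict

def decode(indices, char_list):
    return "".join(char_list[i] for i in indices)

indices = [0, 1]                       # what the custom-dict model predicts
print(decode(indices, custom_dict))    # 中国  (correct: custom dict supplied)
print(decode(indices, default_dict))   # ab    (wrong: default dict used)
```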
@@ -127,7 +127,7 @@ python3 tools/infer/predict_det.py --image_dir="./doc/imgs_en/img_10.jpg" --det_
## Text Recognition Model Inference
The following introduces ultra-lightweight Chinese recognition model inference and CTC-loss-based recognition model inference. **Attention-loss-based recognition model inference is still being debugged.** For Chinese text recognition, CTC-loss-based recognition models are recommended; in practice, the Attention-loss-based models have also proved inferior to the CTC-loss-based ones.
The following introduces ultra-lightweight Chinese recognition model inference, CTC-loss-based recognition model inference, and Attention-loss-based recognition model inference. For Chinese text recognition, CTC-loss-based recognition models are recommended; in practice, the Attention-loss-based models have also proved inferior to the CTC-loss-based ones. In addition, if you modified the text dictionary during training, refer to the section below on inference with a custom text recognition dictionary.
### 1. Ultra-lightweight Chinese Recognition Model Inference
@@ -45,7 +45,5 @@ At present, the open source model, dataset and magnitude are as follows:
Among them, the public datasets are open-sourced; users can search for and download them themselves, or refer to the [Chinese datasets](./datasets_en.md). The synthetic data is not open-sourced; users can synthesize data themselves with open-source synthesis tools. Currently available tools include [text_renderer](https://github.com/Sanster/text_renderer), [SynthText](https://github.com/ankush-me/SynthText), [TextRecognitionDataGenerator](https://github.com/Belval/TextRecognitionDataGenerator), etc.
10. **Error when predicting with a model that uses the TPS module**
Error message: Input(X) dims[3] and Input(Grid) dims[2] should be equal, but received X dimension[3](108) != Grid dimension[2](100)
Solution: the TPS module does not support variable-length input. Please set --rec_image_shape='3,32,100' and --rec_char_type='en' to fix the input shape.
@@ -59,7 +59,13 @@ def cal_det_res(exe, config, eval_info_dict):
img_list.append(data[ino][0])
ratio_list.append(data[ino][1])
img_name_list.append(data[ino][2])
try:
    img_list = np.concatenate(img_list, axis=0)
except:
    err = "concatenate error usually caused by different input image shapes in evaluation or testing.\n \
    Please set \"test_batch_size_per_card\" in main yml as 1\n \
    or add \"test_image_shape: [h, w]\" in reader yml for EvalReader."
    raise Exception(err)
outs = exe.run(eval_info_dict['program'], \
               feed={'image': img_list}, \
               fetch_list=eval_info_dict['fetch_varname_list'])
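The try/except above guards np.concatenate, which fails exactly when the evaluation batch contains images of different shapes. A minimal reproduction of that failure mode:

```python
import numpy as np

# Two evaluation images with different widths cannot be stacked into one batch:
a = np.zeros((1, 3, 32, 100))
b = np.zeros((1, 3, 32, 320))

try:
    np.concatenate([a, b], axis=0)
except ValueError as e:
    # np.concatenate requires all dimensions except the concatenation axis to
    # match, hence the advice to set test_batch_size_per_card to 1 or to fix
    # test_image_shape so every image in the batch has the same shape.
    print("concatenate error:", e)
```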