update doc
Commit 4561ec9798 (parent 042034b61d)
@@ -41,14 +41,17 @@ inference model (the model saved by `paddle.jit.save`)

Download the ultra-lightweight Chinese detection model:
```
-wget -P ./ch_lite/ {link} && tar xf ./ch_lite/{file} -C ./ch_lite/
+wget -P ./ch_lite/ {link} && tar xf ./ch_lite/ch_ppocr_mobile_v2.0_det_train.tar -C ./ch_lite/
```
The above model is a DB algorithm trained with MobileNetV3 as the backbone. To convert the trained model into an inference model, just run the following command:
```
-# -c Set the yml configuration file of the training algorithm. You need to set `Global.load_static_weights=False` and write the path of the training model to be converted under the `Global.pretrained_model` field of the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the path where the converted model will be saved.
+# -c Set the yml configuration file of the training algorithm
+# -o Set optional parameters
+# The Global.pretrained_model parameter sets the path of the training model to be converted, without the file suffix .pdmodel, .pdopt or .pdparams.
+# The Global.load_static_weights parameter must be set to False.
+# The Global.save_inference_dir parameter sets the path where the converted model will be saved.

-python3 tools/export_model.py -c configs/det/det_mv3_db_v1.1.yml -o ./inference/det_db/
+python3 tools/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.checkpoints=./ch_lite/ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_db/
```
When converting to an inference model, the configuration file is the same as the one used during training. In addition, you also need to set the `Global.checkpoints` parameter in the configuration file, which points to the model parameter file saved during training.
After the conversion succeeds, there are three files in the model save directory:
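For reference, with Paddle 2.x the exported directory is expected to contain three files named as below (names assumed from the default `paddle.jit.save` output; verify against your own export):
```
inference/det_db/
    ├── inference.pdiparams         # model parameters
    ├── inference.pdiparams.info    # parameter metadata
    └── inference.pdmodel           # model structure
```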
@@ -64,14 +67,18 @@ inference/det_db/

Download the ultra-lightweight Chinese recognition model:
```
-wget -P ./ch_lite/ {link} && tar xf ./ch_lite/{file} -C ./ch_lite/
+wget -P ./ch_lite/ {link} && tar xf ./ch_lite/ch_ppocr_mobile_v2.0_rec_train.tar -C ./ch_lite/
```

The recognition model is converted into an inference model in the same way as the detection model, as follows:
```
-# -c Set the yml configuration file of the training algorithm. You need to set `Global.load_static_weights=False` and write the path of the training model to be converted under the `Global.pretrained_model` field of the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the path where the converted model will be saved.
-python3 tools/export_model.py -c configs/rec/ch_ppocr_v1.1/rec_chinese_lite_train_v1.1.yml -o ./inference/rec_crnn/
+# -c Set the yml configuration file of the training algorithm
+# -o Set optional parameters
+# The Global.pretrained_model parameter sets the path of the training model to be converted, without the file suffix .pdmodel, .pdopt or .pdparams.
+# The Global.load_static_weights parameter must be set to False.
+# The Global.save_inference_dir parameter sets the path where the converted model will be saved.

+python3 tools/export_model.py -c configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml -o Global.checkpoints=./ch_lite/ch_ppocr_mobile_v2.0_rec_train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/rec_crnn/
```

**Note:** If the model was trained on your own dataset and you adjusted the dictionary file for Chinese characters, please check whether `character_dict_path` in the configuration file points to the dictionary file you need.
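If you do use a custom dictionary, the matching dictionary also has to be passed at prediction time. A minimal sketch, assuming the default `--rec_char_dict_path` flag of `tools/infer/predict_rec.py` and an illustrative image path:
```
python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words/ch/word_4.jpg" --rec_model_dir="./inference/rec_crnn/" --rec_char_dict_path="./ppocr/utils/ppocr_keys_v1.txt"
```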
@@ -89,15 +96,18 @@ python3 tools/export_model.py -c configs/rec/ch_ppocr_v1.1/rec_chinese_lite_trai

Download the text angle classification model:
```
-wget -P ./ch_lite/ {link} && tar xf ./ch_lite/{file} -C ./ch_lite/
+wget -P ./ch_lite/ {link} && tar xf ./ch_lite/ch_ppocr_mobile_v2.0_cls_train.tar -C ./ch_lite/
```

The angle classification model is converted into an inference model in the same way as the detection model, as follows:
```
-# -c Set the yml configuration file of the training algorithm. You need to set `Global.load_static_weights=False` and write the path of the training model to be converted under the `Global.pretrained_model` field of the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the path where the converted model will be saved.
+# -c Set the yml configuration file of the training algorithm
+# -o Set optional parameters
+# The Global.pretrained_model parameter sets the path of the training model to be converted, without the file suffix .pdmodel, .pdopt or .pdparams.
+# The Global.load_static_weights parameter must be set to False.
+# The Global.save_inference_dir parameter sets the path where the converted model will be saved.

-python3 tools/export_model.py -c configs/cls/cls_mv3.yml -o ./inference/cls/
+python3 tools/export_model.py -c configs/cls/cls_mv3.yml -o Global.checkpoints=./ch_lite/ch_ppocr_mobile_v2.0_cls_train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/cls/
```

After the conversion succeeds, there are three files in the directory:
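(The three files are expected to be the same inference.pdiparams, inference.pdiparams.info and inference.pdmodel as for the detection export.) A quick way to sanity-check the exported classifier is to run the angle classification predictor on a word image; a minimal sketch, with flags assumed from `tools/infer/predict_cls.py` and an illustrative image path:
```
python3 tools/infer/predict_cls.py --image_dir="./doc/imgs_words/ch/word_4.jpg" --cls_model_dir="./inference/cls/"
```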
@@ -145,10 +155,7 @@ python3 tools/infer/predict_det.py --image_dir="./doc/imgs/2.jpg" --det_model_di
First, convert the model saved during DB text detection training into an inference model. Taking the model trained on the ICDAR2015 English dataset with the Resnet50_vd backbone as an example ([model download link](link)), you can run the following command to convert it:

```
-# -c Set the yml configuration file of the training algorithm. You need to set `Global.load_static_weights=False` and write the path of the training model to be converted under the `Global.pretrained_model` field of the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the path where the converted model will be saved.

-python3 tools/export_model.py -c configs/det/det_r50_vd_db.yml -o "./inference/det_db"
+python3 tools/export_model.py -c configs/det/det_r50_vd_db.yml -o Global.checkpoints=./det_r50_vd_db_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_db
```

For DB text detection model inference, you can run the following command:
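A minimal sketch of that command, assuming the detection model directory exported above (the image path is taken from the hunk header below and is only an example):
```
python3 tools/infer/predict_det.py --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_db/"
```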
@@ -169,10 +176,7 @@ python3 tools/infer/predict_det.py --image_dir="./doc/imgs_en/img_10.jpg" --det_
First, convert the model saved during EAST text detection training into an inference model. Taking the model trained on the ICDAR2015 English dataset with the Resnet50_vd backbone as an example ([model download link](link)), you can run the following command to convert it:

```
-# -c Set the yml configuration file of the training algorithm. You need to set `Global.load_static_weights=False` and write the path of the training model to be converted under the `Global.pretrained_model` field of the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the path where the converted model will be saved.

-python3 tools/export_model.py -c configs/det/det_r50_vd_east.yml -o Global.checkpoints="./models/det_r50_vd_east/best_accuracy" Global.save_inference_dir="./inference/det_east"
+python3 tools/export_model.py -c configs/det/det_r50_vd_east.yml -o Global.checkpoints=./det_r50_vd_east_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_east
```

**For EAST text detection model inference, you need to set the parameter `--det_algorithm="EAST"`**; you can run the following command:
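A minimal sketch of that command, assuming the EAST model directory exported above (image path is illustrative):
```
python3 tools/infer/predict_det.py --det_algorithm="EAST" --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_east/"
```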
@@ -192,10 +196,8 @@ python3 tools/infer/predict_det.py --det_algorithm="EAST" --image_dir="./doc/img
#### (1). Quadrangle text detection model (ICDAR2015)
First, convert the model saved during SAST text detection training into an inference model. Taking the model trained on the ICDAR2015 English dataset with the Resnet50_vd backbone as an example ([model download link](link)), you can run the following command to convert it:
```
-# -c Set the yml configuration file of the training algorithm. You need to set `Global.load_static_weights=False` and write the path of the training model to be converted under the `Global.pretrained_model` field of the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the path where the converted model will be saved.
+python3 tools/export_model.py -c configs/det/det_r50_vd_sast_icdar15.yml -o Global.checkpoints=./det_r50_vd_sast_icdar15_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_sast_ic15

-python3 tools/export_model.py -c configs/det/det_r50_vd_sast_icdar15.yml -o "./inference/det_sast_ic15"
```
**For SAST text detection model inference, you need to set the parameter `--det_algorithm="SAST"`**; you can run the following command:
```
@@ -209,10 +211,8 @@ python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/img
First, convert the model saved during SAST text detection training into an inference model. Taking the model trained on the Total-Text English dataset with the Resnet50_vd backbone as an example ([model download link](link)), you can run the following command to convert it:

```
-# -c Set the yml configuration file of the training algorithm. You need to set `Global.load_static_weights=False` and write the path of the training model to be converted under the `Global.pretrained_model` field of the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the path where the converted model will be saved.
+python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Global.checkpoints=./det_r50_vd_sast_totaltext_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_sast_tt

-python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o "./inference/det_sast_tt"
```

**For SAST text detection model inference, you need to set the parameter `--det_algorithm="SAST"` and additionally the parameter `--det_sast_polygon=True`**; you can run the following command:
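A minimal sketch of that command, assuming the Total-Text model directory exported above (image path is illustrative):
```
python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True
```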
@@ -251,30 +251,30 @@ Predicts of ./doc/imgs_words/ch/word_4.jpg:['实力活力', 0.89552695]
<a name="基于CTC损失的识别模型推理"></a>
### 2. Text recognition model inference based on CTC loss

-Taking STAR-Net as an example, we introduce text recognition model inference based on CTC loss. CRNN and Rosetta are used in a similar way; there is no need to set the recognition algorithm parameter rec_algorithm.
+Taking CRNN as an example, we introduce text recognition model inference based on CTC loss. Rosetta is used in a similar way; there is no need to set the recognition algorithm parameter rec_algorithm.

-First, convert the model saved during STAR-Net text recognition training into an inference model. Taking the model trained with the Resnet34_vd backbone on the MJSynth and SynthText English synthetic text recognition datasets
+First, convert the model saved during CRNN text recognition training into an inference model. Taking the model trained with the Resnet34_vd backbone on the MJSynth and SynthText English synthetic text recognition datasets
as an example ([model download link](link)), you can run the following command to convert it:

```
-# -c Set the yml configuration file of the training algorithm. You need to set `Global.load_static_weights=False` and write the path of the training model to be converted under the `Global.pretrained_model` field of the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the path where the converted model will be saved.
+python3 tools/export_model.py -c configs/rec/rec_r34_vd_none_bilstm_ctc.yml -o Global.checkpoints=./rec_r34_vd_none_bilstm_ctc_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/rec_crnn

-python3 tools/export_model.py -c configs/rec/rec_r34_vd_tps_bilstm_ctc.yml -o "./inference/starnet"
```

For STAR-Net text recognition model inference, you can run the following command:

```
-python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./inference/starnet/" --rec_image_shape="3, 32, 100" --rec_char_type="en"
+python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./inference/rec_crnn/" --rec_image_shape="3, 32, 100" --rec_char_type="en"
```

<a name="基于Attention损失的识别模型推理"></a>
### 3. Text recognition model inference based on Attention loss

The recognition model based on Attention loss differs from the CTC-based one: the recognition algorithm parameter --rec_algorithm="RARE" additionally needs to be set.
For RARE text recognition model inference, you can run the following command:
```
-python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./inference/rare/" --rec_image_shape="3, 32, 100" --rec_char_type="en"
+python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./inference/rare/" --rec_image_shape="3, 32, 100" --rec_char_type="en" --rec_algorithm="RARE"

```

![](../imgs_words_en/word_336.png)
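For reference, the recognition result for this image is expected to have the following form (the confidence score is illustrative and will vary between model versions):
```
Predicts of ./doc/imgs_words_en/word_336.png:['super', 0.9999555]
```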
@@ -43,15 +43,18 @@ Next, we first introduce how to convert a trained model into an inference model,

Download the lightweight Chinese detection model:
```
-wget -P ./ch_lite/ {link} && tar xf ./ch_lite/{file} -C ./ch_lite/
+wget -P ./ch_lite/ {link} && tar xf ./ch_lite/ch_ppocr_mobile_v2.0_det_train.tar -C ./ch_lite/
```

The above model is a DB algorithm trained with MobileNetV3 as the backbone. To convert the trained model into an inference model, just run the following command:
```
-# -c Set the yml configuration file of the algorithm. You need to set `Global.load_static_weights=False`, and write the path of the training model to be converted under the `Global.pretrained_model` parameter in the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the address where the converted model will be saved.
+# -c Set the training algorithm yml configuration file
+# -o Set optional parameters
+# The Global.checkpoints parameter sets the path of the training model to be converted, without the file suffix .pdmodel, .pdopt or .pdparams.
+# Global.load_static_weights needs to be set to False.
+# The Global.save_inference_dir parameter sets the path where the converted model will be saved.

-python3 tools/export_model.py -c configs/det/det_mv3_db_v1.1.yml -o ./inference/det_db/
+python3 tools/export_model.py -c configs/det/ch_ppocr_v2.0/ch_det_mv3_db_v2.0.yml -o Global.checkpoints=./ch_lite/ch_ppocr_mobile_v2.0_det_train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_db/
```

When converting to an inference model, the configuration file used is the same as the one used during training. In addition, you also need to set the `Global.checkpoints` parameter in the configuration file.
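Note that the value of `Global.checkpoints` is a path prefix, not a full file name; Paddle appends the suffixes itself. An illustration, assuming the directory layout of the downloaded training model:
```
# Global.checkpoints=./ch_lite/ch_ppocr_mobile_v2.0_det_train/best_accuracy refers to files such as:
./ch_lite/ch_ppocr_mobile_v2.0_det_train/best_accuracy.pdparams   # model weights
./ch_lite/ch_ppocr_mobile_v2.0_det_train/best_accuracy.pdopt      # optimizer state
```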
@@ -68,15 +71,18 @@ inference/det_db/

Download the lightweight Chinese recognition model:
```
-wget -P ./ch_lite/ {link} && tar xf ./ch_lite/{file} -C ./ch_lite/
+wget -P ./ch_lite/ {link} && tar xf ./ch_lite/ch_ppocr_mobile_v2.0_rec_train.tar -C ./ch_lite/
```

The recognition model is converted into an inference model in the same way as the detection model, as follows:
```
-# -c Set the yml configuration file of the algorithm. You need to set `Global.load_static_weights=False`, and write the path of the training model to be converted under the `Global.pretrained_model` parameter in the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the address where the converted model will be saved.
+# -c Set the training algorithm yml configuration file
+# -o Set optional parameters
+# The Global.checkpoints parameter sets the path of the training model to be converted, without the file suffix .pdmodel, .pdopt or .pdparams.
+# Global.load_static_weights needs to be set to False.
+# The Global.save_inference_dir parameter sets the path where the converted model will be saved.

-python3 tools/export_model.py -c configs/cls/cls_mv3.yml -o ./inference/cls/
+python3 tools/export_model.py -c configs/rec/ch_ppocr_v2.0/rec_chinese_lite_train_v2.0.yml -o Global.checkpoints=./ch_lite/ch_ppocr_mobile_v2.0_rec_train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/rec_crnn/
```

If you have a model trained on your own dataset with a different dictionary file, please make sure that you modify `character_dict_path` in the configuration file to your dictionary file path.
@@ -94,15 +100,18 @@ inference/det_db/

Download the angle classification model:
```
-wget -P ./ch_lite/ {link} && tar xf ./ch_lite/{file} -C ./ch_lite/
+wget -P ./ch_lite/ {link} && tar xf ./ch_lite/ch_ppocr_mobile_v2.0_cls_train.tar -C ./ch_lite/
```

The angle classification model is converted into an inference model in the same way as the detection model, as follows:
```
-# -c Set the yml configuration file of the algorithm. You need to set `Global.load_static_weights=False`, and write the path of the training model to be converted under the `Global.pretrained_model` parameter in the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the address where the converted model will be saved.
+# -c Set the training algorithm yml configuration file
+# -o Set optional parameters
+# The Global.checkpoints parameter sets the path of the training model to be converted, without the file suffix .pdmodel, .pdopt or .pdparams.
+# Global.load_static_weights needs to be set to False.
+# The Global.save_inference_dir parameter sets the path where the converted model will be saved.

-python3 tools/export_model.py -c configs/cls/cls_mv3.yml -o ./inference/cls/
+python3 tools/export_model.py -c configs/cls/cls_mv3.yml -o Global.checkpoints=./ch_lite/ch_ppocr_mobile_v2.0_cls_train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/cls/
```

After the conversion is successful, there are three files in the directory:
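With the detection, recognition and angle classification models all exported, they can also be chained end to end. A minimal sketch using the concatenation predictor, with flags assumed from `tools/infer/predict_system.py` and an illustrative image path:
```
python3 tools/infer/predict_system.py --image_dir="./doc/imgs/2.jpg" --det_model_dir="./inference/det_db/" --rec_model_dir="./inference/rec_crnn/" --use_angle_cls=true --cls_model_dir="./inference/cls/"
```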
@@ -152,10 +161,7 @@ python3 tools/infer/predict_det.py --image_dir="./doc/imgs/2.jpg" --det_model_di
First, convert the model saved in the DB text detection training process into an inference model. Taking the model based on the Resnet50_vd backbone network and trained on the ICDAR2015 English dataset as an example ([model download link](link)), you can use the following command to convert:

```
-# -c Set the yml configuration file of the algorithm. You need to set `Global.load_static_weights=False`, and write the path of the training model to be converted under the `Global.pretrained_model` parameter in the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the address where the converted model will be saved.

-python3 tools/export_model.py -c configs/det/det_r50_vd_db.yml -o "./inference/det_db"
+python3 tools/export_model.py -c configs/det/det_r50_vd_db.yml -o Global.checkpoints=./det_r50_vd_db_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_db
```

For DB text detection model inference, you can execute the following command:
@@ -176,10 +182,7 @@ The visualized text detection results are saved to the `./inference_results` fol
First, convert the model saved in the EAST text detection training process into an inference model. Taking the model based on the Resnet50_vd backbone network and trained on the ICDAR2015 English dataset as an example ([model download link](link)), you can use the following command to convert:

```
-# -c Set the yml configuration file of the algorithm. You need to set `Global.load_static_weights=False`, and write the path of the training model to be converted under the `Global.pretrained_model` parameter in the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the address where the converted model will be saved.

-python3 tools/export_model.py -c configs/det/det_r50_vd_east.yml -o Global.checkpoints="./models/det_r50_vd_east/best_accuracy" Global.save_inference_dir="./inference/det_east"
+python3 tools/export_model.py -c configs/det/det_r50_vd_east.yml -o Global.checkpoints=./det_r50_vd_east_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_east
```
**For EAST text detection model inference, you need to set the parameter `--det_algorithm="EAST"`**, then run the following command:
@@ -200,10 +203,7 @@ The visualized text detection results are saved to the `./inference_results` fol
First, convert the model saved in the SAST text detection training process into an inference model. Taking the model based on the Resnet50_vd backbone network and trained on the ICDAR2015 English dataset as an example ([model download link](link)), you can use the following command to convert:

```
-# -c Set the yml configuration file of the algorithm. You need to set `Global.load_static_weights=False`, and write the path of the training model to be converted under the `Global.pretrained_model` parameter in the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the address where the converted model will be saved.

-python3 tools/export_model.py -c configs/det/det_r50_vd_sast_icdar15.yml -o "./inference/det_sast_ic15"
+python3 tools/export_model.py -c configs/det/det_r50_vd_sast_icdar15.yml -o Global.checkpoints=./det_r50_vd_sast_icdar15_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_sast_ic15
```

**For SAST quadrangle text detection model inference, you need to set the parameter `--det_algorithm="SAST"`**, then run the following command:
@@ -220,10 +220,7 @@ The visualized text detection results are saved to the `./inference_results` fol
First, convert the model saved in the SAST text detection training process into an inference model. Taking the model based on the Resnet50_vd backbone network and trained on the Total-Text English dataset as an example ([model download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)), you can use the following command to convert:

```
-# -c Set the yml configuration file of the algorithm. You need to set `Global.load_static_weights=False`, and write the path of the training model to be converted under the `Global.pretrained_model` parameter in the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the address where the converted model will be saved.

-python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Global.checkpoints="./models/sast_r50_vd_total_text/best_accuracy" Global.save_inference_dir="./inference/det_sast_tt"
+python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Global.checkpoints=./det_r50_vd_sast_totaltext_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/det_sast_tt
```

**For SAST curved text detection model inference, you need to set the parameter `--det_algorithm="SAST"` and `--det_sast_polygon=True`**, then run the following command:
@@ -263,18 +260,15 @@ Predicts of ./doc/imgs_words/ch/word_4.jpg:['实力活力', 0.89552695]
<a name="CTC-BASED_RECOGNITION"></a>
### 2. CTC-BASED TEXT RECOGNITION MODEL INFERENCE

-Taking STAR-Net as an example, we introduce the recognition model inference based on CTC loss. CRNN and Rosetta are used in a similar way, by setting the recognition algorithm parameter `rec_algorithm`.
+Taking CRNN as an example, we introduce the recognition model inference based on CTC loss. Rosetta and STAR-Net are used in a similar way; there is no need to set the recognition algorithm parameter rec_algorithm.

-First, convert the model saved in the STAR-Net text recognition training process into an inference model. Taking the model based on the Resnet34_vd backbone network, trained with MJSynth and SynthText (two English text recognition synthetic datasets), as an example ([model download address](link)), it can be converted as follows:
+First, convert the model saved in the CRNN text recognition training process into an inference model. Taking the model based on the Resnet34_vd backbone network, trained with MJSynth and SynthText (two English text recognition synthetic datasets), as an example ([model download address](link)), it can be converted as follows:

```
-# -c Set the yml configuration file of the algorithm. You need to set `Global.load_static_weights=False`, and write the path of the training model to be converted under the `Global.pretrained_model` parameter in the configuration file, without adding the file suffix .pdmodel, .pdopt or .pdparams.
-# -o Set the address where the converted model will be saved.

-python3 tools/export_model.py -c configs/rec/rec_r34_vd_tps_bilstm_ctc.yml -o "./inference/starnet"
+python3 tools/export_model.py -c configs/rec/rec_r34_vd_none_bilstm_ctc.yml -o Global.checkpoints=./rec_r34_vd_none_bilstm_ctc_v2.0.train/best_accuracy Global.load_static_weights=False Global.save_inference_dir=./inference/rec_crnn
```

-For STAR-Net text recognition model inference, execute the following command:
+For CRNN text recognition model inference, execute the following command:

```
-python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./inference/starnet/" --rec_image_shape="3, 32, 100" --rec_char_type="en"
@@ -284,7 +278,11 @@ python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png
### 3. ATTENTION-BASED TEXT RECOGNITION MODEL INFERENCE
![](../imgs_words_en/word_336.png)

The recognition model based on Attention loss is different from the CTC-based one, and the additional recognition algorithm parameter --rec_algorithm="RARE" needs to be set.
```bash
python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./inference/rare/" --rec_image_shape="3, 32, 100" --rec_char_type="en" --rec_algorithm="RARE"
```
After executing the command, the recognition result of the above image is as follows:

Predicts of ./doc/imgs_words_en/word_336.png:['super', 0.9999555]