Merge pull request #3372 from WenmuZhou/fx_bug
add /path/to to gen_label doc
commit 75aa6f2f10
@@ -18,6 +18,7 @@ PaddleOCR model deployment.

 * First, download the package for source-code compilation in a Linux environment from the opencv official website. Taking opencv3.4.7 as an example, the download commands are as follows.

 ```
+cd deploy/cpp_infer
 wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz
 tar -xf 3.4.7.tar.gz
 ```

@@ -18,6 +18,7 @@ PaddleOCR model deployment.

 * First, download the package for source-code compilation in a Linux environment from the opencv official website. Taking opencv3.4.7 as an example, the download commands are as follows.

 ```
+cd deploy/cpp_infer
 wget https://github.com/opencv/opencv/archive/3.4.7.tar.gz
 tar -xf 3.4.7.tar.gz
 ```

@@ -29,6 +29,7 @@ deploy/hubserving/ocr_system/

 ### 1. Prepare the environment

 ```shell
 # Install paddlehub
+# paddlehub requires python>3.6.2
 pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
 ```

@@ -30,6 +30,7 @@ The following steps take the 2-stage series service as an example. If only the d

 ### 1. Prepare the environment

 ```shell
 # Install paddlehub
+# python>3.6.2 is required by paddlehub
 pip3 install paddlehub==2.1.0 --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
 ```

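Because installing into an interpreter older than the stated requirement fails in confusing ways, a quick pre-install check can help. This is only a sketch, not part of the PaddleOCR docs; it assumes `python3` is the interpreter that `pip3` targets:

```shell
# Sketch: verify the interpreter meets paddlehub's python>3.6.2 requirement
# before installing (assumption: python3 is the interpreter pip3 will use).
python3 -c 'import sys; ok = sys.version_info[:3] > (3, 6, 2); print("python ok" if ok else "python too old")'
```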
@@ -18,9 +18,9 @@ PaddleOCR also provides a data format conversion script, which can convert the official labels into the supported

 ```
 # Convert the label file downloaded from the official website into train_icdar2015_label.txt
-python gen_label.py --mode="det" --root_path="icdar_c4_train_imgs/" \
-                    --input_path="ch4_training_localization_transcription_gt" \
-                    --output_label="train_icdar2015_label.txt"
+python gen_label.py --mode="det" --root_path="/path/to/icdar_c4_train_imgs/" \
+                    --input_path="/path/to/ch4_training_localization_transcription_gt" \
+                    --output_label="/path/to/train_icdar2015_label.txt"
 ```

 After decompressing the dataset and downloading the label file, PaddleOCR/train_data/ contains two folders and two files, namely:

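For reference, each line of the generated detection label file pairs an image path with a JSON list of annotations, separated by a tab, in PaddleOCR's detection label format. A minimal parsing sketch (the sample line is illustrative, not taken from the dataset):

```python
# Parse one line of a PaddleOCR detection label file: an image path, a tab,
# then a JSON list of {"transcription", "points"} annotation dicts.
import json

line = ('icdar_c4_train_imgs/img_1.jpg\t'
        '[{"transcription": "Genaxis Theatre", '
        '"points": [[377, 117], [463, 117], [465, 130], [378, 130]]}]')
img_path, annotations = line.split('\t', 1)
boxes = json.loads(annotations)
print(img_path)                   # icdar_c4_train_imgs/img_1.jpg
print(boxes[0]["transcription"])  # Genaxis Theatre
print(len(boxes[0]["points"]))    # 4
```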
@@ -221,7 +221,7 @@ python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Gl

 ```

-**For SAST text detection model inference, you need to set the parameter `--det_algorithm="SAST"` and also add the parameter `--det_sast_polygon=True`.** You can run the following command:
+For SAST text detection model inference, you need to set the parameter `--det_algorithm="SAST"` and also add the parameter `--det_sast_polygon=True`. You can run the following command:
 ```
 python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img623.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True
 ```

@@ -230,7 +230,7 @@ First, convert the model saved in the SAST text detection training process into

 python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Global.pretrained_model=./det_r50_vd_sast_totaltext_v2.0_train/best_accuracy Global.save_inference_dir=./inference/det_sast_tt
 ```

-**For SAST curved text detection model inference, you need to set the parameters `--det_algorithm="SAST"` and `--det_sast_polygon=True`**, then run the following command:
+For SAST curved text detection model inference, you need to set the parameters `--det_algorithm="SAST"` and `--det_sast_polygon=True`, then run the following command:

 ```
 python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img623.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True

@@ -31,7 +31,9 @@ def gen_det_label(root_path, input_dir, out_label):

     for label_file in os.listdir(input_dir):
         img_path = root_path + label_file[3:-4] + ".jpg"
         label = []
-        with open(os.path.join(input_dir, label_file), 'r') as f:
+        with open(
+                os.path.join(input_dir, label_file), 'r',
+                encoding='utf-8-sig') as f:
             for line in f.readlines():
                 tmp = line.strip("\n\r").replace("\xef\xbb\xbf",
                                                  "").split(',')

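The switch to `encoding='utf-8-sig'` matters because ICDAR ground-truth files typically begin with a UTF-8 byte-order mark: `'utf-8-sig'` strips it on read, while plain `'utf-8'` leaves it as `U+FEFF` glued to the first field. A small self-contained demonstration (file name and line content are made up for illustration):

```python
# Demonstrate why 'utf-8-sig' is used: it strips a leading UTF-8 BOM,
# while plain 'utf-8' keeps it as '\ufeff' at the start of the first line.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "gt_img_1.txt")
with open(path, "wb") as f:
    f.write(b"\xef\xbb\xbf377,117,463,117,465,130,378,130,Genaxis Theatre\n")

with open(path, "r", encoding="utf-8") as f:
    plain = f.readline()
with open(path, "r", encoding="utf-8-sig") as f:
    stripped = f.readline()

print(plain.startswith("\ufeff"))  # True: BOM survives as U+FEFF
print(stripped.startswith("377"))  # True: BOM removed
```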