This commit is contained in:
YukSing12 2020-12-08 16:03:50 +08:00
parent 70f3414622
commit 8f5f35e643
2 changed files with 2 additions and 2 deletions


@ -58,4 +58,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
### 5. Deploy the quantized model
The parameters of the quantized model exported in the above steps are still stored in FP32 precision, but their numerical range is int8. The exported model can be converted with PaddleLite's `opt` model conversion tool.
For quantized model deployment, refer to [Mobile deployment](../lite/readme.md)
For quantized model deployment, refer to [Mobile deployment](../../lite/readme.md)


@ -65,4 +65,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
The parameters of the quantized model exported in the above steps are still stored in FP32 precision, but their numerical range is int8.
The exported model can be converted with PaddleLite's `opt` tool.
For quantized model deployment, please refer to [Mobile deployment](../lite/readme_en.md)
For quantized model deployment, please refer to [Mobile deployment](../../lite/readme_en.md)
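The conversion step both hunks mention can be sketched as a single `opt` invocation. This is a minimal illustration, not part of the commit: the input/output paths are hypothetical, and it assumes the Paddle-Lite `opt` binary is already installed (e.g. via `pip install paddlelite`).

```shell
# Sketch of converting the exported quantized model with Paddle-Lite's opt tool.
# Paths below are placeholders; adjust them to your export directory.
if ! command -v opt >/dev/null 2>&1; then
  # Guard so the sketch degrades gracefully when opt is not installed.
  echo "opt not found: install it with 'pip install paddlelite'"
else
  opt --model_file=./quant_inference_model/model \
      --param_file=./quant_inference_model/params \
      --optimize_out_type=naive_buffer \
      --optimize_out=./quant_opt_model \
      --valid_targets=arm
fi
```

The resulting `.nb` model is what the mobile deployment guide linked above consumes on-device.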