Merge pull request #1350 from YukSing12/develop
Fix path in doc of quantization
commit 2e0a1e233c
@@ -58,4 +58,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
 ### 5. Deploy the quantized model
 
 In the quantized model exported by the steps above, the parameter precision is still FP32, but the numerical range of the parameters is int8; the exported model can be converted with PaddleLite's opt model conversion tool.
-For quantized model deployment, refer to [Mobile model deployment](../lite/readme.md)
+For quantized model deployment, refer to [Mobile model deployment](../../lite/readme.md)
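For context, the export command shown in the hunk header is truncated at `-o`. A minimal sketch of what a complete invocation might look like, assuming the standard PaddleOCR `Global.checkpoints` and `Global.save_inference_dir` overrides (the checkpoint and output paths here are illustrative, not taken from this PR):

```bash
# Hedged sketch: export the quantization-trained detection model to an
# inference model. Checkpoint and output paths are assumptions.
python deploy/slim/quantization/export_model.py \
    -c configs/det/det_mv3_db.yml \
    -o Global.checkpoints=./output/quant_model/best_accuracy \
       Global.save_inference_dir=./quant_inference_model
```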
@@ -65,4 +65,4 @@ python deploy/slim/quantization/export_model.py -c configs/det/det_mv3_db.yml -o
 The precision of the quantized model parameters exported by the above steps is still FP32, but their numerical range is int8.
 The exported model can be converted through the `opt` tool of PaddleLite.
 
-For quantized model deployment, please refer to [Mobile model deployment](../lite/readme_en.md)
+For quantized model deployment, please refer to [Mobile model deployment](../../lite/readme_en.md)
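The conversion step both hunks describe uses Paddle-Lite's `opt` tool. A minimal sketch, assuming the quantized inference model was exported to `./quant_inference_model` and that the `paddle_lite_opt` binary is installed (all paths are illustrative):

```bash
# Hedged sketch: convert the exported quantized model with Paddle-Lite's
# opt tool; input and output paths are assumptions.
paddle_lite_opt \
    --model_file=./quant_inference_model/model \
    --param_file=./quant_inference_model/params \
    --optimize_out_type=naive_buffer \
    --optimize_out=./quant_det_opt \
    --valid_targets=arm
```

The resulting `quant_det_opt.nb` file is what the mobile deployment guide linked in the diff consumes.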