quantized.out fails when quantizing an OCR model #3154
Comments
Try adding --keepInputFormat=0 when converting to MNN?
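For reference, a conversion command with that flag might look like the sketch below. The file names follow the ones used in this thread, and the bizCode value is an arbitrary placeholder; check `./MNNConvert --help` to confirm the flags your build supports.

```bash
# Convert the ONNX model to MNN with the suggested flag.
# Flag availability can vary by MNN version; run ./MNNConvert --help to verify.
./MNNConvert -f ONNX --modelFile rec.onnx --MNNModel rec.mnn \
    --bizCode MNN --keepInputFormat=0
```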
@jxt1234 Thanks for the reply! The log from running ./quantized.out rec.mnn rec-quant.mnn imageInputConfig.json is as follows:
Can rec.mnn run directly? For example, can you test it with the MNNV2Basic tool?
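A minimal smoke test with that tool might look like this sketch; the trailing arguments (loop count, forward type, debug flag, thread count) follow the example pattern in MNN's tool docs, but their order and meaning can differ across versions, so run the tool with no arguments to print its usage first.

```bash
# Run the float model standalone to check shape inference before quantizing.
# Arguments here are assumptions: 10 loops, CPU forward type, no debug, 4 threads.
./MNNV2Basic.out rec.mnn 10 0 0 4
```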
Testing with the MNNV2Basic tool fails with an error; I normally run inference on this model through MNN's Python API without problems.
There is also another model that passes the MNNV2Basic test but still fails during quantization.
Judging from the log line about the unsupported unary op, your local checkout has not been updated and recompiled. Try updating and rebuilding first.
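An update-and-rebuild sequence might look like the sketch below, assuming an existing CMake build directory; the option names (MNN_BUILD_CONVERTER, MNN_BUILD_QUANTOOLS, MNN_BUILD_TOOLS) match recent MNN releases but are worth confirming against the CMakeLists of your checkout.

```bash
# Pull the latest MNN source and rebuild the converter and quantization tools.
git pull
cd build
cmake .. -DMNN_BUILD_CONVERTER=ON -DMNN_BUILD_QUANTOOLS=ON -DMNN_BUILD_TOOLS=ON
make -j8
```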
The model comes from https://paddlepaddle.github.io/PaddleOCR/main/ppocr/model_list.html#21
ch_PP-OCRv3_rec
The pipeline: convert the Paddle model to ONNX, then to an FP32 MNN model, then quantize it (a sketch of these steps follows below).
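A sketch of that pipeline, assuming the standard PaddleOCR inference-model directory name and the default pdmodel/pdiparams file names (adjust paths to your download):

```bash
# Paddle inference model -> ONNX
paddle2onnx --model_dir ch_PP-OCRv3_rec_infer \
    --model_filename inference.pdmodel \
    --params_filename inference.pdiparams \
    --save_file rec.onnx

# ONNX -> FP32 MNN
./MNNConvert -f ONNX --modelFile rec.onnx --MNNModel rec.mnn --bizCode MNN

# FP32 MNN -> INT8 MNN (offline quantization)
./quantized.out rec.mnn rec-quant.mnn imageInputConfig.json
```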
Taking the rec model as an example, running it fails in two ways:
1. Compute Shape Error for x___tr4conv2d_97.tmp_0. x___tr4conv2d_97.tmp_0 looks like a layout conversion (NCHW -> NC4HW4) at the first layer of the network. Is this ConvertTensor layer failing shape inference? Any ideas on how to narrow down and fix this?
2. The log prints Quantize model done!, but the output model is still 32-bit in size, not the specified 8-bit.
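For reference, a typical offline-quantization config passed to quantized.out might look like the sketch below. The field names follow MNN's documented quantization config; the calibration image path, input size, and normalization values here are placeholders, not the ones actually used in this issue.

```bash
# Write a sample quantization config. Values below are assumptions for
# illustration; replace them with the preprocessing your model expects.
cat > imageInputConfig.json <<'EOF'
{
    "format": "RGB",
    "mean": [127.5, 127.5, 127.5],
    "normal": [0.00784314, 0.00784314, 0.00784314],
    "width": 320,
    "height": 48,
    "path": "path/to/calibration/images/",
    "used_image_num": 100,
    "feature_quantize_method": "KL",
    "weight_quantize_method": "MAX"
}
EOF
```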
imageInputConfig.json
Model files:
rec.onnx.zip
rec.mnn.zip
Full run log
Platform (include target platform as well if cross-compiling):
Linux
GitHub Version:
3.0
Compiling Method: