1. Chip model: J3
2. OpenExplorer development kit (天工开物) version: XJ3_OE_1.15.2
3. Problem area: model conversion
4. Problem description: while converting the model with the sh 03_build.sh script, I ran into the following four issues.
4.1 The tool reads my model's output dimensions as [0, 0, 0, 0], yet the dimensions in the quantized ONNX model look correct. Does this affect the model conversion? The relevant log excerpt:
ONNX IR version: 7
Opset version: [11]
Producer: pytorch1.10
Domain: none
Input name: input, [1, 3, 512, 512]
Output name: output, [0, 0, 0, 0]
4.2 At the "Default calibration in progress" step, the log prints the following. Does this affect the model conversion?
Default calibration in progress: 0%| | 0/13 [00:00<?, ?it/s]2023-05-17 10:58:41.851822862 [E:onnxruntime:, sequential_executor.cc:183 Execute] Non-zero status code returned while running Resize node. Name:'Resize_140' Status Message: /home/jenkins/agent/workspace/model_convert/onnxruntime/onnxruntime/core/providers/cpu/tensor/upsample.h:299 void onnxruntime::UpsampleBase::ScalesValidation(const std::vector&, onnxruntime::UpsampleMode) const scales.size() == 2 || (scales.size() == 4 && scales[0] == 1 && scales[1] == 1) was false. 'Linear' mode and 'Cubic' mode only support 2-D inputs ('Bilinear', 'Bicubic') or 4-D inputs with the corresponding outermost 2 scale values being 1 in the Resize operator
Default calibration in progress: 0%| | 0/13 [00:10<?, ?it/s]
2023-05-17 10:58:41,852 INFO Above info is caused by batch mode infer and can be ignored
2023-05-17 10:58:41,852 INFO Reset batch_size=1 and execute calibration again…
Default calibration in progress: 100%|████████████████████████████████████| 100/100 [03:34<00:00, 2.14s/it]
2023-05-17 11:03:09,197 INFO Select max-percentile:percentile=0.99995 method.
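For reference, the condition the runtime complains about can be mirrored in a few lines (a sketch of the check quoted in the error message, not the toolchain's actual code): for 'linear'/'cubic' Resize, onnxruntime accepts either 2-D scales, or 4-D scales whose outermost two (batch and channel) factors are exactly 1. The batched calibration pass presumably violates this, which is why the tool resets batch_size=1 and says the error can be ignored.

```python
def linear_resize_scales_ok(scales):
    """Mirror the ScalesValidation condition from the error message:
    'linear'/'cubic' Resize needs 2-D scales, or 4-D scales with the
    outermost two (N, C) factors equal to 1."""
    return len(scales) == 2 or (
        len(scales) == 4 and scales[0] == 1 and scales[1] == 1
    )

print(linear_resize_scales_ok([1.0, 1.0, 2.0, 2.0]))  # True: per-sample upsample
print(linear_resize_scales_ok([2.0, 1.0, 2.0, 2.0]))  # False: batch factor != 1
```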
4.3 My model contains three Resize operators. The first two run on the BPU, but the last one shows as CPU. How can I get the last Resize to run on the BPU as well?
Conv_130 BPU id(0) HzSQuantizedConv 0.998633 142.221069
Resize_140 BPU id(0) HzQuantizedRoiResize 0.998626 180.959702
Concat_141 BPU id(0) Concat 0.997866 180.959702
Conv_142 BPU id(0) HzSQuantizedConv 0.997683 180.959702
Resize_152 BPU id(0) HzQuantizedRoiResize 0.997761 130.957626
Concat_153 BPU id(0) Concat 0.997822 130.957626
Conv_154 BPU id(0) HzSQuantizedConv 0.997123 130.957626
Conv_156 BPU id(0) HzSQuantizedConv 0.999196 122.873146
Resize_165 CPU -- Resize 0.999227
4.4 My ONNX model infers correctly, but after quantization, inference with the generated optimized_float_model.onnx is all wrong. Is this caused by issues 4.1 and 4.2, or by something else?
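To narrow 4.4 down, it may help to feed the same input through the original ONNX model and optimized_float_model.onnx with onnxruntime and quantify the divergence, e.g. with cosine similarity (a sketch; the file names, input name, and layout in the commented usage are assumptions):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two flattened output tensors;
    values near 1.0 mean the two models essentially agree."""
    a = np.asarray(a, dtype=np.float64).ravel()
    b = np.asarray(b, dtype=np.float64).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical usage with onnxruntime (not run here):
# import onnxruntime as ort
# x = np.random.rand(1, 3, 512, 512).astype(np.float32)
# y1 = ort.InferenceSession("original.onnx").run(None, {"input": x})[0]
# y2 = ort.InferenceSession("optimized_float_model.onnx").run(None, {"input": x})[0]
# print(cosine_similarity(y1, y2))
```

A similarity far below 1.0 at this float-only stage would point to a graph-transformation problem rather than quantization noise.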
Because of upload size limits, I have put the model on Baidu Netdisk:
Link: https://pan.baidu.com/s/1Ovk0570AVjUuwHycpxPFKg  Extraction code: errp