X3 inference of yolov5n.bin reports an error

Hello, please describe the problem you encountered in detail:

  1. System software version: x3pi_ubuntu_v1.0.2
  2. Technical area involved: onnx->bin
  3. Problem description: Python-side inference fails with an exception

```shell

[C][5004][08-08][11:09:09:639][configuration.cpp:51][EasyDNN]EasyDNN version: 0.3.5
[BPU_PLAT]BPU Platform Version(1.3.1)!
[HBRT] set log level as 0. version = 3.13.27
ioctl read error, ret = -1 error = 121
keros_i2c_read failed
ioctl write error, ret = -1 error = 121
keros_i2c_write failed
ioctl write error, ret = -1 error = 121
keros_i2c_write failed
ioctl read error, ret = -1 error = 121
keros_i2c_read failed
ioctl read error, ret = -1 error = 121
keros_i2c_read failed
ioctl read error, ret = -1 error = 121
keros_i2c_read failed
ioctl read error, ret = -1 error = 121
keros_i2c_read failed
ioctl read error, ret = -1 error = 121
keros_i2c_read failed
[000:000] (keros_util.cpp:99): keros_authentication failed, ret = 0
[000:000] (configuration.cpp:147): Keros key init failed.
[DNN] Runtime version = 1.8.4_(3.13.27 HBRT)
[HorizonRT] The model builder version = 1.8.7
tensor type: NV12_SEPARATE
data type: uint8
layout: NCHW
shape: (1, 3, 640, 640)
hbrtErrorInvalidBatchCount
file=f3a31eef122a41bb11cce71188f99333fc1ffdf9
3171
hbrtErrorIllegalHBMHandle
file=7b0c25e023bb537c72d2b9349b00937143a7e5f8
354
[000:880] (hbm_exec_plan.cpp:758): [HBRT ERROR] hbrtErrorInvalidBatchCount
[000:880] (multi_model_task.cpp:1229): RiContinue failed
[000:880] (cpu_schedule.cpp:72): Generate funccall failed, task:Task(task_id:0, core_id:2, priority:0,estimate execute time:66713, desc: [yolov5n_yaml], time_points: [7228472303,7228472399,0])
hbrtErrorRiIsNotInUse
file=f3a31eef122a41bb11cce71188f99333fc1ffdf9
4135
[000:880] (hbm_exec_plan.cpp:397): [HBRT ERROR] hbrtErrorRiIsNotInUse
[000:880] (multi_model_task.cpp:1288): RiDestroy failed
【WaitInferDone failed】
Traceback (most recent call last):
  File "python_infer.py", line 71, in <module>
    outputs = models[0].forward(nv12)
RuntimeError: Run model failed!
【INFO】: Offload model "yolov5n_yaml" Successfully.

```

Model conversion reported an error at one point, but the conversion still completed successfully.

```shell

2022-08-08 11:21:49.630702471 [E:onnxruntime:, sequential_executor.cc:183 Execute] Non-zero status code returned while running Reshape node. Name:'Reshape_232' Status Message: /home/jenkins/workspace/model_convert/onnxruntime/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<int64_t>&) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{8,255,20,20}, requested shape:{1,3,85,20,20}

```
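One way to see whether the exported ONNX itself already carries the unexpected batch dimension is to run it straight through onnxruntime and print the shapes it reports. A minimal sketch, where the filename yolov5n.onnx and the float32 input are assumptions:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical filename; substitute the ONNX that was fed to the converter.
sess = ort.InferenceSession("yolov5n.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]
print("input:", inp.name, inp.shape)

# Pin any dynamic (string-named) dims to 1 and run a dummy batch.
# float32 input is assumed, as is typical for a yolov5 ONNX export.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

outputs = sess.run(None, {inp.name: dummy})
for meta, out in zip(sess.get_outputs(), outputs):
    print("output:", meta.name, out.shape)
```

If the declared input shape already shows a batch of 8, the problem was introduced at export time rather than during conversion.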

Hello, judging from the error reported during model conversion, the model's output size cannot be reshaped to the size the post-processing stage expects: the reshape input {8, 255, 20, 20} carries a batch of 8, while the requested shape {1, 3, 85, 20, 20} assumes a batch of 1 (255 = 3 × 85, so the remaining dimensions match). This is also consistent with the hbrtErrorInvalidBatchCount reported at runtime against the bin model's (1, 3, 640, 640) input. Please check whether the ONNX model's output sizes match those of the generated bin model.

The ONNX model's output sizes can be viewed with the netron tool, and the bin model's output sizes can be queried on the board with the hrt_model_exec tool provided by Horizon, used as follows:

```shell
hrt_model_exec model_info --model_file xxx.bin
```
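The ONNX side can also be checked programmatically instead of through netron. A minimal sketch using the onnx Python package (the filename yolov5n.onnx is an assumption):

```python
import onnx

# Hypothetical filename; use the ONNX that was fed to the converter.
model = onnx.load("yolov5n.onnx")
for out in model.graph.output:
    # Each dim is either a fixed integer (dim_value) or a symbolic
    # name (dim_param) for dynamic axes.
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)
```

Compare these shapes against the output section printed by hrt_model_exec model_info; a mismatch in the batch dimension would confirm the diagnosis above.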

I've also run into a model-inference problem; could I ask for some help?

```shell
[DNN] Runtime version = 1.9.5_(3.14.5 HBRT)
Load model to DDR cost 130.209ms.
I1223 21:48:58.568744  4317 main.cpp:1044] get model handle success
I1223 21:48:58.568821  4317 main.cpp:1665] get model input count success
I1223 21:48:58.568979  4317 main.cpp:1672] prepare input tensor success!
I1223 21:48:58.569012  4317 main.cpp:1678] get model output count success
Segmentation fault
sunrise@ubuntu:~$
```