Check error on the ONNX model converted from YOLOv5

1. The YOLOv5-to-ONNX export code is as follows:

# ONNX export
try:
    import onnx

    print('\nStarting ONNX export with onnx %s...' % onnx.__version__)
    f = opt.weights.replace('.pt', '.onnx')  # filename
    torch.onnx.export(model, img, f, verbose=False, opset_version=10, input_names=['data'],
                      output_names=['classes', 'boxes'] if y is None else ['output'])

    # Checks
    onnx_model = onnx.load(f)  # load onnx model
    onnx.checker.check_model(onnx_model)  # check onnx model
    # print(onnx.helper.printable_graph(onnx_model.graph))  # print a human readable model
    print('ONNX export success, saved as %s' % f)
except Exception as e:
    print('ONNX export failure: %s' % e)

The console log from running this is as follows:

2. Running the check on the server.

The output log is as follows:

2020-12-14 17:08:51,692 INFO hb_mapper_checker 63 Model type: onnx
2020-12-14 17:08:51,692 DEBUG hb_mapper_checker 64 march: bernoulli2
2020-12-14 17:08:51,692 INFO hb_mapper_checker 65 output file: ./yolov5_checker.log
2020-12-14 17:08:51,692 INFO hb_mapper_checker 70 input names []
2020-12-14 17:08:51,693 INFO hb_mapper_checker 71 input shapes {}
2020-12-14 17:08:51,693 INFO hb_mapper_checker 77 Begin model checking....
2020-12-14 17:08:51,693 INFO build 31 [Mon Dec 14 17:08:51 2020] Start to Horizon NN Model Convert.
2020-12-14 17:08:51,693 INFO build 129 The input parameter is not specified, convert with default parameters.
2020-12-14 17:08:51,693 INFO build 166 The hbdk parameter is not specified, and the submodel will be compiled with the default parameter.
2020-12-14 17:08:51,693 INFO build 116 HorizonNN version: 0.7.6
2020-12-14 17:08:51,694 INFO build 120 HBDK version: 3.12.9
2020-12-14 17:08:51,694 INFO build 31 [Mon Dec 14 17:08:51 2020] Start to parse the onnx model.
2020-12-14 17:08:51,901 INFO onnx_parser 101 ONNX model info:
ONNX IR version:  6
Opset version:    10
Input name:       data, [1, 3, 672, 672]
2020-12-14 17:08:52,157 INFO build 34 [Mon Dec 14 17:08:52 2020] End to parse the onnx model.
2020-12-14 17:08:52,159 INFO build 256 Model input names: ['data']
2020-12-14 17:08:55,376 INFO build 600 Saving the original float model: ./.hb_check/original_float_model.onnx.
2020-12-14 17:08:55,378 INFO build 31 [Mon Dec 14 17:08:55 2020] Start to optimize the model.
2020-12-14 17:08:55,934 INFO build 34 [Mon Dec 14 17:08:55 2020] End to optimize the model.
2020-12-14 17:08:58,626 INFO build 611 Saving the optimized model: ./.hb_check/optimized_float_model.onnx.
2020-12-14 17:08:58,626 INFO build 31 [Mon Dec 14 17:08:58 2020] Start to calibrate the model.
2020-12-14 17:08:58,936 INFO build 34 [Mon Dec 14 17:08:58 2020] End to calibrate the model.
2020-12-14 17:08:58,940 INFO build 31 [Mon Dec 14 17:08:58 2020] Start to quantize the model.
2020-12-14 17:09:00,718 INFO build 34 [Mon Dec 14 17:09:00 2020] End to quantize the model.
2020-12-14 17:09:07,718 INFO build 625 Saving the quantized model: ./.hb_check/quantized_model.onnx.
2020-12-14 17:09:07,718 INFO build 31 [Mon Dec 14 17:09:07 2020] Start to compile the model with march bernoulli2.
2020-12-14 17:09:08,713 INFO hybrid_build 111 Compile submodel: torch-jit-export_subgraph_0
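Note that the log above breaks off right after "Start to compile the model" with no matching "End" line, which is consistent with a crash during the compile stage. A small stdlib-only sketch (using an abridged excerpt of the log above) that pairs Start/End lines to locate the unfinished stage:

```python
import re

# Abridged excerpt of the hb_mapper_checker log above.
log = """\
2020-12-14 17:08:55,378 INFO build 31 [Mon Dec 14 17:08:55 2020] Start to optimize the model.
2020-12-14 17:08:55,934 INFO build 34 [Mon Dec 14 17:08:55 2020] End to optimize the model.
2020-12-14 17:08:58,626 INFO build 31 [Mon Dec 14 17:08:58 2020] Start to calibrate the model.
2020-12-14 17:08:58,936 INFO build 34 [Mon Dec 14 17:08:58 2020] End to calibrate the model.
2020-12-14 17:08:58,940 INFO build 31 [Mon Dec 14 17:08:58 2020] Start to quantize the model.
2020-12-14 17:09:00,718 INFO build 34 [Mon Dec 14 17:09:00 2020] End to quantize the model.
2020-12-14 17:09:07,718 INFO build 31 [Mon Dec 14 17:09:07 2020] Start to compile the model with march bernoulli2.
"""

started, ended = set(), set()
for line in log.splitlines():
    m = re.search(r'(Start|End) to (\w+) the model', line)
    if m:
        (started if m.group(1) == 'Start' else ended).add(m.group(2))

# A stage that started but never ended is where the process died.
unfinished = started - ended
print('stages without a matching End:', sorted(unfinished))
```

Running this on the excerpt reports `compile` as the stage that never finished, which narrows the segmentation fault down to the hbdk compilation step.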

3. The build result is as follows:

How should this problem be solved?

Hi, please go to the resource downloads section and apply for a trial of version 1.1.18j; this coredump may be a known issue in your current version.

As for whether it is a known issue: please run step 3 (the build) through to completion and provide the log so we can make that judgment. Your current configuration file contains errors throughout, so we cannot tell directly.

The segmentation fault still occurs.

Could you send us your YOLOv5 configuration file in YAML format?

Did you apply for 1.1.18j previously? Version 1.1.18j includes a YAML configuration file for YOLOv5.