Shape currently only support FLOAT, INT8 and INT32 but gives INT64

1. Chip model: X3 Pi

2. OpenExplorer (天工开物) SDK version: horizon_xj3_open_explorer_v2.4.2_20221227

3. Problem area: model conversion <--> board-side deployment

4. Problem description

I am deploying a tracking algorithm named SiamGAT: https://github.com/ohhhyeahhh/SiamGAT

I have successfully converted the network to an ONNX model, which is available at:

Link: https://pan.baidu.com/s/1rcSpzW62soM3_SDxJXPDEg (extraction code: x4s9)

However, when I use "hb_mapper" to check the model, an error appears:

hb_mapper checker --model-type onnx --march bernoulli2 --model SiamGAT.onnx --input-shape input1 1x3x127x127 --input-shape input2 1x3x287x287

2023-08-14 11:13:46,192 file: tool_utils.py func: tool_utils line No: 131 Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/horizon_tc_ui/hb_mapper_checker.py", line 164, in run
    {"hb_mapper_version": version})
  File "/usr/local/lib/python3.6/site-packages/horizon_tc_ui/hbdtort/onnx2horizonrt.py", line 4172, in build_runtime_model_wrapper
    make_nodes(onnx_graph, runtime_graph)
  File "/usr/local/lib/python3.6/site-packages/horizon_tc_ui/hbdtort/onnx2horizonrt.py", line 3594, in make_nodes
    onnx_graph)
  File "/usr/local/lib/python3.6/site-packages/horizon_tc_ui/hbdtort/onnx2horizonrt.py", line 1605, in convert_shape
    op_data_type_dict[elem_type])
ValueError: Shape currently only support FLOAT, INT8 and INT32 but gives INT64

hb_mapper_checker.log is in the attached file.

I have located the problem in the ONNX model:

The problem appears after input3 (a bbox reshaped to 1x4x1x1) enters the network. The bbox is used as an index to extract an ROI from a tensor:

stride = cfg.BACKBONE.STRIDE  # 8
offset = cfg.BACKBONE.OFFSET  # 45

mask = torch.zeros(1, 1, 13, 13).float()
# roi = torch.round((bbox + 1 - offset + stride / 2) / stride - 1)
# roi = torch.round((bbox + 1 - offset + stride / 2) / stride - 1).int()
roi = torch.round((bbox + 1 - offset + stride / 2) / stride - 1).long()

# mask[0, :,
# min(12, max(0, roi[0][1])): max(0, min(roi[0][3], 12)),
# min(12, max(0, roi[0][0])): max(0, min(roi[0][2], 12))
# ] = 1

mask[0, :,
max(0, roi[0][1]): (min(roi[0][3], 12)),
max(0, roi[0][0]): (min(roi[0][2], 12))
] = 1

# mask[0, :,
# max(0, int(roi[0][1])): (min(int(roi[0][3]), 12)),
# max(0, int(roi[0][0])): (min(int(roi[0][2]), 12))
# ] = 1.0

# mask[0, :,
# max(0, roi[0][1].item()): (min(roi[0][3].item(), 12)),
# max(0, roi[0][0].item()): (min(roi[0][2].item(), 12))
# ] = 1.0

x *= mask
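One possible way to sidestep the dtype problem entirely (a sketch only, not verified against the toolchain; the function name make_mask and the (1, 4) bbox shape are assumptions for illustration): build the mask by comparing float coordinate grids against the roi bounds instead of slicing with tensor indices, so the traced graph should contain only elementwise float comparisons and never an integer-indexed Slice.

```python
import torch

def make_mask(bbox, stride=8, offset=45, size=13):
    # roi stays float32: no .long()/.int() cast is needed anywhere
    roi = torch.round((bbox + 1 - offset + stride / 2) / stride - 1)
    # float coordinate grids over the size x size mask
    ys = torch.arange(size, dtype=torch.float32).view(1, 1, size, 1)
    xs = torch.arange(size, dtype=torch.float32).view(1, 1, 1, size)
    # clamp the bounds the same way the slice version does
    x0 = torch.clamp(roi[0, 0], min=0)
    y0 = torch.clamp(roi[0, 1], min=0)
    x1 = torch.clamp(roi[0, 2], max=size - 1)
    y1 = torch.clamp(roi[0, 3], max=size - 1)
    # inside-the-box test replaces mask[..., y0:y1, x0:x1] = 1
    mask = ((ys >= y0) & (ys < y1) & (xs >= x0) & (xs < x1)).float()
    return mask
```

The result can then be consumed exactly as before with x *= mask; whether the exported comparison ops quantize acceptably would still need to be checked.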

The "int64" error occurs because I use a "long" tensor:

roi = torch.round((bbox + 1 - offset + stride / 2) / stride - 1).long()

I have tried changing ".long()" to ".int()", but the ONNX model cannot be generated that way, because:

to be used as an index, roi must be "int64" in ONNX

The model in which the error appears is provided in the attached file.

hb_mapper_checker.log model_builder_deploy.py

Hi, have you tried converting roi.dtype to "int32" or "float32" to see whether it works?

One more point needs to be confirmed about "I have tried to change '.long()' to '.int()', but the ONNX model cannot be generated in this way because to perform as an index, roi must be 'int64' in ONNX": does the data type of the variable "roi" affect the export of the network to an ONNX model?

If I change ".long()" to ".int()", i.e.

roi = torch.round((bbox + 1 - offset + stride / 2) / stride - 1).int()

the ONNX model cannot be generated, because of:

ERROR [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (Tind) bound to different types (tensor(int64) and tensor(int32) in node (/backbone/Slice_2).

ERROR *** ERROR-OCCUR-DURING {horizon_nn.check_onnx} ***

Thanks for your response!

1. I have tried converting roi.dtype to "int32" or "float32", but an error appears:

TypeError: only integer tensors of a single element can be converted to an index

This is because our code uses elements of roi as indices to perform a "slice" operation on the tensor "mask":

mask[0, :, max(0, roi[0][1]): (min(roi[0][3], 12)), max(0, roi[0][0]): (min(roi[0][2], 12))] = 1
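For reference, this TypeError can be reproduced in isolation: PyTorch rejects float tensors as slice bounds even outside of ONNX export (the roi values below are hypothetical):

```python
import torch

mask = torch.zeros(1, 1, 13, 13)
roi = torch.tensor([[2.0, 3.0, 9.0, 10.0]])  # float32 roi, as suggested

try:
    # float tensors cannot be used as slice bounds
    mask[0, :, roi[0][1]:roi[0][3], roi[0][0]:roi[0][2]] = 1
except TypeError as err:
    print(err)  # the same "only integer tensors..." message reported above
```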

2. If I change ".long()" to ".int()", the network exports to an ONNX model successfully.

However, when using hb_mapper to check this ONNX model (provided in [1]), the AI toolchain raises the type error mentioned above:

ERROR [ONNXRuntimeError] : 1 : FAIL : Type Error: Type parameter (Tind) bound to different types (tensor(int64) and tensor(int32) in node (/backbone/Slice_2).

ERROR *** ERROR-OCCUR-DURING {horizon_nn.check_onnx} ***

[1] Link: Baidu Netdisk (extraction code: yxwy)

As shown in the figure below, int64 is not supported.
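A workaround worth considering for this class of error (a sketch under assumptions: Backbone and bbox_to_mask are hypothetical stand-ins for the real modules, and bbox is a (1, 4) float tensor): compute the bbox-to-mask step on the host and feed the mask to the model as an extra input, so the exported ONNX graph contains only a float multiply and never sees the int64 Slice at all.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    # mask is now a model input, so the exported graph is a plain
    # float multiply with no integer slicing inside it
    def forward(self, x, mask):
        return x * mask

def bbox_to_mask(bbox, stride=8, offset=45, size=13):
    # runs on the host in plain Python, so .long() is harmless here
    roi = torch.round((bbox + 1 - offset + stride / 2) / stride - 1).long()
    mask = torch.zeros(1, 1, size, size)
    mask[0, :,
         max(0, roi[0][1]): min(roi[0][3], size - 1),
         max(0, roi[0][0]): min(roi[0][2], size - 1)] = 1
    return mask
```

The cost is one extra input tensor at inference time, but the bbox arithmetic never has to quantize or pass the checker at all.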