C++ code: model deployment issue

Hello, please describe the problem you are encountering in detail; this will help us locate the issue quickly.

1. Chip model: XJ3

2. OpenExplorer (TianGongKaiWu) SDK version: horizon_xj3_open_explorer_v1.12.7_20220520

3. Problem area: on-board deployment

4. Detailed description of the problem

Error messages:

[HBRT] set log level as 0. version = 3.13.31

ioctl read error, ret = -1 error = 5

keros_i2c_read failed

ioctl write error, ret = -1 error = 5

keros_i2c_write failed

ioctl write error, ret = -1 error = 5

keros_i2c_write failed

ioctl read error, ret = -1 error = 5

keros_i2c_read failed

ioctl read error, ret = -1 error = 5

keros_i2c_read failed

ioctl read error, ret = -1 error = 5

keros_i2c_read failed

ioctl read error, ret = -1 error = 5

keros_i2c_read failed

ioctl read error, ret = -1 error = 5

keros_i2c_read failed

[DNN] (/home/users/daofu.zhang/release-sdk/horizonrtd/src/util/keros_util.cpp: 98) keros_authentication failed, ret = 0

[DNN] (/home/users/daofu.zhang/release-sdk/horizonrtd/src/util/configuration.cpp: 159) Keros key init failed.

[DNN] Runtime version = 1.9.3_(3.13.31 HBRT)

input_memSize = 1057

output_memSize = 0

[DNN] (/home/users/daofu.zhang/release-sdk/horizonrtd/src/hb_sys.cpp: 58) The alloced memory size should be greater than 0

hbSysAllocCachedMem failed, error code:-6000129output_memSize size = 0

[DNN] (/home/users/daofu.zhang/release-sdk/horizonrtd/src/hb_sys.cpp: 58) The alloced memory size should be greater than 0

hbSysAllocCachedMem failed, error code:-6000129output_memSize size = -1630512184

[DNN] (/home/users/daofu.zhang/release-sdk/horizonrtd/src/hb_sys.cpp: 62) The alloced memory size should be less than 2^31 - 4096

hbSysAllocCachedMem failed, error code:-6000129ModelInit success!

When running inference from the C++ code, what could cause the alignedByteSize returned by the hbDNNGetInputTensorProperties interface to be incorrect?

From the error messages, the requested allocation size does not satisfy the greater-than-zero requirement.

Suggestion 1: Can the model be run normally with the hrt_model_exec tool? (See the example commands after Suggestion 2.)

Suggestion 2: The OE and Docker versions you are using are quite old; we recommend updating both and trying again. Download link: https://developer.horizon.cc/forumDetail/136488103547258769. For X3 we recommend OE 2.6.2, and for J3, OE 1.16.2c.
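For Suggestion 1, a minimal sanity check could look like the commands below. The model file name is a placeholder for your own converted .bin, and the exact flags may differ slightly between hrt_model_exec versions:

    # Print model information, including per-input/output aligned byte sizes
    hrt_model_exec model_info --model_file=./your_model.bin

    # Run the model on the board with dummy input to confirm it can execute at all
    hrt_model_exec perf --model_file=./your_model.bin

If model_info reports sensible aligned byte sizes but your program still prints 0, the issue is more likely on the application side than in the converted model.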

The code follows the file ddk\samples\ai_toolchain\horizon_runtime_sample\code\00_quick_start\src\run_mobileNetV1_224x224.cc, as shown below:

// Excerpt; HB_CHECK_SUCCESS and modelFile are defined in the sample file referenced above.
hbPackedDNNHandle_t packed_dnn_handle;
hbDNNHandle_t dnn_handle;
const char **model_name_list;
int model_count = 0;
// Step1: get model handle
{
  HB_CHECK_SUCCESS(
      hbDNNInitializeFromFiles(&packed_dnn_handle, &modelFile, 1),
      "hbDNNInitializeFromFiles failed");
  HB_CHECK_SUCCESS(hbDNNGetModelNameList(
                       &model_name_list, &model_count, packed_dnn_handle),
                   "hbDNNGetModelNameList failed");
  HB_CHECK_SUCCESS(
      hbDNNGetModelHandle(&dnn_handle, packed_dnn_handle, model_name_list[0]),
      "hbDNNGetModelHandle failed");
}

std::vector<hbDNNTensor> input_tensors;
std::vector<hbDNNTensor> output_tensors;
int input_count = 0;
int output_count = 0;
// Step2: prepare input and output tensor
{
  HB_CHECK_SUCCESS(hbDNNGetInputCount(&input_count, dnn_handle),
                   "hbDNNGetInputCount failed");
  HB_CHECK_SUCCESS(hbDNNGetOutputCount(&output_count, dnn_handle),
                   "hbDNNGetOutputCount failed");
  input_tensors.resize(input_count);
  output_tensors.resize(output_count);
  PrepareTensor(input_tensors.data(), output_tensors.data(), dnn_handle);
}

int PrepareTensor(hbDNNTensor *input_tensor, hbDNNTensor *output_tensor,
                  hbDNNHandle_t dnn_handle) {
  int input_count = 0;
  int output_count = 0;
  hbDNNGetInputCount(&input_count, dnn_handle);
  hbDNNGetOutputCount(&output_count, dnn_handle);

  /** Tips:
   * For input memory size:
   * * input_memSize = input[i].properties.alignedByteSize
   * For output memory size:
   * * output_memSize = output[i].properties.alignedByteSize
   */
  hbDNNTensor *input = input_tensor;
  for (int i = 0; i < input_count; i++) {
    HB_CHECK_SUCCESS(
        hbDNNGetInputTensorProperties(&input[i].properties, dnn_handle, i),
        "hbDNNGetInputTensorProperties failed");
    int input_memSize = input[i].properties.alignedByteSize;
    printf("input_memSize = %d\n", input_memSize);
    HB_CHECK_SUCCESS(hbSysAllocCachedMem(&input[i].sysMem[0], input_memSize),
                     "hbSysAllocCachedMem failed");
    /** Tips:
     * For input tensor, aligned shape should always be equal to the real
     * shape of the user's data. If you are going to set your input data with
     * padding, this step is not necessary.
     */
    input[i].properties.alignedShape = input[i].properties.validShape;
  }

  hbDNNTensor *output = output_tensor;
  for (int i = 0; i < output_count; i++) {
    HB_CHECK_SUCCESS(
        hbDNNGetOutputTensorProperties(&output[i].properties, dnn_handle, i),
        "hbDNNGetOutputTensorProperties failed");
    int output_memSize = output[i].properties.alignedByteSize;
    printf("output_memSize size = %d\n", output_memSize);
    HB_CHECK_SUCCESS(hbSysAllocCachedMem(&output[i].sysMem[0], output_memSize),
                     "hbSysAllocCachedMem failed");
  }
  return 0;
}

The OE package and build environment I am using are shown in the screenshot above.

For this example, feel free to refer to the tutorial: https://developer.horizon.cc/forumDetail/174216099150358528, and the corresponding video: https://www.bilibili.com/video/BV1Xh411P73Z/?p=11&vd_source=fceccd8a9a5f97cd8f38261a3c288567

I commented out the post-processing part of run_mobileNetV1_224x224.cc in the sample and loaded my own converted model file; inference works fine there. But when I copy the same code into my own project, it fails. What could be the reason?

The build environment is the same.

Once the code is moved into your own project there are many more variables, so we recommend carefully comparing the code-level differences (and please also consider the two suggestions I made above). A defensive version of the allocation loop, sketched below, can help narrow down where the bad sizes first appear.
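This is only a sketch, assuming the same hbDNN API and the output / output_count / dnn_handle variables from the PrepareTensor snippet above; it checks both the return code and the reported alignedByteSize before allocating, so a wrong value is reported at its source instead of surfacing later inside hbSysAllocCachedMem:

    // Hypothetical defensive variant of the output-allocation loop in PrepareTensor.
    for (int i = 0; i < output_count; i++) {
      output[i] = hbDNNTensor{};  // start from a zeroed tensor struct
      int ret = hbDNNGetOutputTensorProperties(&output[i].properties, dnn_handle, i);
      if (ret != 0) {
        printf("hbDNNGetOutputTensorProperties[%d] failed, ret = %d\n", i, ret);
        return ret;
      }
      int output_memSize = output[i].properties.alignedByteSize;
      if (output_memSize <= 0) {
        printf("output[%d] alignedByteSize = %d, refusing to allocate\n", i, output_memSize);
        return -1;
      }
      ret = hbSysAllocCachedMem(&output[i].sysMem[0], output_memSize);
      if (ret != 0) {
        printf("hbSysAllocCachedMem[%d] failed, ret = %d\n", i, ret);
        return ret;
      }
    }

If the size printed here is already wrong, the comparison with the working sample can focus on everything that runs before PrepareTensor (model loading and handle retrieval) rather than on this loop.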

OK, thank you, I will go and check again.