RDK X5: deployed segmentation UNet produces no detection results

The ONNX model information is as follows:
INPUTS: tensor: float32[1,3,256,512]
OUTPUTS: tensor: float32[1,6,256,512]

The quantization parameters are as follows:
After conversion, I ran the following command on the RDK X5 board: ros2 launch dnn_node_example dnn_node_example_feedback.launch.py dnn_example_config_file:=config/unetconfig.json dnn_example_image:=config/raw_unet.jpg
The detection output is as follows:

[INFO] [example-1]: process has finished cleanly [pid 6602]
sunrise@ubuntu:~/model_bin/unet$ ros2 launch dnn_node_example dnn_node_example_feedback.launch.py dnn_example_config_file:=config/unetconfig.json dnn_example_image:=config/raw_unet.jpg
[INFO] [launch]: All log files can be found below /home/sunrise/.ros/log/2025-06-13-15-16-16-843301-ubuntu-7069
[INFO] [launch]: Default logging verbosity is set to INFO
dnn_node_example_path is  /opt/tros/humble/lib/dnn_node_example
cp_cmd is  cp -r /opt/tros/humble/lib/dnn_node_example/config .
[INFO] [example-1]: process started with pid [7072]
[example-1] [WARN] [1749798977.532958855] [dnn_example_node]: Parameter:
[example-1]  feed_type(0:local, 1:sub): 0
[example-1]  image: config/raw_unet.jpg
[example-1]  image_type: 0
[example-1]  dump_render_img: 1
[example-1]  is_shared_mem_sub: 0
[example-1]  config_file: config/unetconfig.json
[example-1]  msg_pub_topic_name: hobot_dnn_detection
[example-1]  info_msg_pub_topic_name: hobot_dnn_detection_info
[example-1]  ros_img_topic_name: /image
[example-1]  sharedmem_img_topic_name: /hbmem_img
[example-1] [WARN] [1749798977.533690771] [dnn_example_node]: Load [25] class types from file [config/unet.list]
[example-1] [WARN] [1749798977.533852979] [dnn_example_node]: Parameter:
[example-1]  model_file_name: /home/sunrise/model_bin/unet/unet_mobilenet_256x512_nv12.bin
[example-1]  model_name:
[example-1] [INFO] [1749798977.533950979] [dnn]: Node init.
[example-1] [INFO] [1749798977.533981396] [dnn_example_node]: Set node para.
[example-1] [WARN] [1749798977.534026812] [dnn_example_node]: model_file_name_: /home/sunrise/model_bin/unet/unet_mobilenet_256x512_nv12.bin, task_num: 4
[example-1] [INFO] [1749798977.534079395] [dnn]: Model init.
[example-1] [BPU_PLAT]BPU Platform Version(1.3.6)!
[example-1] [HBRT] set log level as 0. version = 3.15.54.0
[example-1] [DNN] Runtime version = 1.23.10_(3.15.54 HBRT)
[example-1] [A][DNN][packed_model.cpp:247][Model](2025-06-13,15:16:17.622.925) [HorizonRT] The model builder version = 1.24.3
[example-1] [INFO] [1749798977.730564112] [dnn]: The model input 0 width is 512 and height is 256
[example-1] [INFO] [1749798977.730735028] [dnn]:
[example-1] Model Info:
[example-1] name: unet_mobilenet_256x512_nv12.
[example-1] [input]
[example-1]  - (0) Layout: NCHW, Shape: [1, 3, 256, 512], Type: HB_DNN_IMG_TYPE_NV12.
[example-1] [output]
[example-1]  - (0) Layout: NCHW, Shape: [1, 6, 256, 512], Type: HB_DNN_TENSOR_TYPE_F32.
[example-1]
[example-1] [INFO] [1749798977.730796237] [dnn]: Task init.
[example-1] [INFO] [1749798977.732509526] [dnn]: Set task_num [4]
[example-1] [WARN] [1749798977.732605984] [dnn_example_node]: Get model name: unet_mobilenet_256x512_nv12 from load model.
[example-1] [INFO] [1749798977.732734692] [dnn_example_node]: The model input width is 512 and height is 256
[example-1] [WARN] [1749798977.732802067] [dnn_example_node]: Create ai msg publisher with topic_name: hobot_dnn_detection
[example-1] [INFO] [1749798977.745738510] [dnn_example_node]: Dnn node feed with local image: config/raw_unet.jpg
[example-1] [INFO] [1749798977.875056059] [dnn_example_node]: Output from frame_id: feedback, stamp: 0.0
[example-1] [INFO] [1749798977.919217004] [PostProcessBase]: out box size: 0
[example-1] [INFO] [1749798977.919309754] [ClassificationPostProcess]: out cls size: 0
[example-1] [INFO] [1749798977.919335296] [SegmentationPostProcess]: features size: 131072, width: 512, height: 256, num_classes: 19, step: 1
[example-1] [INFO] [1749798977.920040628] [ImageUtils]: target size: 1
[example-1] [INFO] [1749798977.920094087] [ImageUtils]: target type: parking_space, rois.size: 0
[example-1] [WARN] [1749798977.924550123] [ImageUtils]: Draw result to file: render_feedback_0_0.jpeg

The rendered output image shows no segmentation result at all.

Hello. Running into all kinds of hard-to-control numerical issues during algorithm development is normal; this is a field where progress comes from steady, accumulated effort. The algorithm toolchain provides complete workflow documentation plus debug tools and guides for your reference. PTQ workflow: 6.1. PTQ转换原理及流程 — Horizon Open Explorer

Accuracy tuning: 8.2. PTQ模型精度调优 — Horizon Open Explorer

Performance tuning: 8.1. 模型性能调优 — Horizon Open Explorer

Accuracy debug tool guide: 6.2.12. 精度debug工具 — Horizon Open Explorer

Runtime application development guide: 9. 嵌入式应用开发(runtime)手册 — Horizon Open Explorer

If you have worked through every procedure described in the toolchain manual and the results still fall short of expectations, that indicates the model and its weights are inherently unsuitable for quantization. In particular, overfitted models tend to produce outliers that exceed the representational range of quantization.

Suggestions for bringing up a new algorithm

  1. Virtually every new algorithm needs a pipeline check to pin down the pre- and post-processing; the problem is usually not quantization accuracy.

  2. Write a program that uses ONNXRuntime to run the original float ONNX model, to establish a baseline for the pre- and post-processing.

  3. Set the input to NCHW featuremap for both the train and rt input types, and set the preprocessing type to no_preprocess. The quantized model and bin model compiled this way then require exactly the same input data — i.e. the same preprocessing — as the float ONNX. It is recommended to prepare the calibration data and compile the bin model on this all-featuremap basis. Since featuremap inputs cannot be run through the Python interface on the board (only via C/C++), during debugging it is recommended to run the quantized ONNX with HB_ONNXRuntime on the development machine; compiled on the all-featuremap basis, the quantized ONNX's preprocessing is identical to that of the float ONNX.

  4. If accuracy still falls short on the all-featuremap basis, consult the manual and compile with full int16 to determine the accuracy upper bound.

  5. Once everything works on the all-featuremap basis, then try configurations such as nv12 or rgb that let the BPU accelerate the preprocessing.
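For step 3, the relevant fragment of the conversion yaml might look like the following. This is a sketch only: the key names follow the Horizon PTQ config convention, but verify them against the yaml template shipped with your toolchain version.

```yaml
# input_parameters fragment for an all-featuremap conversion (sketch;
# check exact keys against your OpenExplorer yaml template)
input_parameters:
  input_type_train: 'featuremap'
  input_type_rt: 'featuremap'
  input_layout_train: 'NCHW'
  norm_type: 'no_preprocess'
```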
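Step 2 above can be sketched as follows. This is a minimal sketch: the /255 normalization and the random stand-ins for the input image and model output are assumptions to keep it self-contained — replace them with your real image loading plus an onnxruntime.InferenceSession call on your float ONNX (the commented lines show where).

```python
import numpy as np
# import onnxruntime as ort   # pip install onnxruntime

# --- Pre-processing: must match training exactly (values here are assumptions) ---
h, w = 256, 512
img = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)  # stand-in for cv2.imread + resize
x = img.astype(np.float32) / 255.0           # hypothetical normalization
x = x.transpose(2, 0, 1)[None]               # HWC -> NCHW, shape (1, 3, 256, 512)

# --- Inference with the original float ONNX (uncomment with a real model file) ---
# sess = ort.InferenceSession("unet_float.onnx")           # hypothetical path
# (out,) = sess.run(None, {sess.get_inputs()[0].name: x})
out = np.random.rand(1, 6, h, w).astype(np.float32)        # stand-in for the [1,6,256,512] output

# --- Post-processing: per-pixel argmax over the 6 class channels ---
mask = out.argmax(axis=1)[0]                 # shape (256, 512), values in 0..5
print(mask.shape, int(mask.max()))
```

If this baseline renders a sensible mask but the board does not, the mismatch is in the deployed pre/post-processing (note the board log reports num_classes: 19 while the model emits 6 channels), not in quantization.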

The TROS example here ships a sample bin model; you can inspect a bin model's input and output information with the
hrt_model_exec model_info --model_file command.

Finally, the BPU toolchain is an open toolchain: you are entirely free to design your own deployment scheme. Model Zoo only provides a deployment reference; as long as your own bin-model deployment scheme and your own pre/post-processing code agree with each other, you can deploy the algorithm.