Implementing Gesture Recognition with AI-Express

I. Background

Gesture recognition is a common feature in interactive entertainment, smart automotive, and similar domains. AI-Express 2.4.0 provides a gesture recognition reference solution built on Horizon's sequence-based action recognition model.

In this tutorial you will learn how to run models and algorithm strategies in AI-Express, build a Workflow, and assemble a smart application.

II. Solution Overview

The overall pipeline of the gesture recognition solution is as follows. It consists of input, hand detection, tracking, hand keypoint detection, gesture recognition, voting, and output.

Hand detection runs a Faster R-CNN model and locates hands in the frame as bounding boxes.

The hands are then tracked. In this step each detection box is assigned a track_id, and box coordinates are refined based on their historical trajectories.
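
The matching strategy used here is IoU-based (the tracker's configuration file in this solution is named iou_method_param.json). As a rough illustration of the idea only — this is not the actual MOTMethod code, and all names below are made up — a greedy IoU tracker can be sketched as:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class IouTracker:
    """Greedy IoU matching: each detection takes the best-overlapping existing track."""
    def __init__(self, iou_thresh=0.3):
        self.iou_thresh = iou_thresh
        self.tracks = {}        # track_id -> last seen box
        self.next_id = 0

    def update(self, boxes):
        assigned = []
        free = dict(self.tracks)          # tracks not yet matched this frame
        for box in boxes:
            best_id, best_iou = None, self.iou_thresh
            for tid, prev in free.items():
                ov = iou(box, prev)
                if ov > best_iou:
                    best_id, best_iou = tid, ov
            if best_id is None:           # no sufficient overlap: start a new track
                best_id = self.next_id
                self.next_id += 1
            else:
                free.pop(best_id)
            self.tracks[best_id] = box
            assigned.append((best_id, box))
        disappeared = list(free)          # tracks that got no detection this frame
        return assigned, disappeared
```

The real MOTMethod additionally smooths box coordinates against the track history; that refinement is omitted here.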

Hand keypoint detection runs a CNN model that predicts hand keypoints from the input image and detection boxes.

Gesture recognition runs a CNN model that predicts a gesture from the sequence of hand keypoints. Fourteen gestures are currently supported: no gesture, single-finger point, two-finger point, single-finger click, single-finger double-click, two-finger click, two-finger double-click, throw up, throw down, throw left, throw right, zoom in, zoom out, and palm open-close.

To obtain smoother results, a voting module is added after gesture recognition. It applies a sliding window and combines predictions across multiple frames to produce the final result.
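
The voting step can be pictured as a majority vote over a sliding window of recent per-frame predictions. The sketch below only illustrates that idea; the real VoteMethod's window size and tie-breaking behavior come from its configuration file, which is not shown here.

```python
from collections import Counter, deque

class SlidingVote:
    """Majority vote over the most recent `window` per-frame predictions."""
    def __init__(self, window=7):
        self.history = deque(maxlen=window)   # old entries fall off automatically

    def update(self, label):
        self.history.append(label)
        # most_common(1) returns [(label, count)] for the current window
        return Counter(self.history).most_common(1)[0][0]
```

With this scheme a single spurious per-frame prediction cannot flip the reported gesture, which is exactly the smoothing effect the voting module is after.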

III. Tutorial

0 Prerequisites

You should be familiar with Horizon's AI-Express application development middleware, and with building and deploying code on Linux.

1 Development Environment (Hardware, Software, and Build Environment)

This tutorial is based on the AI-Express software package and a Horizon 96Board AI or X3-Dev development board. See the "Quick Start" chapter of the package documentation to set up the development environment.

2 Development Walkthrough

This walkthrough first shows how to run models through XStream Methods, then explains how to chain different Methods into a Workflow, and finally sketches how to build a gesture recognition smart application with XProto.

2.1 Using XStream Methods

As described in the solution overview, a complete pipeline consists of many small, concrete tasks. AI-Express maps each such task onto a Method: a Method either runs a model or executes an algorithm strategy. The gesture recognition solution uses FasterRCNNMethod, CNNMethod, MOTMethod, and VoteMethod; the first two run models, the latter two execute algorithm strategies. No new Method is created for this solution, and reusing an existing Method is straightforward: you only need to specify the model file and other related information (including, but not limited to, model outputs and post-processing parameters) in the Method's configuration file.

FasterRCNNMethod hosts the hand detection model. The solution uses a multi-task model, but in practice we only care about its hand detection outputs. The Method configuration file is:

```
{
  "in_msg_type": "pyramid_image",
  "net_info": {
    "model_name": "personMultitask",       // model name; must match the one used when compiling the model
    "model_version": "1.2.1",
    "model_out_sequence": [
      {
        "name": "body_box_int",
        "type": "invalid"                  // fixed-point output of the model; unused for now, set to invalid
      },
      {
        "name": "body_box",                // model output name; you can choose it yourself
        "type": "bbox"                     // output type; FasterRCNNMethod post-processes according to it
      },
      {
        "name": "face_box_int",
        "type": "invalid"
      },
      {
        "name": "face_box",
        "type": "bbox"
      },
      {
        "name": "hand_box_int",
        "type": "invalid"
      },
      {
        "name": "hand_box",
        "type": "bbox"
      }
    ],
    "model_input_width": 960,              // post-processing parameters
    "model_input_height": 540,
    "pyramid_layer": 4,
    "kps_pos_distance": 25,
    "kps_feat_width": 16,
    "kps_feat_height": 16,
    "kps_points_number": 17,
    "lmk_feat_width": 8,
    "lmk_feat_height": 8,
    "lmk_points_number": 5,
    "lmk_pos_distance": 12,
    "lmk_feat_stride": 16,
    "3d_pose_number": 3
  },
  "method_outs": ["body_box", "head_box", "hand_box"],   // Method outputs; the Workflow must list them in the same order
  "bpu_config_path": "../configs/bpu_config.json",
  "model_file_path": "../../models/guestureMultitask_960x540.hbm"
}
```

Both the hand keypoint model and the gesture recognition model are hosted by CNNMethod; different configuration files select different computations. The two configuration files are:

```
{
  "model_name": "handLMKs",
  "model_version": "v1.0.0",
  "in_msg_type": "rect",            // input type
  "post_fn": "common_lmk",          // post-processing function
  "lmk_num": 21,                    // post-processing parameters
  "feature_w": 32,
  "feature_h": 32,
  "i_o_stride": 4,
  "max_handle_num": -1,
  "output_size": 1,
  "model_file_path": "../../models/handLMKs.hbm",
  "bpu_config_path": "../configs/bpu_config.json"
}
```
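
The `feature_w`/`feature_h`/`i_o_stride` parameters suggest the usual heatmap-based keypoint decoding: the model emits one 32×32 score map per keypoint, and the argmax location scaled by the input/output stride (4) gives the coordinate in model-input pixels. The following is a generic sketch of that decoding; it is an assumption about what `common_lmk` does, not its actual code.

```python
def decode_keypoints(heatmaps, i_o_stride=4):
    """heatmaps: list of per-keypoint score maps, each feature_h x feature_w.
    Returns a list of (x, y, score) in model-input pixel coordinates."""
    points = []
    for hm in heatmaps:
        best = (0, 0)                      # (x, y) of the best-scoring cell so far
        for y, row in enumerate(hm):
            for x, v in enumerate(row):
                if v > hm[best[1]][best[0]]:
                    best = (x, y)
        x, y = best
        # scale the feature-map cell back to input pixels via the stride
        points.append((x * i_o_stride, y * i_o_stride, hm[y][x]))
    return points
```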

```
{
  "model_file_path": "../../models/gestureDet.hbm",
  "bpu_config_path": "../configs/bpu_config.json",
  "model_version": "v0.0.1",
  "model_name": "gestureDet",
  "in_msg_type": "lmk_seq",
  "post_fn": "act_det",
  "input_shift": 7,
  "seq_len": 32,
  "kps_len": 21,
  "stride": 0.0333333,
  "max_gap": 0.066666,
  "buf_len": 100,
  "norm_kps_conf": 0,
  "kps_norm_scale": 4.897640403536304,
  "merge_groups": "[0,1,15];[2];[3];[4];[5];[6];[7];[8];[9];[10];[11];[12];[13];[14]",
  "output_size": 1,
  "output_type": "gesture",
  "output_names": [
    "gesture"
  ]
}
```
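
The `seq_len`, `max_gap`, and `buf_len` fields imply that the gesture model consumes a per-track, timestamped keypoint buffer that is reset when frames arrive too far apart. How the real `act_det` post-processing resamples that buffer is not documented here; the sketch below only illustrates the gap-reset and fixed-length-window idea, and every name in it is hypothetical.

```python
class KeypointSequence:
    """Per-track buffer of (timestamp, keypoints); cleared on large time gaps."""
    def __init__(self, seq_len=32, max_gap=0.066666, buf_len=100):
        self.seq_len = seq_len
        self.max_gap = max_gap
        self.buf_len = buf_len
        self.frames = []    # list of (timestamp, keypoints)

    def push(self, timestamp, keypoints):
        if self.frames and timestamp - self.frames[-1][0] > self.max_gap:
            self.frames.clear()            # track gone too long: restart the sequence
        self.frames.append((timestamp, keypoints))
        self.frames = self.frames[-self.buf_len:]

    def window(self):
        """Last seq_len frames, or None if the sequence is not full yet."""
        if len(self.frames) < self.seq_len:
            return None
        return [kps for _, kps in self.frames[-self.seq_len:]]
```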

MOTMethod and VoteMethod carry the hand tracking and result voting tasks, respectively. Their configuration files are comparatively simple and are not listed here.

2.2 Building the Workflow

The previous section showed how to run a model or algorithm strategy with an XStream Method. To implement the complete algorithm pipeline, you need to build a Workflow that chains the Methods together and passes data between them.

Currently an XStream Workflow is expressed as a JSON file. A minimal Workflow skeleton is shown below. It has three top-level parts — inputs, outputs, and workflow — holding the Workflow's inputs, its outputs, and its Methods, respectively. "workflow" is a JSON array in which each object describes one Method; the meaning of each key is explained in the comments.

```
{
  "inputs": [
    "image"
  ],
  "outputs": [
    ...
    "fall_list",
    ...
  ],
  "workflow": [
    ...
    {
      "method_type": "CNNMethod",           // Method type
      "unique_name": "fall_det",            // this Method's name within the Workflow
      "inputs": [                           // Method input list
        "body_final_box",
        "kps",
        "disappeared_track_id"
      ],
      "outputs": [                          // Method output list
        "fall_list"
      ],
      "method_config_file": "fall_det.json" // Method configuration file
    },
    ...
  ]
}
```

Chaining Methods in a Workflow means connecting their inputs and outputs in topological order. The complete gesture recognition Workflow is shown below. Note that Method input/output names must be unique and must match each other exactly.

```
{
  "inputs": [
    "image"
  ],
  "outputs": [
    "image",
    "hand_final_box",
    "hand_lmk",
    "gesture_vote"
  ],
  "workflow": [
    {
      "thread_count": 1,
      "method_type": "FasterRCNNMethod",
      "unique_name": "multi_task",
      "inputs": [
        "image"
      ],
      "outputs": [
        "body_box",
        "face_box",
        "hand_box"
      ],
      "method_config_file": "gesture_multitask.json"
    },
    {
      "thread_count": 1,
      "method_type": "MOTMethod",
      "unique_name": "hand_mot",
      "inputs": [
        "hand_box"
      ],
      "outputs": [
        "hand_final_box",
        "hand_disappeared_track_id_list"
      ],
      "method_config_file": "iou_method_param.json"
    },
    {
      "method_type": "CNNMethod",
      "unique_name": "hand_lmk",
      "inputs": [
        "hand_final_box",
        "image"
      ],
      "outputs": [
        "hand_lmk"
      ],
      "method_config_file": "hand_lmk.json"
    },
    {
      "method_type": "CNNMethod",
      "unique_name": "gesture_recog",
      "inputs": [
        "hand_final_box",
        "hand_lmk",
        "hand_disappeared_track_id_list"
      ],
      "outputs": [
        "gesture"
      ],
      "method_config_file": "gesture_det.json"
    },
    {
      "method_type": "VoteMethod",
      "unique_name": "gesture_voting",
      "inputs": [
        "hand_final_box",
        "hand_disappeared_track_id_list",
        "gesture"
      ],
      "outputs": [
        "gesture_vote"
      ],
      "method_config_file": "gesture_voting.json"
    }
  ]
}
```
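
The "names must strictly correspond" constraint can be checked mechanically: walk the `workflow` array in order and verify that each Method's inputs were produced either by the global `inputs` or by an earlier Method, and that no output name is declared twice. Below is a small self-contained checker sketch; the embedded workflow is a trimmed-down illustration, not the full configuration above.

```python
import json

def check_workflow(cfg):
    """Verify topological ordering and name uniqueness of a workflow dict."""
    produced = set(cfg["inputs"])
    for node in cfg["workflow"]:
        for name in node["inputs"]:
            if name not in produced:
                raise ValueError(f'{node["unique_name"]}: input "{name}" not produced yet')
        for name in node["outputs"]:
            if name in produced:
                raise ValueError(f'{node["unique_name"]}: duplicate output "{name}"')
            produced.add(name)
    missing = set(cfg["outputs"]) - produced
    if missing:
        raise ValueError(f"workflow outputs never produced: {missing}")
    return True

# Trimmed-down example in the same shape as the solution's workflow
example = json.loads("""
{
  "inputs": ["image"],
  "outputs": ["hand_final_box"],
  "workflow": [
    {"method_type": "FasterRCNNMethod", "unique_name": "multi_task",
     "inputs": ["image"], "outputs": ["hand_box"]},
    {"method_type": "MOTMethod", "unique_name": "hand_mot",
     "inputs": ["hand_box"], "outputs": ["hand_final_box"]}
  ]
}
""")
```

Running such a check before deploying a hand-edited Workflow catches ordering and naming mistakes earlier than a runtime failure on the board would.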

2.3 Building a Smart Application with XProto

At this point you know how to implement the smart part of the gesture recognition solution. Beyond that, you still need to acquire input data and to parse and publish the smart results. XProto lets you assemble these pieces into a complete smart application.

This solution uses three XProto plugins: VioPlugin, SmartPlugin, and WebSocketPlugin.

VioPlugin: acquires and converts image data, controls the acquisition rate, and pushes image data or frame-drop messages onto the message bus.

SmartPlugin: a generic smart-application runtime module built on the XStream SDK interface. It listens for "XPLUGIN_IMAGE_MESSAGE" messages that VioPlugin publishes on the XProto bus, feeds them into the XStream Workflow instance, receives the Workflow's outputs, serializes them into smart messages, and publishes those back onto the bus.

WebSocketPlugin: listens for the messages SmartPlugin publishes on the bus. When a request arrives from the web client, WebSocketPlugin handles it and then actively pushes video data and smart data; the smart data is serialized with Protocol Buffers.

In SmartPlugin, you need to parse the smart results by the output names you specified in the Workflow, using the corresponding types; in the gesture recognition solution the recognition result is output as an Attribute. VioPlugin and WebSocketPlugin need no changes. After that, you only need to initialize and start each plugin in your main function.

2.4 Build and Run

Depending on your hardware, run "bash build.sh [x2|x3] && bash deploy.sh" in the ai-express-release package to build and package; on success this generates the deploy bundle.

You can run the smart application with the run.sh script inside the deploy bundle. Once it is running, the smart results are printed on the command line. AI-Express 2.2.0 and later include a web display client, so you can also browse to the board's IP address and view the results more intuitively there.

Could you share the 2.4.0 development package?

Does AI-Express 2.4.0 include a gesture recognition reference example? How do I download it?

I ran into some errors while packaging the models into the deploy bundle:

```
[zjh@localhost AI-EXPRESS]$ bash deploy.sh
+ ALL_PROJECT_DIR=/opt/AI-EXPRESS
+ RELEASE_DIR=/opt/AI-EXPRESS/deploy
+ rm /opt/AI-EXPRESS/deploy -rf
+ mkdir -p /opt/AI-EXPRESS/deploy
++ cat platform.tmp
+ ARCHITECTURE=x3
+ mkdir /opt/AI-EXPRESS/deploy/lib/
+ cp /opt/AI-EXPRESS/build/lib/libvioplugin.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/build/lib/libsmartplugin.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/build/lib/libvisualplugin.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/build/lib/libwebsocketplugin.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ '[' x3 == x3 ']'
+ cp /opt/AI-EXPRESS/build/lib/libuvcplugin.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp '/opt/AI-EXPRESS/deps/x3_prebuilt/lib/libguvc*.so' /opt/AI-EXPRESS/deploy/lib/ -rf
cp: cannot stat '/opt/AI-EXPRESS/deps/x3_prebuilt/lib/libguvc*.so': No such file or directory
+ cp /opt/AI-EXPRESS/build/lib/libxstream-media_codec.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp '/opt/AI-EXPRESS/build/lib/libmulti*.so' /opt/AI-EXPRESS/deploy/lib/ -rf
cp: cannot stat '/opt/AI-EXPRESS/build/lib/libmulti*.so': No such file or directory
+ cp '/opt/AI-EXPRESS/build/lib/libgdcplugin*.so' /opt/AI-EXPRESS/deploy/lib/ -rf
cp: cannot stat '/opt/AI-EXPRESS/build/lib/libgdcplugin*.so': No such file or directory
+ cp '/opt/AI-EXPRESS/source/solution_zoo/apa/gdcplugin/deps/*.so' /opt/AI-EXPRESS/deploy/lib/ -rf
cp: cannot stat '/opt/AI-EXPRESS/source/solution_zoo/apa/gdcplugin/deps/*.so': No such file or directory
+ cp '/opt/AI-EXPRESS/build/lib/libdisplayplugin*.so' /opt/AI-EXPRESS/deploy/lib/ -rf
cp: cannot stat '/opt/AI-EXPRESS/build/lib/libdisplayplugin*.so': No such file or directory
+ cp '/opt/AI-EXPRESS/source/solution_zoo/apa/displayplugin/deps/lib/*.so' /opt/AI-EXPRESS/deploy/lib/ -rf
cp: cannot stat '/opt/AI-EXPRESS/source/solution_zoo/apa/displayplugin/deps/lib/*.so': No such file or directory
+ cp /opt/AI-EXPRESS/build/lib/libcanplugin.so /opt/AI-EXPRESS/deploy/lib/ -rf
cp: cannot stat '/opt/AI-EXPRESS/build/lib/libcanplugin.so': No such file or directory
+ cp /opt/AI-EXPRESS/build/lib/libanalysisplugin.so /opt/AI-EXPRESS/deploy/lib/ -rf
cp: cannot stat '/opt/AI-EXPRESS/build/lib/libanalysisplugin.so': No such file or directory
+ cp /opt/AI-EXPRESS/deps/bpu_predict/x3/lib/libbpu_predict.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/x3_prebuilt/lib/libhbrt_bernoulli_aarch64.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/protobuf/lib/libprotobuf.so.10 /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/opencv/lib/libopencv_world.so.3.4 /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/live555/lib/libBasicUsageEnvironment.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/live555/lib/libgroupsock.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/live555/lib/libliveMedia.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/live555/lib/libUsageEnvironment.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/uWS/lib64/libuWS.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/xwarehouse/lib/libxwarehouse.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/zeroMQ/lib/libzmq.so /opt/AI-EXPRESS/deps/zeroMQ/lib/libzmq.so.5 /opt/AI-EXPRESS/deps/zeroMQ/lib/libzmq.so.5.2.0 /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/source/common/xstream/python_api/package/lib/libmethod_factory.so /opt/AI-EXPRESS/source/common/xstream/python_api/package/lib/vision_type.so /opt/AI-EXPRESS/source/common/xstream/python_api/package/lib/xstream_internal.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/source/common/xproto/python_api/package/lib/native_xproto.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/libyuv/lib/libyuv.so /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/deps/libjpeg-turbo/lib/libturbojpeg.so /opt/AI-EXPRESS/deps/libjpeg-turbo/lib/libturbojpeg.so.0 /opt/AI-EXPRESS/deps/libjpeg-turbo/lib/libturbojpeg.so.0.1.0 /opt/AI-EXPRESS/deploy/lib/ -rf
+ cp /opt/AI-EXPRESS/run.sh /opt/AI-EXPRESS/deploy/ -rf
+ cp /opt/AI-EXPRESS/start_nginx.sh /opt/AI-EXPRESS/deploy/ -rf
+ mkdir /opt/AI-EXPRESS/deploy/configs/
+ '[' x3 == x2 ']'
+ '[' x3 == x3 ']'
+ cp /opt/AI-EXPRESS/output/vioplugin/config/vio /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.2610 /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.2610.hg /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.96board /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.96board.hg /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.j3dev.cam /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.j3dev.fb /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.j3dev.multi_cam_async /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.j3dev.multi_cam_sync /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.j3dev.multi_fb_sync /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.x3dev.cam /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.x3dev.fb /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.x3dev.fb_cam /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.x3dev.multi_cam_async /opt/AI-EXPRESS/output/vioplugin/config/vio_config.json.x3dev.multi_cam_sync /opt/AI-EXPRESS/output/vioplugin/config/vio_hg /opt/AI-EXPRESS/deploy/configs/ -rf
+ echo 'copy viowrapper configs'
copy viowrapper configs
+ cp /opt/AI-EXPRESS/source/common/viowrapper/config/x3dev/hb_camera_x3.json /opt/AI-EXPRESS/source/common/viowrapper/config/x3dev/hb_vio_x3_1080_fb.json /opt/AI-EXPRESS/source/common/viowrapper/config/x3dev/hb_vio_x3_1080.json /opt/AI-EXPRESS/source/common/viowrapper/config/x3dev/hb_vio_x3_608.json /opt/AI-EXPRESS/deploy/configs/ -rf
+ cp /opt/AI-EXPRESS/output/apa /opt/AI-EXPRESS/deploy/ -rf
cp: cannot stat '/opt/AI-EXPRESS/output/apa': No such file or directory
+ cp '/opt/AI-EXPRESS/output/apa/configs/configs/*' /opt/AI-EXPRESS/deploy/configs/ -rf
cp: cannot stat '/opt/AI-EXPRESS/output/apa/configs/configs/*': No such file or directory
+ cp /opt/AI-EXPRESS/output/multivioplugin/bin/multivioplugin_test /opt/AI-EXPRESS/deploy/apa/ -rf
cp: cannot stat '/opt/AI-EXPRESS/output/multivioplugin/bin/multivioplugin_test': No such file or directory
+ cp /opt/AI-EXPRESS/output/multisourceinput /opt/AI-EXPRESS/deploy/ -rf
cp: cannot stat '/opt/AI-EXPRESS/output/multisourceinput': No such file or directory
+ cp /opt/AI-EXPRESS/output/visualplugin/config/visualplugin_body.json /opt/AI-EXPRESS/output/visualplugin/config/visualplugin_face.json /opt/AI-EXPRESS/output/visualplugin/config/visualplugin_vehicle.json /opt/AI-EXPRESS/deploy/configs/ -rf
+ cp /opt/AI-EXPRESS/source/common/xproto/plugins/websocketplugin/configs/websocketplugin_attribute.json /opt/AI-EXPRESS/deploy/configs/ -rf
+ mkdir -p /opt/AI-EXPRESS/deploy/models
+ cp /opt/AI-EXPRESS/models/x3/faceAgeGender/so/faceAgeGender.hbm /opt/AI-EXPRESS/models/x3/faceMask/so/faceMask.hbm /opt/AI-EXPRESS/models/x3/faceMultitask/so/faceMultitask.hbm /opt/AI-EXPRESS/models/x3/faceQuality/so/faceQuality.hbm /opt/AI-EXPRESS/models/x3/personMultitask/so/personMultitask.hbm /opt/AI-EXPRESS/deploy/models/ -rf
+ '[' x3 == x3 ']'
+ cp /opt/AI-EXPRESS/models/x3/SegmentationMultitask_1024x768/so/SegmentationMultitask_1024x768.hbm /opt/AI-EXPRESS/deploy/models/ -rf
cp: cannot stat '/opt/AI-EXPRESS/models/x3/SegmentationMultitask_1024x768/so/SegmentationMultitask_1024x768.hbm': No such file or directory
+ cp /opt/AI-EXPRESS/models/x3/personMultitask_1024x768/so/personMultitask_1024x768.hbm /opt/AI-EXPRESS/deploy/models/personMultitask_1024x768.hbm -rf
cp: cannot stat '/opt/AI-EXPRESS/models/x3/personMultitask_1024x768/so/personMultitask_1024x768.hbm': No such file or directory
+ cp '/opt/AI-EXPRESS/source/solution_zoo/apa/models/*' /opt/AI-EXPRESS/deploy/models/ -rf
cp: cannot stat '/opt/AI-EXPRESS/source/solution_zoo/apa/models/*': No such file or directory
+ cp /opt/AI-EXPRESS/output/face_solution /opt/AI-EXPRESS/deploy/ -rf
+ cp /opt/AI-EXPRESS/output/body_solution /opt/AI-EXPRESS/deploy/ -rf
+ cp /opt/AI-EXPRESS/output/face_body_multisource /opt/AI-EXPRESS/deploy/ -rf
+ cp /opt/AI-EXPRESS/output/xwarehouse_sample /opt/AI-EXPRESS/deploy/ -rf
cp: cannot stat '/opt/AI-EXPRESS/output/xwarehouse_sample': No such file or directory
+ mkdir -p /opt/AI-EXPRESS/deploy/ssd_test/config/vio_config
+ mkdir -p /opt/AI-EXPRESS/deploy/ssd_test/config/bpu_config
+ cp /opt/AI-EXPRESS/output/face_solution/configs/bpu_config.json /opt/AI-EXPRESS/deploy/ssd_test/config/bpu_config
+ mkdir -p /opt/AI-EXPRESS/deploy/ssd_test/config/models
+ cp -r '/opt/AI-EXPRESS/models/x3/ssd/so/*' /opt/AI-EXPRESS/deploy/ssd_test/config/models
cp: cannot stat '/opt/AI-EXPRESS/models/x3/ssd/so/*': No such file or directory
+ cp -r /opt/AI-EXPRESS/source/solution_zoo/xstream/methods/ssd_method/config/ssd_module.json /opt/AI-EXPRESS/source/solution_zoo/xstream/methods/ssd_method/config/ssd_test_workflow.json /opt/AI-EXPRESS/deploy/ssd_test/config
+ '[' x3 == x3 ']'
+ cp -r /opt/AI-EXPRESS/deploy/configs/hb_camera_x3.json /opt/AI-EXPRESS/deploy/configs/hb_vio_x3_1080_fb.json /opt/AI-EXPRESS/deploy/configs/hb_vio_x3_1080.json /opt/AI-EXPRESS/deploy/configs/hb_vio_x3_608.json /opt/AI-EXPRESS/deploy/ssd_test/config/vio_config
+ cp -r /opt/AI-EXPRESS/deploy/configs/vio /opt/AI-EXPRESS/deploy/ssd_test/config/vio_config
+ cp /opt/AI-EXPRESS/output/video_box /opt/AI-EXPRESS/deploy/ -rf
+ cp /opt/AI-EXPRESS/output/video_box/data/test.264 /opt/AI-EXPRESS/deploy/ -rf
+ cp -r /opt/AI-EXPRESS/build/bin/ssd_method_test /opt/AI-EXPRESS/deploy/ssd_test/
+ cp -r /opt/AI-EXPRESS/source/solution_zoo/xstream/methods/ssd_method/test/data /opt/AI-EXPRESS/deploy/ssd_test
+ mkdir -p /opt/AI-EXPRESS/deploy/python_api
+ mkdir -p /opt/AI-EXPRESS/deploy/python_api/tests
+ mkdir -p /opt/AI-EXPRESS/deploy/configs/pytest_configs
+ cp /opt/AI-EXPRESS/source/common/xstream/python_api/package/xstream /opt/AI-EXPRESS/deploy/python_api/ -rf
+ cp /opt/AI-EXPRESS/source/common/xproto/python_api/package/xproto /opt/AI-EXPRESS/deploy/python_api/ -rf
+ cp /opt/AI-EXPRESS/source/common/xstream/python_api/package/tests/test_session.py /opt/AI-EXPRESS/source/common/xstream/python_api/package/tests/test_xstream1.py /opt/AI-EXPRESS/source/common/xstream/python_api/package/tests/test_xstream2.py /opt/AI-EXPRESS/deploy/python_api/tests/ -rf
+ cp /opt/AI-EXPRESS/source/common/xproto/python_api/package/tests/test_async.py /opt/AI-EXPRESS/deploy/python_api/tests/ -rf
+ cp /opt/AI-EXPRESS/source/common/xstream/python_api/package/configs/a_filter.json /opt/AI-EXPRESS/source/common/xstream/python_api/package/configs/b_filter.json /opt/AI-EXPRESS/deploy/configs/pytest_configs/ -rf
+ cp /opt/AI-EXPRESS/webservice /opt/AI-EXPRESS/deploy/ -rf
+ mkdir -p /opt/AI-EXPRESS/deploy/vioplugin_test
+ cp -r /opt/AI-EXPRESS/build/bin/vioplugin_sample /opt/AI-EXPRESS/deploy/vioplugin_test/
[zjh@localhost AI-EXPRESS]$
```

What does "cannot stat" (无法获取) mean?

The model runs on the board, but opening the camera fails.

Awesome!

Is there a demo video showing it running successfully?

The AI-EXPRESS 2.4.0 package includes a gesture recognition example that chains hand/body detection, hand keypoint detection, and gesture recognition. It supports live camera input, and you can view the detection and gesture recognition results on the web display client.

May I ask where you got your version? The gesture recognition solution has not been open-sourced on GitHub yet.

If it is a build provided by Horizon, please share the version info and we will try to reproduce the issue.

How do I run the gesture recognition example?

Does the 2.4 package include the gesture recognition example?

The board image is currently the latest one.

x3sdbx3-samsung2G

I got this error while packaging, and then also hit errors when running on the board; please see my other post.

AI-EXPRESS v2.7.0

Isn't AI-EXPRESS just cloned from git? I cloned AI-EXPRESS 2.7.0; my development machine runs CentOS 7.6.

AI-EXPRESS 2.7.0 does not currently support the 20201124 image. For 2.7.0, please use the 20201023 image.

The errors I got while packaging on my CentOS 7.6 development machine have nothing to do with the board image, right?

So that means if I put the v2.7.0 deploy bundle onto a board with the 20201124 image, it won't run?

The packaging errors on your CentOS 7.6 development machine are indeed unrelated to the board image. They occur because some reference solutions have not been officially open-sourced yet; they do not affect the open-sourced reference solutions, so you can safely ignore them.