How to resolve "img-size 224 must be multiple of max stride 64, updating to 256" during YOLOv5 model conversion

Hello, please describe the problem you are encountering in detail:

  1. System/software version: Ubuntu 20.04
  2. Technical areas involved: 旭日X3派 (Sunrise X3 Pi), YOLOv5, model conversion
  3. Problem description: model conversion reports "WARNING --img-size 224 must be multiple of max stride 64, updating to 256", so the model cannot be converted at 224*224 (see the sketch after this list)
  4. Reproduction probability: always reproducible
  5. Relevant problem log: WARNING --img-size 224 must be multiple of max stride 64, updating to 256
  6. Custom modifications to the software: none
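
For context on item 3: YOLOv5's export path rounds the requested --img-size up to the nearest multiple of the model's maximum stride (this is the check_img_size()/make_divisible() behaviour that prints the warning). A minimal sketch of that rounding, using a hypothetical helper name round_to_stride:

    import math

    def round_to_stride(size, stride=64):
        # Round the requested --img-size up to the nearest multiple of the
        # model's max stride, mirroring YOLOv5's check_img_size()/make_divisible().
        return math.ceil(size / stride) * stride

    print(round_to_stride(224, 64))  # 256 -> why 224 is updated to 256
    print(round_to_stride(672, 64))  # 704 -> the same rounding seen later in this thread
    print(round_to_stride(224, 32))  # 224 -> a stride-32 model keeps 224 unchanged

P6 checkpoints such as YOLOv5s6 have a maximum stride of 64, while P5 checkpoints such as yolov5s have a maximum stride of 32, so 224 is only accepted as an export size for the latter.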

Hello, a few details to confirm with you first:

1. Are these errors coming from the hb_mapper makertbin compilation flow?

2. Is the YOLOv5 model from our OE package, or is it based on a public release (and if so, which version)?

3. Try changing calibration_type in the yaml file to skip and compiling again to see whether the error still appears; a sketch of that change is shown right after this item.
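
A minimal sketch of that yaml change, assuming the usual calibration_parameters section of the hb_mapper makertbin config (section and field names here are illustrative, not copied from your file):

    calibration_parameters:
      # Set calibration_type to skip so the build bypasses calibration for this test.
      calibration_type: 'skip'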

export: weights=./weights/attention/best001.pt, imgsz=[672, 672], batch_size=1, device=cpu, half=False, inplace=False, keras=False, optimize=False, int8=False, dynamic=False, simplify=False, opset=11, verbose=False, workspace=4, nms=False, agnostic_nms=False, topk_per_class=100, topk_all=100, iou_thres=0.45, conf_thres=0.25, include=['onnx']
YOLOv5 2023-6-8 Python-3.10.11 torch-2.0.1+cpu CPU

Fusing layers...
YOLOv5s6 summary: 280 layers, 12312052 parameters, 0 gradients
WARNING --img-size 672 must be multiple of max stride 64, updating to 704
WARNING --img-size 672 must be multiple of max stride 64, updating to 704
Traceback (most recent call last):
  File "D:\yolov5-master\yolov5-master\export.py", line 818, in <module>
    main(opt)
  File "D:\yolov5-master\yolov5-master\export.py", line 813, in main
    run(**vars(opt))
  File "D:\yolov5-master\yolov5-master\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\yolov5-master\yolov5-master\export.py", line 712, in run
    y = model(im)  # dry runs
  File "D:\yolov5-master\yolov5-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\yolov5-master\yolov5-master\models\yolo.py", line 209, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "D:\yolov5-master\yolov5-master\models\yolo.py", line 121, in _forward_once
    x = m(x)  # run
  File "D:\yolov5-master\yolov5-master\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\yolov5-master\yolov5-master\models\yolo.py", line 73, in forward
    xy, wh, conf = x[i].sigmoid().split((2, 2, self.nc + 1), 4)
  File "D:\yolov5-master\yolov5-master\venv\lib\site-packages\torch\_tensor.py", line 803, in split
    return torch._VF.split_with_sizes(self, split_size, dim)
IndexError: Dimension out of range (expected to be in range of [-4, 3], but got 4)

Process finished with exit code 1