CenterPoint with a KITTI-format dataset: the database is empty after filtering

OpenExplorer (天工开物) development kit version: J5_OE_1.1.60

Initially I wanted to use pointpillars for multi-class 3D detection, but under the pointpillars article it was suggested to use centerpoint instead. I have now modified the corresponding parameters and am training centerpoint on a KITTI-format dataset, and I find that after filtering all the data is gone. How should I fix this?

2023-09-04 14:55:51,051 INFO [logger.py:176] Node[0] ==================================================BEGIN FLOAT STAGE==================================================

2023-09-04 14:55:51,088 INFO [logger.py:176] Node[0] init torch_num_thread is `12`,opencv_num_thread is `12`,openblas_num_thread is `12`,mkl_num_thread is `12`,omp_num_thread is `12`,

1234561232

2023-09-04 14:55:51,529 WARNING [logger.py:107] Node[0] wrap usage has been changed, please pass necessary args

2023-09-04 14:55:51,553 WARNING [registry.py:182] Node[0] No module named 'torchdynamo'. Some objects in hat.utils.compile_backends are not registered!

[-1]

{'car': 5, 'truck': 5, 'pedestrian': 5}

load 10619 truck database infos

load 2526 car database infos

load 4502 pedestrian database infos

load 1233 unknown database infos

After filter database:

load 0 truck database infos

load 0 car database infos

load 0 pedestrian database infos

load 0 unknown database infos

NCCL version 2.14.3+cuda11.6

1234561232

2023-09-04 14:55:52,335 INFO [loop_base.py:444] Node[0] Start DistributedDataParallelTrainer loop from epoch 0, num_epochs=20

2023-09-04 14:55:52,336 INFO [grad_scale.py:54] Node[0] [GradScale]

2023-09-04 14:55:52,337 INFO [monitor.py:135] Node[0] Epoch[0] Begin ==================================================

2023-09-04 14:55:52,337 INFO [lr_updater.py:191] Node[0] Epoch[0] Step[0] GlobalStep[0] lr=0.000200

`fx_force_duplicate_shared_convbn` will be set False by default after plugin 1.9.0. If you are not loading old checkpoint, please set `fx_force_duplicate_shared_convbn` False to train your new model.

`aidisdk` dependency is not available.

2023-09-04 14:55:53,298 ERROR [ddp_trainer.py:419] Node[0] Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/hat/engine/ddp_trainer.py", line 415, in _with_exception
    fn(*args)
  File "/open_explorer/ddk/samples/ai_toolchain/horizon_model_train_sample/scripts/tools/train.py", line 186, in train_entrance
    trainer.fit()
  File "/usr/local/lib/python3.8/dist-packages/hat/engine/loop_base.py", line 501, in fit
    _, (batch, _is_last_batch) = next(self.data_loader_pr)
  File "/usr/local/lib/python3.8/dist-packages/hat/profiler/profilers.py", line 103, in profile_iterable
    value = next(iterator)
  File "/usr/local/lib/python3.8/dist-packages/hat/utils/generator.py", line 22, in prefetch_iterator
    last = next(it)
  File "/root/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/root/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1333, in _next_data
    return self._process_data(data)
  File "/root/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1359, in _process_data
    data.reraise()
  File "/root/.local/lib/python3.8/site-packages/torch/_utils.py", line 543, in reraise
    raise exception
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/root/.local/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
    data = fetcher.fetch(index)
  File "/root/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/root/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.8/dist-packages/hat/data/datasets/kitti3d.py", line 715, in __getitem__
    sample_info = self.transforms(sample_info)
  File "/root/.local/lib/python3.8/site-packages/torchvision/transforms/transforms.py", line 95, in __call__
    img = t(img)
  File "/usr/local/lib/python3.8/dist-packages/hat/data/transforms/lidar_utils/lidar_transform_3d.py", line 140, in __call__
    sampled_dict = self.db_sampler.sample_all(
  File "/usr/local/lib/python3.8/dist-packages/hat/data/transforms/lidar_utils/sample_ops.py", line 196, in sample_all
    sampled_cls = self.sample_class_v2(
  File "/usr/local/lib/python3.8/dist-packages/hat/data/transforms/lidar_utils/sample_ops.py", line 360, in sample_class_v2
    sp_boxes = np.stack([i["box3d_lidar"] for i in sampled], axis=0)
  File "<__array_function__ internals>", line 5, in stack
  File "/usr/local/lib/python3.8/dist-packages/numpy/core/shape_base.py", line 423, in stack
    raise ValueError('need at least one array to stack')
ValueError: need at least one array to stack

ERROR:__main__:train failed! process 0 terminated with exit code 1
Traceback (most recent call last):
  File "tools/train.py", line 287, in <module>
    raise e
  File "tools/train.py", line 273, in <module>
    train(
  File "tools/train.py", line 254, in train
    launch(
  File "/usr/local/lib/python3.8/dist-packages/hat/engine/ddp_trainer.py", line 384, in launch
    mp.spawn(
  File "/root/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/root/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/root/.local/lib/python3.8/site-packages/torch/multiprocessing/spawn.py", line 149, in join
    raise ProcessExitedException(
torch.multiprocessing.spawn.ProcessExitedException: process 0 terminated with exit code 1

Hi, could you share your config so we can take a look?

import copy
import os
from functools import partial

import numpy as np
import torch
from horizon_plugin_pytorch.quantization import March

from hat.data.collates.collates import collate_lidar3d
from hat.utils.config import ConfigVersion
from hat.visualize.lidar_det import lidar_det_visualize

VERSION = ConfigVersion.v2
training_step = os.environ.get("HAT_TRAINING_STEP", "float")

task_name = "centerpoint_pointpillar_nuscenes"
batch_size_per_gpu = 4
device_ids = [0]

ckpt_dir = f"/open_explorer/tmp_models/{task_name}"
# datadir settings
data_rootdir = "./tmp_data/nuscenes/lidar_seg/v1.0-trainval"
meta_rootdir = "./tmp_data/nuscenes/meta"
gt_data_root = "./tmp_nuscenes/lidar"
log_loss_show = 200

cudnn_benchmark = True
seed = None
log_rank_zero_only = True
march = March.BAYES
norm_cfg = None
qat_mode = "fuse_bn"
convert_mode = "fx"

# Voxelization cfg
point_cloud_range = [-51.2, -51.2, -5.0, 51.2, 51.2, 3.0]
voxel_size = [0.2, 0.2, 8]
max_num_points = 20
max_voxels = (30000, 40000)

class_names = [
    "car",
    "truck",
    "pedestrian",
]
tasks = [
    dict(num_class=1, class_names=["car"]),
    dict(num_class=1, class_names=["truck"]),
    dict(num_class=1, class_names=["pedestrian"]),
]
common_heads = dict(
    reg=(2, 2), height=(1, 2), dim=(3, 2), rot=(2, 2), vel=(2, 2)
)
with_velocity = "vel" in common_heads.keys()


def get_feature_map_size(point_cloud_range, voxel_size):
    point_cloud_range = np.array(point_cloud_range, dtype=np.float32)
    voxel_size = np.array(voxel_size, dtype=np.float32)
    grid_size = (point_cloud_range[3:] - point_cloud_range[:3]) / voxel_size
    grid_size = np.round(grid_size).astype(np.int64)
    return grid_size


# model settings
model = dict(
    type="CenterPointDetector",
    feature_map_shape=get_feature_map_size(point_cloud_range, voxel_size),
    pre_process=dict(
        type="CenterPointPreProcess",
        pc_range=point_cloud_range,
        voxel_size=voxel_size,
        max_voxels_num=max_voxels,
        max_points_in_voxel=max_num_points,
        norm_range=[-51.2, -51.2, -5.0, 0.0, 51.2, 51.2, 3.0, 255.0],
        norm_dims=[0, 1, 2, 3],
    ),
    reader=dict(
        type="PillarFeatureNet",
        num_input_features=5,
        num_filters=(64,),
        with_distance=False,
        pool_size=(max_num_points, 1),
        voxel_size=voxel_size,
        pc_range=point_cloud_range,
        bn_kwargs=norm_cfg,
        quantize=True,
        use_4dim=True,
        use_conv=True,
        hw_reverse=True,
    ),
    backbone=dict(
        type="PointPillarScatter",
        num_input_features=64,
        use_horizon_pillar_scatter=True,
        quantize=True,
    ),
    neck=dict(
        type="SECONDNeck",
        in_feature_channel=64,
        down_layer_nums=[3, 5, 5],
        down_layer_strides=[2, 2, 2],
        down_layer_channels=[64, 128, 256],
        up_layer_strides=[0.5, 1, 2],
        up_layer_channels=[128, 128, 128],
        bn_kwargs=norm_cfg,
        quantize=True,
        use_relu6=False,
    ),
    head=dict(
        type="CenterPointHead",
        in_channels=sum([128, 128, 128]),
        tasks=tasks,
        share_conv_channels=64,
        share_conv_num=1,
        common_heads=common_heads,
        head_conv_channels=64,
        init_bias=-2.19,
        final_kernel=3,
    ),
    targets=dict(
        type="CenterPointLidarTarget",
        grid_size=[512, 512, 1],
        voxel_size=voxel_size,
        point_cloud_range=point_cloud_range,
        tasks=tasks,
        dense_reg=1,
        max_objs=500,
        gaussian_overlap=0.1,
        min_radius=2,
        out_size_factor=4,
        norm_bbox=True,
        with_velocity=with_velocity,
    ),
    loss=dict(
        type="CenterPointLoss",
        loss_cls=dict(type="GaussianFocalLoss", loss_weight=1.0),
        loss_bbox=dict(
            type="L1Loss",
            reduction="mean",
            loss_weight=0.25,
        ),
        with_velocity=with_velocity,
        code_weights=[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.2, 0.2],
    ),
    postprocess=dict(
        type="CenterPointPostProcess",
        tasks=tasks,
        norm_bbox=True,
        bbox_coder=dict(
            type="CenterPointBBoxCoder",
            pc_range=point_cloud_range[:2],
            post_center_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0],
            max_num=100,
            score_threshold=0.1,
            out_size_factor=4,
            voxel_size=voxel_size[:2],
        ),
        # test_cfg
        max_pool_nms=False,
        score_threshold=0.1,
        post_center_limit_range=[-61.2, -61.2, -10.0, 61.2, 61.2, 10.0],
        min_radius=[4, 12, 10, 1, 0.85, 0.175],
        out_size_factor=4,
        nms_type="rotate",
        pre_max_size=1000,
        post_max_size=83,
        nms_thr=0.2,
        box_size=9,
    ),
)

# deploy model settings
deploy_model = dict(
    type="CenterPointDetector",
    feature_map_shape=get_feature_map_size(point_cloud_range, voxel_size),
    is_deploy=True,
    reader=dict(
        type="PillarFeatureNet",
        num_input_features=5,
        num_filters=(64,),
        with_distance=False,
        pool_size=(max_num_points, 1),
        voxel_size=voxel_size,
        pc_range=point_cloud_range,
        bn_kwargs=norm_cfg,
        quantize=True,
        use_4dim=True,
        use_conv=True,
        hw_reverse=True,
    ),
    backbone=dict(
        type="PointPillarScatter",
        num_input_features=64,
        use_horizon_pillar_scatter=True,
        quantize=True,
    ),
    neck=dict(
        type="SECONDNeck",
        in_feature_channel=64,
        down_layer_nums=[3, 5, 5],
        down_layer_strides=[2, 2, 2],
        down_layer_channels=[64, 128, 256],
        up_layer_strides=[0.5, 1, 2],
        up_layer_channels=[128, 128, 128],
        bn_kwargs=norm_cfg,
        quantize=True,
        use_relu6=False,
    ),
    head=dict(
        type="CenterPointHead",
        in_channels=sum([128, 128, 128]),
        tasks=tasks,
        share_conv_channels=64,
        share_conv_num=1,
        common_heads=common_heads,
        head_conv_channels=64,
        init_bias=-2.19,
        final_kernel=3,
    ),
)

deploy_inputs = dict(
    features=torch.randn((1, 5, 20, 40000), dtype=torch.float32),
    coors=torch.zeros([40000, 4]).int(),
)

# deploy_inputs = dict(
#     points=[
#         torch.randn(150000, 4),
#     ],
# )

db_sampler = dict(
    type="DataBaseSampler",
    enable=True,
    root_path="/open_explorer/tmp_data/kitti3d/",
    db_info_path="/open_explorer/tmp_data/kitti3d/kitti3d_dbinfos_train.pkl",
    sample_groups=[
        dict(car=2),
        dict(truck=3),
        dict(pedestrian=2),
    ],
    db_prep_steps=[
        dict(
            type="DBFilterByDifficulty",
            filter_by_difficulty=[-1],
        ),
        dict(
            type="DBFilterByMinNumPoint",
            filter_by_min_num_points=dict(
                car=5,
                truck=5,
                pedestrian=5,
            ),
        ),
    ],
    global_random_rotation_range_per_object=[0, 0],
    rate=1.0,
)

# train_dataset = dict(
#     type="NuscenesLidarDataset",
#     num_sweeps=9,
#     data_path=os.path.join(data_rootdir, "train_lmdb"),
#     info_path=os.path.join(gt_data_root, "nuscenes_infos_train.pkl"),
#     load_dim=5,
#     use_dim=[0, 1, 2, 3, 4],
#     pad_empty_sweeps=True,
#     remove_close=True,
#     use_valid_flag=True,
#     classes=class_names,
#     transforms=[
#         dict(
#             type="PointCloudPreprocess",
#             mode="train",
#             current_mode="train",
#             class_names=class_names,
#             shuffle_points=True,
#             min_points_in_gt=-1,
#             flip_both=True,
#             global_rot_noise=[-0.3925, 0.3925],
#             global_scale_noise=[0.95, 1.05],
#             db_sampler=db_sampler,
#         ),
#         dict(
#             type="ObjectRangeFilter",
#             point_cloud_range=point_cloud_range,
#         ),
#         dict(type="LidarReformat", with_gt=True),
#     ],
# )

data_loader = dict(
    type=torch.utils.data.DataLoader,
    dataset=dict(
        type="Kitti3D",
        data_path="/open_explorer/tmp_data/kitti3d/train_lmdb",
        transforms=[
            dict(
                type="ObjectSample",
                class_names=class_names,
                remove_points_after_sample=False,
                db_sampler=db_sampler,
            ),
            dict(
                type="ObjectNoise",
                gt_rotation_noise=[-0.15707963267, 0.15707963267],
                gt_loc_noise_std=[0.25, 0.25, 0.25],
                global_random_rot_range=[0, 0],
                num_try=100,
            ),
            dict(
                type="PointRandomFlip",
                probability=0.5,
            ),
            dict(
                type="PointGlobalRotation",
                rotation=[-0.78539816, 0.78539816],
            ),
            dict(
                type="PointGlobalScaling",
                min_scale=0.95,
                max_scale=1.05,
            ),
            dict(
                type="ShufflePoints",
                shuffle=True,
            ),
            dict(
                type="ObjectRangeFilter",
                # point_cloud_range=pc_range,
                point_cloud_range=point_cloud_range,
            ),
            dict(type="LidarReformat"),
        ],
    ),
    sampler=dict(type=torch.utils.data.DistributedSampler),
    batch_size=batch_size_per_gpu,
    shuffle=False,
    num_workers=1,
    pin_memory=False,
    collate_fn=collate_lidar3d,
)

# val_dataset = dict(
#     type="NuscenesLidarDataset",
#     test_mode=True,
#     num_sweeps=9,
#     data_path=os.path.join(data_rootdir, "val_lmdb"),
#     load_dim=5,
#     use_dim=[0, 1, 2, 3, 4],
#     pad_empty_sweeps=True,
#     remove_close=True,
#     classes=class_names,
#     transforms=[
#         dict(type="LidarReformat", with_gt=False),
#     ],
# )

val_data_loader = dict(
    type=torch.utils.data.DataLoader,
    dataset=dict(
        type="Kitti3D",
        data_path="/open_explorer/tmp_data/kitti3d/val_lmdb",
        transforms=[
            dict(type="LidarReformat"),
        ],
    ),
    batch_size=batch_size_per_gpu,
    shuffle=False,
    num_workers=1,
    pin_memory=False,
    collate_fn=collate_lidar3d,
)


def loss_collector(outputs: dict):
    losses = []
    for _, loss in outputs.items():
        losses.append(loss)
    return losses


batch_processor = dict(
    type="MultiBatchProcessor",
    need_grad_update=True,
    loss_collector=loss_collector,
)
val_batch_processor = dict(
    type="MultiBatchProcessor",
    need_grad_update=False,
    loss_collector=None,
)


def update_metric(metrics, batch, model_outs):
    for metric in metrics:
        metric.update(batch, model_outs)


def update_loss(metrics, batch, model_outs):
    for metric in metrics:
        metric.update(model_outs)


val_metric_updater = dict(
    type="MetricUpdater",
    metric_update_func=update_metric,
    step_log_freq=10000,
    epoch_log_freq=1,
    log_prefix="Validation " + task_name,
)
loss_show_update = dict(
    type="MetricUpdater",
    metric_update_func=update_loss,
    step_log_freq=log_loss_show,
    epoch_log_freq=1,
    log_prefix="loss_" + task_name,
)

stat_callback = dict(
    type="StatsMonitor",
    log_freq=log_loss_show,
)

ckpt_callback = dict(
    type="Checkpoint",
    save_dir=ckpt_dir,
    name_prefix=training_step + "-",
    strict_match=True,
    save_interval=1,
    # mode="max",
    mode=None,
)

val_callback = dict(
    type="Validation",
    data_loader=val_data_loader,
    batch_processor=val_batch_processor,
    callbacks=[val_metric_updater],
    val_model=None,
    val_on_train_end=True,
    val_interval=30,
    log_interval=200,
)

trace_callback = dict(
    type="SaveTraced",
    save_dir=ckpt_dir,
    trace_inputs=deploy_inputs,
)

grad_callback = dict(
    type="GradScale",
    module_and_scale=[],
    clip_grad_norm=35,
    clip_norm_type=2,
)

# val_nuscenes_metric = dict(
#     type="NuscenesMetric",
#     data_root=meta_rootdir,
#     version="v1.0-trainval",
#     use_lidar=True,
#     classes=class_names,
#     save_prefix="./WORKSPACE/results" + task_name,
# )

float_trainer = dict(
    type="distributed_data_parallel_trainer",
    model=model,
    data_loader=data_loader,
    optimizer=dict(
        type=torch.optim.AdamW,
        betas=(0.95, 0.99),
        lr=2e-4,
        weight_decay=0.01,
    ),
    batch_processor=batch_processor,
    num_epochs=20,
    device=None,
    callbacks=[
        stat_callback,
        loss_show_update,
        dict(
            type="CyclicLrUpdater",
            target_ratio=(10, 1e-4),
            cyclic_times=1,
            step_ratio_up=0.4,
            step_log_interval=200,
        ),
        grad_callback,
        val_callback,
        ckpt_callback,
    ],
    sync_bn=True,
    train_metrics=dict(
        type="LossShow",
    ),
    # val_metrics=[val_nuscenes_metric],
    val_metrics=dict(
        type="Kitti3DMetricDet",
        compute_aos=True,
        current_classes=class_names,
        difficultys=[0, 1, 2],
    ),
)

calibration_data_loader = copy.deepcopy(data_loader)
calibration_data_loader.pop("sampler")  # Calibration does not support DDP or DP
calibration_batch_processor = copy.deepcopy(val_batch_processor)

calibration_trainer = dict(
    type="Calibrator",
    model=model,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        qat_mode="fuse_bn",
        converters=[
            dict(
                type="LoadCheckpoint",
                checkpoint_path=os.path.join(
                    ckpt_dir, "float-checkpoint-last.pth.tar"
                ),
                allow_miss=True,
                verbose=True,
            ),
            dict(type="Float2Calibration", convert_mode=convert_mode),
        ],
    ),
    data_loader=calibration_data_loader,
    batch_processor=calibration_batch_processor,
    num_steps=100,
    device=None,
    callbacks=[
        stat_callback,
        val_callback,
        ckpt_callback,
    ],
    # val_metrics=[val_nuscenes_metric],
    val_metrics=dict(
        type="Kitti3DMetricDet",
        compute_aos=True,
        current_classes=class_names,
        difficultys=[0, 1, 2],
    ),
    log_interval=20,
)

qat_trainer = dict(
    type="distributed_data_parallel_trainer",
    model=model,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        qat_mode="fuse_bn",
        qconfig_params=dict(
            activation_qat_qkwargs=dict(
                averaging_constant=0,
            ),
            weight_qat_qkwargs=dict(
                averaging_constant=1,
            ),
        ),
        converters=[
            dict(type="Float2QAT", convert_mode=convert_mode),
            dict(
                type="LoadCheckpoint",
                checkpoint_path=os.path.join(
                    ckpt_dir, "calibration-checkpoint-last.pth.tar"
                ),
            ),
        ],
    ),
    data_loader=data_loader,
    optimizer=dict(
        type=torch.optim.SGD,
        weight_decay=0.0,
        lr=2e-4,
        momentum=0.9,
    ),
    batch_processor=batch_processor,
    num_epochs=10,
    device=None,
    callbacks=[
        stat_callback,
        loss_show_update,
        dict(
            type="CyclicLrUpdater",
            target_ratio=(10, 1e-4),
            cyclic_times=1,
            step_ratio_up=0.4,
            step_log_interval=200,
        ),
        grad_callback,
        val_callback,
        ckpt_callback,
    ],
    train_metrics=dict(
        type="LossShow",
    ),
    # val_metrics=[val_nuscenes_metric],
    val_metrics=dict(
        type="Kitti3DMetricDet",
        compute_aos=True,
        current_classes=class_names,
        difficultys=[0, 1, 2],
    ),
)

# just for saving int_infer pth and pt
int_infer_trainer = dict(
    type="Trainer",
    model=deploy_model,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        qat_mode="fuse_bn",
        converters=[
            dict(type="Float2QAT", convert_mode=convert_mode),
            dict(
                type="LoadCheckpoint",
                checkpoint_path=os.path.join(
                    ckpt_dir, "qat-checkpoint-last.pth.tar"
                ),
            ),
            dict(type="QAT2Quantize", convert_mode=convert_mode),
        ],
    ),
    data_loader=None,
    optimizer=None,
    batch_processor=None,
    num_epochs=0,
    device=None,
    callbacks=[ckpt_callback, trace_callback],
)

compile_dir = os.path.join(ckpt_dir, "compile")
compile_cfg = dict(
    march=march,
    name=task_name,
    out_dir=compile_dir,
    hbm=os.path.join(compile_dir, "model.hbm"),
    layer_details=True,
    input_source=["ddr"],
    opt="O3",
    output_layout="NHWC",
)

# predictor
float_predictor = dict(
    type="Predictor",
    model=model,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        converters=[
            dict(
                type="LoadCheckpoint",
                checkpoint_path=os.path.join(
                    ckpt_dir, "float-checkpoint-last.pth.tar"
                ),
            ),
        ],
    ),
    data_loader=[val_data_loader],
    batch_processor=val_batch_processor,
    device=None,
    # metrics=[val_nuscenes_metric],
    metrics=dict(
        type="Kitti3DMetricDet",
        compute_aos=True,
        current_classes=class_names,
        difficultys=[0, 1, 2],
    ),
    callbacks=[
        val_metric_updater,
    ],
    log_interval=100,
)

calibration_predictor = dict(
    type="Predictor",
    model=model,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        qat_mode="fuse_bn",
        converters=[
            dict(type="Float2QAT", convert_mode=convert_mode),
            dict(
                type="LoadCheckpoint",
                checkpoint_path=os.path.join(
                    ckpt_dir, "calibration-checkpoint-last.pth.tar"
                ),
            ),
        ],
    ),
    data_loader=[val_data_loader],
    batch_processor=val_batch_processor,
    device=None,
    # metrics=[val_nuscenes_metric],
    metrics=dict(
        type="Kitti3DMetricDet",
        compute_aos=True,
        current_classes=class_names,
        difficultys=[0, 1, 2],
    ),
    callbacks=[
        val_metric_updater,
    ],
    log_interval=100,
)

qat_predictor = dict(
    type="Predictor",
    model=model,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        qat_mode="fuse_bn",
        converters=[
            dict(type="Float2QAT", convert_mode=convert_mode),
            dict(
                type="LoadCheckpoint",
                checkpoint_path=os.path.join(
                    ckpt_dir, "qat-checkpoint-last.pth.tar"
                ),
            ),
        ],
    ),
    data_loader=[val_data_loader],
    batch_processor=val_batch_processor,
    device=None,
    # metrics=[val_nuscenes_metric],
    metrics=dict(
        type="Kitti3DMetricDet",
        compute_aos=True,
        current_classes=class_names,
        difficultys=[0, 1, 2],
    ),
    callbacks=[
        val_metric_updater,
    ],
    log_interval=100,
)

int_infer_predictor = dict(
    type="Predictor",
    model=model,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        qat_mode="fuse_bn",
        converters=[
            dict(type="Float2QAT", convert_mode=convert_mode),
            dict(
                type="LoadCheckpoint",
                checkpoint_path=os.path.join(
                    ckpt_dir, "qat-checkpoint-last.pth.tar"
                ),
            ),
            dict(type="QAT2Quantize", convert_mode=convert_mode),
        ],
    ),
    data_loader=[val_data_loader],
    batch_processor=val_batch_processor,
    device=None,
    # metrics=[val_nuscenes_metric],
    metrics=dict(
        type="Kitti3DMetricDet",
        compute_aos=True,
        current_classes=class_names,
        difficultys=[0, 1, 2],
    ),
    callbacks=[
        val_metric_updater,
    ],
    log_interval=100,
)

infer_ckpt = int_infer_trainer["model_convert_pipeline"]["converters"][1][
    "checkpoint_path"
]

align_bpu_predictor = dict(
    type="Predictor",
    model=model,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        qat_mode="fuse_bn",
        converters=[
            dict(type="Float2QAT", convert_mode=convert_mode),
            dict(
                type="LoadCheckpoint",
                checkpoint_path=infer_ckpt,
            ),
            dict(type="QAT2Quantize", convert_mode=convert_mode),
        ],
    ),
    data_loader=val_data_loader,
    # metrics=[val_nuscenes_metric],
    metrics=dict(
        type="Kitti3DMetricDet",
        compute_aos=True,
        current_classes=class_names,
        difficultys=[0, 1, 2],
    ),
    callbacks=[
        val_metric_updater,
    ],
    log_interval=1,
)


def process_inputs(infer_inputs, transforms=None):
    points = np.fromfile(
        infer_inputs["input_points"], dtype=np.float32
    ).reshape((-1, 5))

    points = torch.from_numpy(points)
    model_input = {
        "points": [points],
    }

    if transforms is not None:
        model_input = transforms(model_input)

    return model_input, points


def process_outputs(model_outs, viz_func, vis_inputs):
    preds = model_outs[0]
    viz_func(vis_inputs, preds)
    return None


infer_cfg = dict(
    model=model,
    infer_inputs=dict(
        input_points="/open_explorer/tmp_orig_data/kitti3d/training/velodyne/000000.bin",
    ),
    process_inputs=process_inputs,
    viz_func=partial(
        lidar_det_visualize, score_thresh=0.4, is_plot=True, reverse=True
    ),
    process_outputs=process_outputs,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        qat_mode="fuse_bn",
        converters=[
            dict(type="Float2QAT", convert_mode=convert_mode),
            dict(
                type="LoadCheckpoint",
                checkpoint_path=infer_ckpt,
            ),
            dict(type="QAT2Quantize", convert_mode=convert_mode),
        ],
    ),
)

onnx_cfg = dict(
    model=deploy_model,
    stage="qat",
    inputs=deploy_inputs,
    model_convert_pipeline=dict(
        type="ModelConvertPipeline",
        qat_mode="fuse_bn",
        converters=[
            dict(type="Float2QAT"),
            dict(
                type="LoadCheckpoint",
                checkpoint_path=os.path.join(
                    ckpt_dir, "qat-checkpoint-last.pth.tar"
                ),
            ),
        ],
    ),
)

Hi, the parameters under # Voxelization cfg need to be modified to match the pointpillars config; these parameters are tied to the dataset.

Then where exactly is the code that filters the dataset? I haven't been able to find it.

Link: Baidu Netdisk (link no longer exists). Extraction code: 2sai

This is part of my dataset along with the config file. Could you take a look at how this problem should be solved? Thanks.

Hi, point cloud filtering happens during voxelization. That step is packaged as a .so; the entry point is _voxelization in /usr/local/lib/python3.8/dist-packages/horizon_plugin_pytorch/nn/quantized/functional_impl.py, and the implementation is in C++ — see the netdisk link: https://pan.horizon.ai/index.php/s/ZnNHb8a6WWxEbH7
Please modify the config file according to the suggestions.
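To make the effect concrete, here is a minimal numpy sketch of the range check that voxelization applies. This is an illustration only, not the shipped C++ _voxelization, and the function name filter_points_in_range is invented for this sketch:

import numpy as np

def filter_points_in_range(points, pc_range):
    # Keep only points whose x/y/z fall inside pc_range
    # ([x_min, y_min, z_min, x_max, y_max, z_max]); everything
    # outside never makes it into a voxel.
    pc_range = np.asarray(pc_range, dtype=np.float32)
    mask = np.all(
        (points[:, :3] >= pc_range[:3]) & (points[:, :3] < pc_range[3:]),
        axis=1,
    )
    return points[mask]

# With the nuscenes range from the config above, a KITTI-style cloud
# (x roughly in [0, 70]) silently loses every point with x > 51.2:
points = np.random.uniform(-80.0, 80.0, size=(1000, 4)).astype(np.float32)
kept = filter_points_in_range(points, [-51.2, -51.2, -5.0, 51.2, 51.2, 3.0])
print(len(points), "->", len(kept))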

Hi, do you have a modified config on your side that can train CenterPoint on the KITTI dataset normally?

The current reference algorithms don't include one. Following the suggestions in the comments, besides the dataloader you also need to replace the parts that involve point cloud preprocessing. The parameters involved:

# Voxelization cfg
point_cloud_range = [-51.2, -51.2, -5.0, 51.2, 51.2, 3.0]
voxel_size = [0.2, 0.2, 8]
max_num_points = 20
max_voxels = (30000, 40000)
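For comparison, a commonly used KITTI-style setting looks like the sketch below. These values are taken from typical public PointPillars KITTI setups, not from an official OE config, so verify each of them against the pointpillars config shipped in your OE package before using them:

# Voxelization cfg (illustrative KITTI-style values; verify against
# the pointpillars reference config in your OE release)
point_cloud_range = [0, -39.68, -3, 69.12, 39.68, 1]
voxel_size = [0.16, 0.16, 4]
max_num_points = 100
max_voxels = (12000, 40000)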

Could you modify a version for us, based on the KITTI dataset? We made the changes you described, but there are still some errors.

How do we fix this error? It looks like it's caused by the db_sampler part of the config file.

Hi, based on your config there are the following issues:
- pc_range, point_cloud_range, voxel_size, max_num_points, and max_voxels need to use values appropriate to the kitti3d data;
- class_names must be capitalized, kitti3d only has the classes Car, Pedestrian, and Cyclist, and the sample_groups values are all 15;
- norm_range and norm_dims in CenterPointPreProcess both need to be aligned with the PP (PointPillars) model's config;
- in the target part, since kitti3d has no velocity, gt_box3d has one dimension fewer than in nuscenes, so the indices used to read values need to be adjusted;
- in post-processing, align with the PP model: post_center_limit_range = post_center_range.
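Pulled together, those suggestions amount to roughly the following fragment (a sketch only — the range/voxel numbers and sample counts must come from your own PointPillars config and dataset):

# KITTI class names, capitalized
class_names = [
    "Car",
    "Pedestrian",
    "Cyclist",
]
tasks = [
    dict(num_class=1, class_names=["Car"]),
    dict(num_class=1, class_names=["Pedestrian"]),
    dict(num_class=1, class_names=["Cyclist"]),
]
# No "vel" head: kitti3d has no velocity labels, so gt_box3d has one
# dimension fewer than nuscenes, and with_velocity then evaluates to False.
common_heads = dict(reg=(2, 2), height=(1, 2), dim=(3, 2), rot=(2, 2))
with_velocity = "vel" in common_heads.keys()
# In postprocess, take post_center_limit_range and post_center_range
# from the PointPillars config instead of the nuscenes values above.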

What we actually need to train on is our own dataset. The classes are correct — three classes in total — it's just that our dataset is in KITTI format.

How should the error shown in the image above be handled?

You can follow the error into the code to check the constraint: each dict in sample_groups currently only supports len 1, i.e. this is supported:

sample_groups=[
    dict(Car=15),
]

while len > 1 is not supported:

sample_groups=[
    dict(Car=15, Cyclist=15),
]
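So to sample several classes under this constraint, list one single-key dict per class (the counts here are illustrative):

sample_groups=[
    dict(Car=15),
    dict(Pedestrian=15),
    dict(Cyclist=15),
],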

Your own dataset also requires the changes above; many of the values are still nuscenes parameters, which don't apply to other datasets.

When a change triggers an error, you can debug by following the paths in the traceback; self-diagnosing this way is more efficient.

The data above will be changed. But the current problem shouldn't be the data; it still seems to be other parts of the config file — every time we fix one thing, another error appears. Could you provide a config file for training CenterPoint on the KITTI dataset, or a PointPillars multi-class detection config? We urgently need this.

Adapting to other datasets requires debugging on your own side; you can work from the error messages. This particular error means the input data is missing required keywords; the originally delivered code reads example["gt_boxes"] and example["gt_classess"].

These correspond to the keywords produced during dataset processing, in hat/data/datasets/kitti3d.
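A quick way to check which keys your samples actually carry is to build the dataset without transforms and print one sample. This is a debugging sketch; the import path and constructor arguments are inferred from the config above and may need adjusting to your HAT version:

# Debugging sketch (assumes the HAT "Kitti3D" dataset class is importable
# from the module shown in the traceback; adjust to your installation).
from hat.data.datasets.kitti3d import Kitti3D

dataset = Kitti3D(
    data_path="/open_explorer/tmp_data/kitti3d/train_lmdb",
    transforms=None,  # inspect the raw sample dict before any transform
)
sample = dataset[0]
print(sorted(sample.keys()))  # should contain the gt box/class keywords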

Is it possible to modify the pointpillars model into a multi-class detection task model?

Yes. You can import HAT's modules and build it within your own framework; development doesn't have to happen inside HAT.

In practice, which of these two approaches is simpler: 1. adapting CenterPoint to the KITTI data format, or 2. adapting PointPillars for multi-class detection?