Type: Dataset
Tags: deep learning, Roads, traversability, laser radar, point cloud, robot vision systems, tallgrass, urban environments, LiDAR point clouds, point cloud compression, point cloud data, RGB images, autonomous vehicles, bright light, cloud images, conditional random field, daylight, dense depth, depth images, depth information, farmland, free space, global information, grassland, image features, image information, light conditions, receptive field, road environment, surface normals, training set, trajectory planning, transformer architecture, transformer encoder, twilight, aggregates
Bibtex:
@article{min_orfd,
  title = {ORFD: A Dataset and Benchmark for Off-Road Freespace Detection},
  journal = {},
  author = {Min, Chen and Jiang, Weizhong and Zhao, Dawei and Xu, Jiaolong and Xiao, Liang and Nie, Yiming and Dai, Bin},
  year = {},
  url = {https://github.com/chaytonmin/Off-Road-Freespace-Detection},
  abstract = {Freespace detection is an essential component of autonomous driving technology and plays an important role in trajectory planning. In the last decade, deep learning based freespace detection methods have proven feasible. However, these efforts focused on urban road environments, and few deep learning based methods were specifically designed for off-road freespace detection due to the lack of an off-road dataset and benchmark. In this paper, we present the ORFD dataset, which, to our knowledge, is the first off-road freespace detection dataset. The dataset was collected in different scenes (woodland, farmland, grassland and countryside), different weather conditions (sunny, rainy, foggy and snowy) and different light conditions (bright light, daylight, twilight, darkness), and contains 12,198 LiDAR point cloud and RGB image pairs in total, with the traversable area, non-traversable area and unreachable area annotated in detail. We propose a novel network named OFF-Net, which unifies the Transformer architecture to aggregate local and global information, meeting the large-receptive-field requirement of the freespace detection task. We also propose a cross-attention mechanism to dynamically fuse LiDAR and RGB image information for accurate off-road freespace detection. Dataset and code are publicly available at https://github.com/chaytonmin/OFF-Net.},
  keywords = {deep learning, Roads, traversability, laser radar, point cloud, robot vision systems, tallgrass, urban environments, LiDAR point clouds, point cloud compression, point cloud data, RGB images, aggregates, autonomous vehicles, bright light, cloud images, conditional random field, daylight, dense depth, depth images, depth information, farmland, free space, global information, grassland, image features, image information, light conditions, receptive field, road environment, surface normals, training set, trajectory planning, transformer architecture, transformer encoder, twilight},
  terms = {},
  license = {MIT: https://github.com/chaytonmin/Off-Road-Freespace-Detection/blob/main/LICENSE},
  superseded = {}
}
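The abstract's key idea is cross-attention fusion: RGB features act as queries that attend over LiDAR-derived features so the two modalities are combined dynamically per location. Below is a minimal PyTorch sketch of that pattern. The module name, feature shapes, and hyperparameters are illustrative assumptions on my part, not the authors' implementation; consult the repository linked above for OFF-Net itself.

```python
# Hedged sketch of cross-modal fusion via cross-attention, assuming two
# encoders have already produced same-sized (B, C, H, W) feature maps for
# the RGB image and the LiDAR input. All names/shapes here are hypothetical.
import torch
import torch.nn as nn


class CrossAttentionFusion(nn.Module):
    """RGB features (queries) attend over LiDAR features (keys/values)."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb_feat: torch.Tensor, lidar_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = rgb_feat.shape
        q = rgb_feat.flatten(2).transpose(1, 2)     # (B, H*W, C) queries from RGB
        kv = lidar_feat.flatten(2).transpose(1, 2)  # (B, H*W, C) keys/values from LiDAR
        fused, _ = self.attn(q, kv, kv)             # each RGB token attends to all LiDAR tokens
        fused = self.norm(fused + q)                # residual connection + layer norm
        return fused.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    rgb = torch.randn(2, 256, 32, 64)
    lidar = torch.randn(2, 256, 32, 64)
    out = CrossAttentionFusion()(rgb, lidar)
    print(out.shape)  # torch.Size([2, 256, 32, 64])
```

Because attention weights are computed from the inputs, the fusion re-balances RGB versus LiDAR evidence per pixel, which is useful when one modality degrades (e.g., RGB at twilight or in fog); a fused map like this would then feed a segmentation head predicting the three annotated classes (traversable, non-traversable, unreachable).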