Type: Dataset
Tags: semantic segmentation, traversability, navigation, point cloud, robot navigation, autonomous systems, calibration target, camera images, classification datasets, color camera, high grass, image segmentation, LiDAR point clouds, LiDAR scans, neural architecture search, ontologies, point cloud compression, point cloud data, point cloud segmentation, raw sensor data, RGB images, robot sensing systems, robotic platform, segmentation model, semantic segmentation models, sensor data, synchronization, terrain, test split, types of obstacles, unstructured environments, training, 3D point cloud
Bibtex:
@article{hagmanns2024excavating,
  title = {Excavating in the Wild: The GOOSE-Ex Dataset for Semantic Segmentation},
  author = {Hagmanns, Raphael and Mortimer, Peter and Granero, Miguel and Luettel, Thorsten and Petereit, Janko},
  journal = {arXiv preprint arXiv:2409.18788},
  year = {2024},
  url = {https://goose-dataset.de/},
  abstract = {The successful deployment of deep learning-based techniques for autonomous systems is highly dependent on the data availability for the respective system in its deployment environment. Especially for unstructured outdoor environments, very few datasets exist for even fewer robotic platforms and scenarios. In an earlier work, we presented the German Outdoor and Offroad Dataset (GOOSE) framework along with 10000 multimodal frames from an offroad vehicle to enhance the perception capabilities in unstructured environments. In this work, we address the generalizability of the GOOSE framework. To accomplish this, we open-source the GOOSE-Ex dataset, which contains an additional 5000 labeled multimodal frames from various completely different environments, recorded on a robotic excavator and a quadruped platform. We perform a comprehensive analysis of the semantic segmentation performance on different platforms and sensor modalities in unseen environments. In addition, we demonstrate how the combined datasets can be utilized for different downstream applications or competitions such as offroad navigation, object manipulation or scene completion. The dataset, its platform documentation and pre-trained state-of-the-art models for offroad perception will be made available on https://goose-dataset.de/.},
  keywords = {training, semantic segmentation, traversability, navigation, point cloud, robot navigation, 3D point cloud, autonomous systems, calibration target, camera images, classification datasets, color camera, high grass, image segmentation, LiDAR point clouds, LiDAR scans, neural architecture search, ontologies, point cloud compression, point cloud data, point cloud segmentation, raw sensor data, RGB images, robot sensing systems, robotic platform, segmentation model, semantic segmentation models, sensor data, synchronization, terrain, test split, types of obstacles, unstructured environments},
  terms = {},
  license = {CC BY-SA: https://creativecommons.org/licenses/by-sa/4.0/},
  superseded = {}
}