Didi-Training-Release-1.tar.gz | 32.80 GB
Type: Dataset
Tags:
Bibtex:
@article{,
  title = {Udacity Didi $100k Challenge Dataset 1},
  keywords = {},
  journal = {},
  author = {Udacity and Didi},
  year = {},
  url = {https://challenge.udacity.com/home/},
  license = {},
  abstract = {First Full Dataset Release - Udacity/Didi $100k Challenge

One of the most important aspects of operating an autonomous vehicle is understanding the surrounding environment in order to make safe decisions. Udacity and Didi Chuxing are partnering to give students an incentive to come up with the best way to detect obstacles using camera and LIDAR data. This challenge will allow for pedestrian, vehicle, and general obstacle detection that is useful to both human drivers and self-driving car systems.

Competitors will need to process LIDAR and camera frames to output a set of obstacles, removing noise and environmental returns. Participants will be able to build on the large body of work that has gone into the Kitti datasets and challenges, using existing techniques and their own novel approaches to improve the current state of the art. Specifically, students will be competing against each other on the Kitti Object Detection Evaluation Benchmark. While a leaderboard already exists for academic publications, Udacity and Didi will be hosting our own leaderboard specifically for this challenge, and we will be using the standard object detection development kit, which lets us evaluate approaches as they are done in academia and industry.

IMPORTANT NOTICE

There are some major differences between this Udacity dataset and the Kitti datasets. Note that positions are recorded with respect to the base station, not the capture vehicle. The NED positions in the 'rtkfix' topic are therefore relative to a FIXED POINT, NOT THE CAPTURE OR OBSTACLE VEHICLES. The relative positions can be calculated easily, as the NED frame is Cartesian, not polar. The XML tracklet files will, however, be in the frame of the capture vehicle. This means that the capture vehicle is also included in the recorded positions; it is denoted by the ROS topic '/gps/rtkfix' in this first dataset. The single obstacle vehicle in this dataset is located in the 'obs1/' topic namespace, but this will be changed to '/obstacles/obstacle_name' in future releases to accommodate the creation of XML tracklet files for multiple obstacles.

Orientation of obstacles is not evaluated in Round 1, but will be evaluated in Round 2. The pose section of the ROS bags included in this release IS NOT A VALID QUATERNION and does not represent the pose of either the capture vehicle or the obstacle.

There is no XML tracklet file included with these datasets. They will be released as soon as they are available, in conjunction with the opening of the online leaderboard.},
  superseded = {},
  terms = {}
}
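Because both the capture vehicle and the obstacle are reported in the same Cartesian NED frame anchored at the base station, the obstacle's position relative to the capture vehicle is just a vector difference of the two fixes. The sketch below is not part of the official release: it uses the Python rosbag API, and the obstacle topic name, bag filename, and Odometry-style message layout (pose.pose.position) are assumptions for illustration only.

# Minimal sketch: difference NED fixes to get the obstacle's offset from the capture vehicle.
# Assumes Odometry-style messages; '/obs1/gps/rtkfix' and the bag path are hypothetical.
import rosbag

CAPTURE_TOPIC = '/gps/rtkfix'        # capture vehicle, NED relative to the base station
OBSTACLE_TOPIC = '/obs1/gps/rtkfix'  # assumed topic inside the 'obs1/' namespace

def read_ned(bag, topic):
    """Return a list of (timestamp_sec, (north, east, down)) tuples for one topic."""
    fixes = []
    for _, msg, t in bag.read_messages(topics=[topic]):
        p = msg.pose.pose.position   # assumed Odometry-style layout
        fixes.append((t.to_sec(), (p.x, p.y, p.z)))
    return fixes

with rosbag.Bag('Didi-Training-Release-1/some_drive.bag') as bag:  # hypothetical path
    capture = read_ned(bag, CAPTURE_TOPIC)
    obstacle = read_ned(bag, OBSTACLE_TOPIC)

# Both tracks share one Cartesian NED origin, so the relative offset is a plain
# subtraction. Pairs are matched by index here for brevity; real use would
# interpolate the two streams by timestamp before differencing.
for (t_c, c), (_, o) in zip(capture, obstacle):
    rel = (o[0] - c[0], o[1] - c[1], o[2] - c[2])
    print('t=%.3f relative NED offset: %s' % (t_c, rel))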