HD1K Benchmark Suite

We are excited to be part of the Robust Vision Challenge 2018. Check out the challenge website and our submission instructions for further details on how to participate. We are looking forward to seeing you at CVPR!


For any questions or suggestions, please contact us.


When using our data, benchmarks, or metrics, please cite the corresponding papers:

For the benchmark:
[1] The HCI Benchmark Suite: Stereo And Flow Ground Truth With Uncertainties for Urban Autonomous Driving.
D. Kondermann, R. Nair, K. Honauer, K. Krispin, J. Andrulis, A. Brock, B. Güssefeld, M. Rahimimoghaddam, S. Hofmann, C. Brenner, and B. Jähne.
In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2016.

For the data:
[2] Stereo Ground Truth with Error Bars.
D. Kondermann, R. Nair, S. Meister, W. Mischler, B. Güssefeld, K. Honauer, S. Hofmann, C. Brenner, and B. Jähne.
In Asian Conference on Computer Vision (ACCV), 2014.

For the evaluation metrics:
[3] The HCI Stereo Metrics: Geometry-Aware Performance Analysis of Stereo Algorithms.
K. Honauer, L. Maier-Hein, and D. Kondermann.
In IEEE International Conference on Computer Vision (ICCV), 2015.

Further related references:
[4] Noise Equalisation and Quasi Loss-Less Image Data Compression – or How Many Bits Needs an Image Sensor?
B. Jähne and M. Schwarzbauer. tm – Technisches Messen, 2016.

[5] Analysis and Modeling of Passive Stereo and Time-of-Flight Imaging.
R. Nair. PhD Thesis, 2015.

[6] Bootstrapping of Batch Sequence-to-Pointcloud Registration for Groundtruth Generation.
K. Krispin. Master's Thesis, 2016.

[7] Ground Truth Accuracy and Performance of the Matching Pipeline.
J. Maier, M. Humenberger, O. Zendel, and M. Vincze.
In IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017.


  • 16.02.2018 The stereo and optical flow benchmarks are open for submissions.
  • 01.02.2018 Training data for stereo and optical flow is available for download.


We thank Wolfgang Niehsen and his team at Robert Bosch GmbH, Computer Vision Research Lab, Hildesheim, for supplying the test car and camera mount, and for their extensive input on meaningful content for the scenes we recorded.

We further thank Jens Taupadel, Jakob Knauer, and Moritz Wandsleb at Hannover University for acquiring and processing the scans. Finally, we thank our lab members Alexandro Sanchez-Bach, Ekaterina Melnik, and Felix Braham Stern for their assistance in data processing; Florian Becker and Frank Lenzen for helpful discussions; and AEON Verlag & Studio GmbH for the organization of all helpers and facilities.