
Automatic calibration of multiple cameras and lidars for autonomous vehicles
Y.B. Blokhinov 1, E.E. Andrienko 1, K.K. Kazakhmedov 1, B.V. Vishnyakov 1

State Research Institute of Aviation Systems,
125167, Moscow, Russia, Victorenko Str., 7


DOI: 10.18287/2412-6179-CO-812

Pages: 382-393.

Full text of article: Russian language.

Autonomous navigation of unmanned vehicles (UVs) is currently one of the most challenging scientific and technical problems, all the more so for UVs moving across rough terrain. Such vehicles are usually equipped with several sensors operating simultaneously, and sophisticated software is developed to collect and analyze the heterogeneous data. For the joint use of data from multiple cameras and lidars, all sensors need to be mutually referenced in a single coordinate system; this problem is solved in the process of system calibration. The general idea is to place an object of a special type in the field of view of the sensors so that its characteristic points can be detected automatically by all sensors from different viewpoints. Repeated surveys of the object then provide the required number of tie points for the mutual alignment of the sensors. This work presents a technique for the automatic calibration of a system of cameras and lidars using an original calibration object. The experimental results presented show that the calibration accuracy is sufficiently high.
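The mutual alignment step described above, once matched tie points are available in two sensor frames, is commonly solved as a least-squares rigid registration between the two 3D point sets (the closed-form method of Umeyama). A minimal sketch, assuming noise-free, already-matched tie points (the function name `rigid_align` is illustrative, not from the article):

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t such that R @ s_i + t ~ d_i.

    Closed-form least-squares rigid alignment of two matched 3D point
    sets (Kabsch/Umeyama). src, dst: (N, 3) arrays of tie points.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered matrix.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic check: recover a known pose from matched points.
rng = np.random.default_rng(0)
pts = rng.random((10, 3))                    # tie points in the lidar frame
ang = np.pi / 6
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_align(pts, pts @ R_true.T + t_true)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

With real sensor data the correspondences are noisy and may contain outliers, so in practice such a closed-form estimate is typically wrapped in a robust scheme (e.g. RANSAC) and refined jointly over all sensors by bundle adjustment.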

Keywords: unmanned vehicle, autonomous navigation, data collection and analysis, cameras, lidars, automatic system calibration, calibration stand, markers, reflective tags.

Citation: Blokhinov YB, Andrienko EE, Kazakhmedov KK, Vishnyakov BV. Automatic calibration of multiple cameras and LIDARs for autonomous vehicles. Computer Optics 2021; 45(3): 382-393. DOI: 10.18287/2412-6179-CO-812.

The work was supported by the Russian Science Foundation (Project No. 16-11-00082).



© 2009, IPSI RAS
151, Molodogvardeiskaya str., Samara, 443001, Russia; E-mail: ko@smr.ru ; Tel: +7 (846) 242-41-24 (Executive secretary), +7 (846) 332-56-22 (Issuing editor), Fax: +7 (846) 332-56-20