
Article title OBJECT DETECTION AND GROUND TYPE CLASSIFICATION WITH COMBINED COMPUTER VISION SYSTEM
Authors A.V. Vazaev, V.P. Noskov, I.V. Rubtsov, S.G. Tsarichenko
Section SECTION II. VISION SYSTEM AND ONBOARD COMPUTERS
Month, Year 02, 2016
Index UDC 007:621.865.8
DOI
Abstract Because current remote-controlled mobile robotic systems suffer from fundamental shortcomings imposed by the communication channel, further development of mobile robotics is tied to the use of autonomous control systems. One of the most important problems for an autonomous robot control system is building, from on-board sensor data, an environment model accurate enough for planning future motion and behavior and for robot navigation. For industrial environments the model can be constructed from geometrical data alone, but rough terrain may contain passable obstacles as well as impassable flat areas. This paper proposes using data from a combined computer vision system that includes a mutually calibrated LiDAR sensor, a color camera, and a thermal imaging camera. Such a sensor combination yields a geometrical environment model enriched with color and thermal information, which allows object recognition and working-area classification to be solved more accurately and more simply, not only from geometrical parameters but also from ground passability criteria. A mathematical framework is given for ground type classification (illustrated on four ground types: vegetation, asphalt, sand, and gravel) and for object detection (illustrated on water surfaces and open flame). The developed software demonstrates: image segmentation for vegetation and asphalt detection; detection when a single sensor operates improperly or when the sensors' fields of view differ; and detection of water surfaces and open flame in the combined image of the vision system. Full-scale experimental results presented in the paper support the conclusion that combined color-thermal-distance images significantly expand the range of tasks a computer vision system can solve and increase the efficiency of their solution.
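As an illustration of how mutually calibrated color-thermal-distance data might be combined and used for ground type labeling, the Python sketch below projects LiDAR points into a color image and a thermal image, attaches an (R, G, B, T) sample to each point, and assigns terrain cells to the nearest class mean among vegetation, asphalt, sand, and gravel. All function names, calibration matrices, and feature choices are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: fusing mutually calibrated LiDAR, color and thermal
# data into combined color-thermal-distance points, then labeling ground types.
# Matrices, feature choices and names are assumptions, not the paper's code.
import numpy as np

def project_points(points_xyz, K, R, t):
    """Project N x 3 LiDAR points into an image with intrinsics K and
    extrinsics [R | t]; return pixel coordinates and an in-front-of-camera mask."""
    cam = (R @ points_xyz.T + t.reshape(3, 1)).T   # points in the camera frame
    in_front = cam[:, 2] > 0
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3], in_front        # perspective division

def fuse_scan(points_xyz, rgb_img, thermal_img,
              K_rgb, R_rgb, t_rgb, K_th, R_th, t_th):
    """Attach a color and a temperature sample to every LiDAR point visible in
    both images; rows of the result are (x, y, z, R, G, B, T)."""
    uv_rgb, ok_rgb = project_points(points_xyz, K_rgb, R_rgb, t_rgb)
    uv_th, ok_th = project_points(points_xyz, K_th, R_th, t_th)
    px_rgb = np.round(uv_rgb).astype(int)
    px_th = np.round(uv_th).astype(int)

    h_rgb, w_rgb = rgb_img.shape[:2]
    h_th, w_th = thermal_img.shape[:2]
    inside = (ok_rgb & ok_th
              & (px_rgb[:, 0] >= 0) & (px_rgb[:, 0] < w_rgb)
              & (px_rgb[:, 1] >= 0) & (px_rgb[:, 1] < h_rgb)
              & (px_th[:, 0] >= 0) & (px_th[:, 0] < w_th)
              & (px_th[:, 1] >= 0) & (px_th[:, 1] < h_th))

    color = rgb_img[px_rgb[inside, 1], px_rgb[inside, 0]]                     # (R, G, B) per point
    temp = thermal_img[px_th[inside, 1], px_th[inside, 0]].reshape(-1, 1)     # temperature per point
    return np.hstack([points_xyz[inside], color, temp])

# Hypothetical ground-type labeling: assign each terrain cell to the nearest
# class mean in a feature space such as (mean R, G, B, mean T, height variance).
GROUND_CLASSES = ["vegetation", "asphalt", "sand", "gravel"]

def classify_ground(cell_features, class_means):
    """cell_features: M x D, class_means: 4 x D (one row per ground class)."""
    dists = np.linalg.norm(cell_features[:, None, :] - class_means[None, :, :], axis=2)
    return [GROUND_CLASSES[i] for i in np.argmin(dists, axis=1)]
```

A full pipeline would additionally handle lens distortion, time synchronization, and the cases noted above where one sensor operates improperly or the sensors' fields of view differ; the nearest-mean rule here merely stands in for the paper's classification mathematics.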


Keywords Mobile robot; autonomous control system; combined computer vision system; environment model; recognition; classification.
References 1. Kalyaev A.V., Chernukhin Yu.V., Noskov V.P., Kalyaev I.A. Odnorodnye upravlyayushchie struktury adaptivnykh robotov [Homogeneous control structures for adaptive robots]. Moscow: Nauka, 1990, 147 p.
2. Noskov V.P., Rubtsov I.V. Opyt resheniya zadachi avtonomnogo upravleniya dvizheniem mobil'nykh robotov [The experience of solving the problem of autonomous motion control of mobile robots], Mekhatronika, avtomatizatsiya, upravlenie [Mechatronics, Automation, Control], 2005, No. 12, pp. 21-24.
3. Ji J., Chen X. From structured task instructions to robot task plans, IC3K 2013; KEOD 2013 – 5th International Conference on Knowledge Engineering and Ontology Development, Proceedings, 2013, pp. 237-244.
4. Lakota N.A., Noskov V.P., Rubtsov I.V., Lundgren Ya.-O., Moor F. Opyt ispol'zovaniya elementov iskusstvennogo intellekta v sisteme upravleniya tsekhovogo transportnogo robota [Experience in using artificial intelligence elements in the control system of a shop-floor transport robot], Mekhatronika [Mechatronics], 2000, No. 4, pp. 44-47.
5. Buyvolov G.A., Noskov V.P., Rurenko A.A., Raspopin A.N. Apparatno-algoritmicheskie sredstva formirovaniya modeli problemnoy sredy v usloviyakh peresechennoy mestnosti [Hardware and algorithmic means for forming a model of the problem environment in rough terrain], Sb. nauchn. tr. “Upravlenie dvizheniem i tekhnicheskoe zrenie avtonomnykh transportnykh robotov” [Collection of scientific papers "Motion Control and Machine Vision of Autonomous Transport Robots"]. Moscow: IFTP, 1989, pp. 61-69.
6. Noskov A.V., Rubtsov I.V., Romanov A.Yu. Formirovanie ob"edinennoy modeli vneshney sredy na osnove informatsii videokamery i dal'nomera [The formation of a unified model of the external environment on the basis of information of the video camera and the range finder], Mekhatronika, avtomatizatsiya, upravlenie [Mechatronics, Automation, Control], 2007, No. 8, pp. 2-5.
7. Noskov V.P., Rubtsov I.V., Vazaev A.V. Ob effektivnosti modelirovaniya vneshney sredy po dannym kompleksirovannoy STZ [On the efficiency of modelling the external environment using combined vision system data], Robototekhnika i tekhnicheskaya kibernetika [Robotics and Technical Cybernetics], 2015, No. 2 (7), pp. 51-55.
8. Milella A. et al. Combining radar and vision for self-supervised ground segmentation in outdoor environments, 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011, pp. 255-260.
9. Galindo C., Fernandez-Madrigal J.-A., Gonzalez J. Improving efficiency in mobile robot task planning through world abstraction, IEEE Transactions on Robotics, 2004, Vol. 20, No. 4, pp. 677-690.
10. Milella A., Reina G., Foglia M.M. A multi-baseline stereo system for scene segmentation in natural environments, 2013 IEEE International Conference on Technologies for Practical Robot Applications (TePRA), 2013, pp. 1-6.
11. Milella A., Reina G., Underwood J. A Self-learning Framework for Statistical Ground Classification using Radar and Monocular Vision, J. Field Robotics, 2015, Vol. 32, No. 1, pp. 20-41.
12. Slavkovikj V. et al. Image-Based Road Type Classification. IEEE, 2014, pp. 2359-2364.
13. Posada L.F. et al. Semantic classification of scenes and places with omnidirectional vision, 2013 European Conference on Mobile Robots (ECMR), 2013, pp. 113-118.
14. Arbeiter G. et al. Efficient segmentation and surface classification of range images, 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 5502-5509.
15. Noskov A.V., Noskov V.P. Raspoznavanie orientirov v dal'nometricheskikh izobrazheniyakh [Recognition of landmarks in range images], Sb. «Mobil'nye roboty i mekhatronnye sistemy» [Proc. "Mobile Robots and Mechatronic Systems"]. Moscow: Izd-vo MGU, 2001, pp. 179-192.
16. Anand A. et al. Contextually Guided Semantic Labeling and Search for Three-Dimensional Point Clouds, The International Journal of Robotics Research. January 2013, Vol. 32, Issue 1.
17. Al-Moadhen A. et al. Improving the Efficiency of Robot Task Planning by Automatically Integrating Its Planner and Common-Sense Knowledge Base, Knowledge-Based Information Systems in Practice / ed. Tweedale J.W. et al. Springer International Publishing, 2015, pp. 185-199.
18. Stuckler J., Biresev N., Behnke S. Semantic mapping using object-class segmentation of RGB-D images, 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2012, pp. 3005-3010.
19. Mashkov K.Yu., Rubtsov V.I., Shtifanov N.V. Avtomaticheskaya sistema obespecheniya opornoy prokhodimosti mobil'nogo robota [Automatic system for ensuring the support passability of a mobile robot], Vestnik MGTU im. N.E. Baumana. Ser. Mashinostroenie. Vyp. Spetsial'naya robototekhnika [Herald of the Bauman Moscow State Technical University, Series Mechanical Engineering, special issue "Special Robotics"], 2012, pp. 95-106.
20. Zhang Z. A Flexible New Technique for Camera Calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000, Vol. 22, No. 11, pp. 1330-1334.
