|Article title||DEVELOPMENT OF THE ADAPTIVE OBJECTS DESCRIPTIONS FOR THE COMPUTER VISION OF ROBOTIC SYSTEMS|
|Authors||N.V. Kim, N.E. Bodunkov, D.V. Klestov|
|Section||SECTION IV. VISION SYSTEMS AND ONBOARD COMPUTERS|
|Month, Year||01, 2015|
|Abstract||The paper considers the problem of technical vision systems (TVS) of autonomous robotic complexes (RTC) functioning in uncertain and changing conditions. It is shown that the solution of observation tasks, such as detection and recognition of objects of interest (OI), is based on comparing pre-stored reference object descriptions with the actual (current) images received by the TVS. Decision-making effectiveness in a changing environment is limited by the need to prepare in advance a large number of reference object descriptions (one for each set of observation conditions). To reduce the required initial set of reference descriptions, a new approach to the development of adaptive OI descriptions using neuro-fuzzy systems is suggested. The fuzzy system consists of fuzzy rules; each rule establishes a correspondence between certain observation conditions and a reference description, and the set of references forms a knowledge base (KB). This approach can extend the functionality of existing TVS of RTC in uncertain environments and also allows additional training of already formed descriptions. The article deals with the use of adaptive descriptions in the RTC navigation task, the structure of the adaptive descriptions, and a way of representing the KB. It is shown that the frame is the most appropriate form of knowledge representation for building the KB of an adaptive TVS. A test KB based on frame descriptions was built, and the efficiency of the proposed approach was demonstrated on this KB.|
|Keywords||Autonomous robotic complex; technical vision systems; adaptive descriptions; neuro-fuzzy systems; frames.|
|References||1. Mubarakshin R.V., Kim N.V., Krasil'shchikov M.N., Sablin Yu.A., Shingiriy I.P. Bortovye informatsionno-upravlyayushchie sredstva osnashcheniya letatel'nykh apparatov [On-board information and control equipment of aircraft]: Uchebnik, Pod red. Krasil'shchikova M.N. [Textbook, ed. by M.N. Krasil'shchikov]. Moscow: Izd-vo MAI, 2003, 380 p.
2. Veremeenko K.K., Zheltov S.Yu., Kim N.V. et al. Sovremennye informatsionnye tekhnologii v zadachakh navigatsii i navedeniya bespilotnykh manevrennykh letatel'nykh apparatov [Modern information technologies in problems of navigation and guidance of maneuverable unmanned aerial vehicles], Pod red. M.N. Krasil'shchikova, G.G. Sebryakova [ed. by M.N. Krasil'shchikov, G.G. Sebryakov]. Moscow: Fizmatlit, 2009, 556 p.
3. Leishman R.C., McLain T.W., Beard R.W. Relative Navigation Approach for Vision-Based Aerial GPS-Denied Navigation, J of Intelligent & Robotic Systems, 2014, No. 74 (1–2), pp. 97-111.
4. Vizil'ter Yu.V., Zheltov S.Yu. Obrabotka i analiz izobrazheniy v zadachakh mashinnogo zreniya [Processing and image analysis in machine vision]. Moscow: Fizmatkniga, 2010, 689 p.
5. Lin F., Lum K.Y., Chen B.M., Lee T.H. Development of a vision-based ground target detection and tracking system for a small unmanned helicopter, Science in China Series F: Information Sciences, 2009, No. 52 (11), pp. 2201-2215.
6. Sigal L., Zhu Y., Comaniciu D., Black M. Tracking Complex Objects Using Graphical Object Models, Complex Motion, 2007, pp. 223-234.
7. Cesetti A., Frontoni E., Mancini A., Zingaretti P., Longhi S. A Vision-Based Guidance System for UAV Navigation and Safe Landing using Natural Landmarks, J of Intelligent and Robotic Systems, 2010, No. 57 (1–4), pp. 233-257.
8. Yilmaz A., Javed O., Shah M. Object tracking: A survey, J ACM Computing Surveys CSUR, 2006, No. 38 (4): article 13.
9. Forsyth D., Ponce J. Computer Vision: A Modern Approach. Prentice Hall. 2004.
10. Kim N.V., Kuznetsov A.G. Avtonomnaya navigatsiya BLA na osnove obrabotki i analiza vidovoy informatsii [Autonomous navigation of UAVs based on processing and analysis of imagery], Izvestiya KBNTs RAN [Izvestiya of the Kabardino-Balkarian Scientific Centre of the RAS], 2011, No. 1.
11. Osipov G.S., Smirnov I.V., Tikhomirov I.A. Formal methods of situational analysis: Experience from their use, Automatic Documentation and Mathematical Linguistics. ACM Press, 2012, No. 46 (5), pp. 183-194.
12. Kim N.V., Bodunkov N.E. Adaptive Surveillance Algorithms Based on the Situation Analysis, Computer Vision in Control Systems-2, ed. by Margarita N. Favorskaya, Lakhmi C. Jain. Springer, 2015, pp. 169-200.
13. Li X., Hu W., Shen C., Zhang Z., Dick A., Hengel AVD. A survey of appearance models in visual object tracking, J ACM Transactions on Intelligent Systems and Technology (TIST), 2013, No. 4 (4): article 58.
14. Ullman S. High-Level Vision: Object Recognition and Visual Cognition. MIT Press, 1996.
15. Tulum K., Durak U., Yder S.K. Situation aware UAV mission route planning, IEEE Aerospace Conference, 2009, pp. 1-12.
16. Dietterich T.G. Ensemble learning, The Handbook of Brain Theory and Neural Networks, 2nd edit. Cambridge, MA: MIT Press, 2002.
17. Forman G. Tackling concept drift by temporal inductive transfer, 29th annual Int ACM SIGIR Conf on Research and development in information retrieval, 2006, pp. 252-259.
18. Forman G. A pitfall and solution in multi-class feature selection for text classification, 21st Int. Conf. on Machine Learning (ICML '04), 2004, article 38.
19. Liedtke C.E., Grau O., Growe S. Use of explicit knowledge for the reconstruction of 3-D object geometry, Int. Conf. on Computer analysis of images and patterns, 1995, pp. 580-587.
20. Liedtke C.E., Buckner J., Grau O. A system for the knowledge based interpretation of remote sensing data, 3rd Airborne Remote Sensing Conference and Exhibition, 1997, Vol. 2, pp. 313-320.
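The mechanism summarized in the abstract, fuzzy rules that map current observation conditions onto frame-like reference descriptions in a knowledge base, can be illustrated with a minimal sketch. Everything below (the frame names, slot values, membership functions, and the illumination variable) is an illustrative assumption, not the authors' implementation:

```python
# Hypothetical sketch: a knowledge base (KB) of frame-like reference
# descriptions of an object of interest (OI), each paired with a fuzzy rule
# that measures how well the current observation conditions match it.

def triangular(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Each frame holds slots with the feature values expected under the
# observation conditions its rule describes (values are invented).
KB = [
    {"name": "OI_daylight",
     "slots": {"mean_brightness": 180, "edge_density": 0.6},
     "rule": lambda cond: triangular(cond["illumination"], 50, 100, 150)},
    {"name": "OI_dusk",
     "slots": {"mean_brightness": 90, "edge_density": 0.4},
     "rule": lambda cond: triangular(cond["illumination"], 0, 40, 80)},
]

def select_reference(conditions):
    """Fire every fuzzy rule on the observation conditions and return the
    name and activation degree of the best-matching reference frame."""
    best = max(KB, key=lambda frame: frame["rule"](conditions))
    return best["name"], best["rule"](conditions)

name, degree = select_reference({"illumination": 110})
```

In this toy setup, an illumination level of 110 activates the daylight rule with degree 0.8 and the dusk rule with degree 0, so the daylight reference frame is selected for matching against the current TVS image; additional training of a description would amount to adjusting the rule parameters or slot values.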