|Article title||MULTIMODAL INTERFACES FOR AUTONOMOUS MOBILE ROBOTIC SYSTEMS|
|Authors||A.L. Ronzhin, R.M. Yusupov|
|Section||SECTION V. SYSTEM AND CONTROL POINTS|
|Month, Year||01, 2015|
|Abstract||The paper considers the peculiarities of developing multimodal user interfaces for robotic systems that perceive and contextually analyze the current environment and interact with untrained end users. User interfaces for professional operators of workstations in industrial or special-purpose information management systems are built on regulated techniques and scenarios of system control. The main challenges in developing multimodal user interfaces are the natural variability of signals and modes of information transmission, as well as the heterogeneity of the software and hardware of a mobile intelligent robotic system. The aim of the research is to apply multimodal interfaces that process communication modes natural to humans (speech, gestures, movements of the body, head, and hands, hand-written sketches, gaze direction, facial expressions, etc.) in order to provide intuitive interaction between users and intelligent civil- and special-purpose modules embedded in surrounding objects and mobile systems. Nowadays, a natural way of interaction is as important a feature of a device as its functionality. Note also that physical constraints and personal preferences affect the choice of the available or most convenient way to interact. Building a context-aware system allows a robot with a multimodal interface to take into account the preferences and experience of the user, and to adapt its operation by analyzing the conditions of the physical environment and the state of available computing and network resources. In the proposed approach, the set of natural input and output modalities used is determined at the development stage of an interactive multimodal application. The results of developing a multimodal interface for an information navigation robotic system are presented.|
|Keywords||Multimodal interfaces; robotic systems; context; heterogeneous mobile devices; intelligent environments.|
|References||1. Ducatel K., Bogdanowicz M., Scapolo F., Leijten J., Burgelman J.-C. ISTAG — Scenarios of Ambient Intelligence in 2010, European Commission Community Research, Feb. 2001, 58 p.
2. Yusupov R.M., Ronzhin A.L. Ot umnykh priborov k intellektual'nomu prostranstvu [From smart devices to intelligent space], Vestnik Rossiyskoy akademii nauk [Herald of the Russian Academy of Sciences], 2010, Vol. 80, No. 1, pp. 45-51.
3. Gorodetskiy V.I., Karsaev O.V., Samoylov V.V., Serebryakov S.V. Agentskaya platforma dlya povsemestnykh vychisleniy [Agent platform for ubiquitous computing], Informatsionnye tekhnologii i vychislitel'nye sistemy [Information technology and computer systems], 2008, Issue 4, pp. 51-60.
4. Breazeal C.L. Designing Sociable Robots. MIT Press (2002).
5. Foster M. E., Giuliani M., Knoll A. Comparing Objective and Subjective Measures of Usability in a Human-Robot Dialogue System, In Proc. of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, 2009, pp. 879-887.
6. Kobozeva I.M., Sidorov G.O., Tsimmerling A.V. Modul' upravleniya dialogom v sisteme obshcheniya pol'zovatelya s podvizhnym robotom-gidom [Dialogue management module in a system of user communication with a mobile robot guide], Trudy SPIIRAN [Proceedings of SPIIRAS], 2014, Issue 33, pp. 186-206.
7. Nieuwenhuisen M., Stuckler J., Behnke S. Intuitive Multimodal Interaction for Service Robots, In Proc. of HRI’2010, 2010, pp. 177-178.
8. Budkov V., Prischepa M., Ronzhin A. Dialog Model Development of a Mobile Information and Reference Robot, Pattern Recognition and Image Analysis, Pleiades Publishing, 2011, Vol. 21, No. 3, pp. 458-461.
9. Lee J. K., Breazeal C. Human Social Response Toward Humanoid Robot’s Head and Facial Features, In Proc. of CHI’2010, 2010, pp. 4237-4242.
10. Roy N., Roy A., Das S. Context-aware resource management in multi-inhabitant smart homes: A Nash H-learning based approach, Pervasive and Mobile Computing Journal, November 2006, Vol. 2, Issue 4, pp. 372-404.
11. TalebiFard P., Leung V. A Data Fusion Approach to Context-Aware Service Delivery in Heterogeneous Network Environments, Procedia Computer Science, 2011, Vol. 5, pp. 312-319.
12. Boytsov A., Zaslavsky A. Extending context spaces theory by proactive adaptation. Berlin: Springer, S. Balandin et al. (Eds.): NEW2AN/ruSMART 2010, LNCS 6294, 2010, pp. 1-12.
13. Dai P., Tao L., Xu G. Audio-Visual Fused Online Context Analysis Toward Smart Meeting Room. Berlin: Springer, J. Indulska et al. (Eds.): UIC 2007, LNCS 4611, 2007, pp. 868-877.
14. Glazkov S.V., Ronzhin A.L. Metody analiza konteksta prilozheniy v mobil'nykh geterogennykh ustroystvakh [Methods of analysis of the context of applications in heterogeneous mobile devices], Doklady TUSURa [Reports of Tomsk University], 2012, Vol. 3, No. 1, pp. 236-240.
15. Yusupov R.M., Ronzhin A.L., Prishchepa M.V., Ronzhin Al.L. Modeli i programmno-apparatnye resheniya avtomatizirovannogo upravleniya intellektual'nym zalom [Models and hardware-software solutions for automated control of an intelligent hall], Avtomatika i telemekhanika [Automation and remote control], 2011, No. 7, pp. 39-49.
16. Ronzhin Al.L., Ronzhin An.L. Sistema audiovizual'nogo monitoringa uchastnikov soveshchaniya v intellektual'nom zale [Audiovisual system for monitoring meeting participants in a smart room], Doklady TUSURa [Reports of Tomsk University], 2011, No. 1 (22), Part 1, pp. 153-157.
17. Ronzhin A.L., Budkov V.Yu. Tekhnologii podderzhki gibridnykh e-soveshchaniy na osnove metodov audiovizual'noy obrabotki [Technologies for supporting hybrid e-meetings based on audiovisual processing methods], Vestnik komp'yuternykh i informatsionnykh tekhnologiy [Journal of computer and information technology], 2011, No. 4, pp. 31-35.
18. Prishchepa M.V., Ronzhin A.L. Modeli interaktivnogo vzaimodeystviya s podvizhnym informatsionno-navigatsionnym kompleksom [Models of interactive communication with a mobile information and navigation system], Doklady TUSURa [Reports of Tomsk University], 2013, No. 2, pp. 136-141.
19. Ronzhin A., Prischepa M., Budkov V. Development of Means for Support of Comfortable Conditions for Human-Robot Interaction in Domestic Environments, Workshop Proceedings of the 8th International Conference on Intelligent Environments. J.A. Botia et al. (Eds.), IOS Press, 2012, pp. 221-230.
20. Prischepa M., Budkov V. Structural Model and Behavior Scenarios of Information Navigation Mobile Robot, Springer International Publishing Switzerland. A. Ronzhin et al. (Eds.): SPECOM 2014, LNAI 8773, 2014, pp. 444-451.
21. Budkov V., Prischepa M., Ronzhin A. Dialog Model Development of a Mobile Information and Reference Robot, Pattern Recognition and Image Analysis, Pleiades Publishing, 2011, Vol. 21, No. 3, pp. 458-461.
22. Ronzhin A., Budkov V. Speaker Turn Detection Based on Multimodal Situation Analysis, Springer International Publishing Switzerland. M. Zelezny et al. (Eds.): SPECOM 2013, LNAI 8113, 2013, pp. 302-309.
23. Ronzhin A., Budkov V., Kipyatkova I. PARAD-R: Speech Analysis Software for Meeting Support, In Proc. of the 9th International Conference on Information, Communications and Signal Processing ICICS-2013, Tainan, Taiwan, 2013.
24. Yusupov R.M., Kryuchkov B.I., Karpov A.A., Ronzhin A.L., Usov V.M. Vozmozhnosti primeneniya mnogomodal'nykh interfeysov na pilotiruemom kosmicheskom komplekse dlya podderzhaniya kommunikatsii kosmonavtov s mobil'nym robotom – pomoshchnikom ekipazha [Possibilities of applying multimodal interfaces on a manned space complex to support astronauts' communication with a mobile robot crew assistant], Pilotiruemye polety v kosmos [Manned flights into space], 2013, No. 3 (8), pp. 23-34.