Prof. Kurosh Madani,
Paris-Est Créteil Val-de-Marne University, France
If the main challenge of robotics in the 19th century was to automate repetitive tasks, and that of the 20th century to make these machines more sophisticated through digitization (computerization), the challenge of robotics in the current century will be to make robots cohabit with humans, share their “life space”, and cooperate and evolve with them within complex environments. One appealing application among many others is the autonomous personal-companion robot for elderly assistance, which is nowadays witnessing a great increase of interest.
However, robots will not succeed in seamlessly integrating into the humans’ universe without developing the ability to perceive, as humans do, the environment they are supposed to share with them. Thus, matching the skills of natural vision is an appealing prospect for autonomous robotics applications dealing with visual perception of the surrounding environment, where robots and humans are expected to mutually evolve, cooperate and interact.
In this context, a foremost and necessary skill for approaching the agility of natural vision is “saliency detection”. In fact, alongside humans’ cognitive (i.e. knowledge-based) exploration cleverness, this primary reflexive skill contributes to the efficiency of human visual attention in detecting relevant items of the complex environment in which they evolve. Another leading and requisite ability for reaching humans’ visual suppleness is visual attention, which makes them focus on specific items of a scene and guides their actions according to the primacy of the visual events stimulating their interest. Thus, a system endeavoring to approach natural vision’s dexterity has to be able to detect visually salient items in its surroundings and to develop a selective visual awareness of the backdrop in which it evolves.
Motivated by the progressive humanization of robots and the challenge of autonomous assistive robotics, the main goal of this keynote paper is to highlight the SYNAPSE research division’s (of the LISSI EA 3956 laboratory) investigations into the human-like robot-vision system developed by the team: a hybrid approach combining human-inspired vision, machine-learning paradigms and visual saliency detection techniques.
Kurosh Madani is Professor at the Senart-FB Institute of Technology of University PARIS-EST Creteil (UPEC), France, and since 2018 Dean of the Electrical Engineering and Industrial Informatics (GEII) department of this institute. Having graduated in fundamental physics in June 1985 from PARIS 7 – Jussieu University, he received his MSc. in Microelectronics and Systems’ Architecture from University PARIS 11 (PARIS-SUD), Orsay, France, in 1986, and his Ph.D. in Electrical Engineering and Computer Sciences from the same university in February 1990. In 1995, he received the DHDR Doctor Hab. degree (senior research doctorate) from University PARIS 12 – Val de Marne. From 1992 to 2000 he was the creator and head of the DRN (Neural Networks Division) research division of the LERISS laboratory of UPEC. From 2001 to 2004 he was director of the Intelligence in Instrumentation and Systems Laboratory (I2S / JE 2353) of UPEC. Co-creator of the Images, Signals and Intelligent Systems Laboratory (LISSI / EA 3956) of UPEC in 2005, he is Vice-director of LISSI and head of the SYNAPSE research division, one of the four research components of this laboratory. He has worked on both digital and analog implementations of massively parallel processor arrays for image processing, on electro-optical random number generation, and on both analog and digital Artificial Neural Network (ANN) implementation. Author and co-author of more than 350 publications (in international journals, books, and conference and symposium proceedings), he is regularly invited to give keynote and invited lectures at international conferences and symposiums. His current research interests include: perception and machine awareness, modeling of complex structures and behaviors, self-organizing systems, cognitive information processing systems and their real-world and industrial applications, cognitive robotics and collective robotics.
Since 1996 he has been a permanent member (elected Academician) of the International Informatization Academy. In 1997, he was also elected a permanent Academician of the International Academy of Technological Cybernetics. In 2017 he was awarded the Academic Palms by the French Government (Chevalier de l’Ordre des Palmes Académiques) for his academic and scientific achievements.
Prof. Jürgen Sieck,
University of Applied Sciences HTW Berlin, Germany
The development of information and communication technology during the past 40 years has been characterised by continued technical (r)evolution. These technical developments open up the possibility of new applications and application areas. For the acceptance of new technologies, it is important that new applications create additional value, use the advantages of the underlying technologies and are adapted to the needs of the user. By combining the advantages of established technologies with these new approaches, and furthermore adapting them to different user needs and application scenarios, we are able to extend existing applications with new ICT components and services.
Many Augmented Reality (AR)- and Virtual Reality (VR)-based applications have been developed for cultural institutions. These applications are used for interactive entertainment, games, interactive storytelling, visual and sound art installations, as well as interactive opera, architecture and digital archives. AR and VR allow, for example, totally new opera and concert experiences. Famous examples are Mozart’s “Zauberflöte” at the Komische Oper Berlin and Mozart’s “Jupiter Symphony” at the Konzerthaus Berlin.
We can find many additional examples of AR and VR installations in museums as well as in industry and daily life. Well-known examples are the “Jurascope” in the Naturkundemuseum Berlin, the “Speaking Cubes” and “Magic Mirror” in the Pergamonmuseum Berlin, and the AR guide in the British Museum. Presenting content only in the form of artefacts with texts, films and stories no longer matches the requirements of the audience. Many users do not want merely to consume; they want to participate, communicate and interact with the exhibition and the staff behind it. AR, VR and new interaction methods can help to solve these problems. AR and VR are challenges for the computer industry as well as for cultural workers.
The presentation will take a look at the history of AR and VR and discuss different approaches to creating AR and VR applications, as well as some best-practice examples. We will describe several technical aspects of mobile devices and context-sensitive services in information systems for museums and concert halls, developed by the INKA research group at the HTW Berlin and at NUST Windhoek.
From the long list of examples we will take a closer look at, for example, the augmented children's book "The Dancing Tortoise and the San Hunters of the Kalahari", the AR recipe book of the Corona Guest Farm, the season magazine of the Konzerthaus Berlin, the Virtual Konzerthaus Quartett and a virtual tour of the Konzerthaus Berlin, including a visit to the 4th (Italian) Symphony by Mendelssohn Bartholdy.
Jürgen Sieck received his degree in mathematics in 1981 and his PhD in computer science in 1989 from the Humboldt-Universität zu Berlin, Germany. He is now the head of the research group "Informations- und Kommunikationsanwendungen" (INKA) and professor of computer science, with a specialisation in algorithms, mobile applications, and Augmented and Virtual Reality, in the degree programme Applied Computer Science at the University of Applied Sciences HTW Berlin. Previously, he was visiting professor at Johannes Kepler Universität Linz, Austria, at Monash University Melbourne, Australia, at the University of Cape Town, South Africa, and at Old Dominion University Norfolk in Virginia, USA. In February 2013 he was awarded an honorary doctorate from Odessa National Polytechnic University, Ukraine. From 2013 to 2018, he was Principal Investigator of the cluster of excellence “Bild Wissen Gestaltung” at the Humboldt-Universität zu Berlin. Since 2015, he has also been a professor of computer science at the Namibia University of Science and Technology in Windhoek. In April 2018 he was awarded an honorary doctorate from Ternopil National Economic University, Ukraine. Since 2019, he has been Principal Investigator of the cluster of excellence “Matters of Activity. Image Space Material” at the Humboldt-Universität zu Berlin. Jürgen Sieck is the founder and chair of the conference series Culture and Computer Science.
Notification of acceptance:
10 May 2019
10 June 2019
Camera ready paper:
15 July 2019
22 July 2019
17 June 2019 - 12 August 2019
Late Paper Submission:
21 June 2019