Prof. Richard J. Duro
Grupo Integrado de Ingeniería
Centro de Investigación en Tecnologías de la Información y las Comunicaciones (CITIC),
Universidade da Coruña, Spain.
Lifelong Open-ended Learning Autonomy (LOLA) refers to a robot's ability to operate and learn in domains that are unknown at design time, and to reuse knowledge learnt in one domain to facilitate learning in others throughout its lifetime. Achieving LOLA goes beyond specific learning algorithms and puts us squarely in the realm of cognitive architectures. However, most cognitive architectures were not built to address the LOLA problem and thus lack components and capabilities that would be required. This talk will present an overview of the problems involved and possible ways to address them, including examples with real robots operating in real domains.
Richard J. Duro is a Full Professor of Computer Science and Artificial Intelligence at the University of Coruña in Spain and has coordinated the Integrated Group for Engineering Research at that university since 1999. His teaching and research interests are in intelligent systems and autonomous robotics, and his current work concentrates on motivational systems and developmental cognitive robotic architectures. He is currently involved in several projects related to autonomous robotics, including the PILLAR-Robots Horizon Europe project, which he coordinates.
Prof. Inna Skarga-Bandurova
Senior Lecturer in AI,
Oxford Brookes University, UK
This talk delves into the multidimensional concept of acceptability, encompassing trust in AI systems, transparency of algorithms and decision-making processes, and the alignment of AI solutions with user needs and values. It examines the role of interpretability and comprehensibility in facilitating collaboration between humans and AI in different areas, such as routine choices, medical diagnosis, and autonomous systems. Most human-machine joint decision-making methods primarily consider the confidence of AI decisions while underestimating the significance of human trust in the final decision-making process. The talk discusses real-world examples of explainable AI in practice, emphasizing its potential to address biases, improve accountability, and drive ethical AI practices, as well as the risks of a false sense of understanding or overconfidence in AI system decisions. It delves into user perceptions, concerns, and expectations regarding AI technologies and showcases strategies to enhance acceptability through improved user experience, effective communication, and privacy-preserving practices.
Prof. Inna Skarga-Bandurova is a Senior Lecturer in AI at Oxford Brookes University and a Visiting Professor in the Cybersecurity Department at Ternopil Ivan Puluj National Technical University and the Department of Mathematical and Econometric Modelling at the G.E. Pukhov Institute for Modelling in Energy Engineering. She has served as tech lead/PM in multiple European projects, leading teams from basic research to product development and implementation. Her work encompasses both fundamental and applied research to advance decision analytics and knowledge representation formalisms, with a strong emphasis on reasoning for information extraction and its practical applications in robotics, cybersecurity, environmental studies, and medical research.
Assoc. Prof. Danilo Pelusi
Department of Communication Sciences,
University of Teramo, Italy
Solving complex problems in a reasonable time is a hard task. Sometimes, exact methods are not suitable for such problems, and nondeterministic algorithms must be designed instead. These algorithms are often used to find an approximate solution when the exact solution would require a huge computational effort. A suitable approach is to address this high complexity by using soft computing techniques.
This talk will illustrate the combination of intelligent techniques to solve complex optimization problems. In particular, the application of nature-inspired algorithms and fuzzy logic strategies gives very good results in terms of performance and complexity.
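As a toy illustration of the nature-inspired approach the abstract refers to, the following sketch runs a minimal evolutionary search on the sphere function. This is a hypothetical example for illustration only, not an algorithm presented in the talk; all names (`sphere`, `evolve`) and parameter choices are assumptions.

```python
import random

def sphere(x):
    # Sphere benchmark function: global minimum 0 at the origin.
    return sum(v * v for v in x)

def evolve(fitness, dim=3, pop_size=30, generations=200, seed=1):
    """Minimal evolutionary search: mutate every individual, keep the best."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Perturb each individual with Gaussian noise ...
        children = [[v + rng.gauss(0, 0.3) for v in ind] for ind in pop]
        # ... then keep the pop_size fittest of parents and children (elitism).
        pop = sorted(pop + children, key=fitness)[:pop_size]
    return pop[0]

best = evolve(sphere)
print(sphere(best))  # should be close to 0 after convergence
```

Such stochastic searches trade a guarantee of optimality for tractable run time, which is exactly the compromise the abstract describes for problems where exact methods are too costly.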
Danilo Pelusi received the degree in Physics from the University of Bologna (Italy) and the Ph.D. degree in Computational Astrophysics from the University of Teramo (Italy). Currently, he is an Associate Professor of Computer Science at the Department of Communication Sciences, University of Teramo. Editor of Springer, Elsevier and CRC books, and Associate Editor of IEEE Transactions on Emerging Topics in Computational Intelligence (2017-2020), IEEE Access (2018-present), IEEE Transactions on Neural Networks and Learning Systems (2022-present) and IEEE Transactions on Intelligent Transportation Systems (2022-present), he is Guest Editor for Elsevier, Springer, MDPI and Hindawi journals. Keynote speaker, Guest of Honor and Chair of IEEE conferences, he is an inventor of international patents on Artificial Intelligence. Named in the World's Top 2% Scientists list 2021, he has research interests that include Fuzzy Logic, Neural Networks, Information Theory, Machine Learning and Evolutionary Algorithms.
Extended Abstract Submission: 15 May 2023 (extended to 01 June 2023)
Notification of Extended Abstract Acceptance: 30 June 2023 (extended to 23 July 2023)
Camera Ready Papers: 20 July 2023 (extended to 30 July 2023)