Universität Augsburg, Germany
Title: Socially Interactive Artificial Intelligence: Challenges and Perspectives
Abstract: Recent advancements in the field of machine learning are powering a new generation of computer-based agents that support human users in their decision making or even make decisions entirely on their behalf. While progress in the design of multimodal user interfaces has led to intuitive ways of interacting with such agents, the underlying algorithms are growing in complexity, which decreases the systems’ comprehensibility. Evidence suggests that a lack of transparency with respect to the decisions of an autonomous agent may have a negative impact on the trustworthiness of a system, which in turn hurts the overall user experience. To avoid these negative effects, such an agent must not only provide audio-visual explanations but also engage with the human user in a social interaction. This step also requires moving from the pure presentation of relevant information to multimodal explanations embedded in a narrative. In my talk, I will report on various user studies we conducted to investigate how different kinds of explanations provided by a socially interactive agent are perceived by users and to what extent they support users in building up mental models of the agent. The talk will be illustrated by examples from various international and national projects in the areas of social coaching and health care.
Bio: Elisabeth André is a full professor of Computer Science and Founding Chair of Human-Centered Artificial Intelligence at Augsburg University in Germany. She has a long track record in multimodal human-machine interaction, embodied conversational agents, social robotics, affective computing and social signal processing. Her work has won many awards, including a RoboCup Scientific Award, an Award for Most Innovative Idea at the International Conference on Tangible and Embedded Interaction (TEI) and the Most Participative Demo Award at the User Modelling, Adaptation and Personalization Conference (UMAP). In 2010, Elisabeth André was elected a member of the prestigious Academy of Europe and the German Academy of Sciences Leopoldina. In 2017, she was elected to the CHI Academy, an honorary group of leaders in the field of Human-Computer Interaction. To honor her achievements in bringing Artificial Intelligence techniques to Human-Computer Interaction, she was awarded a EurAI fellowship (European Coordinating Committee for Artificial Intelligence) in 2013. Most recently, she was named one of the 10 most influential figures in the history of AI in Germany by the German Informatics Society (GI). Since 2019, she has been serving as Editor-in-Chief of IEEE Transactions on Affective Computing. She is currently serving as co-speaker of the Bavarian Research Association ForDigitHealth.
Google Research, Mountain View, CA, USA
Title: On User Utility and Social Welfare in Recommender Ecosystems
Abstract: An important goal for recommender systems is to make recommendations that maximize some form of user utility over (ideally, extended periods of) time. While reinforcement learning has started to find limited application in recommendation settings, for the most part, practical recommender systems remain “myopic” (i.e., focused on immediate user responses). Moreover, they are “local” in the sense that they rarely consider the impact that a recommendation made to one user may have on the ability to serve other users. These latter “ecosystem effects” play a critical role in optimizing long-term user utility. In this talk, I describe some recent work we have been doing to optimize user utility and social welfare using reinforcement learning and equilibrium modeling of the recommender ecosystem; draw connections between these models and notions such as fairness and incentive design; and outline some future challenges for the community.
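To make the contrast between “myopic” and long-horizon recommendation concrete, the following is a minimal illustrative sketch (not from the talk; all states, actions, and numbers are invented): a two-state user MDP in which the greedy choice of the highest immediate reward differs from the policy found by value iteration.

```python
# Toy sketch: myopic vs. long-horizon recommendation in a two-state user MDP.
# States: "engaged" and "churned" (absorbing, value 0). Actions trade
# immediate reward against the probability that the user stays engaged.
# All numbers are made up for illustration.

GAMMA = 0.9

# action -> (immediate reward, probability the user stays engaged)
ACTIONS = {"clickbait": (1.0, 0.40), "quality": (0.5, 0.95)}

def value_iteration(n_iters=500):
    v = 0.0  # value of the "engaged" state
    for _ in range(n_iters):
        v = max(r + GAMMA * p_stay * v for r, p_stay in ACTIONS.values())
    return v

def best_action(v_engaged):
    # long-horizon choice: accounts for the future value of staying engaged
    return max(ACTIONS,
               key=lambda a: ACTIONS[a][0] + GAMMA * ACTIONS[a][1] * v_engaged)

myopic = max(ACTIONS, key=lambda a: ACTIONS[a][0])  # ignores the future
long_term = best_action(value_iteration())          # plans ahead

print(myopic, long_term)  # -> clickbait quality
```

Here the myopic policy picks the high-immediate-reward item, while value iteration prefers the lower-reward item that keeps the user in the ecosystem; the abstract’s “ecosystem effects” extend the same idea from one user’s future to interactions across users.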
Bio: Craig Boutilier is a Principal Scientist at Google. He received
his Ph.D. in Computer Science from U. Toronto (1992), and has held
positions at U. British Columbia and U. Toronto (where he served as
Chair of the Dept. of Computer Science). He co-founded Granata
Decision Systems, served as a technical advisor for CombineNet, Inc.,
and has held consulting/visiting professor appointments at Stanford,
Brown, CMU and Paris-Dauphine. Boutilier’s current research focuses
on various aspects of decision making under uncertainty, including:
recommender systems; user modeling; MDPs, reinforcement learning and bandits; preference modeling and elicitation; mechanism design, game theory and multi-agent decision processes; and related areas. Past research has also dealt with: knowledge representation, belief
revision, default reasoning and modal logic; probabilistic reasoning
and graphical models; multi-agent systems; and social choice.
Boutilier served as Program Chair for IJCAI-09 and UAI-2000, and as
Editor-in-Chief of the Journal of AI Research (JAIR). He is a Fellow
of the Royal Society of Canada (FRSC), the Association for Computing
Machinery (ACM) and the Association for the Advancement of Artificial Intelligence (AAAI). He also received the 2018 ACM/SIGAI Autonomous Agents Research Award.
Department of Systems Innovation, Osaka University and ATR Hiroshi Ishiguro Laboratories, Japan
Title: Studies on avatars and our future society
Abstract: The speaker has been involved in research on tele-operated robots, that is, avatars, since around 2000. In particular, research on Geminoid, an android modeled on the operator, is not only scientific research into the human feeling of presence, but also practical research that allows one to transfer one’s presence to a remote place and work there. In this lecture, the speaker will introduce a series of research and development efforts on tele-operated robots such as Geminoid, and discuss in what kind of society humans and robots will coexist in the future.
Bio: Hiroshi Ishiguro received a D.Eng. in systems engineering from Osaka University, Japan in 1991. He is currently Professor in the Department of Systems Innovation in the Graduate School of Engineering Science at Osaka University (2009-) and Distinguished Professor of Osaka University (2017-). He is also visiting Director (2014-) (group leader: 2002-2013) of Hiroshi Ishiguro Laboratories at the Advanced Telecommunications Research Institute International (ATR) and an ATR fellow. His research interests include sensor networks, interactive robotics, and android science. He received the Osaka Cultural Award in 2011. In 2015, he received the Prize for Science and Technology (Research Category) from the Minister of Education, Culture, Sports, Science and Technology (MEXT). In 2020, he received the Tateisi Prize.
Stanford University, USA
Title: Automated Decision Making for Safety Critical Applications
Abstract: Building robust decision making systems is challenging, especially for safety critical systems such as unmanned aircraft and driverless cars. Decisions must be made based on imperfect information about the environment and with uncertainty about how the environment will evolve. In addition, these systems must carefully balance safety with other considerations, such as operational efficiency. Typically, the space of edge cases is vast, placing a large burden on human designers to anticipate problem scenarios and develop ways to resolve them. This talk discusses major challenges associated with ensuring computational tractability and establishing trust that our systems will behave correctly when deployed in the real world. We will outline some methodologies for addressing these challenges.
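The balance between safety and operational efficiency that the abstract mentions can be sketched as a weighted objective over candidate plans. This is a purely illustrative toy (the plans, failure probabilities, and penalty values are invented), showing how the preferred plan flips as the weight placed on safety grows.

```python
# Toy sketch: scoring candidate plans by travel time plus a penalty
# weighted by failure probability. All numbers are invented.
routes = {
    "shortcut": {"time": 10.0, "p_failure": 0.05},
    "detour":   {"time": 25.0, "p_failure": 0.001},
}

def best_route(failure_penalty):
    # expected cost = travel time + safety penalty * failure probability
    return min(routes, key=lambda r: routes[r]["time"]
               + failure_penalty * routes[r]["p_failure"])

print(best_route(failure_penalty=100.0))    # shortcut: 10 + 5.0 = 15.0 beats 25.1
print(best_route(failure_penalty=10000.0))  # detour: 25 + 10 = 35 beats 510
```

In a real safety-critical system the failure probabilities themselves are uncertain and the space of edge cases is vast, which is exactly why the talk emphasizes tractability and trust rather than hand-tuned weights.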
Bio: Mykel Kochenderfer is an associate professor of aeronautics and
astronautics at Stanford University. He is the director of the
Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision making systems. In addition, he is the director of the
SAIL-Toyota Center for AI Research at Stanford and a co-director of
the Center for AI Safety. He received a Ph.D. in informatics from the
University of Edinburgh and B.S. and M.S. degrees in computer science from Stanford University. Prof. Kochenderfer is an author of the textbooks “Decision Making under Uncertainty: Theory and Application” and “Algorithms for Optimization”, both from MIT Press.
Collège de France, PSL University, Paris, France
Title: A Mathematics View of Deep Neural Networks
Abstract: The efficiency of deep learning is a surprise not only from an engineering point of view but also from a mathematical standpoint. Standard mathematical tools have up to now failed to capture the essence of this field. It is thus important to start from an understanding of numerical experiments rather than from a priori mathematical perspectives. In classification problems, striking experiments indicate that deep network classifiers progressively separate and concentrate the probability distribution of each class, achieving linear separability in the last layer. The presentation outlines
mathematical challenges to explain this progressive transport, which
relies on hierarchical compositional architectures, convolutional
operators and pointwise non-linearities. We introduce elementary
operators to perform this non-linear transport, and derive simplified
multiscale scattering neural networks, which capture important
mathematical properties, while reaching high image classification and regression accuracies.
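As a rough illustration of the ingredients the abstract names (convolutional operators, pointwise non-linearities, and hierarchical composition), here is a minimal 1-D scattering-style feature extractor. It is a simplified sketch under invented filter choices, not the speaker’s construction: band-pass convolutions followed by a modulus non-linearity, cascaded over two orders, with low-pass averaging.

```python
import numpy as np

# Minimal 1-D scattering-style sketch (illustrative; Gabor filter widths
# and frequencies are arbitrary choices, not a faithful wavelet design).

def gabor(freq, width=64):
    # complex band-pass filter: modulated Gaussian window
    t = np.arange(-width, width + 1)
    return np.exp(1j * 2 * np.pi * freq * t) * np.exp(-t**2 / (2 * (width / 4) ** 2))

def lowpass_avg(x, size=65):
    # crude low-pass: moving average
    return np.convolve(x, np.ones(size) / size, mode="same")

def scattering_features(x, freqs=(0.05, 0.1, 0.2)):
    feats = [lowpass_avg(x).mean()]  # zeroth order: local average
    for f1 in freqs:
        # first order: convolution + pointwise modulus non-linearity
        u1 = np.abs(np.convolve(x, gabor(f1), mode="same"))
        feats.append(lowpass_avg(u1).mean())
        for f2 in freqs:
            if f2 >= f1:  # second order acts on slower envelopes
                continue
            u2 = np.abs(np.convolve(u1, gabor(f2), mode="same"))
            feats.append(lowpass_avg(u2).mean())
    return np.array(feats)

x = np.sin(2 * np.pi * 0.1 * np.arange(512))
print(scattering_features(x).shape)  # (7,)
```

Each modulus discards phase and concentrates energy at lower frequencies, which is one way to see the “progressive transport” of class distributions the abstract describes.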
Bio: Stéphane Mallat is an applied mathematician and Professor at the Collège de France, where he holds the chair of Data Sciences. He is a member of the French Academy of Sciences and a foreign member of the US National Academy of Engineering. He was a Professor at the Courant Institute of NYU in New York for 10 years, then at École Polytechnique and École Normale Supérieure in Paris. He was also the co-founder and CEO of a semiconductor start-up company. Stéphane Mallat’s research interests include machine learning, signal processing and harmonic analysis. He developed the multiresolution wavelet theory with applications to image processing, and sparse representations in dictionaries with matching pursuits. He now works on the mathematical understanding of deep neural networks and their applications.
The University of Tokyo, Japan
Title: Does Predictive Coding Provide a Unified Theory of Artificial Intelligence?
Abstract: A theoretical framework called predictive coding suggests that the human brain works as a predictive machine. That is, the brain tries to minimize prediction errors by updating its internal model and/or by acting on the environment. We have been investigating to what extent the predictive coding theory accounts for human intelligence and whether it provides a unifying principle for the design of artificial intelligence. This talk presents computational neural networks we designed to examine how the process of minimizing prediction errors leads to cognitive development in robots. Our experiments demonstrated that both non-social and social cognitive abilities, such as goal-directed action, imitation, estimation of others’ intentions, and altruistic behavior, emerged as observed in infants. Not only the characteristics of typical development but also those of developmental disorders such as autism spectrum disorder were generated as a result of aberrant prediction abilities. These results suggest that predictive coding provides a unified computational theory for cognitive development (Nagai, Phil Trans B 2019).
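The core loop of predictive coding, updating an internal model to reduce prediction error, can be sketched in a few lines. This toy (a single scalar belief tracking noisy observations; not the speaker’s neural-network models) shows the error-minimization dynamics in their simplest form.

```python
import random

# Toy sketch of predictive coding: an internal estimate is repeatedly
# corrected in proportion to the prediction error it makes.

def predictive_coding(observations, lr=0.1):
    belief = 0.0  # internal model: one scalar estimate of the world
    for obs in observations:
        error = obs - belief   # prediction error
        belief += lr * error   # update internal model to shrink the error
    return belief

random.seed(0)
obs = [5.0 + random.gauss(0, 0.5) for _ in range(200)]
estimate = predictive_coding(obs)
print(round(estimate, 1))  # converges toward the true mean, ~5.0
```

The abstract’s second route to minimizing error, acting on the environment so that observations match predictions, and its account of aberrant prediction (e.g. a learning rate that is too high or too low) both fit naturally into this same loop.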
Bio: Yukie Nagai is a Project Professor at the International Research
Center for Neurointelligence, the University of Tokyo. She received
her Ph.D. in Engineering from Osaka University in 2004 and worked at
the National Institute of Information and Communications Technology, Bielefeld University, and then Osaka University. Since 2019, she has been leading the Cognitive Developmental Robotics Lab at the University of Tokyo. Her research interests include cognitive developmental robotics, computational neuroscience, and assistive technologies for developmental disorders. Her research achievements have been widely reported in the media as novel techniques to understand and support human intelligence. She also serves as the research director of JST CREST Cognitive Mirroring.
The University of Texas at Austin and Sony AI, USA
Title: Ad Hoc Autonomous Agent Teams: Collaboration without Pre-Coordination
Abstract: As autonomous agents proliferate in the real world, both in software and robotic settings, they will increasingly need to band together for cooperative activities with previously unfamiliar teammates. In such “ad hoc” team settings, team strategies cannot be developed a priori. Rather, an agent must be prepared to cooperate with many types of teammates: it must collaborate without pre-coordination. This talk will cover past and ongoing research on the challenge of building autonomous agents that are capable of robust ad hoc teamwork.
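One standard ingredient of ad hoc teamwork is inferring which kind of teammate one has been paired with and best-responding to it. The following is an illustrative sketch only (the teammate types, action probabilities, and best responses are all invented, and real ad hoc teamwork goes well beyond this): a Bayesian update over a small set of hypothesized teammate types.

```python
# Toy sketch: infer a teammate's type from observed actions, then
# best-respond to the most likely type. All types/actions are invented.

# Each hypothesized teammate type is a distribution over actions.
TYPES = {
    "defensive":  {"pass": 0.7, "shoot": 0.3},
    "aggressive": {"pass": 0.2, "shoot": 0.8},
}
# Pre-computed best response of our agent to each teammate type.
BEST_RESPONSE = {"defensive": "move_forward", "aggressive": "rebound"}

def infer_type(observed_actions):
    # Bayesian update with a uniform prior over teammate types.
    posterior = {t: 1.0 for t in TYPES}
    for a in observed_actions:
        for t in TYPES:
            posterior[t] *= TYPES[t].get(a, 1e-9)
    return max(posterior, key=posterior.get)

likely = infer_type(["shoot", "shoot", "pass", "shoot"])
print(likely, BEST_RESPONSE[likely])  # -> aggressive rebound
```

The hard cases the talk addresses arise when the teammate does not match any known type and the agent must adapt online, without the pre-coordination this sketch assumes.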
Bio: I am the founder and director of the Learning Agents Research Group (LARG) within the Artificial Intelligence Laboratory in the Department of Computer Science at The University of Texas at Austin, as well as associate department chair and Director of Texas Robotics. I was a co-founder of Cogitai, Inc. and am now Executive Director of Sony AI America. My main research interest in AI is understanding how we can best create complete intelligent agents. I consider adaptation, interaction, and embodiment to be essential capabilities of such agents. Thus, my research focuses mainly on machine learning, multiagent systems, and robotics. To me, the most exciting research topics are those inspired by challenging real-world problems. I believe that complete successful research includes both precise, novel algorithms and fully implemented and rigorously evaluated applications. My application domains have included robot soccer, autonomous bidding agents, autonomous vehicles, and human-interactive agents.
KU Leuven, Belgium
Title: The Quest for the Perfect Image Representation
Abstract: Throughout my research career, I’ve always been looking for the ‘optimal’ image representation: a representation that captures all relevant information for making sense of the depicted scene, including scene composition, 3D information, illumination and other cues; a representation that can easily generalize and adapt to new tasks; a representation that can be updated over time with new information, without forgetting what was learned before; a representation that is explicit in the sense that it can easily be interpreted or explained; a representation, in short, that supports true understanding of the image content, ultimately allowing the machine to reason and communicate about it in natural language. In this talk, I will describe a few recent efforts in this direction.
Bio: Tinne Tuytelaars is a full professor at KU Leuven, Belgium, working on computer vision and, in particular, topics related to image
representations, vision and language, continual learning, and more. She has been program chair for ECCV14 and CVPR21, and general chair for CVPR16. She also served as Associate Editor-in-Chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence over the last four years. She was awarded an ERC Starting Grant in 2009 and received the Koenderink test-of-time award at ECCV16.