Keynote Speakers
Talk: Mastering the Game of Stratego with Model-Free Multiagent Reinforcement Learning
In this talk I will introduce DeepNash, an autonomous agent capable of learning to play the imperfect-information game Stratego from scratch, up to a human expert level. Stratego is one of the few iconic board games that Artificial Intelligence (AI) had not yet mastered. The game has an enormous game tree, orders of magnitude bigger than those of Go and Texas hold’em poker. It has the additional complexity of requiring decision-making under imperfect information, similar to Texas hold’em poker. Decisions in Stratego are made over a large number of discrete actions with no obvious link between action and outcome. Episodes are long, often spanning hundreds of moves before a player wins, and situations in Stratego cannot easily be broken down into manageably sized sub-problems as in poker. For these reasons, Stratego has been a grand challenge for the field of AI for decades, and existing AI methods barely reach an amateur level of play. DeepNash uses a game-theoretic, model-free deep reinforcement learning method, without search, that learns to master Stratego via self-play and converges to an approximate Nash equilibrium, instead of ‘cycling’ around it, by directly modifying the underlying multi-agent learning dynamics. DeepNash beats existing state-of-the-art AI methods in Stratego and achieved a yearly and all-time top-3 rank on the Gravon games platform, competing with human expert players.
Short bio
Talk: Continual learning: Beyond solving datasets
Coming soon
Short bio
Tinne Tuytelaars is a professor at KU Leuven, Belgium, working on computer vision and, in particular, topics related to image representations, vision and language, incremental learning, image generation, and more. She was program chair for ECCV14 and general chair for CVPR16, and will again be program chair for CVPR21. She has also served as associate editor-in-chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence over the last four years. She was awarded an ERC Starting Grant in 2009 and received the Koenderink test-of-time award at ECCV16.
Talk: When responsible AI research needs to meet reality
Responsible AI topics are now firmly established in the research community, with explainable AI and AI fairness as flagships. At the other end of the spectrum, these topics are now attracting serious attention in organizations and companies, especially through the lens of AI governance, a discipline that aims to analyze and address the challenges arising from the widespread use of AI in practice, now that AI regulations are around the corner.
This leads companies to focus on the compliance aspects of AI projects. In practice, data scientists and organizations are left in a fog, lacking adequate guidance and solutions from research to achieve responsible AI.
In this talk, we will discuss how large companies, like AXA, currently view responsible AI and why the current research output provides only partially actionable methodologies and solutions. We will discuss, and illustrate with concrete examples, how the research community could better address the scientific challenges of this new applied responsible AI practice.