Keynote Speakers

Karl Tuyls
DeepMind, France
University of Liverpool, UK

In this talk I will introduce DeepNash, an autonomous agent that learns to play the imperfect-information game Stratego from scratch, up to a human expert level. Stratego is one of the few iconic board games that Artificial Intelligence (AI) had not yet mastered. Its game tree is enormous, orders of magnitude larger than those of Go and Texas hold’em poker, and, like poker, it has the additional complexity of requiring decision-making under imperfect information. Decisions in Stratego are made over a large number of discrete actions with no obvious link between action and outcome. Episodes are long, often spanning hundreds of moves before a player wins, and situations in Stratego cannot easily be broken down into manageably sized sub-problems as in poker. For these reasons, Stratego has been a grand challenge for the field of AI for decades, and existing AI methods barely reach an amateur level of play. DeepNash uses a game-theoretic, model-free deep reinforcement learning method, without search, that learns to master Stratego via self-play and converges to an approximate Nash equilibrium, instead of ‘cycling’ around it, by directly modifying the underlying multi-agent learning dynamics. DeepNash beat existing state-of-the-art AI methods in Stratego and achieved a yearly and all-time top-3 rank on the Gravon games platform, competing with human expert players.
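The phenomenon of converging to an equilibrium rather than cycling around it can be illustrated with a toy sketch. The snippet below is not DeepNash’s actual algorithm (which modifies the learning dynamics of a deep RL agent); it is a minimal example of a related, well-known idea: in zero-sum games like Rock-Paper-Scissors, plain best-response dynamics orbit the Nash equilibrium forever, while entropy-smoothed (regularized) best responses pull both players toward it. All parameter values here (`tau`, `alpha`, step counts) are illustrative choices.

```python
import numpy as np

# Rock-Paper-Scissors payoff matrix for player 1 (zero-sum game).
# The unique Nash equilibrium is the uniform mix (1/3, 1/3, 1/3).
A = np.array([[0., -1., 1.],
              [1., 0., -1.],
              [-1., 1., 0.]])

def softmax(v, tau):
    """Smoothed best response: higher tau = more regularization."""
    z = np.exp(v / tau - np.max(v / tau))
    return z / z.sum()

def smooth_fictitious_play(steps=5000, tau=0.5, alpha=0.05):
    """Each player slowly mixes toward an entropy-smoothed best
    response to the opponent. The regularization term damps the
    cycling that exact best responses would produce in RPS."""
    x = np.array([0.8, 0.1, 0.1])   # player 1's strategy
    y = np.array([0.1, 0.8, 0.1])   # player 2's strategy
    for _ in range(steps):
        x = (1 - alpha) * x + alpha * softmax(A @ y, tau)
        y = (1 - alpha) * y + alpha * softmax(-A.T @ x, tau)
    return x, y

x, y = smooth_fictitious_play()
print(np.round(x, 3), np.round(y, 3))  # both near the uniform mix
```

With the regularization removed (`tau` near zero), the same iteration circles the equilibrium indefinitely; the smoothing is what turns cycling into convergence, which is the high-level intuition behind regularized learning dynamics.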

Short bio

Karl Tuyls (FBCS) is a team lead at DeepMind, an honorary professor of Computer Science at the University of Liverpool, UK, and a Guest Professor at the University of Leuven, Belgium. Previously, he held academic positions at the Vrije Universiteit Brussel, Hasselt University, Eindhoven University of Technology, and Maastricht University.

He is a fellow of the British Computer Society (BCS), is on the editorial board of the Journal of Autonomous Agents and Multi-Agent Systems, and is editor-in-chief of the SpringerBriefs series on Intelligent Systems. Prof. Tuyls is also an emeritus member of the board of directors of the International Foundation for Autonomous Agents and Multiagent Systems.

Prof. Tuyls has received several awards for his research, among them the Information Technology prize 2000 in Belgium, the best demo award at AAMAS’12, and wins at various RoboCup@Work competitions (’13, ’14); he also co-authored the runner-up for the best paper award at ICML’18. Furthermore, his research has received substantial attention from national and international press and media; most recently, his work on sports analytics was featured in Wired UK.

Tinne Tuytelaars
KU Leuven, Belgium

Coming soon

Short bio

Tinne Tuytelaars is a professor at KU Leuven, Belgium, working on computer vision and, in particular, topics related to image representations, vision and language, incremental learning, image generation, and more. She has been program chair for ECCV14, general chair for CVPR16, and will again be program chair for CVPR21. She also served as associate editor-in-chief of the IEEE Transactions on Pattern Analysis and Machine Intelligence over the last four years. She was awarded an ERC Starting Grant in 2009 and received the Koenderink test-of-time award at ECCV16.

Marcin Detyniecki
AXA, France

Responsible AI topics are now firmly established in the research community, with explainable AI and AI fairness as flagships. At the other end of the spectrum, these topics are now seriously attracting attention in organizations and companies, especially through the lens of AI governance, a discipline that aims to analyze and address the challenges arising from the widespread practical use of AI, particularly as AI regulations are around the corner.

This leads companies to focus on the compliance aspects of AI projects. In practice, data scientists and organizations are left in a fog, lacking adequate guidance and solutions from research for achieving responsible AI.

In this talk, we will discuss how large companies, like AXA, currently see the responsible AI topic and why current research output provides only partially actionable methodologies and solutions. We will discuss, and illustrate with concrete examples, how the research community could better address the scientific challenges of this new applied responsible AI practice.

Short bio

Marcin Detyniecki is Group Chief Data Scientist & Head of AI Research and Thought Leadership at global insurance leader AXA. He leverages his expertise to help AXA deliver value, overcome AI- and ML-related business challenges, and achieve its transformation into a tech-led company. He leads the artificial intelligence R&D activity at group level, where his team works on a framework enabling fair, safe, and explainable ML.

Marcin is also active in several think and do tanks, including a role of vice-president and board member of Impact AI, member of the Consultative Expert Group on Digital Ethics in Insurance for EIOPA and technical expert at Institut Montaigne. He has been involved in several academic roles including Research Scientist at both CNRS and Sorbonne University. He holds a Ph.D. in Computer Science from Université Pierre et Marie Curie.