Handling Multiple Objectives in Single and Multi-Agent Reinforcement Learning

Ann Nowé, Roxana Rădulescu
Monday 6 May 2024

Tutorial slides available here.

Brief description

Many, if not most, real-world decision problems involve more than a single objective and, often, more than one agent. Because the multi-objective aspect fundamentally changes decision-making in the context of reinforcement learning, this tutorial starts from what it means to care about more than one aspect of the solution, and why you should take this into account when modelling your problem domain. We then examine what agents should optimise for in multi-objective settings and discuss different assumptions, culminating in the corresponding taxonomies for both multi-objective single-agent and multi-agent systems, along with their accompanying solution concepts. We advocate and present a utility-based approach as a framework for such settings, and also discuss how this framework can support and address additional ethical concerns such as transparency and accountability. We then follow up with a few initial multi-objective reinforcement learning approaches.
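To give a flavour of the utility-based approach, the sketch below scalarises a vector-valued return with a user-supplied utility function and uses it to rank candidate policies. This is a minimal illustration, not the tutorial's own code: the policy names, returns, and the linear utility are all hypothetical, and real multi-objective RL settings may require non-linear utilities.

```python
import numpy as np

def linear_utility(vector_return, weights):
    """Scalarise a vector return under a linear utility function.

    In the utility-based view, the agent optimises the utility of the
    vector return over all objectives, rather than a single scalar reward.
    """
    return float(np.dot(weights, vector_return))

# Hypothetical vector returns of two candidate policies over two
# objectives, e.g. (profit, environmental quality).
returns = {
    "policy_a": np.array([10.0, 2.0]),
    "policy_b": np.array([6.0, 8.0]),
}

# A user who weighs both objectives equally prefers policy_b here,
# while a user who strongly favours the first objective prefers policy_a.
equal_weights = np.array([0.5, 0.5])
best = max(returns, key=lambda p: linear_utility(returns[p], equal_weights))
```

Different utility functions induce different optimal policies, which is why the taxonomy of multi-objective settings hinges on what is known (or assumed) about the user's utility.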

Expected gained skills

  • Understanding of the theory on multi-objective decision-making, in both single and multi-agent settings
  • Overview of multi-objective approaches and tools for single and multi-agent settings

Expected background

Previous experience (however brief) in either game theory, reinforcement learning, or utility theory is desirable but not required.