Many, if not most, real-world decision problems have more than a single objective and, often, more than one agent. As the multi-objective aspect fundamentally changes everything you thought you knew about decision-making in the context of reinforcement learning, in this tutorial we start from what it means to care about more than one aspect of the solution, and why you should consider this when modelling your problem domain. We then examine what agents should optimise for in multi-objective settings, and discuss the different assumptions one can make, culminating in the corresponding taxonomies for both multi-objective single-agent and multi-agent systems, along with their accompanying solution concepts. We advocate and present a utility-based approach as a framework for such settings, and also discuss how this framework can support and address additional ethical concerns such as transparency and accountability. We then follow up with a few initial multi-objective reinforcement learning algorithms.
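To make the utility-based idea concrete, the sketch below (an illustration written for this description, not code from the tutorial itself) assumes each policy is characterised by an expected vector-valued return over two objectives and that the user's preferences are captured by a hypothetical linear utility function with assumed weights; the "best" policy is then simply the one with the highest scalar utility, and different weights can make a different policy optimal.

```python
import numpy as np

def linear_utility(vector_return, weights):
    """Map a vector-valued return to a scalar utility via a linear weighting.

    Assumed (hypothetical) utility function for illustration only; the tutorial
    also covers settings where the utility function is non-linear or unknown.
    """
    return float(np.dot(weights, vector_return))

# Two hypothetical policies, each summarised by its expected vector return
# over two objectives (e.g., task performance vs. negative energy cost).
expected_returns = {
    "policy_A": np.array([10.0, -2.0]),
    "policy_B": np.array([7.0, -0.5]),
}

weights = np.array([0.4, 0.6])  # assumed user preferences over the objectives

# Under this utility function, select the policy that maximises scalar utility.
best = max(expected_returns, key=lambda p: linear_utility(expected_returns[p], weights))
print(best, {p: linear_utility(r, weights) for p, r in expected_returns.items()})
```

With the assumed weights, `policy_A` attains the higher utility; shifting the weights towards the second objective would instead favour `policy_B`, which is precisely why the utility-based view treats the user's preferences, rather than a single fixed scalar reward, as the starting point for what agents should optimise.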
Previous experience (however brief) in game theory, reinforcement learning, or utility theory is desirable but not required.