We introduce novel multi-agent interaction models of entropic spatially inhomogeneous evolutionary undisclosed games and their quasi-static limits. These evolutions vastly generalise first- and second-order dynamics. Besides the well-posedness of these novel forms of multi-agent interactions, we are concerned with the learnability of individual payoff functions from observation data. We formulate the payoff learning as a variational problem, minimising the discrepancy between the observations and the predictions by the payoff function. The inferred payoff function can then be used to simulate further evolutions, which are fully data-driven. We prove convergence of minimising solutions obtained from a finite number of observations to a mean-field limit, and the minimal value provides a quantitative error bound on the data-driven evolutions. The abstract framework is fully constructive and numerically implementable. We illustrate this on computational examples where a ground truth payoff function is known and on examples where this is not the case, including a model for pedestrian movement.
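The variational formulation of payoff learning described above can be sketched schematically as follows. The notation here is illustrative (our own symbols, not necessarily those of the paper): $x_i$ denotes agent positions, $\sigma_i$ their mixed strategies, and $v^{J}$ the velocity field that a candidate payoff function $J$ induces on the agents through the entropic game dynamics.

```latex
% Schematic payoff-learning problem: given N observed trajectories
% (x_i, sigma_i) on [0,T], infer J by minimising the discrepancy
% between observed velocities and those predicted by J.
\min_{J \in \mathcal{X}} \;
  \frac{1}{N} \sum_{i=1}^{N} \int_{0}^{T}
  \bigl\| \dot{x}_i(t) - v^{J}\!\bigl(x_i(t), \sigma_i(t)\bigr) \bigr\|^{2}
  \, \mathrm{d}t ,
```

where $\mathcal{X}$ is a suitable compact class of admissible payoff functions. The minimal value of this functional is what furnishes the quantitative error bound on the resulting data-driven evolutions, and the convergence result concerns minimisers of such functionals as $N \to \infty$.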
We introduce the concept of mean-field optimal control, which is the rigorous limit process connecting finite-dimensional optimal control problems with ODE constraints modeling multi-agent interactions to an infinite-dimensional optimal control problem with a constraint given by a PDE of Vlasov type, governing the dynamics of the probability distribution of interacting agents. While in classical mean-field theory one studies the behavior of a large number of small individuals freely interacting with each other, simplifying the effect of all the other individuals on any given individual to a single averaged effect, we address the situation where the individuals are also influenced by an external policy maker, and we propagate its effect as the number N of individuals goes to infinity. On the one hand, from a modeling point of view, we also take into account that the policy maker is constrained to act according to optimal strategies promoting its most parsimonious interaction with the group of individuals. This is realized by considering cost functionals including L1-norm terms penalizing a broadly distributed control of the group, thereby promoting its sparsity. On the other hand, from the analysis point of view, and for the sake of generality, we consider broader classes of convex control penalizations. To develop this new concept of limit rigorously, we carefully combine the classical concept of mean-field limit, connecting the finite-dimensional system of ODEs describing the dynamics of each individual of the group to the PDE describing the dynamics of the respective probability distribution, with the well-known concept of Γ-convergence, in order to show that optimal strategies for the finite-dimensional problems converge to optimal strategies of the infinite-dimensional problem.
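The limit problem described above can be sketched in schematic form. The symbols below are illustrative placeholders (not necessarily the paper's notation): $\mu(t)$ is the probability distribution of agents, $v[\mu]$ the mean-field interaction field, $u$ the policy maker's control, and $L$ a running cost; the L1-type term is the sparsity-promoting penalization mentioned in the abstract, here standing in for the broader class of convex penalizations actually treated.

```latex
% Schematic mean-field optimal control problem with sparsity-promoting
% L^1 control penalization, constrained by a Vlasov-type transport PDE.
\min_{u} \; \int_{0}^{T} \Bigl[
    \int L\bigl(x, \mu(t)\bigr) \,\mathrm{d}\mu(t)(x)
    \;+\; \|u(t)\|_{L^{1}(\mathrm{d}\mu(t))}
  \Bigr] \mathrm{d}t
\quad \text{subject to} \quad
\partial_t \mu + \nabla \cdot \bigl( \bigl( v[\mu] + u \bigr)\, \mu \bigr) = 0 .
```

The finite-dimensional problems replace $\mu$ by the empirical measure of the $N$ agents and the PDE by the corresponding system of ODEs; the Γ-convergence argument in the abstract is what transfers optimal strategies from the latter to the former as $N \to \infty$.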