  • Print publication year: 2017
  • Online publication date: February 2018

9 - Evolutionary Global Optimization via Change of Measures: A Martingale Route



The efficacy of the change-of-measures concept was demonstrated in the last few chapters in the context of non-linear stochastic filtering, a tool that is also of considerable use in developing numerical schemes for system identification problems. This chapter concerns another application of the same notion, leading to a paradigm [Sarkar et al. 2014] for global optimization wherein solutions are guided mainly by derivative-free directional information computed from the sample statistical moments of the design (state) variables within a Monte Carlo (MC) setup. Before presenting this approach in some detail, it is worth first surveying some of the available methodologies for solving such optimization problems.
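The moment-driven, derivative-free character of such MC schemes can be loosely illustrated with a cross-entropy-style search. This is only a sketch of the general idea, not the method of Sarkar et al. [2014]: an ensemble is sampled, ranked by cost, and the sampling distribution is refitted to the sample mean and variance of the best candidates, so the search is steered entirely by sample statistical moments. The objective `f` below (a one-dimensional Rastrigin function) is an assumed test function for illustration.

```python
import math
import random

def f(x):
    # 1-D Rastrigin function: many local minima, global minimum f(0) = 0
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def moment_guided_search(n=200, elite=20, iters=60, mu=3.0, sigma=2.0):
    """Derivative-free search driven only by sample moments of an MC ensemble."""
    random.seed(0)  # fixed seed for a reproducible illustration
    for _ in range(iters):
        # draw a Monte Carlo ensemble and rank it by cost
        xs = sorted((random.gauss(mu, sigma) for _ in range(n)), key=f)
        best = xs[:elite]                       # lowest-cost candidates
        mu = sum(best) / elite                  # first sample moment
        var = sum((x - mu) ** 2 for x in best) / elite
        sigma = max(math.sqrt(var), 1e-6)       # second sample moment
    return mu
```

No derivative of `f` is ever evaluated; the directional information is implicit in how the elite sample mean drifts between iterations, which is the sense in which sample moments can guide the search.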

In most cases of practical interest, the cost or objective functional, whose extremization solves the optimization problem, could be non-convex, non-separable and even non-smooth. Here separability means that the cost function can be additively split into component functions, so that the optimization problem may itself be split into a set of sub-problems. An optimization problem is convex if it involves minimizing a convex function (or maximizing a concave function) over a convex set of admissible state variables. For a convex problem, a fundamental result is that a locally optimal solution is also globally optimal. The classical methods [Fletcher and Reeves 1964, Fox 1971, Rao 2009], which mostly use directional derivatives, are particularly useful in solving convex problems (Fig. 9.1). Non-convex problems, on the other hand, may have many local optima, and identifying the best one (i.e., the global extremum) can be an extremely hard task. In global optimization, we seek, in the design (or state or parameter) space, the extremal locations of non-convex functions subject to possibly non-convex constraints. Here the objective functional could be multivariate, multimodal and even non-differentiable, which together precludes the application of a gradient-based Newton step in solving the optimization problem.
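The failure of purely local, gradient-based iterations on multimodal objectives can be seen in a minimal sketch. The one-dimensional Rastrigin function below is an assumed test objective (not from the text): plain gradient descent converges to whichever local minimum contains its starting point, whereas even a crude multistart strategy recovers the global minimum.

```python
import math

def f(x):
    # 1-D Rastrigin: global minimum at x = 0, local minima near every integer
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

def grad_f(x):
    # analytical derivative of f
    return 2.0 * x + 20.0 * math.pi * math.sin(2.0 * math.pi * x)

def gradient_descent(x0, lr=0.001, steps=2000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

# Started inside the global basin, descent reaches the global minimum...
x_good = gradient_descent(0.3)
# ...but started elsewhere it is trapped in the local minimum near x = 2
x_bad = gradient_descent(2.2)

# A crude global strategy: restart from several points and keep the best
starts = [-2.5, -1.2, 0.3, 1.1, 2.2]
x_global = min((gradient_descent(s) for s in starts), key=f)
```

The multistart device works here only because one of the (hand-picked) starting points happens to fall in the global basin; in higher dimensions with many modes, no such guarantee exists, which is what motivates the stochastic global-search strategies discussed in the remainder of the chapter.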
