Trust-region methods are a broad class of methods for continuous
optimization that find application in a variety of problems
and contexts. The basic principle consists of iteratively optimizing
a model of the objective function within a restricted region. In particular,
they have been studied and applied in the derivative-free setting,
where models are built solely from sampled function values.
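This basic principle can be sketched in a few lines. The following one-dimensional toy (the function names, the ratio threshold 0.1, and the crude grid minimization of the model are all illustrative choices of ours, not part of any specific published method) builds a quadratic interpolation model from three sampled function values, minimizes it within the trust region, and updates the iterate and the radius through the standard ratio test.

```python
# Toy 1-D derivative-free trust-region iteration (illustrative sketch only).

def quad_interp(pts):
    """Fit a quadratic c0 + c1*x + c2*x^2 through three (x, f(x)) pairs."""
    (x0, f0), (x1, f1), (x2, f2) = pts
    # Newton divided differences, converted to monomial coefficients.
    d01 = (f1 - f0) / (x1 - x0)
    d12 = (f2 - f1) / (x2 - x1)
    c2 = (d12 - d01) / (x2 - x0)
    c1 = d01 - c2 * (x0 + x1)
    c0 = f0 - c1 * x0 - c2 * x0 * x0
    return lambda x: c0 + c1 * x + c2 * x * x

def tr_step(f, x, delta):
    """One iteration: build a model from samples, minimize it on
    [x - delta, x + delta], and update x and delta via the ratio test."""
    pts = [(x - delta, f(x - delta)), (x, f(x)), (x + delta, f(x + delta))]
    m = quad_interp(pts)
    # Minimize the model over a grid of the trust region (crude but safe).
    cand = min((x + delta * (i / 50 - 1) for i in range(101)), key=m)
    pred = m(x) - m(cand)              # predicted reduction
    ared = f(x) - f(cand)              # actual reduction
    rho = ared / pred if pred > 0 else -1.0
    if rho >= 0.1:                     # successful step: accept, maybe expand
        return cand, min(2.0 * delta, 1.0)
    return x, 0.5 * delta              # unsuccessful step: shrink the region

def minimize(f, x0, delta0=1.0, iters=60, tol=1e-8):
    x, delta = x0, delta0
    for _ in range(iters):
        if delta < tol:                # trust region small enough: stop
            break
        x, delta = tr_step(f, x, delta)
    return x
```

For instance, `minimize(lambda x: (x - 3.0) ** 2, 0.0)` drives the iterate to the minimizer at 3 using only sampled function values.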
Trust-region derivative-free methods are guaranteed to
converge for deterministic smooth functions, in the sense
of generating a sequence or run of iterates converging to criticality
(of first- and second-order type). Such convergence is called
global as no assumption is made on the starting point.
It has also been proved that the order of complexity, or the
global rate at which criticality decays, matches the one known
for derivative-based methods.
In the deterministic non-smooth case, and given some knowledge
of the non-smooth structure, it is possible to design globally
convergent approaches with appropriate complexity, either by
smoothing the original function in some parametrized way or by
moving a composite non-smooth structure directly into the
trust-region subproblems.
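The smoothing idea can be illustrated on the simplest non-smooth function, |x|. In the sketch below, the surrogate sqrt(x^2 + mu^2), the factor by which mu is tightened, and the inner pattern-search solver are all hypothetical choices made for illustration, not taken from any particular paper.

```python
import math

def pattern_search(f, x, step=1.0, iters=100):
    """A crude derivative-free solver for a smooth 1-D function."""
    for _ in range(iters):
        if f(x - step) < f(x):
            x -= step
        elif f(x + step) < f(x):
            x += step
        else:
            step *= 0.5        # no improvement: refine the stencil
    return x

def smoothing_method(x0, mu0=1.0, rounds=6):
    """Minimize the non-smooth |x| through a sequence of smoothed surrogates."""
    x, mu = x0, mu0
    for _ in range(rounds):
        # Smooth surrogate of |t|, parametrized by mu (illustrative choice).
        f_mu = lambda t, mu=mu: math.sqrt(t * t + mu * mu)
        x = pattern_search(f_mu, x)    # solve the smooth problem
        mu *= 0.1                      # tighten the smoothing parameter
    return x
```

Each smoothed problem is solved by a method with smooth-case guarantees, and driving mu to zero recovers the original non-smooth minimizer.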
Trust-region methods can also be based on probabilistic models
of the objective function, thus covering the derivative-free
case where sample points are randomly generated. Such methods
exhibit convergence and complexity properties similar to those of
methods using deterministic models, now holding not for all runs
but for a set of runs that occurs with probability one.
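As a toy illustration of such a model (the sample size and the least-squares fit are our own illustrative choices): fit a linear model through points drawn uniformly at random in the trust region; for a smooth function and enough samples, a model of this kind is accurate with high probability.

```python
import random

def random_linear_model(f, x, delta, n=10, rng=random):
    """Fit a linear model f(x) + g*(y - x) from n randomly sampled points
    in the trust region [x - delta, x + delta] (least-squares slope g)."""
    pts = [x + delta * rng.uniform(-1.0, 1.0) for _ in range(n)]
    fx = f(x)
    num = sum((p - x) * (f(p) - fx) for p in pts)
    den = sum((p - x) ** 2 for p in pts)
    return num / den   # least-squares estimate of the slope at x
```

The randomness of the sample set is exactly what makes statements about the model hold with a certain probability rather than deterministically.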
Finally, we are now observing the first attempts to address the
stochastic case, where the objective function can only be
observed with noise, and possibly approximated by sample averaging.
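A minimal sketch of sample averaging (the standard Gaussian noise model below is an assumption made for illustration): approximate f(x) = E[F(x, xi)] by averaging many noisy observations at the same point.

```python
import random

def sample_average(F, x, n, rng=random):
    """Approximate f(x) = E[F(x, xi)] by averaging n noisy observations;
    xi is drawn as standard Gaussian noise (an illustrative assumption)."""
    return sum(F(x, rng.gauss(0.0, 1.0)) for _ in range(n)) / n
```

The estimate improves at the usual rate of one over the square root of the number of samples, which is what stochastic trust-region analyses must account for.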
This talk will attempt to survey all such developments and
point out what still remains unknown.