Jean-Christophe PESQUET, LIGM, Univ. Paris Est
In a wide range of application fields (inverse problems, machine learning,
computer vision, data analysis, networking, ...), large-scale optimization
problems need to be solved.
The objective of this course is to introduce the theoretical background
which makes it possible to develop efficient algorithms to successfully
address these problems by taking advantage of modern multicore or
distributed computing architectures.
This course will mainly focus on nonlinear optimization tools for dealing
with convex problems. Proximal tools, splitting techniques, and
Majorization-Minimization strategies, which are now very popular for
processing massive datasets, will be presented. These methods will be
illustrated on various application examples.
Course outline. The course consists of eight sessions (3h each) combining
lectures and exercises. The following concepts will be presented:
1. Introduction to optimization problems
- 1.1. Unconstrained optimization
- 1.2. Constrained optimization
- 1.3. Examples
2. Background on convex analysis
- 2.1. Convex sets and functions
- 2.2. Differentiability and subdifferentiability
- 2.3. Proximity operator
- 2.4. Conjugate function
- 2.5. Duality results
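As a small preview of item 2.3, the proximity operator of the scaled l1 norm has a closed form known as soft thresholding. A minimal Python sketch (the function name prox_l1 and the test values are illustrative, not part of the course material):

```python
import math

def prox_l1(x, gamma):
    """Proximity operator of gamma * ||.||_1, i.e. the unique minimizer of
    0.5 * ||y - x||^2 + gamma * ||y||_1: componentwise soft thresholding."""
    return [math.copysign(max(abs(v) - gamma, 0.0), v) for v in x]

print(prox_l1([-2.0, 0.5, 3.0], 1.0))  # -> [-1.0, 0.0, 2.0]
```

Components whose magnitude does not exceed gamma are set to zero, which is why this operator appears in sparsity-promoting formulations.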
3. Parallel proximal splitting methods
- 3.1. Fixed point algorithms and Fejér sequences
- 3.2. Minimization of a sum of two convex functions
- 3.3. Primal-dual proximal algorithms
- 3.4. Consensus-based techniques
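To illustrate item 3.2, minimizing the sum of a smooth convex function and a nonsmooth one can be handled by the forward-backward (proximal gradient) algorithm: a gradient step on the smooth term followed by a proximal step on the nonsmooth one. A sketch on a small LASSO problem, assuming NumPy; all names and values are illustrative:

```python
import numpy as np

def forward_backward(A, b, gamma, step, n_iter=200):
    """Forward-backward splitting for min_x 0.5*||A x - b||^2 + gamma*||x||_1:
    explicit gradient step on the quadratic term, then the proximity operator
    of the l1 term (soft thresholding)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))                           # forward step
        x = np.sign(z) * np.maximum(np.abs(z) - step * gamma, 0.0)   # backward (prox) step
    return x

# Convergence is guaranteed for step sizes in (0, 2 / ||A||^2).
A = np.eye(2)
x_hat = forward_backward(A, np.array([3.0, -0.2]), gamma=1.0, step=1.0)
```

With A the identity, the iteration reduces to one soft-thresholding of b, so the sparsifying effect of the prox step is directly visible.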
4. Majorization-Minimization approaches
- 4.1. Majorization-Minimization principle
- 4.2. Majorization techniques
- 4.3. Half-Quadratic methods
- 4.4. Variable metric Forward-Backward algorithm
- 4.5. Subspace algorithms (memory gradient, BFGS, ...)
- 4.6. Block coordinate descent approaches
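As a one-dimensional illustration of items 4.1-4.3, the absolute value admits the half-quadratic majorant |x| <= x**2 / (2*|x_k|) + |x_k| / 2 at any x_k != 0, which turns a nonsmooth problem into a sequence of quadratic ones with closed-form minimizers. A Python sketch; the setup is illustrative, not taken from the course:

```python
def mm_scalar_lasso(b, gamma, n_iter=100):
    """Majorization-Minimization for min_x 0.5*(x - b)**2 + gamma*|x|, b != 0.
    Each iteration minimizes the surrogate obtained by replacing |x| with the
    half-quadratic majorant x**2 / (2*abs(x_k)) + abs(x_k) / 2."""
    x = b  # initialization; must be nonzero for the majorant to be defined
    for _ in range(n_iter):
        x = b / (1.0 + gamma / abs(x))  # closed-form minimizer of the majorant
    return x

# For b = 3, gamma = 1, the iterates converge to the soft threshold of b, i.e. 2.
x_hat = mm_scalar_lasso(3.0, 1.0)
```

Each surrogate touches the objective at the current iterate and lies above it elsewhere, so the objective value decreases monotonically along the iterations.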
5. Stochastic methods
- 5.1. Stochastic gradient descent algorithm
- 5.2. Accelerated stochastic approximation techniques
- 5.3. Random projections
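Finally, item 5.1 can be illustrated on the simplest stochastic problem, min_x E[0.5*(x - s)**2], whose solution is the mean of the data. A Python sketch with the classical diminishing step size 1/t (function name and data are illustrative):

```python
import random

def sgd(samples, n_epochs=20):
    """Stochastic gradient descent for min_x E[0.5*(x - s)**2]: each update
    uses only the gradient x - s of one randomly drawn term."""
    x, t = 0.0, 0
    for _ in range(n_epochs):
        for s in random.sample(samples, len(samples)):  # random pass over the data
            t += 1
            x -= (1.0 / t) * (x - s)  # diminishing step size 1/t
    return x

x_hat = sgd([1.0, 2.0, 3.0, 6.0])
```

With the 1/t step size this recursion is exactly the running average of the drawn samples, so the iterate converges to the sample mean (here 3.0) regardless of the sampling order.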