Optimization Methods 2022


Abstract

The course gives a comprehensive foundation for the theory, methods, and algorithms of mathematical optimization. The prerequisites are linear algebra and calculus.

Learning Objectives

- Students will study the main concepts of optimization theory and develop a methodology for the theoretical investigation of optimization problems.

- Students will obtain an understanding of the construction, effectiveness, and practical application of optimization methods and algorithms.

- The course will give students the ability to solve standard and nonstandard mathematical problems connected with finding optimal solutions.

Expected Learning Outcomes

- Students should be able to classify optimization problems according to their mathematical properties.

- Students should be able to perform a theoretical investigation of a given optimization problem in order to assess its complexity.

- Students should be able to write down first and second order optimality conditions.

- Students should be able to provide a dual analysis of linear and convex optimization problems.

- Students should be able to assess the rate of convergence of first and second order optimization methods.

- Students should be able to solve simple optimization problems without a computer.

- Students should be able to implement different optimization algorithms in a computing environment.

- Students should be able to analyse the obtained solutions.


Course Content

- Univariate optimization: unimodal (invex) functions, golden section method, Lipschitzian optimization (a golden-section sketch appears after this list).

- Unconstrained quadratic optimization: algebraic approach, complete characterization of stationary points, gradient method with exact line search (see the sketch after this list), conjugate gradient method.

- General unconstrained optimization: first and second order optimality conditions, descent directions, steepest descent method, conjugate gradient methods, Newton method (a damped-Newton sketch appears after this list), quasi-Newton methods, inexact line search, rate of convergence.

- Optimization problems with linear equality constraints: first and second order optimality conditions, Lagrange function.

- General smooth optimization problems with equality and inequality constraints: first and second order optimality conditions, dual problems, penalty and augmented Lagrangian methods, sequential quadratic programming.

- Convex optimization: optimality conditions, duality, subgradients and the subdifferential, cutting planes and bundle methods, complexity of convex optimization (a subgradient-method sketch appears after this list).

- Linear programming: duality, decomposition.

- Interior-point methods.
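
The golden section method from the first item can be illustrated in a few lines of Python. This is a minimal sketch, not course code; the function name, tolerance, and test function are illustrative choices.

    import math

    def golden_section(f, a, b, tol=1e-8):
        """Minimize a unimodal function f on [a, b] by golden-section search.
        Each iteration shrinks the bracket by the factor 1/phi ~ 0.618 and
        reuses one interior evaluation, so only one new call to f is needed."""
        invphi = (math.sqrt(5.0) - 1.0) / 2.0      # 1/phi ~ 0.618
        x1, x2 = b - invphi * (b - a), a + invphi * (b - a)
        f1, f2 = f(x1), f(x2)
        while b - a > tol:
            if f1 < f2:                            # minimizer lies in [a, x2]
                b, x2, f2 = x2, x1, f1
                x1 = b - invphi * (b - a)
                f1 = f(x1)
            else:                                  # minimizer lies in [x1, b]
                a, x1, f1 = x1, x2, f2
                x2 = a + invphi * (b - a)
                f2 = f(x2)
        return 0.5 * (a + b)

    # Unimodal test function with minimizer at x = 2.
    print(golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 4.0))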
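
For the quadratic topic, the gradient method with exact line search admits a closed-form step: for f(x) = 0.5 x^T A x - b^T x with A symmetric positive definite, the minimizer of f along -g is alpha = (g^T g) / (g^T A g). A minimal sketch with an illustrative test matrix:

    import numpy as np

    def steepest_descent_quadratic(A, b, x0, tol=1e-10, max_iter=10000):
        """Steepest descent for f(x) = 0.5 x^T A x - b^T x with A symmetric
        positive definite, using the exact line-search step
        alpha = (g^T g) / (g^T A g)."""
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(max_iter):
            g = A @ x - b                      # gradient of the quadratic
            if np.linalg.norm(g) < tol:
                break
            alpha = (g @ g) / (g @ (A @ g))    # exact step length along -g
            x = x - alpha * g
        return x

    A = np.array([[3.0, 1.0], [1.0, 2.0]])     # SPD test matrix
    b = np.array([1.0, 1.0])
    print(steepest_descent_quadratic(A, b, np.zeros(2)))
    print(np.linalg.solve(A, b))               # minimizer A^{-1} b, for comparison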
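
For the general unconstrained topic, here is a sketch of a damped Newton method with a backtracking (Armijo) line search. It omits the safeguards for indefinite Hessians discussed in Nocedal and Wright; the Rosenbrock test function and the constants are illustrative choices.

    import numpy as np

    def damped_newton(f, grad, hess, x0, tol=1e-10, max_iter=100):
        """Newton's method with backtracking (Armijo) line search:
        solve H(x) p = -g(x) for the Newton direction, then halve the
        step until the sufficient-decrease condition holds."""
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            p = np.linalg.solve(hess(x), -g)   # Newton direction
            t, slope = 1.0, g @ p
            while f(x + t * p) > f(x) + 1e-4 * t * slope and t > 1e-10:
                t *= 0.5                       # Armijo backtracking
            x = x + t * p
        return x

    # Rosenbrock function: a standard smooth, nonconvex test problem.
    f    = lambda z: (1.0 - z[0])**2 + 100.0 * (z[1] - z[0]**2)**2
    grad = lambda z: np.array([-2.0*(1.0 - z[0]) - 400.0*z[0]*(z[1] - z[0]**2),
                               200.0*(z[1] - z[0]**2)])
    hess = lambda z: np.array([[2.0 - 400.0*z[1] + 1200.0*z[0]**2, -400.0*z[0]],
                               [-400.0*z[0], 200.0]])
    print(damped_newton(f, grad, hess, np.array([-1.2, 1.0])))  # approx. (1, 1)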
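
For the convex topic, a sketch of the subgradient method on the nonsmooth problem min_x ||Ax - b||_1, where A^T sign(Ax - b) is a subgradient of the objective. The normalized diminishing step 1/sqrt(k+1) and the random instance are illustrative choices; since the method is not a descent method, the best iterate seen so far is returned.

    import numpy as np

    def subgradient_method(A, b, x0, steps=5000):
        """Subgradient method for min_x ||A x - b||_1 with a normalized
        diminishing step size; returns the best iterate found."""
        f = lambda z: np.abs(A @ z - b).sum()
        x = np.asarray(x0, dtype=float).copy()
        x_best, f_best = x.copy(), f(x)
        for k in range(steps):
            g = A.T @ np.sign(A @ x - b)       # a subgradient of f at x
            gn = np.linalg.norm(g)
            if gn == 0.0:                      # x is already optimal
                break
            x = x - g / (np.sqrt(k + 1.0) * gn)
            if f(x) < f_best:
                x_best, f_best = x.copy(), f(x)
        return x_best, f_best

    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = A @ rng.standard_normal(5)             # consistent system: optimal value 0
    x_hat, f_hat = subgradient_method(A, b, np.zeros(5))
    print(f_hat)                               # small, shrinking with more steps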


Recommended Bibliography

- Nocedal J., Wright S. J. Numerical Optimization. 2nd edition, Springer, 2006

- Bazaraa M. S., Sherali H. D., Shetty C. M. Nonlinear Programming: Theory and Algorithms. 3rd edition, John Wiley & Sons, 2006

- Beck A. First-Order Methods in Optimization. MOS-SIAM Series on Optimization, 2017

- Ben-Tal A., Nemirovski A. Lectures on Modern Convex Optimization: Analysis, Algorithms, and Engineering Applications. MOS-SIAM Series on Optimization, 2001

- Nesterov Yu. Introductory Lectures on Convex Optimization: A Basic Course. Springer US, 2004