Statistical learning theory 2022
General Information
Lectures: Friday 16h20 -- 17h40, Bruno Bauwens, Maxim Kaledin, room M202 and on Zoom
Seminars: Saturday 14h40 -- 16h00, Artur Goldman, room M202 and on Zoom (the link will be in the Telegram group)
To discuss the materials, join the Telegram group. The course is similar to last year's.
Homeworks
Email solutions to brbauwens-at-gmail.com. Start the subject line with SLT-HW.
The deadline is before the start of the lecture, every other lecture.
Sat. 17 Sept 18h10: problems 1.7, 1.8, 2.9, and 2.11
Sat. 01 Oct 18h10: see lists 3 and 4, and 2.10
Fri. 14 Oct 16h20: see problem lists 5 and 6
Sat. 05 Nov 20h00: see problem lists 7 and 8
Sat. 29 Nov 20h00: see problem lists 9 and 10
Sat. 03 Dec 20h00: see problem lists 11 and 12
Course materials
Date (video) | Summary | Slides | Lecture notes | Problem list | Solutions
---|---|---|---|---|---
Part 1. Online learning | | | | |
02 Sept | Philosophy. The online mistake bound model. The halving and weighted majority algorithms (movies) | sl01 | ch00 ch01 | list 1 (updated 05.09) | solutions 1
09 Sept | The perceptron algorithm. The standard optimal algorithm. | sl02 | ch02 ch03 | list 2 (updated 25.09) | solutions 2
16 Sept | Kernels and the kernel perceptron algorithm. Prediction with expert advice. Recap of probability theory. | sl03 | ch04 ch05 | list 3 | solutions 3
Part 2. Distribution-independent risk bounds | | | | |
23 Sept | Sample complexity in the realizable setting, simple examples, and bounds using VC-dimension | sl04 | ch06 | list 4 | solutions 4
30 Sept | Growth functions, VC-dimension, and the characterization of sample complexity via VC-dimension | sl05 | ch07 ch08 | list 5 | solutions 5
07 Oct | Risk decomposition and the fundamental theorem of statistical learning theory | sl06 | ch09 | list 6 | solutions 6
14 Oct | Bounded differences inequality, Rademacher complexity, symmetrization, contraction lemma, quiz | sl07 | ch10 ch11 | list 7 (updated 15.10) | solutions 7
Part 3. Margin risk bounds with applications | | | | |
21 Oct | Simple regression, support vector machines, margin risk bounds, and neural nets | sl08 | ch12 ch13 | list 8 |
04 Nov | Kernels: RKHS, representer theorem, risk bounds | sl09 | ch14 | list 9 |
11 Nov | AdaBoost and the margin hypothesis | sl10 | Mohri et al., chapter 7 | list 10 |
18 Nov | Implicit regularization of stochastic gradient descent in neural nets | | | list 11 |
Part 4. Other topics | | | | |
25 Nov | Regression I: classical noise assumptions, sub-Gaussian and sub-exponential noise | | | list 12 |
02 Dec | Regression II: Ridge and Lasso regression | | | list 13 |
09 Dec | Multi-armed bandits | | | list 14 |
16 Dec | Colloquium | | | |
The lectures in October and November are based on the book Foundations of Machine Learning, 2nd ed., Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar, 2018. The book can be downloaded from Library Genesis (the link changes from time to time, and a VPN is sometimes needed).
Problems exam
The exam will take place in the computer room. During the exam:
-- You may use handwritten notes, lecture materials from this wiki (either printed or through your PC), and Mohri's book.
-- You may not search the internet or interact with other people (e.g., by phone or forums).
Grading formula
Final grade = 0.35 * [homework score] + 0.35 * [colloquium score] + 0.3 * [exam score] + bonus from quizzes.
All homework questions have the same weight. Each correctly solved extra homework task increases the exam score by 1 point.
There is no rounding except on the final grade: fractional grades above 5/10 are rounded up, those below 5/10 are rounded down.
There are no auto-grades, but because of the extra questions and the quiz bonuses, it may happen that there is no need to attend the exam. Example: homework = 9/10, 5 extra problems solved correctly, colloquium = 10/10, quiz bonus = 0.09, exam = 0/10. Then the final score is 9.05/10, which is rounded up to 10/10.
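For concreteness, the grading rules above can be written out as a short script. Below is a minimal sketch in Python; it assumes (this is not stated on this page) that the exam score is capped at 10 after adding extra-task points and that a grade of exactly 5 is left as-is, and the function name `final_grade` is illustrative.

```python
import math

def final_grade(homework, colloquium, exam, extra_solved=0, quiz_bonus=0.0):
    """Final grade on a 0-10 scale, following the formula above."""
    # Each correctly solved extra homework task adds 1 point to the exam
    # score; the cap at 10 is an assumption, not an official rule.
    exam_score = min(10, exam + extra_solved)
    raw = 0.35 * homework + 0.35 * colloquium + 0.3 * exam_score + quiz_bonus
    # Rounding applies only to the final grade: fractional grades above
    # 5/10 round up, those below 5/10 round down.
    if raw > 5:
        return math.ceil(raw)
    if raw < 5:
        return math.floor(raw)
    return 5

# Hypothetical scores, for illustration only:
# 0.35*8 + 0.35*7 + 0.3*min(10, 6 + 2) + 0.05 = 7.70, rounded up to 8.
print(final_grade(homework=8, colloquium=7, exam=6, extra_solved=2, quiz_bonus=0.05))
```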
Office hours
Person | Monday | Tuesday | Wednesday | Thursday | Friday
---|---|---|---|---|---
Bruno Bauwens | 15-20h | 18-20h | | |
Maxim Kaledin | | | | |
It is always good to send an email in advance. Questions and feedback are welcome.