Statistical learning theory 2022
General Information
Lectures: Friday 16h20 -- 17h40, Bruno Bauwens, Maxim Kaledin
Seminars: Friday 18h10 -- 19h30, Artur Goldman
To discuss the materials, join the Telegram group.
The course is similar to last year's edition.
Homeworks
Email to brbauwens-at-gmail.com. Start the subject line with SLT-HW.
Homework is due before the lecture, every other lecture:
23 Sept: see problem lists 1 and 2
07 Oct: see problem lists 3 and 4
21 Oct: see problem lists 5 and 6
04 Nov: see problem list 7
18 Nov: see problem lists 8 and 9
02 Dec: see problem lists 10 and 11
Course materials
Date (video) | Summary | Slides | Lecture notes | Problem list | Solutions
---|---|---|---|---|---
*Part 1. Online learning* | | | | |
02 Sept | Lecture: philosophy. The online mistake-bound model, the weighted majority, and perceptron algorithms (movies; a code sketch follows the table) | sl01 | ch00 ch01 | |
09 Sept | The perceptron algorithm in the agnostic setting. Kernels. The standard optimal algorithm. | sl02 | ch02 ch03 | |
16 Sept | Prediction with expert advice and the exponentially weighted majority algorithm. Recap of probability theory. | sl03 | ch04 ch05 | |
*Part 2. Distribution-independent risk bounds* | | | | |
23 Sept | Sample complexity in the realizable setting, simple examples, and bounds using VC-dimension | sl04 | ch06 | |
30 Sept | Growth functions, VC-dimension, and the characterization of sample complexity with VC-dimensions | sl05 | ch07 ch08 | |
07 Oct | Risk decomposition and the fundamental theorem of statistical learning theory | sl06 | ch09 | |
14 Oct | Bounded differences inequality, Rademacher complexity, symmetrization, contraction lemma | sl07 | ch10 ch11 | |
*Part 3. Margin risk bounds with applications* | | | | |
21 Oct | Simple regression, support vector machines, margin risk bounds, and neural nets | sl08 | ch12 ch13 | |
04 Nov | Kernels: RKHS, representer theorem, risk bounds | sl09 | ch14 | |
11 Nov | AdaBoost and the margin hypothesis | sl10 | Mohri et al., chapter 7 | |
18 Nov | Implicit regularization of stochastic gradient descent in neural nets | | | |
*Part 4. Other topics* | | | | |
25 Nov | Regression I: classic noise assumption, sub-Gaussian and sub-exponential noise | | | |
02 Dec | Regression II: Ridge and Lasso regression | | | |
09 Dec | Multi-armed bandits | | | |
16 Dec | Colloquium | | | |
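To make the first lecture topic concrete, here is a minimal sketch of the deterministic weighted majority algorithm from the online mistake-bound model (Part 1). It is not part of the official course materials: the function name, the 0/1 encoding of predictions, and the penalty parameter `eta` are illustrative assumptions.

```python
import numpy as np

def weighted_majority(expert_preds, outcomes, eta=0.5):
    """Deterministic weighted majority vote over n binary experts.

    expert_preds: (T, n) array of 0/1 predictions, one column per expert.
    outcomes:     (T,) array of the true 0/1 labels.
    eta:          penalty in (0, 1); an erring expert's weight shrinks by (1 - eta).
    Returns the number of mistakes made by the master predictor.
    """
    T, n = expert_preds.shape
    weights = np.ones(n)  # start by trusting every expert equally
    mistakes = 0
    for t in range(T):
        # Predict 1 iff the experts voting 1 hold at least half of the total weight.
        prediction = int(weights @ expert_preds[t] >= weights.sum() / 2)
        mistakes += int(prediction != outcomes[t])
        # Multiplicatively penalize every expert that was wrong this round.
        weights[expert_preds[t] != outcomes[t]] *= 1 - eta
    return mistakes
```

With eta = 1/2, the classic analysis bounds the master's mistakes by roughly 2.41 (m + log2 n), where m is the number of mistakes of the best expert in hindsight; bounds of this form are the subject of the first lectures.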
The lectures in October and November are based on the book: Foundations of Machine Learning, 2nd ed., Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar, 2018. This book can be downloaded from http://gen.lib.rus.ec/ .
Problems exam
Dates, problems TBA
During the exam
-- You may consult notes, books, and search on the internet.
-- You may not interact with other humans (e.g. by phone, forums, etc.).
Office hours
Person | Monday | Tuesday | Wednesday | Thursday | Friday
---|---|---|---|---|---
Bruno Bauwens | 14h--20h | | | |
Maxim Kaledin | | | | |
It is always good to send an email in advance. Questions and feedback are welcome.