Statistical learning theory 2024/25
General Information
Lectures: on TBA in room TBA and on Zoom, by Bruno Bauwens.
Seminars: on TBA in room TBA and on TBA, by Nikita Lukianenko.
To discuss the materials and practical issues, join the Telegram group. The course is similar to last year's.
Course materials
| Video | Summary | Slides | Lecture notes | Problem list | Solutions |
|---|---|---|---|---|---|
| Part 1. Online learning | | | | | |
| ?? Sept | Philosophy. The online mistake-bound model. The halving and weighted majority algorithms. | sl01 | ch00 ch01 | prob01 | |
| ?? Sept | The perceptron algorithm. Kernels. The standard optimal algorithm. | sl02 | ch02 ch03 | prob02 | |
| ?? Sept | Prediction with expert advice. Recap of probability theory (seminar). | sl03 | ch04 ch05 | prob03 | sol03 |
| Part 2. Distribution-independent risk bounds | | | | | |
| ?? Oct | Necessity of a hypothesis class. Sample complexity in the realizable setting; examples: threshold functions and finite classes. | sl04 | ch06 | prob05 | sol05 |
| ?? Oct | Growth functions, VC-dimension, and the characterization of sample complexity via the VC-dimension. | sl05 | ch07 ch08 | prob06 | sol06 |
| ?? Oct | Risk decomposition and the fundamental theorem of statistical learning theory. | sl06 | ch09 | prob07 | sol07 |
| ?? Oct | Bounded differences inequality, Rademacher complexity, symmetrization, contraction lemma. | sl07 | ch10 ch11 | prob08 | sol08 |
| Part 3. Margin risk bounds with applications | | | | | |
| ?? Nov | Simple regression, support vector machines, margin risk bounds, and neural nets with dropout regularization. | sl08 | ch12 ch13 | prob09 | sol09 |
| ?? Nov | Kernels: RKHS, representer theorem, risk bounds. | sl09 | ch14 | prob10 | sol10 |
| ?? Nov | AdaBoost and the margin hypothesis. | sl10 | ch15 | prob11 | sol11 |
| ?? Nov | Implicit regularization of stochastic gradient descent in overparameterized neural nets (recording with many details about the Hessian). | | ch16 ch17 | | |
| ?? Dec | Part 2 of the previous lecture: Hessian control and stability of the NTK. | | | | |
Background on multi-armed bandits: A. Slivkins, [Introduction to multi-armed bandits](https://arxiv.org/pdf/1904.07272.pdf), 2022.
The lectures in October and November are based on the book: Foundations of Machine Learning, 2nd ed., Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar, 2018.
Grading formula
Final grade = 0.35 * [homework score] + 0.35 * [colloquium score] + 0.3 * [exam score] + bonus from quizzes.
All homework questions have the same weight. Each solved extra homework task increases the score of the final exam by 1 point.
There is no rounding except on the final grade, where arithmetic rounding is used.
Autogrades: if a 6/10 on the exam would already give you the maximal final grade, that grade is awarded automatically. This can happen because of extra questions and bonuses from quizzes.
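To make the grading formula concrete, here is a minimal Python sketch of the computation; the 10-point scale, the cap on the exam score, and all names are illustrative assumptions, not an official grading script.

```python
import math

def final_grade(hw, colloquium, exam, quiz_bonus=0.0, extra_solved=0):
    """Illustrative final-grade computation (10-point scale assumed).

    hw, colloquium, exam: scores out of 10.
    Each solved extra homework task adds 1 point to the exam score.
    Nothing is rounded except the final grade (arithmetic rounding).
    """
    exam = min(10, exam + extra_solved)    # cap at 10 is an assumption
    raw = 0.35 * hw + 0.35 * colloquium + 0.3 * exam + quiz_bonus
    return min(10, math.floor(raw + 0.5))  # arithmetic rounding: .5 rounds up

# Example: 9 on homeworks, 8 on the colloquium, 7 on the exam,
# one extra task solved, and a 0.3 quiz bonus:
# 0.35*9 + 0.35*8 + 0.3*8 + 0.3 = 8.65, which rounds to 9.
print(final_grade(9, 8, 7, quiz_bonus=0.3, extra_solved=1))  # -> 9
```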
Homeworks
Deadline every 2 weeks, before the seminar at 16:00. Homework problems from:
- seminars 1 and 2: September 25
- seminars 3 and 4: October 9
- seminars 5 and 6: November 6
- seminars 7 and 8: November 13
- seminars 9 and 10: ~~November 27~~ December 4
- seminar 11: before the start of the exam
Email to brbauwens-at-gmail.com. Start the subject line with SLT-HW. Results will be here.
Late policy: one homework can be submitted at most 24 hours late without explanation.
Colloquium
Rules and questions from last year.
Date: TBA
Problems exam
TBA
- You may use handwritten notes, lecture materials from this wiki (either printed or on your PC), and Mohri's book.
- You may not search the internet or interact with other humans (e.g. by phone, forums, etc.).
Office hours
Bruno Bauwens: TBA
Nikita Lukianenko: write in Telegram; the time is flexible.