Icef-dse-2024-fall
General course info
Fall grade = 0.2 Small HAs + 0.2 Group project + 0.3 Midterm + 0.3 Final
We expect 3 practice HAs and 3 theory HAs.
Lecturer: Boris Demeshev
Class teachers: Yana Khassan, Shuana Pirbudagova
Lecture video recordings
Telegram group
Log Book or Tentative Plan
2024-09-05, lecture 1: Entropy, conditional entropy, joint entropy, mutual information, cross-entropy.
- Christopher Olah, Visual Information Theory, https://colah.github.io/posts/2015-09-Visual-Information/
- Grant Sanderson, Solving Wordle using information theory, YouTube.
- Artem Kirsanov, The Key Equation Behind Probability, YouTube. Be careful: Artem uses the notation H(P, Q) for cross-entropy (we use CE(P||Q)).
- Notes of a similar lecture at the Faculty of Computer Science (ФКН), in Russian.
- Keith Conrad, Maximal entropy distributions.
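A minimal numerical sketch of the lecture 1 quantities (the toy joint distribution and helper names are ours, not from the course notes): entropy, joint and conditional entropy, mutual information, and cross-entropy, all measured in bits.

<syntaxhighlight lang="python">
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]                      # convention: 0 * log 0 = 0
    return -np.sum(p * np.log2(p))    # measured in bits

# toy joint distribution of (X, Y): rows index X, columns index Y
pxy = np.array([[0.25, 0.25],
                [0.40, 0.10]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

H_X, H_Y, H_XY = entropy(px), entropy(py), entropy(pxy)
print("H(X), H(Y), H(X,Y):", H_X, H_Y, H_XY)
print("H(Y|X) = H(X,Y) - H(X):", H_XY - H_X)
print("I(X;Y) = H(X) + H(Y) - H(X,Y):", H_X + H_Y - H_XY)

# cross-entropy CE(P||Q) = -sum p log2 q; it exceeds H(P) whenever Q differs from P
q = np.array([0.8, 0.2])
print("CE(px||q):", -np.sum(px * np.log2(q)))
</syntaxhighlight>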
2024-09-12, lecture 2: Expected value of log-likelihood is zero. Kullback-Leibler divergence definition. Expected value calculation example. Optimizing long-run profit. Horse betting: optimal bet under private signal.
- Marcin Anforowicz, Just one more paradox, YouTube.
- Wikipedia, Kelly criterion, https://en.wikipedia.org/wiki/Kelly_criterion: a good article
- Kelly, A New Interpretation of Information Rate: the original paper, very well written
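A hedged toy illustration of the lecture 2 betting logic (the numbers and variable names are ours): the optimal bet fraction maximizes the expected log of wealth, and the closed-form Kelly fraction agrees with a brute-force search over the growth rate.

<syntaxhighlight lang="python">
import numpy as np

p, b = 0.6, 1.0          # win probability and net odds (win b per unit staked)

def growth_rate(f):
    # expected log-growth per bet when staking a fraction f of current wealth
    return p * np.log(1 + b * f) + (1 - p) * np.log(1 - f)

f_grid = np.linspace(0, 0.99, 10_000)
f_numeric = f_grid[np.argmax(growth_rate(f_grid))]
f_kelly = (b * p - (1 - p)) / b          # closed-form Kelly fraction

print(f_kelly, f_numeric)                 # both are approximately 0.2
print(growth_rate(f_kelly))               # maximal expected log-growth, about 0.02 per bet
</syntaxhighlight>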
2024-09-19, lecture 3: Horse betting: optimal bet under signal. Optimal long-term interest rate as entropy difference. How to build a tree? Entropy drop as splitting criterion. Dealing with missing values. How to stop? Tree pruning.
- R2D3, Visual introduction to machine learning: decision tree
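A small sketch of the entropy-drop splitting criterion from lecture 3 (toy data of our own): for each candidate threshold, the information gain is the parent entropy minus the weighted entropy of the two children, and the split with the largest gain wins.

<syntaxhighlight lang="python">
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y, threshold):
    left, right = y[x <= threshold], y[x > threshold]
    w_left, w_right = len(left) / len(y), len(right) / len(y)
    return entropy(y) - (w_left * entropy(left) + w_right * entropy(right))

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([0, 0, 0, 1, 1, 1])

# try all midpoints between neighbouring feature values and keep the best split
thresholds = (x[:-1] + x[1:]) / 2
gains = [information_gain(x, y, t) for t in thresholds]
best = thresholds[int(np.argmax(gains))]
print(best, max(gains))   # best split at 3.5, gain of 1 bit (perfect separation)
</syntaxhighlight>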
2024-09-26, lecture 4: Random forest
- R2D3, Visual introduction to machine learning-2: bias-variance tradeoff and many trees
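An illustrative comparison for lecture 4 (scikit-learn and a synthetic dataset are our choice, not mandated by the course): a random forest averages many trees grown on bootstrap samples with random feature subsets, which typically reduces the variance of a single deep tree.

<syntaxhighlight lang="python">
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

print("single tree accuracy:", tree.score(X_test, y_test))
print("random forest accuracy:", forest.score(X_test, y_test))
</syntaxhighlight>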
2024-10-03, lecture 5: Bootstrap: Naive bootstrap, t-stat bootstrap, bootstrap in bootstrap.
- Tim Hesterberg, What Teachers Should Know About the Bootstrap.
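A hedged numpy sketch of two of the lecture 5 intervals (our own toy sample, a skewed exponential): the naive percentile bootstrap and the t-statistic (studentized) bootstrap for the mean.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=100)   # skewed sample, true mean equals 2
n, B = len(x), 5000
x_bar, se = x.mean(), x.std(ddof=1) / np.sqrt(n)

boot_means, boot_t = np.empty(B), np.empty(B)
for b in range(B):
    xb = rng.choice(x, size=n, replace=True)
    boot_means[b] = xb.mean()
    boot_t[b] = (xb.mean() - x_bar) / (xb.std(ddof=1) / np.sqrt(n))

# naive percentile interval
print(np.percentile(boot_means, [2.5, 97.5]))
# t-stat bootstrap: replace the normal quantiles by bootstrap quantiles of the pivot
t_lo, t_hi = np.percentile(boot_t, [2.5, 97.5])
print(x_bar - t_hi * se, x_bar - t_lo * se)
</syntaxhighlight>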
2024-10-10, lecture 6: Gradient boosting for regression. Residual vector as minus gradient. Properties of logistic function.
- Alexey Natekin, Alois Knoll, Gradient boosting machines.
- Cheng Li, Gentle Introduction to Gradient Boosting
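A toy sketch of the main idea of lecture 6 (our own code, with shallow scikit-learn trees as base learners): for squared loss the residual vector is exactly minus the gradient of the loss with respect to the current prediction, so each new tree is fit to the current residuals.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=300)

learning_rate, n_trees = 0.1, 200
F = np.full_like(y, y.mean())            # initial constant prediction
trees = []
for _ in range(n_trees):
    residuals = y - F                     # minus the gradient of 0.5 * (y - F)^2
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    trees.append(tree)
    F += learning_rate * tree.predict(X)  # small step in the direction of the new tree

print("train MSE:", np.mean((y - F) ** 2))
</syntaxhighlight>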
2024-10-17, lecture 7: Gradient of logit model in general form. One-to-one correspondence between probabilities and log-odds. Gradient boosting for classification.
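A short sketch in our own notation of the lecture 7 ingredients: the sigmoid and the log-odds (logit) are inverse maps, and the gradient of the logit negative log-likelihood with respect to the weights has the compact form X'(sigmoid(Xw) - y).

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def logit(p):
    return np.log(p / (1 - p))

p = 0.8
print(sigmoid(logit(p)))        # the round trip returns 0.8: probabilities and log-odds are in one-to-one correspondence

# gradient of the negative log-likelihood of a logit model at weights w
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = (rng.uniform(size=50) < 0.5).astype(float)
w = np.zeros(3)
grad = X.T @ (sigmoid(X @ w) - y)
print(grad)
</syntaxhighlight>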
2024-10-24, lecture 8: Cross validation: leave-one-out, k-fold. Importance for random forest: mean decrease of impurity. Permutation based importance.
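A scikit-learn sketch tying together lecture 8 (dataset and library are our choice): k-fold cross-validation of a random forest, the impurity-based importances stored in the fitted model, and permutation importance computed on held-out data.

<syntaxhighlight lang="python">
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(forest, X_train, y_train, cv=5).mean())    # 5-fold CV accuracy

forest.fit(X_train, y_train)
print(forest.feature_importances_[:5])                           # mean decrease of impurity
perm = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
print(perm.importances_mean[:5])                                  # permutation-based importance
</syntaxhighlight>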
2024-11-07, lecture 9: Differential in matrix form, derivation of beta hat in multivariate regression.
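A compact version of the lecture 9 derivation in standard OLS notation (our writeup): differentiate the residual sum of squares in matrix form, set the differential to zero, and solve the normal equations, assuming X has full column rank so that X'X is invertible.

<math>
\begin{aligned}
RSS(\beta) &= (y - X\beta)^\top (y - X\beta),\\
d\,RSS &= -2\,(y - X\beta)^\top X \, d\beta,\\
d\,RSS = 0 \ \text{for all } d\beta &\;\Longrightarrow\; X^\top (y - X\hat\beta) = 0
\;\Longrightarrow\; \hat\beta = (X^\top X)^{-1} X^\top y .
\end{aligned}
</math>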
2024-11-07: Midterm
Past courses
Fall 2022: wiki.