Panda-metrics-2024-25

From the Faculty of Computer Science Wiki
== What-about ==
  
Course [https://github.com/bdemeshev/hse_panda_metrics_2024_2025/raw/main/whitepaper.pdf whitepaper]

=== Course goals ===
  
 
侍には目標がなく道しかない [Samurai niwa mokuhyō ga naku michi shikanai]

A samurai has no goal, only a path.
  
== Communication ==

Telegram [https://t.me/+gBipDIgUZz9jMzUy channel], Telegram [https://t.me/+7zJSwLK_W3E0Mjky chat]

Hand-made (with love) lecture and class [https://e.pcloud.link/publink/show?code=kZokDPZHn2baBhrf6hnACr2r9BLjHaGGsLX video recordings] + official videos in a [https://disk.yandex.ru/d/wI6ZO59DuHl9XQ Yandex folder]
== Grading ==

Semester-1 grade = 0.2 HA-1 + 0.4 Midterm-Exam1 + 0.4 Exam-Semester1.

Midterm-Exam1 is scheduled in Module 2.

Grades for HA-1, Midterm-Exam1 and Exam-Semester1 are integers from 0 to 100.

Semester-2 grade = 0.2 HA-2 + 0.4 Midterm-Exam2 + 0.4 Exam-Semester2.

Grades for HA-2, Midterm-Exam2 and Exam-Semester2 are integers from 0 to 100.

Final course grade = 0.5 Semester-1 grade + 0.5 Semester-2 grade.

When necessary, 0-100 grades are converted into 0-10 grades by dividing by 10 and applying standard rounding.

'''Midterm 1: 12th November, 18:10.'''

[https://docs.google.com/spreadsheets/d/13BcvqNW6-ITyoV-oJ_ePg8oatWDjVj75Wkovxbj0RC8/edit?usp=sharing Actual grades]
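The grading arithmetic above can be sketched in a few lines of Python. The example scores are made up, and round-half-up is assumed for "standard rounding":

```python
def semester_grade(ha: float, midterm: float, exam: float) -> float:
    """Semester grade on the 0-100 scale: 0.2 * HA + 0.4 * Midterm + 0.4 * Exam."""
    return 0.2 * ha + 0.4 * midterm + 0.4 * exam

def to_ten_point(grade_100: float) -> int:
    """Convert a 0-100 grade to 0-10: divide by 10, round half up (assumed)."""
    return int(grade_100 / 10 + 0.5)

# Made-up example scores
sem1 = semester_grade(ha=90, midterm=70, exam=80)
sem2 = semester_grade(ha=60, midterm=85, exam=75)
final = 0.5 * sem1 + 0.5 * sem2
print(round(final, 2), to_ten_point(final))  # 77.0 8
```
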
  
 
=== Home assignments ===
 
[https://github.com/bdemeshev/hse_panda_metrics_2024_2025/raw/main/home_assignments/home_assignments.pdf Home assignments :)]

You have 4 honey weeks for the entire course.

All home assignments of the first semester have equal weights.

All home assignments of the second semester have equal weights.
  
 
=== Exams ===

=== Samurai diary ===
  
 
2024-09-02, lecture 1: Derivation of beta hat in the cases of a very simple regression and multiple regression.

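The lecture-1 formula can be checked numerically. A minimal sketch (assumes numpy; the data are simulated): the closed-form estimator (X'X)⁻¹X'y coincides with a library least-squares solve.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Design matrix: constant + two random regressors
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -3.0]) + rng.normal(size=n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)        # (X'X)^{-1} X'y
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)  # library least squares

assert np.allclose(beta_hat, beta_lstsq)
```
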
2024-09-09, lecture 2: Geometry of regression. The fitted vector is the projection of the y-vector onto the span of the regressors. Hat-matrix: definition, simple properties. SST, SSE, SSR: definition, Pythagorean theorem: SST = SSE + SSR.

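The hat-matrix properties and the Pythagorean decomposition from lecture 2 are easy to verify on simulated data (a numpy sketch; since SSE/SSR naming conventions vary, the check below spells out explained and residual sums):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = 2 + 3 * X[:, 1] + rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T  # hat matrix: projects y onto span of regressors
y_fit = H @ y

# A projection matrix is symmetric and idempotent
assert np.allclose(H, H.T) and np.allclose(H @ H, H)

# Pythagoras (regression contains a constant, so deviations from the mean decompose)
sst = np.sum((y - y.mean()) ** 2)
ss_explained = np.sum((y_fit - y.mean()) ** 2)
ss_residual = np.sum((y - y_fit) ** 2)
assert np.isclose(sst, ss_explained + ss_residual)
```
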
2024-09-16, lecture 3: Conditional expected value, conditional variance. Statistical assumptions for simple regression. Expected value of beta hat for simple regression. Statistical assumptions for multiple regression. Expected value of beta hat for multiple regression. Variance of beta hat for multiple regression.

2024-09-23, lecture 4: Properties of conditional variance and conditional covariance in matrix form. Gauss-Markov assumptions. The hat matrix is proportional to the conditional variance of forecasts. Proof of the Gauss-Markov theorem through Pythagoras.

* Geometry in [https://raw.githubusercontent.com/olyagnilova/gauss-markov-pythagoras/master/paper.pdf econometrics]

2024-09-30, lecture 5: Consistency of beta hat in matrix form. Inconsistency of beta hat in a simple regression with measurement error in the regressor.

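The lecture-5 inconsistency can be seen in a short Monte Carlo sketch (assumes numpy; all numbers simulated). With measurement error in the regressor, the OLS slope converges to the attenuated value beta · Var(x) / (Var(x) + Var(noise)) instead of beta:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 100_000, 2.0
x = rng.normal(size=n)                  # true regressor, Var = 1
y = beta * x + rng.normal(size=n)
x_obs = x + rng.normal(size=n)          # observed with error, Var(noise) = 1

xo = x_obs - x_obs.mean()
slope = (xo @ (y - y.mean())) / (xo @ xo)
print(slope)  # close to beta * 1 / (1 + 1) = 1.0, not 2.0
```
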
2024-10-07, lecture 6: Estimating the variance of the random error: unbiasedness of SSRes / (n - k), consistency of SSRes / (n - k).

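The unbiasedness claim from lecture 6 can be illustrated by simulation (a numpy sketch with a fixed design matrix; parameter values are made up):

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, sigma2 = 30, 3, 4.0
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
beta = np.array([1.0, 2.0, -1.0])

# Average of SSRes / (n - k) over many simulated samples
estimates = []
for _ in range(2000):
    y = X @ beta + rng.normal(scale=sigma2 ** 0.5, size=n)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    estimates.append(resid @ resid / (n - k))

print(np.mean(estimates))  # close to sigma2 = 4.0
```
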
2024-10-14, lecture 7: Herschel-Maxwell assumptions give us the normal distribution. Chi-squared distribution as the squared length of the projection of a standard normal vector onto a d-dimensional subspace. Proof that the t-statistic in multivariate regression has a t-distribution.

* 3b1b [https://www.youtube.com/watch?v=cy8r7WSuT1I Herschel-Maxwell assumptions] and the multivariate normal

2024-10-21, lecture 8: Bootstrap before regression: naive bootstrap, t-statistic bootstrap. Regression with bootstrap: pair bootstrap, wild bootstrap (+1/-1 version).

* Tim Hesterberg, [https://arxiv.org/abs/1411.5279 What Teachers Should Know about the Bootstrap]: a fun and enjoyable introduction to the bootstrap!

* Russell Davidson, James G. MacKinnon, [http://qed.econ.queensu.ca/pub/faculty/mackinnon/rd-jgm-bootstrap-methods-2006.pdf Bootstrap methods in econometrics]

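A minimal sketch of the pair bootstrap from lecture 8 (assumes numpy; simulated data): resample (x_i, y_i) pairs with replacement, re-estimate the slope each time, and take the standard deviation of the bootstrap slopes as a standard error.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(size=n)

def ols_slope(x, y):
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / (xc @ xc)

boot_slopes = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)       # resample row indices with replacement
    boot_slopes.append(ols_slope(x[idx], y[idx]))

se_boot = np.std(boot_slopes, ddof=1)
print(round(se_boot, 3))  # roughly sigma / (sqrt(n) * sd(x)), here about 0.07
```
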
2024-11-04, lecture 9:

2024-11-11, lecture 10:

2024-11-18, lecture 11: Definition of the SVD decomposition, definition and properties of orthogonal matrices. Example: SVD decomposition of the column (2, 7). PCA as consecutive sample variance maximization.

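The lecture-11 example can be reproduced with numpy (a sketch): the column (2, 7) has a single singular value equal to its length, sqrt(4 + 49) = sqrt(53).

```python
import numpy as np

A = np.array([[2.0], [7.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

assert np.isclose(s[0], np.sqrt(53))      # sigma_1 = ||(2, 7)||
assert np.allclose(U * s @ Vt, A)         # A = U Sigma V'
assert np.allclose(Vt @ Vt.T, np.eye(1))  # V is orthogonal (here 1x1, i.e. ±1)
```
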
2024-11-25, lecture 12:

2024-12-02, lecture 13: Definition of conditional heteroskedasticity, example of WLS for Var(u_i | X) = sigma^2 / x_i^2, HC0 standard errors, HC3 standard errors from cross-validation, LM test for heteroskedasticity as nR^2 in an auxiliary regression.

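A numpy sketch of the HC0 sandwich estimator mentioned in lecture 13, on simulated heteroskedastic data: Var_hat(beta hat) = (X'X)⁻¹ X' diag(e_i²) X (X'X)⁻¹.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
u = rng.normal(size=n) * (0.5 + X[:, 1] ** 2)  # heteroskedastic errors
y = X @ np.array([1.0, 2.0]) + u

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
e = y - X @ beta_hat                           # OLS residuals

meat = X.T @ (X * (e ** 2)[:, None])           # X' diag(e^2) X
vcov_hc0 = XtX_inv @ meat @ XtX_inv
se_hc0 = np.sqrt(np.diag(vcov_hc0))
print(se_hc0)
```
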
2024-12-09, lecture 14: White, Breusch-Pagan and Goldfeld-Quandt tests, proof that the F-statistic is asymptotically equivalent to nR^2. Alternative way to calculate leave-one-out residuals in multivariate regression: divide ordinary residuals by 1 - H_ii (without proof).

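The leave-one-out shortcut stated (without proof) in lecture 14 can be verified numerically: e_i / (1 - H_ii) equals the prediction error from refitting the model without observation i. A numpy sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

H = X @ np.linalg.inv(X.T @ X) @ X.T
e = y - H @ y                                # ordinary residuals
loo_shortcut = e / (1 - np.diag(H))

# Brute force: refit without observation i, predict it, take the error
loo_direct = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo_direct[i] = y[i] - X[i] @ b

assert np.allclose(loo_shortcut, loo_direct)
```
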
2024-12-16, lecture 15:

=== Classes ===

Class [https://github.com/bdemeshev/hse_panda_metrics_2024_2025/tree/main/course_notes notes]

Maria Kirillova's [https://disk.yandex.ru/d/b7XZwMnboHoF4Q notes]

2024-09-06, class 1: 1.1, 1.2 from [https://github.com/bdemeshev/metrics_pro/raw/master/metrics_pro_en.pdf MPro]

2024-09-13, class 2: 3.2, 3.10, 3.7 from [https://github.com/bdemeshev/metrics_pro/raw/master/metrics_pro_en.pdf MPro]

2024-09-20, class 3: 5.5 from [https://github.com/bdemeshev/metrics_pro/raw/master/metrics_pro_en.pdf MPro], derivation of the variance of the slope estimate for simple regression.

2024-09-27, class 4:

2024-10-04, class 5:

2024-10-11, class 6: confidence interval for beta, hypothesis test for beta, test of equality of two betas, confidence interval for the conditional expected value of a forecast.

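A sketch of the class-6 confidence interval for the slope in a simple regression (assumes numpy; data simulated): beta hat ± t_crit · se(beta hat), with s² = SSRes / (n - 2). The critical value below is taken from a t-table rather than computed.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
y = 1 + 2 * x + rng.normal(size=n)

xc = x - x.mean()
beta_hat = (xc @ (y - y.mean())) / (xc @ xc)
alpha_hat = y.mean() - beta_hat * x.mean()
resid = y - alpha_hat - beta_hat * x
s2 = resid @ resid / (n - 2)          # unbiased estimate of sigma^2
se = np.sqrt(s2 / (xc @ xc))

t_crit = 1.972                        # t_{0.975, 198}, from tables
ci = (beta_hat - t_crit * se, beta_hat + t_crit * se)
print(ci)  # covers the true slope 2 about 95% of the time
```
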
2024-10-18, class 7: F-test. F-test for regression significance. Constructing restricted model. Chow test.

2024-11-01, class 8: Calculating probabilities, expected values, variances and covariances for the naive bootstrap.

2024-11-08, class 9:

2024-11-15, class 10:

2024-11-22, class 11:

2024-11-29, class 12:

2024-12-06, class 13:

2024-12-13, class 14: explicit formula for the instrumental variables estimator, equivalence of the IV and two-stage least squares estimators.

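The class-14 equivalence can be checked numerically in the just-identified case (a numpy sketch on simulated data): the explicit formula (Z'X)⁻¹Z'y matches two-stage least squares.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 1000
z = rng.normal(size=n)                       # instrument
v = rng.normal(size=n)
x = z + v                                    # endogenous regressor
u = v + rng.normal(size=n)                   # error correlated with x
y = 1 + 2 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)  # explicit IV formula (Z'X)^{-1} Z'y

# 2SLS: first stage regresses X on Z, second stage uses the fitted values
X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
beta_2sls = np.linalg.lstsq(X_hat, y, rcond=None)[0]

assert np.allclose(beta_iv, beta_2sls)
print(beta_iv)  # close to (1, 2); plain OLS would be biased here
```
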
== Sources of Wisdom ==

[https://causalml-book.org/ CausML]: Causality in ML book with Python and R code

[https://github.com/bdemeshev/metrics_pro/raw/master/metrics_pro_en.pdf MPro-en]: Problem set for classes (translation in progress)

[https://github.com/bdemeshev/metrics_pro/raw/master/metrics_pro.pdf MPro-ru]: Problem set for classes (in Russian)

Current version as of 15:52, 15 December 2024.
