Neurobayesian models 2019


The page is not ready yet!

Lecturer: Dmitry Vetrov

Tutors: Alexander Grishin, Kirill Struminsky, Dmitry Molchanov, Kirill Neklyudov, Artem Sobolev, Arsenii Ashukha, Oleg Ivanov, Ekaterina Lobacheva.

Contacts: All questions should be addressed to bayesml@gmail.com. The subject line of any letter must contain the following tag: [HSE NBM19]. Letters without the tag will most probably be lost in the inbox.

We also have a chat in Telegram (a link to it was sent to the group e-mail). Its main language is Russian, but all questions asked in English will be answered in English. All important news will be announced in English in the chat and also sent to the group e-mail.

Course description

This course is devoted to Bayesian reasoning in application to deep learning models. Attendees will learn how to use probabilistic modeling to construct neural generative and discriminative models, how to use the paradigm of generative adversarial networks to perform approximate Bayesian inference, and how to model the uncertainty about the weights of neural networks. Selected open problems in the field of deep learning will also be discussed. The practical assignments will cover the implementation of several modern Bayesian deep learning models.

Course syllabus

News

Grading System

The assessment consists of 3 practical assignments and a final oral exam. The practical assignments involve implementing some of the models/methods from the course in Python and analysing their behavior: VAE, normalizing flows, sparse variational dropout. At the final exam, students have to demonstrate knowledge of the material covered during the entire course.

The final course grade is obtained from the following formula:

O_final = 0.7 * O_cumulative + 0.3 * O_exam,

where O_cumulative is the average grade over the practical assignments and O_exam is the exam grade.

All grades are on a ten-point scale. If O_cumulative or O_final has a fractional part greater than or equal to 0.5, it is rounded up.
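
For concreteness, here is a minimal Python sketch of the grading arithmetic above. The function names are illustrative and not part of any official course tooling, and it assumes the rounding rule is applied to O_cumulative before the formula and to O_final afterwards:

    import math

    def round_half_up(x):
        """Round up when the fractional part is >= 0.5, as stated above."""
        frac = x - math.floor(x)
        return math.floor(x) + (1 if frac >= 0.5 else 0)

    def final_grade(assignment_grades, exam_grade):
        """O_final = 0.7 * O_cumulative + 0.3 * O_exam on the ten-point scale."""
        o_cumulative = round_half_up(sum(assignment_grades) / len(assignment_grades))
        o_final = 0.7 * o_cumulative + 0.3 * exam_grade
        return round_half_up(o_final)

    # Example: assignments graded 8, 9 and 7 (O_cumulative = 8) and an exam grade of 9
    # give O_final = 0.7 * 8 + 0.3 * 9 = 8.3, which rounds down to 8.
    print(final_grade([8, 9, 7], 9))  # 8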

Assignments

  • The course contains three practical assignments. Solutions should be submitted via Anytask. To get an invite, please write to the course e-mail. The site has a Russian-only interface, so foreign students may submit their solutions to the course e-mail instead. In this case, the subject line of the letter should contain, in addition to the tag, your name, surname and the assignment number.
  • All assignments should be coded in Python 3.
  • Students have to complete all assignments by themselves. Using code from your colleagues or from open implementations is prohibited and will be considered plagiarism. All involved students (including those who shared their solutions) will be severely penalized.
  • Each assignment is scored up to 10 points. Each assignment has a deadline; a penalty of 0.3 points is charged for each day of delay, but no more than 6 points in total (see the sketch after this list). Usually you will have 2 weeks to solve an assignment. Some assignments may contain bonus parts.
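
A minimal sketch of the late-penalty rule from the list above (the function name is illustrative):

    def penalized_score(raw_score, days_late):
        """Subtract 0.3 points per day of delay, but never more than 6 points in total."""
        return raw_score - min(0.3 * days_late, 6.0)

    # Example: a 10-point solution submitted 5 days late is worth 10 - 1.5 = 8.5 points.
    print(penalized_score(10.0, 5))  # 8.5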

Approximate dates for assignments: TBA

At the end of the module, before the exam, there will be a hard deadline for all assignments! The exact date will be announced later.

Exam

TBA

Course Plan

1. Lecture (24 January): Stochastic Variational Inference. Materials: article.
   Seminar (31 January): Application of SVI to Latent Dirichlet Allocation model.
2. Lecture (31 January): Doubly Stochastic Variational Inference. Materials: TBA.
   Seminar (7 February): Doubly Stochastic Variational Inference.
3. Lecture (7 February): Variational autoencoders (VAE) and normalizing flows (NF). Materials: VAE article, NF article.
   Seminar (14 February): Importance Weighted Autoencoders + more complex NF. Materials: TBA.
4. Lecture (14 February): Implicit Variational Inference using Adversarial Training. Materials: article.
   Seminar (21 February): f-GAN. Materials: article.
5. Lecture (21 February): Bayesian neural networks. Materials: article, article, article.
   Seminar (28 February): Local reparametrization trick. Materials: article.
6. Lecture (28 February): Bayesian compression of neural networks. Materials: article, article.
   Seminar (7 March): Deep Markov chain Monte Carlo (MCMC). Materials: article, article.
7. Lecture (7 March): Discrete Latent Variables and Variance Reduction. Materials (shared with the seminar): article, article, article.
   Seminar (14 March): Discrete Latent Variables and Variance Reduction.
8. Lecture (14 March): Semi-implicit variational inference. Materials: article, article.
   Seminar (21 March): VampPrior. Materials: article, article.

Reading List

  • Murphy K.P. Machine Learning: A Probabilistic Perspective. The MIT Press, 2012.
  • Bishop C.M. Pattern Recognition and Machine Learning. Springer, 2006.
  • MacKay D.J.C. Information Theory, Inference, and Learning Algorithms. Cambridge University Press, 2003.
  • Goodfellow I., Bengio Y., Courville A. Deep Learning. MIT Press, 2016.

Useful links

The same course in Russian at MSU (contains more materials in Russian).
BayesGroup page.