Stéphane Gaïffas – Teaching

Introduction to Machine Learning (Master M2MO)

Syllabus

Machine learning is a scientific discipline concerned with the design and development of algorithms that allow computers to learn from data. A major focus of machine learning is to automatically learn complex patterns and to make intelligent decisions based on them. The set of possible data inputs that feed a learning task can be very large and diverse, which makes modeling and prior assumptions critical issues in the design of relevant algorithms.

This course focuses on the methodology underlying supervised and unsupervised learning, with a particular emphasis on the mathematical formulation of algorithms and the way they can be implemented and used in practice. The course will, for instance, describe the necessary tools from optimization theory and explain how to use them for machine learning. Numerical illustrations and applications to datasets will be given for the methods studied in the course.

Format

  • We use Moodle to centralize all teaching material (slides, notebooks, datasets, exercises)

  • Lectures use slides and the blackboard; all material is in English

  • Practical sessions use Python, Jupyter, scikit-learn and TensorFlow (deep learning), together with the standard scientific stack (NumPy, SciPy, Matplotlib); a minimal sketch of this stack is given after this list

  • Practical sessions will start with a quick introduction to Python, Jupyter notebooks, and the libraries needed for data science
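
For readers unfamiliar with this stack, here is a minimal sketch of the kind of code used in the practical sessions; the simulated data and the least-squares fit below are illustrative choices, not the actual notebook content.

    # A minimal sketch of the standard scientific Python stack.
    # The simulated data below are illustrative only.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(seed=0)

    # Simulate a simple linear model y = 2x + noise
    x = np.linspace(0.0, 1.0, 100)
    y = 2.0 * x + 0.1 * rng.standard_normal(x.shape)

    # Least-squares fit of a degree-1 polynomial (slope and intercept)
    slope, intercept = np.polyfit(x, y, deg=1)

    plt.scatter(x, y, s=10, label="data")
    plt.plot(x, slope * x + intercept, color="red", label="least-squares fit")
    plt.legend()
    plt.show()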

When and where

  • Tuesdays: Sept. 12, 19, 26; Oct. 3, 10, 17, 24, 31; Nov. 7

  • Lectures: 13:30–16:00, room 0011

  • Practical sessions: 16:15–18:45, room 2005

Evaluation

  • Practical session work: 40% (Jupyter notebooks from the Tuesday session of week w must be submitted on the Moodle platform before Monday 23:59 of week w+1)

  • Final exam: 60%, Nov. 14, 13:30–15:30, Amphi 4C (Halle aux Farines)

Agenda of the course

1. Introduction to supervised learning (3 sessions)

  • Binary classification, regression, standard metrics and recipes (overfitting, cross-validation); an illustrative scikit-learn sketch is given after this list

  • LDA / QDA for Gaussian models

  • Logistic regression, Generalized Linear Models

  • Regularization (Ridge, Lasso, etc.)

  • Support Vector Machines and the hinge loss

  • Kernel methods

  • Decision trees, CART, Boosting
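
As a concrete illustration of some of these topics (binary classification, regularization, kernels, cross-validation), here is a minimal scikit-learn sketch; the breast-cancer dataset, the models and the hyper-parameters are placeholder choices, not the course's own examples.

    # Illustrative sketch: binary classification with cross-validated AUC
    # for an L2-penalized logistic regression, a linear SVM and an RBF-kernel SVM.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_breast_cancer(return_X_y=True)

    models = {
        "logistic regression (L2)": LogisticRegression(C=1.0, max_iter=5000),
        "linear SVM (hinge loss)": SVC(kernel="linear", C=1.0),
        "kernel SVM (RBF)": SVC(kernel="rbf", C=1.0, gamma="scale"),
    }

    for name, model in models.items():
        # Standardize features, then fit; AUC estimated by 5-fold cross-validation
        pipe = make_pipeline(StandardScaler(), model)
        scores = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
        print(f"{name:25s} AUC = {scores.mean():.3f} +/- {scores.std():.3f}")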

2. Optimization for Machine Learning (2 sessions)

  • Proximal gradient descent (a minimal NumPy sketch is given after this list)

  • Coordinate descent / coordinate gradient descent

  • Quasi-Newton methods

  • Stochastic gradient descent and beyond
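
To make the first of these topics concrete, here is a minimal NumPy sketch of proximal gradient descent (ISTA) applied to the Lasso objective (1/(2n))||y - Xw||^2 + lam * ||w||_1; the synthetic data, step-size rule and function names are illustrative, not the course's reference implementation.

    # Illustrative sketch: proximal gradient descent (ISTA) for the Lasso
    #   min_w  (1 / (2 n)) * ||y - X w||_2^2 + lam * ||w||_1
    import numpy as np

    def soft_threshold(w, threshold):
        """Proximal operator of the L1 norm (soft-thresholding)."""
        return np.sign(w) * np.maximum(np.abs(w) - threshold, 0.0)

    def lasso_ista(X, y, lam, n_iter=500):
        n, d = X.shape
        # Step size 1 / L, where L = ||X||_2^2 / n is the Lipschitz constant
        # of the gradient of the least-squares term
        step = n / np.linalg.norm(X, ord=2) ** 2
        w = np.zeros(d)
        for _ in range(n_iter):
            grad = X.T @ (X @ w - y) / n                     # gradient of the smooth part
            w = soft_threshold(w - step * grad, step * lam)  # proximal step
        return w

    # Small synthetic check: a sparse ground truth is approximately recovered
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))
    w_true = np.zeros(50)
    w_true[:5] = 1.0
    y = X @ w_true + 0.1 * rng.standard_normal(200)
    w_hat = lasso_ista(X, y, lam=0.1)
    print("non-zero coefficients found:", int(np.sum(np.abs(w_hat) > 1e-3)))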

3. Neural Networks (1.5 sessions)

  • Introduction to neural networks

  • The perceptron, multilayer neural networks, deep learning (a minimal TensorFlow sketch is given after this list)

  • Adaptive-rate stochastic gradient descent, back-propagation

  • Convolutional neural networks
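
As a pointer to what the practical sessions may look like, here is a minimal multilayer network written with the tf.keras API; the MNIST dataset, the architecture and the hyper-parameters are illustrative placeholders, with Adam used as an example of an adaptive-rate stochastic gradient method.

    # Illustrative sketch: a small multilayer network trained by back-propagation
    # with an adaptive-rate stochastic gradient method (Adam).
    import tensorflow as tf

    # MNIST is bundled with Keras; pixel values are rescaled to [0, 1]
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=3, batch_size=128,
              validation_data=(x_test, y_test))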

4. Unsupervised learning (2.5 sessions)

  • Gaussian mixtures and the EM algorithm (a short scikit-learn sketch is given after this list)

  • Matrix Factorization, Non-negative Matrix Factorization

  • Factorization machines

  • Embedding methods
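
For illustration, here is a short scikit-learn sketch of two of these topics, a Gaussian mixture fitted by the EM algorithm and a non-negative matrix factorization; the synthetic data and the chosen ranks are placeholders.

    # Illustrative sketch: Gaussian mixture fitted by EM, and NMF.
    import numpy as np
    from sklearn.datasets import make_blobs
    from sklearn.decomposition import NMF
    from sklearn.mixture import GaussianMixture

    # Gaussian mixture model fitted by the EM algorithm
    X, _ = make_blobs(n_samples=500, centers=3, random_state=0)
    gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
    print("mixture weights:", np.round(gmm.weights_, 2))

    # Non-negative matrix factorization of a non-negative data matrix V ~ W @ H
    V = np.abs(np.random.default_rng(0).standard_normal((100, 20)))
    nmf = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
    W = nmf.fit_transform(V)
    H = nmf.components_
    print("reconstruction error:", round(float(nmf.reconstruction_err_), 3))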

References

  1. Machine Learning: A Probabilistic Perspective, K. P. Murphy, MIT Press

  2. Foundations of Machine Learning, M. Mohri, A. Rostamizadeh and A. Talwalkar, MIT Press

  3. Deep Learning, I. Goodfellow, Y. Bengio and A. Courville, MIT Press

  4. Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython, W. McKinney, O'Reilly

  5. Statistics for High-Dimensional Data: Methods, Theory and Applications, P. Bühlmann, S. van de Geer, Springer-Verlag

Course material

All course material is available on the university's Moodle platform.