Course Overview

Neural networks have increasingly taken over AI, and currently produce the state of the art in tasks ranging from computer vision and planning for self-driving cars to playing computer games. Basic knowledge of neural networks (NNs), known in the popular literature as "deep learning", familiarity with their various formalisms, and working knowledge of the associated tools are now essential for any researcher or developer in most AI and NLP fields. This course is a broad introduction to neural networks and their "deep" learning formalisms. The course traces some of the development of neural network theory and design through time, leading quickly to a discussion of various network formalisms, including simple feedforward, convolutional, recurrent, and probabilistic formalisms, the rationale behind their development, the challenges of training such networks, and various proposed solutions. We subsequently cover extensions and models that enable application to tasks such as computer vision, speech recognition, machine translation, and game playing.


This course is intended for graduate students and qualified undergraduate students with a strong mathematical and programming background. Undergraduate-level training or coursework in algorithms, linear algebra, calculus, probability, and statistics is suggested. A background in programming will also be necessary for the problem sets; students are expected to be familiar with Python or to learn it during the course.


There is no required textbook, though we suggest the following books (all available online) to support your study. We will provide suggested readings from these books in the schedule below.


We will use Piazza for class discussions. Please go to the course Piazza site to join the course forum (note: you must use an email account to join the forum). We strongly encourage students to post on this forum rather than emailing the course staff directly, as this is more efficient for both students and staff. Students should use Piazza to:

The course Academic Integrity Policy must be followed on the message boards at all times. Do not post or request homework solutions! Also, please be polite.

Grading Policy

The grading policy depends on which course number you are registered for:

Academic Integrity Policy

Group studying and collaborating on problem sets are encouraged, as working together is a great way to understand new material. Students are free to discuss the homework problems with anyone under the following conditions:

Students are also encouraged to read CMU's Policy on Cheating and Plagiarism.

Using LaTeX

Students are strongly encouraged to use LaTeX for problem sets. LaTeX makes it simple to typeset mathematical equations and is an extremely useful skill for graduate students. Most of the academic papers you read, and probably most of your textbooks, were written in LaTeX. Here is an excellent LaTeX tutorial, and here are instructions for installing LaTeX on your machine.
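As a quick illustration of the kind of typesetting you will do in the problem sets, here is a minimal, self-contained LaTeX document (the specific equation, a gradient descent update, is just an illustrative choice, not part of any assignment):

```latex
\documentclass{article}
\usepackage{amsmath} % standard package for display equations

\begin{document}
The gradient descent update for parameters $w$ with learning rate
$\eta$ and loss $L$ is
\begin{equation}
  w^{(t+1)} = w^{(t)} - \eta \, \nabla_{w} L\bigl(w^{(t)}\bigr).
\end{equation}
\end{document}
```

Compiling this file with `pdflatex` produces a one-page PDF with a numbered display equation; Overleaf will compile the same source in the browser without a local installation.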


This course is based in part on material developed by Bhiksha Raj (CMU) and Chinmay Hegde (NYU). The course website follows the template of 18-661.

Schedule (Subject to Change)

Date | Topics | Reading | HW
1/16 Introduction [Slides]
1/18 MLP 1 - Linear and Multi-layer Perceptrons [Slides]
1/19 Recitation [Slides] HW 1 Release
1/23 MLP 2 - Universal function approximation [Slides]
1/25 MLP 3 - Training / Backprop [Slides]
1/26 Recitation [Slides] HW 1 Due; HW 2 Release
1/30 MLP 4 - Optimization [Slides]
2/1 MLP 5 - Bags of tricks [Slides]
2/2 Recitation [Slides]
2/6 CNN 1 - Basics [Slides]
2/8 CNN 2 - Building blocks, backprop
2/9 Recitation [Slides] HW 2 Due; HW 3 Release
2/13 CNN 3 - Image classification [Slides]
2/15 CNN 4 - Detection, Segmentation
2/16 Recitation
2/20 RNN 1 - Basics, LSTM [Slides]
2/22 RNN 2 - LSTM, Backprop
2/23 Recitation [Slides] HW 3 Due; HW 4 Release
2/27 RNN 3 - Transformers
2/29 RNN 4 - LLMs
3/1 Recitation Cancelled HW 4 Due (18-780)
3/5 Spring Break
3/7 Spring Break
3/8 Spring Break
3/12 Generative 1 - VAEs [Slides] Project Release
3/14 Generative 2 - GANs [Slides]
3/15 Recitation [Slides] HW 4 Due (18-786); HW 5 Release
3/19 Generative 3 - Diffusion models [Slides]
3/21 Generative 4 - Diffusion models [Slides]
3/22 Recitation [Slides]
3/26 RL 1 - Basics and TD learning [Slides]
3/28 RL 2 - Deep Q-learning and policy gradients [Slides]
3/29 Recitation HW 5 Due
4/2 INR 1 - Basics [Slides]
4/4 INR 2 - Building features
4/5 Recitation
4/9 INR 3 - NeRFs, IDR, etc.
4/11 Carnival
4/12 Carnival
4/16 Misc 1 - Deep optics
4/18 Misc 2 - Adversarial attacks
4/19 Recitation
4/23 Misc 3 - LLM efficiency
4/25 Guest Lecture TBD
4/26 Recitation