An online deep learning course for humanists
Table of Contents
- Read Online
- Project Description
- Slides
- Course Outline
- Chapter 0 Background Knowledge
- Chapter 1 Introduction
- Chapter 2 Perceptron
- Chapter 3 Multilayer Perceptron (deep feedforward networks)
- Chapter 4 Forward Propagation
- Chapter 5 Learning: Training Neural Networks
- Physical Experiment
- Chapter 6 Make your own neural network to classify handwritten digits
- References
- About Me
Read Online
This book is powered by Jekyll Book, so you can read it online.
Project Description
This is my Google Summer of Code 2019 Project with Red Hen Lab.
The project goal is to design and develop an online course that teaches deep learning to students in the humanities and social sciences. The course covers basic deep learning theory together with lab case studies from multimodal communication.
Project Mentors: Francis Steen, Mark Turner and Rajesh Kasturirangan.
Slides
All the course slides can be found in this directory.
Lesson1-Introduction
Keynote
Keynote Content
- Applications of deep learning
- What is Artificial Intelligence?
- What is Machine Learning?
- What is Deep Learning?
- Limitations of deep learning
Lesson2-Perceptron & Multilayer Perceptron
Keynote
Keynote Content
- Logic-gate neurons
- Neuron-like perceptron
- Neurons are more powerful
- Color Factory
- Multilayer perceptron
- Why is the middle layer called a hidden layer?
- Activation functions
- Commonly used activation functions
- Why must the activation functions be non-linear?
- Forward Propagation
- Example: Handwritten Digits Recognition
Lesson3-Training Neural Networks
Keynote
Content
- Loss function
- Gradient Descent
- What is Gradient Descent?
- Simple Example
- Avoid Overshooting
- Challenges: Local Minima
Lesson4-Backpropagation
Keynote
Content
- Compute Graph
- Example
- Local Computation
- Advantages of the Compute Graph
- The Chain Rule
- Compute Graph & Chain Rule
- Backpropagation
- Backpropagation of addition nodes
- Backpropagation of multiplication nodes
- Backpropagation of ReLU
- Backpropagation of Sigmoid
- Application
- Exercise 1
- Exercise 2
- Summary
Lesson5-CNNs
Keynote
Content
- A Brief History of CNNs
- Why do we need CNNs?
- The Structure of CNNs
- Convolution operation
- Padding & Stride
- Convolution
- Pooling
- Some typical CNNs
- Example: Dog or Cat?
Lesson6-RNNs
Keynote
Content
- Sequence modeling
- Deep feedforward neural networks vs. recurrent neural networks
- Recurrent neural networks
- The Problems of RNNs
- LSTM networks
- Application of RNNs
Lesson7-Neural Network Zoo
Keynote
Course Outline
Chapter 0 Background Knowledge
Programming
Math
- Basic matrix operations, calculus, and statistics.
Chapter 1 Introduction
- What are artificial intelligence, machine learning, and deep learning, and how are they related?
- Environment setup: Anaconda, TensorFlow, and JupyterLab (a quick sanity check is sketched below).
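After setup, a short check like the one below confirms that the environment works. This is a minimal sketch, assuming TensorFlow 2.x was installed into the active Anaconda environment:

```python
# Quick sanity check of the environment (assumes Python 3 and TensorFlow 2.x
# are installed; nothing here is specific to the course materials).
import sys
import tensorflow as tf

print(sys.version)      # Python version
print(tf.__version__)   # TensorFlow version, e.g. "2.x.x"
```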
Chapter 2 Perceptron
Perceptron: the foundational building block of neural networks
- How do we learn? (Biological neuron model)
- How can machines learn? (Artificial neuron -> Perceptron)
Iris Classification Example
- Question and Dataset
- Linear Classifier
- Implement a perceptron
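A minimal perceptron sketch in plain NumPy follows, trained to separate two of the Iris classes. Loading the data through scikit-learn and the specific hyperparameters (learning rate, epochs) are illustrative assumptions, not the course's reference solution:

```python
# Perceptron learning rule on two linearly separable Iris classes.
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data[:100, :2]                      # sepal length/width of setosa + versicolor
y = np.where(iris.target[:100] == 0, -1, 1)  # relabel the two classes as -1 / +1

w = np.zeros(X.shape[1])   # weights
b = 0.0                    # bias
lr = 0.1                   # learning rate (assumed value)

for epoch in range(10):
    for xi, target in zip(X, y):
        pred = 1 if (xi @ w + b) >= 0 else -1   # step activation
        if pred != target:                      # misclassified: apply the update rule
            w += lr * target * xi
            b += lr * target

print("weights:", w, "bias:", b)
```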
Chapter 3 Multilayer Perceptron (deep feedforward networks)
The architecture of Multilayer perceptron
- Nodes
- Input/Output
- Layer
- Input Layer
- Output Layer
- Hidden Layer: Why it is called the hidden layer
- Connection
- Fully connected
- Weights
Activation function
- What is an activation function?
- Commonly used activation functions
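As a sketch, commonly used activation functions fit in a few lines of NumPy. The particular selection here (sigmoid, ReLU, tanh) is an assumption about which functions the slides cover:

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Pass positive values through, clamp negatives to 0."""
    return np.maximum(0.0, z)

def tanh(z):
    """Squash any real number into (-1, 1)."""
    return np.tanh(z)

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))   # [0.119 0.5   0.881]
print(relu(z))      # [0. 0. 2.]
print(tanh(z))      # [-0.964  0.     0.964]
```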
Design Output Layer
- Regression and classification
Chapter 4 Forward Propagation
Forward Process
Matrices
- What is a matrix?
- Multiplying matrices
Applying Matrices to Neural Network Computation
Design the Output Layer
Why do we use non-linear activation functions?
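Putting the chapter together, here is a sketch of forward propagation through a tiny 2-3-2 network using matrix multiplication; all weights, biases, and inputs are made-up illustrative values:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 0.5])           # input vector (2 features)

W1 = np.array([[0.1, 0.3, 0.5],    # weights: input layer -> hidden layer (2x3)
               [0.2, 0.4, 0.6]])
b1 = np.array([0.1, 0.2, 0.3])     # hidden-layer biases
h = sigmoid(x @ W1 + b1)           # hidden activations (3 values)

W2 = np.array([[0.1, 0.4],         # weights: hidden layer -> output layer (3x2)
               [0.2, 0.5],
               [0.3, 0.6]])
b2 = np.array([0.1, 0.2])          # output-layer biases
y = h @ W2 + b2                    # network output (2 values)

print(y)
```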
Chapter 5 Learning: Training Neural Networks
Loss function
- How well does the neural network predict: Loss Function
- Example
- Loss function: Mean Squared Error
- Why do we use squared error instead of raw error?
The empirical loss measures the total loss over the dataset. The loss function is a function of the weights.
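For illustration, a mean squared error sketch in NumPy (note that some texts include a factor of 1/2; the convention here is an assumption):

```python
import numpy as np

def mean_squared_error(y_pred, y_true):
    # MSE = mean of (prediction - target)^2 over all outputs
    return np.mean((y_pred - y_true) ** 2)

y_true = np.array([0.0, 0.0, 1.0, 0.0])   # one-hot target: class 2
good   = np.array([0.1, 0.0, 0.8, 0.1])   # confident, correct prediction
bad    = np.array([0.8, 0.1, 0.0, 0.1])   # confident, wrong prediction

print(mean_squared_error(good, y_true))   # 0.015 (small loss)
print(mean_squared_error(bad, y_true))    # 0.415 (large loss)
```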
Learning to minimize error: Gradient Descent method
- Gradient Descent
- Minimize error
- What is Gradient Descent?
- Greedy algorithm
- Like Hiking Down a Mountain
- Simple Example
- Local minimum
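A minimal gradient descent sketch on the toy loss f(w) = (w - 3)^2, whose minimum sits at w = 3; the learning rate and step count are arbitrary assumptions:

```python
# Repeatedly step downhill along the negative gradient.
def grad(w):
    return 2.0 * (w - 3.0)     # derivative f'(w)

w = 0.0                        # arbitrary starting point
lr = 0.1                       # learning rate: too large overshoots, too small crawls
for step in range(100):
    w -= lr * grad(w)          # step in the direction of steepest descent
print(w)                       # converges to ~3.0
```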
Backpropagation
- Compute Graph
- Chain Rule
- Backpropagation
The error is propagated backwards to the other layers.
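As a sketch of this idea, the snippet below backpropagates through the tiny compute graph z = (x + y) * t, applying the chain rule node by node; all values are illustrative:

```python
x, y, t = 2.0, 3.0, 4.0

# forward pass
s = x + y          # addition node: s = 5
z = s * t          # multiplication node: z = 20

# backward pass (chain rule), starting from dz/dz = 1
dz = 1.0
ds = dz * t        # multiplication node: gradient scaled by the *other* input
dt = dz * s
dx = ds * 1.0      # addition node: gradient passes through unchanged
dy = ds * 1.0

print(dx, dy, dt)  # 4.0 4.0 5.0
```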
Physical Experiment
- Each student acts as a neuron
- Mock Forward Propagation
- Mock Backward Propagation
Chapter 6 Make your own neural network to classify handwritten digits
In this chapter, students will learn how to teach a computer to classify handwritten digits using the MNIST dataset in Python.
Dataset: The dataset chosen for this chapter is MNIST (Modified National Institute of Standards and Technology), which has a training set of 60,000 examples and a test set of 10,000 examples. It is a subset of a larger set available from NIST (National Institute of Standards and Technology), which provides over 800,000 images of handwritten digits from 3,600 writers. The digits have been size-normalized and centered in a fixed-size image.
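For orientation, here is a minimal MNIST classifier sketch using tf.keras (assuming TensorFlow 2.x); the architecture and hyperparameters are illustrative choices, not the course's official solution:

```python
import tensorflow as tf

# Load the 60,000 training / 10,000 test images and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="relu"),    # hidden layer (assumed size)
    tf.keras.layers.Dense(10, activation="softmax"),  # one probability per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)   # a simple network like this typically reaches ~97% accuracy
```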
References
Dataset and Content
- Kaggle – MNIST sign language
- OpenML – datasets, tasks, flows, results
- Speed Dating – Covertype satellite images
- Awesome Public Datasets on GitHub
- Wikipedia’s list of datasets for machine-learning research
- Medium’s 50 best public datasets for machine learning
- 25 Open Datasets for Deep Learning
- UC Irvine Machine Learning Repository
- 1996 English Broadcast News Speech
- Grokking Deep Learning Andrew W. Trask
- Make Your Own Neural Network by Tariq Rashid
- Introduction to Deep Learning animation videos created by 3Blue1Brown
- http://introtodeeplearning.com
- http://deeplearning.net/tutorial/
- https://nndl.github.io
- http://zh.gluon.ai/index.html
Broader Discussion
- Critique of Pure Learning (2019)
- Geoffrey Hinton on capsule networks: https://www.youtube.com/watch?v=rTawFwUvnLE
- Yann LeCun on the limits of deep learning (2016, Quora)
- Artificial Intelligence Pioneer Says We Need to Start Over (2017)
- Why is Geoffrey Hinton suspicious of backpropagation and wants AI to start over?
- Geoffrey Hinton talk “What is wrong with convolutional neural nets?”
About Me
- Name: Xinyu You
- Email: yxydiscovery@gmail.com
- Github: yogayu
- Website: data2art
- Resume: youxinyu.me
If there are any problems, please feel free to contact me.