Course1 - WeiliangGuo/deepleaning_studies GitHub Wiki

C1W1L04.pdf

When training sets are small, traditional machine-learning algorithms with hand-engineered features may outperform neural networks.

C1W2L01 slides.pdf

A feature matrix X whose columns are the per-example feature vectors (shape (n_x, m) for m examples) is usually more convenient than the transposed layout, because it simplifies vectorized implementations.
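A minimal sketch of why the column-per-example layout helps (sizes here are illustrative, not from the slides): one matrix product processes all examples at once.

```python
import numpy as np

n_x, m = 3, 5                 # illustrative sizes: n_x features, m examples
X = np.random.randn(n_x, m)   # each column X[:, i] is one training example
w = np.random.randn(n_x, 1)   # weight column vector
b = 0.0

# One vectorized operation computes z for all m examples at once:
Z = np.dot(w.T, X) + b        # shape (1, m)
A = 1 / (1 + np.exp(-Z))      # sigmoid activations, shape (1, m)
print(A.shape)
```

With examples as rows instead, the same computation would need an extra transpose or a loop over examples.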

C1W2L02 slides.pdf

Choosing a Machine Learning Classifier is a short and highly readable comparison of logistic regression, Naive Bayes, decision trees, and Support Vector Machines.

C1W2L03 notes.pdf

What are loss function, cost function and objective function?

More detailed walkthrough of logistic regression
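As a quick illustration of the loss/cost distinction (using the standard cross-entropy formulation for logistic regression): the loss is measured per example, while the cost averages the loss over all m training examples.

```python
import numpy as np

def loss(a, y):
    """Cross-entropy loss for one example: prediction a, label y."""
    return -(y * np.log(a) + (1 - y) * np.log(1 - a))

def cost(A, Y):
    """Cost: the average loss over all m training examples."""
    m = Y.shape[0]
    return np.sum(loss(A, Y)) / m

A = np.array([0.9, 0.2, 0.8])   # predicted probabilities (illustrative)
Y = np.array([1.0, 0.0, 1.0])   # true labels
print(cost(A, Y))
```

The "objective function" is simply the function being optimized; for training it is usually the cost (possibly plus a regularization term).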

C1W2L09 slides.pdf

Remember that gradient descent finds a minimum of the cost function. In general this is only a local minimum; for a convex cost such as logistic regression's cross-entropy, the local minimum is also the global one.
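A minimal gradient-descent sketch on a convex toy cost J(w) = (w - 3)^2, whose global minimum is at w = 3 (the function and learning rate are chosen for illustration):

```python
def grad(w):
    """Derivative of the toy cost J(w) = (w - 3)**2."""
    return 2 * (w - 3)

w = 0.0                # initial parameter
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)   # step opposite the gradient

print(round(w, 4))     # converges toward the minimum at w = 3
```

Each step shrinks the error (w - 3) by a constant factor, which is why the iterate converges to 3 for this convex cost.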

C1W2L15 slides.pdf

axis=0 tells NumPy to operate vertically on a matrix (down each column), whereas axis=1 operates horizontally (across each row).
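A small example of the axis convention with np.sum:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# axis=0 collapses the rows: sum down each column (vertical).
col_sums = A.sum(axis=0)   # [5, 7, 9]

# axis=1 collapses the columns: sum across each row (horizontal).
row_sums = A.sum(axis=1)   # [6, 15]

print(col_sums, row_sums)
```

The same convention applies to np.mean, np.max, and the other reduction functions.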

C3W1L01 notes.pdf

People are often confused about the difference between a feed-forward neural network and backpropagation, and about what backpropagation actually does.

Reference 1: Difference between back-propagation and feed-forward neural networks

Reference 2: Difference between back-propagation and feed-forward neural networks

Reference 3: A Step by Step Back-propagation Example
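To make the distinction concrete, here is my own minimal sketch (not from the notes) of a single sigmoid neuron: the forward pass computes the prediction, and backpropagation applies the chain rule to compute the gradients of the loss with respect to the parameters.

```python
import numpy as np

# One sigmoid neuron, one training example (illustrative values).
x, y = 1.5, 1.0          # input and target label
w, b = 0.5, 0.0          # parameters

# Forward pass: compute the prediction and the loss.
z = w * x + b
a = 1 / (1 + np.exp(-z))                              # sigmoid activation
loss = -(y * np.log(a) + (1 - y) * np.log(1 - a))     # cross-entropy

# Backward pass (backpropagation): chain rule from the loss back to w, b.
dz = a - y               # d(loss)/dz for sigmoid + cross-entropy
dw = dz * x              # d(loss)/dw
db = dz                  # d(loss)/db
print(dw, db)
```

The forward pass only produces outputs; backpropagation produces the gradients (dw, db) that gradient descent then uses to update the parameters.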

C1W3L011 slides.pdf

Review on Normal Distribution

Different numpy.random functions
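A short comparison of the most common numpy.random functions (using the legacy API that the course slides use; the shapes here are illustrative):

```python
import numpy as np

np.random.seed(0)  # make the draws reproducible

a = np.random.rand(2, 3)         # uniform samples from [0, 1)
b = np.random.randn(2, 3)        # standard normal N(0, 1) samples,
                                 # commonly used for weight initialization
c = np.random.randint(0, 10, 5)  # integers in [0, 10)

print(a.shape, b.shape, c.shape)
```

Note the signature difference: rand and randn take dimensions as separate arguments, while randint takes low, high, and a size.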