Page Index - SoojungHong/MachineLearning GitHub Wiki
53 pages in this GitHub Wiki:
- Home
- AdaBoost vs Gradient Boosting
- Attention
- Autoregressive Model & Stochastic Model
- Bagging (Bootstrap Aggregating)
- batch normalization vs. layer normalization
- batch, epoch
- Bayes Theorem
- Bias and Variance trade-off (three types of error in terms of model generalization)
- Boosting
- CART (Classification and Regression Tree)
- Clustering K Means vs EM (Expectation Maximization)
- code location
- Cross Validation
- Data Preparation
- Evaluating Classifier
- Expectation
- Good reference on Attention and Transformer
- Gradient Descent
- Hinge Loss
- How to calculate parameter numbers in Neural Network
- How to choose Kernel
- How to resume training with saved model
- Hyperparameter C in SVM
- K means algorithm
- L1 and L2 Regularization
- l1 norm vs l2 norm
- Lasso Regression (Least Absolute Shrinkage and Selection Operator Regression)
- Limitation of Decision Tree
- LSTM vs GRU comparison with good explanation
- Machine Learning model selection cheat sheet
- Math & ML questions
- Matrix Factorization
- ML questions
- MLlib in Spark
- NN experimental place
- Non-linear Classifiers & Kernels (e.g. Regularized SVM classifier with RBF kernel)
- Overfit vs. Underfit
- pasting ensemble, boosting ensemble
- Q Learning
- Random Forest
- Reference : Google Machine Learning learning materials (very good)
- Reference : PyTorch
- ReLU (Rectified Linear Unit)
- RNN and basic understanding of Neural Network (very good)
- significance of logistic regression coefficients if two predictors are correlated?
- Stochastic Gradient Descent
- SVD (Singular Value Decomposition)
- Text preprocessing in Topic Modeling
- Topic Modelling
- ToWatch : Fundamentals behind Deep NN
- Vanishing Gradient Problem
- Well defined Negative Log Likelihood