Topic            Subtopic                                                        Category
Neural Networks  Perceptron Basics                                               Both
                 Gradient Descent                                                Both
                 Backpropagation                                                 Both
                 Activation Functions (ReLU, Sigmoid, Tanh)                      Both
                 Cost Functions (MSE, MAE, Cross-Entropy, etc.)                  Both
Deep Learning    Multi-Layer Perceptrons (MLP)                                   Both
                 Stochastic Gradient Descent (SGD), Mini-Batch Gradient Descent  Both
                 Momentum Methods (Adam, AdamW)                                  Practice
                 Adaptive Learning Rates                                         Practice
                 Convergence and Learning Rates                                  Both
                 Weight Regularization                                           Practice
                 Early Stopping                                                  Practice
                 Dropout, Gaussian Noise                                         Practice
                 Weight Initialization                                           Practice
                 Batch Normalization                                             Practice
                 Autoencoders and Sparse Autoencoders                            Practice
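Several of the listed topics (gradient descent, backpropagation, ReLU activations, the MSE cost) can be sketched together in one minimal NumPy example. The architecture, toy data, and hyperparameters below are illustrative assumptions, not part of the syllabus:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: learn y = 2x + 1 with a little noise.
X = rng.uniform(-1, 1, size=(64, 1))
y = 2 * X + 1 + 0.05 * rng.normal(size=(64, 1))

# One-hidden-layer MLP: x -> ReLU(x W1 + b1) -> W2 + b2
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))

lr = 0.1  # fixed learning rate (full-batch gradient descent)
for step in range(500):
    # Forward pass
    h_pre = X @ W1 + b1
    h = np.maximum(h_pre, 0.0)        # ReLU activation
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)  # MSE cost

    # Backpropagation: apply the chain rule layer by layer
    d_yhat = 2 * (y_hat - y) / len(X)
    dW2 = h.T @ d_yhat; db2 = d_yhat.sum(axis=0, keepdims=True)
    dh = d_yhat @ W2.T
    dh_pre = dh * (h_pre > 0)         # ReLU derivative is 0/1 mask
    dW1 = X.T @ dh_pre; db1 = dh_pre.sum(axis=0, keepdims=True)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```

Swapping the full-batch loop for updates on random subsets of `X` would turn this into the mini-batch SGD variant listed above.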