Part 6: Neural Networks and Deep Learning

Network Training & Architecture

Deep learning success depends on effective training and the right architectural choice. This section covers how forward propagation computes outputs and how backpropagation updates weights, along with suitable loss functions and optimizers. It also surveys the main neural network families (MLP, CNN, RNN) and how to evaluate them across data modalities.

TRAINING NEURAL NETWORKS

Learning Objectives

  • Explain forward propagation (computing outputs) and backpropagation (updating weights); see the worked sketch after this list

  • Choose loss functions (MSE, cross-entropy) and optimizers (SGD, Adam)

  • Monitor convergence and tune hyperparameters (learning rate, batch size)
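
A minimal worked sketch of the first two objectives, using only NumPy (the network size, toy data, and learning rate are illustrative assumptions): a one-hidden-layer regression network is run forward, differentiated by hand with the chain rule, and updated by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 8 samples, 3 features, scalar regression target (illustrative only)
X = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))

# One hidden layer with 4 tanh units, MSE loss
W1 = rng.normal(scale=0.1, size=(3, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.1, size=(4, 1)); b2 = np.zeros(1)
lr = 0.1

for step in range(100):
    # Forward pass: weighted sums + activation, then the loss
    z1 = X @ W1 + b1            # pre-activations of the hidden layer
    h = np.tanh(z1)             # hidden activations
    y_hat = h @ W2 + b2         # network output
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: chain rule applied layer by layer
    d_yhat = 2 * (y_hat - y) / len(X)     # dL/dy_hat
    dW2 = h.T @ d_yhat                    # dL/dW2
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                   # propagate into the hidden layer
    d_z1 = d_h * (1 - np.tanh(z1) ** 2)   # multiply by tanh'(z1)
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0)

    # Gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```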

Indicative Content

  • Forward Pass

    • Weighted sums, activation layers

  • Backpropagation

    • Chain rule, gradient descent

  • Optimization

    • Mini-batch SGD, Adam, learning-rate scheduling (see the training-loop sketch below)
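
One way the pieces above fit together, assuming PyTorch as the training library (the model, synthetic data, and hyperparameters are placeholders): a mini-batch loop with a forward pass, cross-entropy loss, backpropagation via autograd, an Adam update, and a step-based learning-rate schedule.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset: 1000 samples, 20 features, 3 classes (illustrative only)
X = torch.randn(1000, 20)
y = torch.randint(0, 3, (1000,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
loss_fn = nn.CrossEntropyLoss()                       # classification loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.5)

for epoch in range(10):
    for xb, yb in loader:              # mini-batches
        logits = model(xb)             # forward pass
        loss = loss_fn(logits, yb)
        optimizer.zero_grad()
        loss.backward()                # backpropagation through the graph
        optimizer.step()               # Adam parameter update
    scheduler.step()                   # decay the learning rate each epoch
    print(f"epoch {epoch}: loss={loss.item():.3f}, lr={scheduler.get_last_lr()[0]:.4g}")
```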

FEEDFORWARD, CNN, AND RNN ARCHITECTURES

Learning Objectives

  • Implement an MLP for tabular data, a CNN for images, and an RNN for sequences

  • Evaluate using classification metrics or custom domain criteria

  • Understand specialized layers (convolution/pooling, LSTM/GRU); output-size arithmetic is sketched below
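
For the convolution/pooling objective, the spatial output size follows floor((n + 2p - k) / s) + 1 for input size n, padding p, kernel size k, and stride s; a small sketch with made-up layer sizes:

```python
def conv_output_size(n, k, s=1, p=0):
    """Spatial output size of a convolution or pooling layer (one dimension)."""
    return (n + 2 * p - k) // s + 1

# Example: a 32x32 image through a 3x3 conv (stride 1, padding 1),
# then a 2x2 max-pool with stride 2
after_conv = conv_output_size(32, k=3, s=1, p=1)      # -> 32
after_pool = conv_output_size(after_conv, k=2, s=2)   # -> 16
print(after_conv, after_pool)
```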

Indicative Content

  • MLP

    • Dense layers, hidden units, dropout

  • CNN

    • Filters, strides, pooling, deeper networks (ResNet)

  • RNN

    • LSTM gates, GRU units, sequence modeling for text/time series (see the module sketches below)
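
A hedged sketch of the three families as PyTorch modules; layer widths, filter counts, vocabulary size, and sequence lengths are arbitrary placeholders rather than recommendations.

```python
import torch
from torch import nn

# MLP for tabular data: dense layers, hidden units, dropout
mlp = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.2),
    nn.Linear(64, 3),
)

# CNN for images: filters, strides, pooling, then a classifier head
cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 10),
)

# RNN for sequences: an LSTM (nn.GRU is a drop-in swap) plus a per-sequence classifier
class SequenceClassifier(nn.Module):
    def __init__(self, vocab=1000, embed=32, hidden=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.rnn = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, tokens):              # tokens: (batch, seq_len) int64
        x = self.embed(tokens)
        _, (h_n, _) = self.rnn(x)            # h_n: (1, batch, hidden)
        return self.head(h_n[-1])            # classify from the final hidden state

# Shape checks with dummy inputs
print(mlp(torch.randn(4, 20)).shape)                                  # (4, 3)
print(cnn(torch.randn(4, 3, 32, 32)).shape)                           # (4, 10)
print(SequenceClassifier()(torch.randint(0, 1000, (4, 12))).shape)    # (4, 2)
```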

TOOLS & METHODOLOGIES (NETWORK TRAINING & ARCHITECTURES)

  • Training Libraries

    • Built-in modules for forward/backprop, dynamic computation graphs

  • Loss & Optimization

    • Cross-entropy for classification, MSE for regression, advanced optimizers (Adam)

  • Evaluation

    • Accuracy, precision/recall, domain-specific metrics (e.g., BLEU for machine translation)

  • Transfer & Partial Training

    • Freezing layers, partial training (see the sketch below)
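
A sketch tying evaluation and partial training together, assuming PyTorch plus scikit-learn for metrics; the backbone, head, and validation data are hypothetical stand-ins for a real pretrained model and dataset.

```python
import torch
from torch import nn
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Placeholder "pretrained" backbone and a fresh classification head (illustrative)
backbone = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 2)
model = nn.Sequential(backbone, head)

# Freezing layers / partial training: stop gradients for the backbone and
# hand the optimizer only the parameters that should still be updated.
for p in backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

# Evaluation on a held-out batch (dummy binary-classification data)
X_val = torch.randn(200, 20)
y_val = torch.randint(0, 2, (200,))
model.eval()
with torch.no_grad():
    preds = model(X_val).argmax(dim=1)

print("accuracy :", accuracy_score(y_val.numpy(), preds.numpy()))
print("precision:", precision_score(y_val.numpy(), preds.numpy(), zero_division=0))
print("recall   :", recall_score(y_val.numpy(), preds.numpy(), zero_division=0))
```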