
CNN Image Classification

Built and trained a custom deep CNN on CIFAR-10 with strong data augmentation, tracking loss and accuracy across a long training run. The goal was to push accuracy through iteration and disciplined evaluation.

Tags: Python · PyTorch · CNN · CIFAR-10

Project Summary

Goal

Train a deep CNN and improve accuracy using augmentation + tuning

Dataset

CIFAR-10 (32×32 RGB, 10 classes)

Training (final run)

400 epochs, with loss and accuracy tracked throughout

Best accuracy

~96.6% (training accuracy; held-out test accuracy not yet reported)

Model Architecture

  • Deep CNN with stacked convolution blocks
  • Conv blocks use BatchNorm + LeakyReLU + MaxPool
  • Dropout used to reduce overfitting
  • Final classifier head outputs 10 classes
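The blocks above can be sketched in PyTorch. This is a minimal illustration, not the project's exact network: channel widths, the LeakyReLU slope, and the dropout rate are assumptions; only the block structure (Conv → BatchNorm → LeakyReLU → MaxPool, dropout before a 10-way head) comes from the description.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One block as described: Conv -> BatchNorm -> LeakyReLU -> MaxPool."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(0.1),     # slope is an assumed value
            nn.MaxPool2d(2),       # halves spatial size
        )

    def forward(self, x):
        return self.block(x)

class CNN(nn.Module):
    """Stacked conv blocks + dropout + 10-class linear head (CIFAR-10)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            ConvBlock(3, 64),      # 32x32 -> 16x16
            ConvBlock(64, 128),    # 16x16 -> 8x8
            ConvBlock(128, 256),   # 8x8  -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),       # dropout rate is an assumed value
            nn.Linear(256 * 4 * 4, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```

A batch of CIFAR-10 images shaped `(N, 3, 32, 32)` maps to logits shaped `(N, 10)`.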

Training Setup

  • Loss: CrossEntropyLoss
  • Optimizer: SGD + momentum (stable training for CNNs)
  • Regularization: weight decay + dropout
  • Scheduler: learning-rate decay (for late-stage refinement)
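Wiring these pieces together looks roughly like the following sketch. The specific learning rate, momentum, weight decay, and milestone epochs are assumptions for illustration; a small linear stand-in replaces the actual CNN so the snippet is self-contained.

```python
import torch
import torch.nn as nn

# Stand-in model; in the project this is the deep CNN described above.
model = nn.Linear(3 * 32 * 32, 10)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,             # assumed starting learning rate
    momentum=0.9,       # momentum for stable CNN training
    weight_decay=5e-4,  # weight-decay regularization
)
# Step-wise LR decay for late-stage refinement; milestones are illustrative.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200, 300], gamma=0.1
)
```

In the training loop, `scheduler.step()` is called once per epoch after the optimizer updates, so the learning rate drops at the chosen milestones.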

Data Augmentation

Used to improve generalization and reduce overfitting.

  • RandomHorizontalFlip
  • RandomRotation
  • RandomResizedCrop
  • Normalize

Results

  • Peak training accuracy: ~96.6%
  • Training remained stable across long runs (hundreds of epochs)
  • Saved loss/accuracy curves and per-epoch logs to track progress over time
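Saving per-epoch metrics for later plotting can be as simple as the sketch below. This is a generic stand-in, not the project's actual logging code; the class name and CSV format are illustrative.

```python
import csv

class MetricLogger:
    """Accumulate per-epoch metrics and dump them to CSV for later plotting."""

    def __init__(self):
        self.rows = []

    def log(self, epoch: int, loss: float, acc: float) -> None:
        self.rows.append({"epoch": epoch, "loss": loss, "acc": acc})

    def save(self, path: str) -> None:
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["epoch", "loss", "acc"])
            writer.writeheader()
            writer.writerows(self.rows)
```

Calling `log(...)` once per epoch and `save(...)` at the end yields a CSV that any plotting tool can turn into the loss/accuracy curves shown further down.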

What I Learned

  • Why augmentation boosts robustness
  • How BatchNorm stabilizes deeper networks
  • How LR scheduling affects late-epoch improvements
  • How to debug training using curves + logs

Next Improvements

  • Report test accuracy + per-class accuracy
  • Add confusion matrix and error analysis
  • Try stronger architectures (ResNet/EfficientNet)
  • Make evaluation transforms deterministic (no random augments in test)
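The first two improvements (test accuracy, per-class accuracy, confusion matrix) could be computed with an evaluation loop along these lines. This is a sketch of one way to do it, not code from the project; it assumes a standard `(images, labels)` DataLoader.

```python
import torch

@torch.no_grad()
def evaluate(model, loader, num_classes: int = 10):
    """Return overall accuracy, per-class accuracy, and a confusion matrix."""
    model.eval()  # disable dropout / use BatchNorm running stats
    confusion = torch.zeros(num_classes, num_classes, dtype=torch.long)
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        for true, pred in zip(labels, preds):
            confusion[true, pred] += 1
    # Rows of the confusion matrix are true classes; diagonal = correct.
    per_class = confusion.diag().float() / confusion.sum(dim=1).clamp(min=1)
    overall = confusion.diag().sum().item() / confusion.sum().item()
    return overall, per_class, confusion
```

Low entries on the diagonal of `confusion` point directly at the classes to target in error analysis.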

Training Log (evidence)

Screenshot from late-stage training showing high accuracy around epoch 250 and beyond.

[Image: CNN training log showing accuracy]

Curves

[Image: CNN loss curve]
Loss curve

[Image: CNN accuracy curve]
Accuracy curve