MNIST Image Classification on FPGA

Neural Network Inference on FPGA (Verilog)

I implemented a compact feed-forward neural network for MNIST digit recognition that runs entirely on an FPGA. The model is trained in Python and deployed to hardware with quantized weights stored on-chip. The design avoids high-level loops and vendor IP multipliers: everything is built from basic RTL, including a custom shift-and-add multiplier datapath.
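
To make the shift-and-add idea concrete, here is a minimal sequential multiplier sketch. The module name, widths, and start/done handshake are illustrative assumptions (signed operands and pipelining are omitted), so treat it as a sketch of the technique rather than this project's exact RTL.

```verilog
// Minimal sketch of a sequential shift-and-add multiplier (illustrative
// names and widths, unsigned only). One partial product per cycle:
// add the multiplicand when the multiplier LSB is 1, then shift both.
module shift_add_mult #(
    parameter W = 8
) (
    input  wire           clk,
    input  wire           rst,
    input  wire           start,
    input  wire [W-1:0]   a,        // multiplicand
    input  wire [W-1:0]   b,        // multiplier
    output reg  [2*W-1:0] product,
    output reg            done
);
    reg [W-1:0]       mplier;   // remaining multiplier bits, shifted right
    reg [2*W-1:0]     mcand;    // multiplicand, shifted left each cycle
    reg [$clog2(W):0] count;    // cycles remaining
    reg               busy;

    always @(posedge clk) begin
        if (rst) begin
            busy <= 1'b0;
            done <= 1'b0;
        end else if (start && !busy) begin
            busy    <= 1'b1;
            done    <= 1'b0;
            product <= 0;
            mplier  <= b;
            mcand   <= {{W{1'b0}}, a};
            count   <= W;
        end else if (busy) begin
            if (mplier[0])
                product <= product + mcand;   // conditional add of partial product
            mplier <= mplier >> 1;
            mcand  <= mcand << 1;
            count  <= count - 1'b1;
            if (count == 1) begin
                busy <= 1'b0;
                done <= 1'b1;                 // final add and done land on the same edge
            end
        end
    end
endmodule
```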

Highlights

How it works

  1. Train & quantize (Python): Train the network, then quantize weights and activations to fixed-point. Export the weights as hex arrays.
  2. HDL integration (Verilog): Include the weight arrays as initial memory contents in BRAM-synthesizable modules (a ROM sketch follows this list).
  3. Datapath: A time-multiplexed MAC unit (shift/add multiplier plus accumulator) iterates over inputs and neurons; a right shift plus saturation handle rescaling (also sketched below).
  4. Control: A finite-state machine orchestrates load → accumulate → activate → next neuron/layer → argmax (an FSM skeleton appears below).
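
For steps 1–2, here is a sketch of how an exported hex file can become on-chip memory contents: a small ROM loaded with $readmemh at elaboration time. The file name, width, and depth are placeholders, not this project's actual values.

```verilog
// Minimal weight-ROM sketch (illustrative file name, width, and depth).
// A registered read like this typically infers block RAM in FPGA tools.
module weight_rom #(
    parameter WIDTH = 8,
    parameter DEPTH = 1024,
    parameter INIT  = "weights_layer1.hex"   // hex export from the Python step
) (
    input  wire                     clk,
    input  wire [$clog2(DEPTH)-1:0] addr,
    output reg  [WIDTH-1:0]         data
);
    reg [WIDTH-1:0] mem [0:DEPTH-1];

    initial $readmemh(INIT, mem);   // one hex word per line in the export file

    always @(posedge clk)
        data <= mem[addr];
endmodule
```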
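
Step 3's "shift plus saturation" rescaling might look roughly like the following: an arithmetic right shift by the fixed-point scale factor, then a clamp to the activation width. The 32/8-bit widths and shift amount are illustrative assumptions.

```verilog
// Sketch of rescaling the wide MAC accumulator to an 8-bit activation:
// arithmetic right shift, then saturate (illustrative widths and scale).
module requantize (
    input  wire signed [31:0] acc,   // accumulator from the MAC loop
    output wire signed [7:0]  act    // rescaled 8-bit activation
);
    localparam SHIFT = 7;                        // example fixed-point scale

    wire signed [31:0] shifted = acc >>> SHIFT;  // arithmetic right shift

    assign act = (shifted >  32'sd127) ?  8'sd127 :   // clamp high
                 (shifted < -32'sd128) ? -8'sd128 :   // clamp low
                                          shifted[7:0];
endmodule
```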
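
And for step 4, a skeleton of a controller in that style. The state names, status inputs, and transitions are illustrative; the per-neuron and per-layer counters are assumed to live outside this module.

```verilog
// Skeleton of the kind of control FSM described in step 4 (illustrative
// states and transitions, not the project's exact controller).
module nn_ctrl_fsm (
    input  wire       clk,
    input  wire       rst,
    input  wire       start,
    input  wire       mac_done,     // current multiply-accumulate finished
    input  wire       last_input,   // last input of the current neuron
    input  wire       last_neuron,  // last neuron of the current layer
    input  wire       last_layer,
    output reg  [2:0] state         // decoded by the datapath
);
    localparam S_IDLE   = 3'd0,
               S_LOAD   = 3'd1,   // fetch weight + activation
               S_ACC    = 3'd2,   // run shift-and-add MAC
               S_ACT    = 3'd3,   // shift, saturate, apply activation
               S_NEXT   = 3'd4,   // advance neuron / layer counters
               S_ARGMAX = 3'd5;   // pick the winning output class

    always @(posedge clk) begin
        if (rst)
            state <= S_IDLE;
        else case (state)
            S_IDLE:   state <= start ? S_LOAD : S_IDLE;
            S_LOAD:   state <= S_ACC;
            S_ACC:    if (mac_done)
                          state <= last_input ? S_ACT : S_LOAD;
            S_ACT:    state <= S_NEXT;
            S_NEXT:   state <= (last_neuron && last_layer) ? S_ARGMAX : S_LOAD;
            S_ARGMAX: state <= S_IDLE;
            default:  state <= S_IDLE;
        endcase
    end
endmodule
```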

Testing & Validation

What I built/learned

▶︎ See it running on hardware: Demo video