Neuromorphic Simulation Lab

Interactive exploration of brain-inspired computing

Parameters

Threshold (Vth) 35 mV
Time Constant (τ) 20 ms
Refractory Period 5 ms
Input Rate 45 Hz
Synaptic Weight 0.70

Live Simulation

SPIKES: 0
AVG FR: 0 Hz
MEMBRANE POTENTIAL V(t) Vth = 35 mV

LIF Neuron Model

The Leaky Integrate-and-Fire model captures core neuron dynamics: current flows in through synapses, building membrane potential. When V exceeds threshold (Vth), the neuron fires a spike, then resets and enters a refractory period. This is the basic unit of neuromorphic computation.
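The dynamics described above can be sketched in a few lines of Python, using the panel's parameters (Vth = 35 mV, τ = 20 ms, 5 ms refractory) with simple Euler integration; the resting potential and the constant input drive are illustrative choices, not the simulator's internals:

```python
# LIF sketch: integrate, fire at threshold, reset, go refractory.
# Assumed units: mV for voltage, ms for time; input I is in mV (R = 1).
V_TH, TAU, T_REF, DT = 35.0, 20.0, 5.0, 0.1  # threshold, time constant, refractory, step

def simulate_lif(I, v_rest=0.0):
    """Euler-integrate dV/dt = (-(V - v_rest) + I) / tau; reset on spike."""
    v, refractory, spikes = v_rest, 0.0, []
    for step, i_in in enumerate(I):
        if refractory > 0:          # neuron stays silent after a spike
            refractory -= DT
            continue
        v += DT * (-(v - v_rest) + i_in) / TAU
        if v >= V_TH:               # threshold crossing -> emit a spike
            spikes.append(step * DT)
            v = v_rest
            refractory = T_REF
    return spikes

# Constant suprathreshold drive produces regular firing
spike_times = simulate_lif([50.0] * 2000)  # 200 ms of input
```

With 50 mV of drive the membrane climbs toward 50, crosses the 35 mV threshold roughly every 29 ms (charge time plus refractory), and fires regularly.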

STDP Learning Curve

Weight change (Δw) vs. spike timing difference (Δt)

How STDP Works

LTP — Long-Term Potentiation
Pre before Post: If the presynaptic neuron fires before the postsynaptic neuron, the synapse strengthens. This is "fire together, wire together."
LTD — Long-Term Depression
Post before Pre: If the postsynaptic neuron fires before the presynaptic neuron, the synapse weakens. Unused connections fade.
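The LTP/LTD rule above is commonly modeled as a pairwise exponential window; the sketch below follows that convention, with amplitudes and time constants chosen for illustration rather than taken from the simulator:

```python
import math

# Pairwise exponential STDP window (illustrative parameters).
A_PLUS, A_MINUS = 0.05, 0.055     # LTP / LTD amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants in ms

def stdp_dw(dt_ms):
    """Weight change for dt = t_post - t_pre."""
    if dt_ms > 0:    # pre before post -> potentiation (LTP)
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    elif dt_ms < 0:  # post before pre -> depression (LTD)
        return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
    return 0.0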

Weight Heatmap

Spike-Timing-Dependent Plasticity

STDP is the biological learning rule that shaped your brain. It's a form of Hebbian learning refined by precise spike timing. Neuromorphic chips like Loihi implement STDP directly in hardware, enabling on-chip, unsupervised learning — no backpropagation required.

Power Consumption

POWER
Current Draw
4.2 W

Spike Event Visualization

Dense (traditional) vs. sparse (neuromorphic) firing patterns

● Traditional: Always active ● Neuromorphic: Event-driven

Event-Driven Efficiency

In a traditional GPU, most of the chip switches every clock cycle — even when it's doing nothing useful. Neuromorphic chips are event-driven: circuits stay idle until input arrives. This is why the brain runs on roughly 20 watts while training GPT-scale models draws megawatts. Sparsity = efficiency.
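The efficiency argument can be made concrete with a toy operation count — here for a hypothetical 1,000-neuron layer where about 2% of neurons spike per timestep (both numbers are illustrative):

```python
import random

# Toy comparison: dense (clocked) vs event-driven (spike-gated) work.
random.seed(0)
N_NEURONS, N_STEPS, SPIKE_PROB = 1000, 100, 0.02  # ~2% of neurons active per step

dense_ops = N_NEURONS * N_STEPS  # every unit updates every cycle
sparse_ops = sum(1 for _ in range(N_STEPS) for _ in range(N_NEURONS)
                 if random.random() < SPIKE_PROB)  # work happens only on spike events
```

At 2% activity the event-driven count lands around 50× below the dense one — the same sparsity advantage the visualization above shows.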

🧠
Intel Loihi 2
Intel / Neuromorphic Computing Lab
Cores
128 neurocores
Neurons
~1M
Process
Intel 4
Learning
On-chip STDP

Next-gen neuromorphic chip with programmable learning rules. Supports on-chip learning without an external training loop. Used for odometry, gesture recognition, and robotic control.

IBM TrueNorth
IBM Research
Neurons
1 Million
Synapses
256 Million
Power
~70 mW
Arch
Digital CMOS

Ultra-low-power AI inference at scale. Designed for always-on image and speech recognition. Built on IBM's neurosynaptic-core architecture.

🔗
SpiNNaker2
University of Manchester / TU Dresden
Chips
48
Cores
7,680 ARM
Neurons
~3.5M
Target
Real-time

Massively parallel brain emulator. Used in the Human Brain Project for large-scale neural simulation. Custom spiking neural network accelerator.

BrainScaleS 2
Heidelberg University
Speed
1000× biology
Neurons
~200k
Type
Analog
Wafer
8-inch

"Physical" neuromorphic — uses analog circuits to emulate neuron dynamics directly in silicon. Runs 1000× faster than real biology for accelerated learning.

💾
Memristor Chips
Various (GF, CEA-Leti, Startups)
Device
ReRAM
Synapse
Native
Density
Very High
Status
Emerging

Uses resistive RAM as tunable synaptic weights. Naturally implements in-memory compute. Enables ultra-dense neuromorphic systems. Still in research phase.

The Ecosystem

From Intel's programmable Loihi 2 to analog chips running at 1000× biological speed, the neuromorphic ecosystem spans inference-focused TrueNorth, learning-capable Loihi, and brain-emulating SpiNNaker. Each makes a different trade-off — there's no single winner yet.

Von Neumann

CPU MEM

Separate CPU and memory. Data must shuttle between the two — the von Neumann bottleneck.

Neuromorphic

NEURONS + MEMORY

Computation where data lives. No shuttle — in-memory processing eliminates the bottleneck.

When Von Neumann Wins

  • Precision tasks (floating-point math)
  • Large-scale batch processing
  • Well-established software ecosystem
  • General-purpose computing

When Neuromorphic Wins

  • Always-on edge AI
  • Ultra-low power sensing
  • Real-time temporal processing
  • On-chip learning

Beyond the Bottleneck

Neuromorphic computing reimagines the computer's architecture itself. Instead of separating processing and memory, it merges them — like the brain does. This eliminates the energy-hungry data shuttle that limits traditional chips and enables new computing paradigms.

Interactive Network Builder

Click to add neurons • Drag between to connect • Click neuron to inject current

Neuron Properties

Threshold 35 mV
Input Current 0

Build Your Own Network

Design neural circuits from scratch. Try the Feedforward preset for a classic pattern discriminator, Recurrent for memory/oscillations, or WTA (winner-take-all) for feature selection. Watch how topology shapes dynamics.
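A toy version of the WTA preset's idea — lateral inhibition driving competition — can be sketched as follows; all constants here are illustrative, not the builder's actual parameters:

```python
# Winner-take-all (WTA) sketch: neurons compete via shared lateral inhibition.
def wta_step(potentials, inputs, inhibition=0.5, leak=0.1):
    """One update: each neuron integrates its input minus inhibition from the rest."""
    total = sum(potentials)
    return [max(0.0, v * (1 - leak) + i - inhibition * (total - v))
            for v, i in zip(potentials, inputs)]

v = [0.0, 0.0, 0.0]
for _ in range(50):
    v = wta_step(v, inputs=[1.0, 0.8, 0.6])  # neuron 0 gets the strongest drive
winner = v.index(max(v))
```

After a few dozen steps the most strongly driven neuron suppresses its rivals to zero — feature selection emerging from topology alone, which is the point of the WTA preset.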