Parameters
Live Simulation
LIF Neuron Model
The Leaky Integrate-and-Fire model captures core neuron dynamics: current flows in through synapses, building membrane potential. When V exceeds threshold (Vth), the neuron fires a spike, then resets and enters a refractory period. This is the basic unit of neuromorphic computation.
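The dynamics described above can be sketched in a few lines of code. This is a minimal, illustrative LIF simulation; the parameter names and values (tau_m, v_th, the 2-step refractory period, etc.) are assumptions chosen for demonstration, not taken from any particular chip.

```python
# Minimal Leaky Integrate-and-Fire sketch (illustrative parameters).
def simulate_lif(input_current, dt=1.0, tau_m=20.0, v_rest=0.0,
                 v_th=1.0, v_reset=0.0, refractory=2.0):
    """Return the membrane-potential trace and spike times for an input current trace."""
    v = v_rest
    refrac_left = 0.0
    spikes, trace = [], []
    for step, i_in in enumerate(input_current):
        if refrac_left > 0:
            refrac_left -= dt          # neuron is silent during the refractory period
            v = v_reset
        else:
            # Leaky integration: dV/dt = (v_rest - V) / tau_m + I
            v += dt * ((v_rest - v) / tau_m + i_in)
            if v >= v_th:              # threshold crossing: fire and reset
                spikes.append(step * dt)
                v = v_reset
                refrac_left = refractory
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif([0.2] * 100)
```

With a constant input current the neuron fires periodically: integrate, spike, reset, wait out the refractory period, repeat.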
STDP Learning Curve
Weight change (Δw) vs. spike timing difference (Δt)
How STDP Works
Weight Heatmap
Spike-Timing-Dependent Plasticity
STDP is the biological learning rule that shaped your brain. It's a form of Hebbian learning refined by precise spike timing. Neuromorphic chips like Loihi implement STDP directly in hardware, enabling on-chip, unsupervised learning — no backpropagation required.
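The classic STDP rule can be written as a pair of exponential windows over the spike-timing difference. The amplitudes and time constants below are illustrative textbook-style values, not Loihi's actual on-chip parameters.

```python
import math

# Exponential STDP window sketch: weight change as a function of
# delta_t = t_post - t_pre. Constants are illustrative, not hardware values.
def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    if delta_t > 0:    # pre fires before post: causal pairing, potentiation (LTP)
        return a_plus * math.exp(-delta_t / tau_plus)
    elif delta_t < 0:  # post fires before pre: anti-causal pairing, depression (LTD)
        return -a_minus * math.exp(delta_t / tau_minus)
    return 0.0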
Power Consumption
Spike Event Visualization
Dense (traditional) vs. sparse (neuromorphic) firing patterns
Event-Driven Efficiency
In a conventional GPU, the clock drives switching activity across the chip every cycle, even when little useful work is being done. Neuromorphic chips are event-driven: circuits stay idle until an input spike arrives. This is why the brain runs on about 20 watts while training GPT-scale models requires megawatts. Sparsity = efficiency.
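The efficiency argument can be made concrete by counting operations. The sketch below (function names and shapes are invented for illustration) contrasts a dense pass, where every synapse does work each tick, with an event-driven pass, where only synapses of neurons that actually spiked do work.

```python
# Dense vs. event-driven synaptic accumulation (illustrative op counting).
def dense_pass(weights, spikes):
    ops = 0
    out = [0.0] * len(weights[0])
    for pre, row in enumerate(weights):
        for post, w in enumerate(row):
            out[post] += w * spikes[pre]   # multiplies even when spikes[pre] == 0
            ops += 1
    return out, ops

def event_driven_pass(weights, spikes):
    ops = 0
    out = [0.0] * len(weights[0])
    for pre, s in enumerate(spikes):
        if s:                              # silent neurons cost nothing
            for post, w in enumerate(weights[pre]):
                out[post] += w
                ops += 1
    return out, ops
```

Both passes compute the same result, but with, say, 1 active neuron out of 3, the event-driven pass touches a third of the synapses. At biological sparsity levels (a few percent of neurons active), the savings dominate the energy budget.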
128 neurocores
~1M
Intel 4
On-chip STDP
Next-gen neuromorphic chip with programmable learning rules. Supports on-chip learning without external training. Used for odometry, gesture recognition, and robotic control.
1 Million
256 Million
~70 mW
Digital CMOS
Ultra-low-power AI inference at scale. Designed for always-on image and speech recognition. Powers IBM's Neurosynaptic Core architecture.
48
7,680 ARM
~3.5M
Real-time
Massively parallel brain emulator. Used in the Human Brain Project for large-scale neural simulation. Custom spiking neural network accelerator.
1000× biology
~200k
Analog
8-inch
"Physical" neuromorphic — uses analog circuits to emulate neuron dynamics directly in silicon. Runs 1000× faster than real biology for accelerated learning.
ReRAM
Native
Very High
Emerging
Uses resistive RAM as tunable synaptic weights. Naturally implements in-memory compute. Enables ultra-dense neuromorphic systems. Still in research phase.
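The in-memory compute idea can be sketched as an idealized crossbar: input voltages applied to rows multiply stored conductances (Ohm's law), and the resulting currents sum on each column (Kirchhoff's law), so the matrix-vector product happens where the weights are stored. This assumes perfectly linear devices, which real ReRAM cells are not.

```python
# Idealized ReRAM crossbar sketch: I_j = sum_i V_i * G[i][j].
# Linear, noise-free devices are assumed for illustration.
def crossbar_mvm(conductances, voltages):
    """Analog-style matrix-vector multiply performed 'where the data lives'."""
    cols = len(conductances[0])
    currents = [0.0] * cols
    for v_i, row in zip(voltages, conductances):
        for j, g in enumerate(row):
            currents[j] += v_i * g   # current sums on the column wire: no data shuttle
    return currents
```

One applied voltage vector yields the whole weighted sum in a single physical step, which is the density and energy advantage these research systems are chasing.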
The Ecosystem
From Intel's programmable Loihi 2 to analog chips running at 1000× biology's speed, the neuromorphic ecosystem spans inference-focused TrueNorth, learning-capable Loihi, and brain-emulating SpiNNaker. Each makes different trade-offs; there is no single winner yet.
Von Neumann
Separate CPU and memory. Data must shuttle between the two — the von Neumann bottleneck.
Neuromorphic
Computation where data lives. No shuttle — in-memory processing eliminates the bottleneck.
When Von Neumann Wins
- Precision tasks (floating-point math)
- Large-scale batch processing
- Well-established software ecosystem
- General-purpose computing
When Neuromorphic Wins
- Always-on edge AI
- Ultra-low power sensing
- Real-time temporal processing
- On-chip learning
Beyond the Bottleneck
Neuromorphic computing reimagines the computer's architecture itself. Instead of separating processing and memory, it merges them — like the brain does. This eliminates the energy-hungry data shuttle that limits traditional chips and enables new computing paradigms.
Interactive Network Builder
Neuron Properties
Build Your Own Network
Design neural circuits from scratch. Try the Feedforward preset for a classic pattern discriminator, Recurrent for memory/oscillations, or WTA (winner-take-all) for feature selection. Watch how topology shapes dynamics.
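The winner-take-all idea behind the WTA preset can be sketched in a few lines: each neuron integrates its input, and when one crosses threshold it fires and inhibits the others. This is a toy illustration with invented parameters, not the page's actual preset implementation.

```python
# Toy winner-take-all step: lateral inhibition lets only the most-driven
# neuron fire. All constants are illustrative.
def wta_step(inputs, v, v_th=1.0, inhibition=0.5, leak=0.9):
    """One timestep: leaky integration, then the strongest supra-threshold
    neuron fires, resets itself, and suppresses its competitors."""
    v = [leak * vi + ii for vi, ii in zip(v, inputs)]
    winner = None
    for idx, vi in enumerate(v):
        if vi >= v_th and (winner is None or vi > v[winner]):
            winner = idx
    if winner is not None:
        v = [0.0 if i == winner else max(0.0, vi - inhibition)
             for i, vi in enumerate(v)]
    return v, winner
```

Driving three neurons with a biased input, the most strongly driven one reaches threshold first and wins, which is the feature-selection behavior the preset demonstrates.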