
MemDANCE
Dendritic Processing for Adaptive, Energy-Efficient Sequence Learning
A research project developing a dendritic neuron model and adaptive algorithms, built on neuromorphic computing principles, for processing sequences with irregular timing.
Problem
Most real-world information unfolds as temporal sequences — streams of events whose timing is irregular, noisy, and probabilistic.
Traditional artificial intelligence handles such temporal processing through algorithms designed for von Neumann architectures, where computation and memory are separate and all operations are synchronized by a global clock.
While flexible, this design comes at a steep cost: processors consume power continuously, even when idle, and operate in quantized time steps that poorly match the asynchronous nature of real-world signals.
In contrast, the brain operates through event-driven, massively parallel, and energy-efficient processes. Biological neurons do not fire continuously; instead, they emit spikes only when necessary, encoding information in both timing and frequency.
Recent advances in neuromorphic computing — systems that mimic neural architectures — have revived interest in mixed analog–digital designs, particularly spiking neural networks (SNNs) that rely on biologically inspired neuron models such as the Leaky Integrate-and-Fire (LIF) model and Spike-Timing-Dependent Plasticity (STDP) learning rules.
However, most SNNs still struggle with variable temporal structure, where inter-event intervals fluctuate widely. Biological neurons cope with such variability through dendritic computation, specifically NMDA plateau potentials in dendritic compartments, which can last 50–200 ms and bridge the gaps between successive inputs.
The idea behind MemDANCE (Memristive Dendritic Adaptive Neural Computing Engine) is to harness this dendritic mechanism to develop a timing-adaptive algorithm capable of learning and processing natural, noisy temporal sequences in real time — efficiently and robustly.
Approach
At the core of MemDANCE is a computational dendrite model that can hold and adaptively tune its internal temporal state to the structure of incoming spike events.
Algorithmic Model
I developed a new neuron model called Leaky Integrate-and-Hold (LIH) — a variant of the traditional LIF model.
Instead of resetting after reaching the firing threshold, the LIH neuron holds its potential for a variable plateau duration before passively decaying. This holding period represents an analog of biological NMDA plateau potentials and forms the basis for processing temporal dependencies.
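As a rough illustration of this dynamic, the sketch below implements a discrete-time LIH unit in Julia. The forward-Euler update, the parameter names (`tau_m`, `v_th`, `plateau`), and their default values are assumptions made for exposition, not the project's actual implementation.

```julia
# Minimal discrete-time sketch of a Leaky Integrate-and-Hold unit.
# The Euler update, parameter names, and default values are illustrative
# assumptions, not the project's actual implementation.

Base.@kwdef mutable struct LIHNeuron
    v::Float64 = 0.0           # membrane potential
    tau_m::Float64 = 20.0      # leak time constant (ms)
    v_th::Float64 = 1.0        # firing threshold
    plateau::Float64 = 100.0   # plateau duration (ms); the adaptive state
    hold_left::Float64 = 0.0   # remaining hold time (ms); > 0 while on a plateau
end

"""Advance the neuron by `dt` ms given synaptic input current `I`.
Returns `true` while the plateau (UP) state is active."""
function step!(n::LIHNeuron, I::Float64, dt::Float64)
    if n.hold_left > 0.0
        n.hold_left -= dt              # hold instead of resetting after threshold
        return true
    end
    n.v += dt * (-n.v / n.tau_m + I)   # leaky integration (forward Euler)
    if n.v >= n.v_th
        n.hold_left = n.plateau        # enter the NMDA-like plateau phase
        n.v = n.v_th                   # clamp at threshold; decays passively afterwards
        return true
    end
    return false
end
```

The only structural difference from a standard LIF unit is the `hold_left` timer, which replaces the post-spike reset with a plateau of adjustable length.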
To enable online learning of temporal patterns, I designed an adaptive mechanism inspired by the Passive–Aggressive (PA) binary classification algorithm.
In this formulation:
- Each dendritic compartment performs a timing-based binary classification, determining whether the next input spike falls within its predictive window (the current plateau duration).
- The model adjusts the plateau duration using a smooth surrogate loss (a differentiable alternative to the hinge loss used in PA algorithms) to avoid abrupt or unstable adaptations; a minimal update rule in this spirit is sketched after the list.
- Over time, the neuron learns to align its dendritic plateau timing with the statistical structure of input sequences — effectively learning the temporal code of its environment.
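A minimal sketch of such an update is shown below, assuming the neuron observes the interval `dt` to the next input spike together with a label `y` (+1 if that spike should fall inside the predictive window, -1 otherwise). The softplus surrogate, the aggressiveness bound `eta`, and the function names are illustrative choices, not the project's published algorithm.

```julia
# Hedged sketch of a PA-style update of the plateau duration, framed as an
# online binary classification of the next inter-spike interval. The softplus
# surrogate and the aggressiveness bound `eta` are illustrative choices.

softplus(x) = log1p(exp(-abs(x))) + max(x, 0.0)    # numerically stable log(1 + e^x)
smooth_loss(margin) = softplus(-margin)            # differentiable hinge substitute

"""
Update `plateau` (ms) after observing an interval `dt` (ms) to the next input
spike. `y = +1` if that spike should fall inside the predictive window,
`y = -1` if it should fall outside; `eta` bounds the update aggressiveness.
"""
function adapt_plateau(plateau::Float64, dt::Float64, y::Int; eta::Float64 = 0.1)
    margin = y * (plateau - dt)                    # > 0 when classified correctly
    loss   = smooth_loss(margin)
    grad   = -y / (1.0 + exp(margin))              # d loss / d plateau
    # Passive when the margin is comfortable (loss near 0), aggressive otherwise:
    # the step size scales with the loss, as in the PA family.
    return max(plateau - eta * loss * grad, 0.0)   # durations stay non-negative
end
```

Scaling the step by the loss keeps the update negligible for intervals that are already classified with a comfortable margin, while large misclassifications pull the plateau a substantial fraction of the way toward the observed spike time.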
Simulation & Implementation
The model was implemented in Julia, chosen for its balance between computational efficiency and readability in numerical modeling.
Through extensive simulation, the system demonstrated its ability to:
- Capture temporal dependencies across irregular event intervals,
- Maintain robust learning under noisy timing jitter, and
- Converge toward plateau durations that match the natural inter-event statistics (for Poisson-like event streams, exponentially distributed intervals); a toy driver illustrating the adaptation is sketched below.
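As a usage illustration, the toy driver below feeds the `adapt_plateau` sketch two synthetic populations of jittered intervals, one that should be covered by the plateau and one that should not. All interval statistics and hyperparameters are invented for this example and do not reproduce the project's experiments.

```julia
# Toy driver for the sketches above. Two synthetic interval populations stand
# in for "in-sequence" and "off-sequence" events; all constants are illustrative
# and not the project's actual benchmark.

using Random

function run_demo(; n_events::Int = 5_000, seed::Int = 1)
    rng = Random.MersenneTwister(seed)
    plateau = 20.0                                      # deliberately poor initial guess (ms)
    for _ in 1:n_events
        if rand(rng) < 0.5
            dt, y = abs(60.0 + 10.0 * randn(rng)), 1    # should land inside the window
        else
            dt, y = abs(160.0 + 10.0 * randn(rng)), -1  # should land outside
        end
        plateau = adapt_plateau(plateau, dt, y; eta = 0.5)
    end
    return plateau
end

println("learned plateau ≈ ", round(run_demo(); digits = 1), " ms")
```

Under this setup the learned plateau should settle between the two interval clusters, which is the qualitative convergence behaviour the list above describes.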
Hardware Translation
To move beyond simulation, I initiated a collaboration with Forschungszentrum Jülich (PGI-14) under the DFG Priority Program SPP 2262: Memristec.
Together, we designed the world’s first Dendritic Processing Unit (DPU) — a hybrid CMOS–memristor circuit that emulates the adaptive temporal behavior of the LIH model in hardware.
This cross-disciplinary effort bridges computational neuroscience, machine learning, and neuromorphic circuit design, targeting real-time, low-power temporal processing architectures.
Results
The MemDANCE framework establishes a computational link between dendritic physiology and machine learning, demonstrating how dendritic plateau dynamics can serve as a temporal memory substrate.
Key findings include:
- The LIH neuron successfully reproduces biologically plausible temporal integration while remaining computationally efficient.
- The adaptive timing mechanism allows the system to synchronize with variable event distributions without explicit time discretization.
- When mapped to memristive hardware, the DPU architecture promises energy-efficient online sequence learning, operating asynchronously with virtually no idle power consumption.
- Theoretical analysis shows potential scaling advantages in event-driven workloads such as sensory preprocessing, robotic perception, and neuromorphic signal encoding.
These results highlight MemDANCE as a step toward brain-inspired dendritic computing — enabling real-time temporal inference beyond conventional clocked systems.
My Role
- Conceptualized the core research idea linking dendritic plateau potentials with probabilistic sequence learning
- Developed the Leaky Integrate-and-Hold neuron model and adaptive timing algorithm
- Programmed and simulated the model in Julia, validating temporal learning dynamics
- Coordinated collaboration with Forschungszentrum Jülich for CMOS–memristor hardware translation
- Authored and secured a successful DFG SPP 2262 Memristec grant (€800,000)
- Managed project budget and hired personnel at the Osnabrück research site
- Led interdisciplinary communication, project milestones, and reporting
- Performed data analysis and theoretical validation
- Presented findings at academic conferences and in project workshops
- Co-authored scientific publications and documentation of the DPU design
Publications & References
- Nezami, F. N., König, P., & Schiefer, S. (2023). Dendritic Processing Units: Memristive Architectures for Temporal Learning in Spiking Neural Networks. In preparation, SPP 2262 Memristec Technical Series.
- Nezami, F. N., Schiefer, S., & König, P. (2022). Leaky Integrate-and-Hold: A Dendritic Neuron Model for Adaptive Temporal Processing. Preprint, Institute of Cognitive Science, University of Osnabrück.
- DFG SPP 2262 Memristec Project: MemDANCE – Memristive Dendritic Adaptive Neural Computing Engine, 2021–2025.
Keywords: Dendritic computing, spiking neural networks, memristor, neuromorphic architecture, NMDA plateau potentials, sequence learning, online adaptation, event-driven processing, leaky integrate-and-hold model, temporal inference.