Neuromorphic Computing and Engineering - IOPscience
Neuromorphic Computing and Engineering is a multidisciplinary, open access journal publishing cutting-edge research on the design, development and application of artificial neural networks and systems from both a hardware and computational perspective. For detailed information about subject coverage see the About the journal section.
Journal metrics:
Median submission to first decision before peer review: 7 days
Median submission to first decision after peer review: 70 days
Impact factor: 6.1
CiteScore: 9.2
Open access
2022 roadmap on neuromorphic computing and engineering
Dennis V Christensen et al 2022 Neuromorph. Comput. Eng. 022501
Modern computation based on von Neumann architecture is now a mature cutting-edge science. In the von Neumann architecture, processing and memory units are implemented as separate blocks interchanging data intensively and continuously. This data transfer is responsible for a large part of the power consumption. The next generation of computer technology is expected to solve problems at the exascale with 10¹⁸ calculations each second. Even though these future computers will be incredibly powerful, if they are based on von Neumann type architectures, they will consume between 20 and 30 megawatts of power and will not have intrinsic physically built-in capabilities to learn or deal with complex data as our brain does. These needs can be addressed by neuromorphic computing systems which are inspired by the biological concepts of the human brain. This new generation of computers has the potential to be used for the storage and processing of large amounts of digital information with much lower power consumption than conventional processors. Among their potential future applications, an important niche is moving the control from data centers to edge devices. The aim of this roadmap is to present a snapshot of the present state of neuromorphic technology and provide an opinion on the challenges and opportunities that the future holds in the major areas of neuromorphic technology, namely materials, devices, neuromorphic circuits, neuromorphic algorithms, applications, and ethics. The roadmap is a collection of perspectives where leading researchers in the neuromorphic community provide their own view about the current state and the future challenges for each research area. We hope that this roadmap will be a useful resource by providing a concise yet comprehensive introduction to readers outside this field, for those who are just entering the field, as well as providing future perspectives for those who are well established in the neuromorphic computing community.
Open access
Hands-on reservoir computing: a tutorial for practical implementation
Matteo Cucchi et al 2022 Neuromorph. Comput. Eng. 032002
This manuscript serves a specific purpose: to give readers from fields such as material science, chemistry, or electronics an overview of implementing a reservoir computing (RC) experiment with their own material system. Introductory literature on the topic is rare, and the vast majority of reviews puts forth the basics of RC while taking for granted concepts that may be nontrivial to someone unfamiliar with the machine learning field (see for example Lukoševičius 2012 Neural Networks: Tricks of the Trade (Berlin: Springer) pp 659–686). This is unfortunate considering the large pool of material systems that show nonlinear behavior and short-term memory that may be harnessed to design novel computational paradigms. RC offers a framework for computing with material systems that circumvents typical problems that arise when implementing traditional, fully fledged feedforward neural networks on hardware, such as the need for minimal device-to-device variability and for control over each unit/neuron and connection. Instead, one can use a random, untrained reservoir where only the output layer is optimized, for example with linear regression. In the following, we will highlight the potential of RC for hardware-based neural networks, the advantages over more traditional approaches, and the obstacles to overcome for their implementation. Preparing a high-dimensional nonlinear system as a well-performing reservoir for a specific task is not as easy as it seems at first sight. We hope this tutorial will lower the barrier for scientists attempting to exploit their nonlinear systems for computational tasks typically carried out in the fields of machine learning and artificial intelligence. A simulation tool to accompany this paper is available online.
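The core idea of the abstract, a random untrained reservoir with only a linear readout optimized, can be illustrated with a minimal echo-state-style sketch. This is a generic software illustration, not the simulation tool accompanying the paper; the reservoir size, spectral radius, and toy sine-prediction task are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: predict x(t+1) from x(t) for a sine wave.
T = 500
u = np.sin(0.1 * np.arange(T + 1))
inputs, targets = u[:-1], u[1:]

# Random, untrained reservoir: only the readout W_out is learned.
N = 100                                          # reservoir size (arbitrary)
W_in = rng.uniform(-0.5, 0.5, size=N)
W = rng.normal(0.0, 1.0, size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

# Drive the reservoir and collect its state trajectory.
states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * inputs[t])        # fixed nonlinear update
    states[t] = x

# Train only the output layer with ridge regression.
lam = 1e-6
W_out = np.linalg.solve(states.T @ states + lam * np.eye(N),
                        states.T @ targets)

pred = states @ W_out
mse = np.mean((pred[100:] - targets[100:]) ** 2)  # skip initial washout
```

In a physical RC experiment the `states` matrix would come from measurements of the material system rather than from the simulated `tanh` update; the readout training step is the same.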
Open access
Dynamical systems foundations for neuromorphic intelligence
Marcel van Gerven 2026 Neuromorph. Comput. Eng. 014022
Neuromorphic computing seeks to replicate the remarkable efficiency, flexibility, and adaptability of the human brain in artificial systems. Unlike conventional digital approaches, which suffer from the von Neumann bottleneck and depend on massive computational and energy resources, neuromorphic systems exploit brain-inspired principles of computation to achieve orders of magnitude greater energy efficiency. By drawing on insights from a wide range of disciplines—including artificial intelligence (AI), physics, chemistry, biology, neuroscience, cognitive science and materials science—neuromorphic computing promises to deliver intelligent systems that are sustainable, transparent, and widely accessible. A central challenge, however, is to identify a unifying theoretical framework capable of bridging these diverse disciplines. We argue that stochastic dynamical systems representing equations of motion under random perturbations provide such a foundation. Rooted in differential calculus, dynamical systems theory offers a principled language for modeling inference, learning, and control in both natural and artificial substrates. Within this framework, process noise can be harnessed as a resource for learning, while differential genetic programming enables the discovery of dynamical systems that implement adaptive behaviors through stochastic adaptation across generations. Embracing this perspective paves the way toward emergent neuromorphic intelligence, where intelligent behavior arises from the dynamics of physical substrates, advancing both the science and sustainability of AI.
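"Equations of motion under random perturbations" can be made concrete with the simplest stochastic dynamical system, an Ornstein–Uhlenbeck process integrated with the Euler–Maruyama scheme. This is a textbook illustration, not a model from the paper; parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ornstein-Uhlenbeck process: dX = -theta*X dt + sigma dW,
# a linear equation of motion driven by random (Wiener) perturbations.
theta, sigma = 1.0, 0.5          # illustrative drift and noise strengths
dt, n_steps, n_paths = 1e-3, 20000, 200

X = np.full(n_paths, 2.0)        # start all paths away from equilibrium
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Wiener increments
    X = X + (-theta * X) * dt + sigma * dW           # Euler-Maruyama step

# The stationary distribution is N(0, sigma^2 / (2*theta)), so the
# empirical variance of the relaxed ensemble should approach it.
stat_var = sigma**2 / (2 * theta)
emp_var = X.var()
```

The same integration loop generalizes to the nonlinear, high-dimensional systems the paper discusses by replacing the linear drift `-theta * X` with an arbitrary vector field.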
Open access
Constructive community race: full-density spiking neural network model drives neuromorphic computing
Johanna Senk et al 2026 Neuromorph. Comput. Eng. 012001
The local circuitry of the mammalian brain is a focus of the search for generic computational principles because it is largely conserved across species and modalities. In 2014 a model was proposed representing all neurons and synapses of the stereotypical cortical microcircuit below 1 mm² of brain surface. The model reproduces fundamental features of brain activity, but its impact remained limited because of its computational demands. For theory and simulation, however, the model was a breakthrough because it is full-scale, and therefore free of the uncertainties of downscaling, and larger models are less densely connected. This sparked a race in the neuromorphic computing community, and the model became a de facto standard benchmark. Within a few years real-time performance was reached and surpassed at significantly reduced energy consumption. We review how the computational challenge was tackled by different simulation technologies and derive guidelines for the next generation of benchmarks and other domains of science.
Open access
DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor
Ole Richter et al 2024 Neuromorph. Comput. Eng. 014003
With the remarkable progress that technology has made, the need for processing data near the sensors at the edge has increased dramatically. The electronic systems used in these applications must process data continuously, in real-time, and extract relevant information using the smallest possible energy budgets. A promising approach for implementing always-on processing of sensory signals that supports on-demand, sparse, and edge-computing is to take inspiration from the biological nervous system. Following this approach, we present a brain-inspired platform for prototyping real-time event-based spiking neural networks. The proposed system supports the direct emulation of dynamic and realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike frequency adaptation, conductance-based dendritic compartments and spike transmission delays. The analog circuits that implement such primitives are paired with low-latency asynchronous digital circuits for routing and mapping events. This asynchronous infrastructure enables the definition of different network architectures, and provides direct event-based interfaces to convert and encode data from event-based and continuous-signal sensors. Here we describe the overall system architecture, characterize the mixed-signal analog-digital circuits that emulate neural dynamics, demonstrate their features with experimental measurements, and present a low- and high-level software ecosystem that can be used for configuring the system. The flexibility to emulate different biologically plausible neural networks, and the chip's ability to monitor both population and single-neuron signals in real-time, make it possible to develop and validate complex models of neural processing for both basic research and edge-computing applications.
Open access
Neuromorphic touch for robotics—a review
Tianyi Liu et al 2025 Neuromorph. Comput. Eng. 032001
The field of neuromorphic tactile sensing aims to emulate the biological mechanisms of touch to enable artificial systems with efficiency, adaptability, and precision akin to natural tactile perception. Inspired by the spike-based data encoding of biological mechanoreceptors and neural processing, neuromorphic tactile sensors (NTSs) leverage event-driven architectures to handle sensory information through sparse, low-power, and efficient formats. This review explores the state of neuromorphic tactile sensing, emphasizing its biological foundations, sensor technologies and encoding techniques within the field of robotics. By bridging biological touch mechanisms with neuromorphic engineering, NTSs have the potential to enhance robotic manipulation, prosthetics, and human–machine interfaces. Challenges and future directions include developing novel materials for sensors, improving the performance of spiking neural networks and lowering the barrier to entry into neuromorphic touch research through open-sourcing code and datasets. This review underscores the potential of neuromorphic tactile sensing in creating highly efficient and versatile tactile systems for robotics and beyond.
Open access
More than MACs: exploring the role of neuromorphic engineering in the age of LLMs
Wilkie Olin-Ammentorp 2026 Neuromorph. Comput. Eng. 012002
The introduction of large language models has significantly expanded global demand for computing; addressing this growing demand requires novel approaches that introduce new capabilities while addressing extant needs. Although inspiration from biological systems served as the foundation on which modern artificial intelligence (AI) was developed, many modern advances have been made without clear parallels to biological computing. As a result, the ability of techniques inspired by ‘natural intelligence’ (NI) to inflect modern AI systems may be questioned. However, by analyzing remaining disparities between AI and NI, we argue that further biological inspiration can contribute towards expanding the capabilities of artificial systems, enabling them to succeed in real-world environments and adapt to niche applications. To elucidate which NI mechanisms can contribute toward this goal, we review and compare elements of biological and artificial computing systems, emphasizing areas of NI that have not yet been effectively captured by AI. We then suggest areas of opportunity for NI-inspired mechanisms that can inflect AI hardware and software.
Open access
Hyperdimensional decoding of spiking neural networks
Cedrick Kinavuidi et al 2026 Neuromorph. Comput. Eng. 014021
This work presents a novel spiking neural network (SNN) decoding method, combining SNNs with hyperdimensional computing (HDC). This decoding method is designed to achieve high accuracy, high noise robustness, low inference latency and low energy consumption. Compared to analogous architectures decoded with existing approaches, the SNN-HDC model attains generally better classification accuracy, lower inference latency, lower spike count and lower estimated energy consumption on multiple test cases from the literature. The SNN-HDC achieved spike count reductions of 1.74 × to 3.36 × on the DvsGesture dataset and 1.36 × to 2.70 × on the SL-Animals-DVS dataset. The SNN-HDC achieved estimated energy consumption reductions of 1.24 × to 3.67 × on the DvsGesture dataset and 1.38 × to 2.27 × on the SL-Animals-DVS dataset. The proposed decoding method enables detection of classes unseen during training. On the DvsGesture dataset, the SNN-HDC model can detect 100% of samples from an unseen/untrained class. The findings suggest the proposed decoding method is a compelling alternative to both rate and latency decoding.
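The noise robustness that hyperdimensional computing brings to such a decoder can be sketched with its basic ingredients: random bipolar class hypervectors, which are nearly orthogonal in high dimensions, and nearest-prototype decoding by similarity. This is a generic HDC illustration, not the paper's SNN-HDC model; the dimensionality, class count, and 30% corruption level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

D, n_classes = 10000, 5          # hypervector dimensionality, toy class count

# Random bipolar class hypervectors; with D large, any two are
# nearly orthogonal, so classes are well separated.
prototypes = rng.choice([-1, 1], size=(n_classes, D))

def decode(query, prototypes):
    # Nearest prototype by dot product (equivalent to cosine similarity
    # here, since all bipolar vectors share the same norm).
    return int(np.argmax(prototypes @ query))

# Corrupt one class vector by flipping 30% of its components,
# emulating a noisy encoded readout.
true_class = 3
noisy = prototypes[true_class].copy()
flip = rng.choice(D, size=int(0.3 * D), replace=False)
noisy[flip] *= -1

predicted = decode(noisy, prototypes)
```

Even with 30% of components flipped, the corrupted vector remains far closer to its own prototype than to any other, which is the property the abstract's noise-robustness claims rest on.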
Open access
HfO2-based resistive switching memory devices for neuromorphic computing
S Brivio et al 2022 Neuromorph. Comput. Eng. 042001
HfO2-based resistive switching memory (RRAM) combines several outstanding properties, such as high scalability, fast switching speed, low power, compatibility with complementary metal-oxide-semiconductor technology, with possible high-density or three-dimensional integration. Therefore, today, HfO2 RRAMs have attracted a strong interest for applications in neuromorphic engineering, in particular for the development of artificial synapses in neural networks. This review provides an overview of the structure, the properties and the applications of HfO2-based RRAM in neuromorphic computing. Both widely investigated applications of nonvolatile devices and pioneering works about volatile devices are reviewed. The RRAM device is first introduced, describing the switching mechanisms associated with the filamentary path of HfO2 defects such as oxygen vacancies. The RRAM programming algorithms are described for high-precision multilevel operation, analog weight update in synaptic applications and for exploiting the resistance dynamics of volatile devices. Finally, the neuromorphic applications are presented, illustrating both artificial neural networks with supervised training and with multilevel, binary or stochastic weights. Spiking neural networks are then presented for applications ranging from unsupervised training to spatio-temporal recognition. From this overview, HfO2-based RRAM appears as a mature technology for a broad range of neuromorphic computing systems.
Open access
2D materials-based crossbar array for neuromorphic computing hardware
Hyeon Ji Lee et al 2024 Neuromorph. Comput. Eng. 032003
The growing demand for artificial intelligence has posed challenges for traditional computing architectures. As a result, neuromorphic computing systems have emerged as possible candidates for next-generation computing systems. Two-dimensional (2D) materials-based neuromorphic devices that emulate biological synapses and neurons play a key role in neuromorphic computing hardware due to their unique properties such as high strength, thermal conductivity, and flexibility. Although several studies have demonstrated simulations of individual devices, the experimental implementation of large-scale crossbar arrays remains an open challenge. In this review, we explore the working principles and mechanisms of memristive devices. Then, we overview the development of neuromorphic devices based on 2D materials including transition metal dichalcogenides, graphene, hexagonal boron nitride, and layered halide perovskites. We also highlight the requirements and recent progress for building crossbar arrays by utilizing the advantageous properties of 2D materials. Lastly, we address the challenges that hardware implementation of neuromorphic computing systems currently faces and propose a path towards system-level applications of neuromorphic computing.
Open access
Spiking neural networks for continuous control via end-to-end model-based learning
Justus Huebotter et al 2026 Neuromorph. Comput. Eng. 024004
Despite recent progress in training spiking neural networks (SNNs) for classification, their application to continuous motor control remains limited. Here, we demonstrate that fully spiking architectures can be trained end-to-end to control robotic arms with multiple degrees of freedom in continuous environments. Our predictive-control framework combines leaky integrate-and-fire dynamics with surrogate gradients, jointly optimizing a forward model for dynamics prediction and a policy network for goal-directed action. We evaluate this approach on both a planar 2D reaching task and a simulated 6-DOF Franka Emika Panda robot with torque control. In direct comparison to non-spiking recurrent baselines trained under the same predictive-control pipeline, the proposed SNN achieves comparable task performance while using substantially fewer parameters. An extensive ablation study highlights the role of initialization, learnable time constants, adaptive thresholds, and latent-space compression as key contributors to stable training and effective control. Together, these findings establish SNNs as a viable and scalable substrate for high-dimensional continuous control, while emphasizing the importance of principled architectural and training design.
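The leaky integrate-and-fire dynamics underlying such networks can be sketched in a few lines of discrete-time simulation. This is a generic LIF update with illustrative parameters, not the paper's architecture, which additionally uses surrogate gradients, learnable time constants, and adaptive thresholds.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: the membrane potential leaks
# toward rest, integrates the input current, and emits a spike with a
# hard reset whenever it crosses threshold.
def lif(current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    v, spikes, trace = v_reset, [], []
    for i in current:
        v += dt / tau * (-v + i)   # leaky integration of the input
        if v >= v_th:              # threshold crossing
            spikes.append(1)
            v = v_reset            # hard reset after the spike
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# A constant supra-threshold drive produces regular spiking.
spikes, trace = lif(np.full(1000, 1.5))
rate = spikes.sum()                # spike count over a 1 s window
```

Training such a neuron end-to-end requires replacing the non-differentiable threshold with a smooth surrogate during the backward pass, which is the surrogate-gradient technique the abstract refers to.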
Open access
Bruno: backpropagation running undersampled for novel device optimisation
Luca Fehlings et al 2026 Neuromorph. Comput. Eng. 024003
Recent efforts to improve the efficiency of neuromorphic and machine learning systems have centred on developing specialised hardware for neural networks. These systems typically feature architectures that go beyond the von Neumann model employed in general-purpose hardware such as GPUs, offering potential efficiency and performance gains. However, neural networks developed for specialised hardware must consider its specific characteristics. This requires novel training algorithms and accurate hardware models, since such hardware cannot be abstracted as a general-purpose computing platform. In this work, we present a bottom-up approach to training neural networks for hardware-based spiking neurons and synapses, built using ferroelectric capacitors and resistive random-access memories (RRAMs), respectively. Unlike the common approach of designing hardware to fit abstract neuron or synapse models, we start with compact models of the physical devices to model the computational primitives. Based on these models, we have developed a training algorithm, backpropagation running undersampled for novel device optimisation (BRUNO), that can reliably train the networks even under hardware limitations such as stochasticity or low bit precision. We analyse and compare BRUNO with backpropagation through time, testing it on different spatio-temporal datasets. The first is a music prediction dataset, where a network composed of ferroelectric leaky integrate-and-fire (FeLIF) neurons is used to predict at each time step the next musical note that should be played. The second dataset consists of the classification of Braille letters using a network composed of quantised RRAM synapses and FeLIF neurons. The performance of this network is then compared with that of networks composed of LIF neurons. Experimental results show the potential advantages of using BRUNO by reducing the time and memory required to detect spatio-temporal patterns with quantised synapses.
Open access
The more the merrier: running multiple neuromorphic components on-chip for robotic control
Evan Eames et al 2026 Neuromorph. Comput. Eng. 024001
It has long been recognized that neuromorphic hardware offers benefits for robotics such as low energy consumption, low latency, and unique methods of learning. In aiming for more complex tasks, especially those incorporating multimodal data, one hurdle that continues to prevent their realization is the inability to orchestrate multiple networks on neuromorphic hardware without resorting to off-chip process management logic. To address this, we show a first example of a pipeline for vision-based robot control in which numerous complex networks can be run entirely on hardware via the use of a spiking neural state machine for process orchestration. The pipeline is validated on the Intel Loihi 2 research chip. We show that all components can run concurrently on-chip in the milliwatt regime at latencies competitive with the state-of-the-art. An equivalent network on simulated hardware is shown to accomplish robotic arm plug insertion in simulation, and the core elements of the pipeline are additionally tested on a real robotic arm.
Open access
Line-based event preprocessing: towards low-energy neuromorphic computer vision
Amélie Gruel et al 2026 Neuromorph. Comput. Eng. 024002
Neuromorphic vision has made significant progress in recent years, thanks to the natural match between spiking neural networks and event data in terms of biological inspiration, energy savings, latency and memory use for dynamic visual data processing. However, optimising its energy requirements remains a challenge within the community, especially for embedded applications. One solution may reside in preprocessing events to reduce data quantities, thus lowering the energy cost of neuromorphic hardware, which is proportional to the number of synaptic operations. To this end, we extend an end-to-end neuromorphic line detection mechanism to introduce line-based event data preprocessing. Our results on three benchmark event-based datasets demonstrate that preprocessing leads to an advantageous trade-off between energy consumption and classification performance. Depending on the line-based preprocessing strategy and the complexity of the classification task, we show that one can maintain or increase the classification accuracy while significantly reducing the theoretical energy consumption. Our approach systematically leads to a significant improvement of neuromorphic classification efficiency, thus laying the groundwork for a more frugal neuromorphic computer vision thanks to event preprocessing.
Open access
2D-materials for analog in-memory computing: a device-centric review of advantages and limitations
Jimin Shim et al 2026 Neuromorph. Comput. Eng. 012003
This review examines the practical advantages of two-dimensional materials for energy-efficient in-memory computing by assembling a curated, experiment-only dataset covering 32 material systems across diverse device structures, mechanisms, and fabrication routes. Energy analysis was standardized using an averaged pulse-based metric, and key figures of merit—switching energy, on/off ratio, endurance, retention, and linearity—were compared against structural and mechanistic factors. Two low-energy-consumption design pathways emerge: ultrathin (<10 nm) two-terminal devices exploiting filament formation for sub-µs updates, and three-terminal heterojunction devices leveraging charge trapping to achieve nA-level programming currents over longer timescales. However, dynamic on/off ratios remain modest and are often overstated by DC sweep data. Endurance improves with shorter switching times, and the most intrinsically linear conductance evolution is observed in three-terminal gate-controlled devices employing charge trapping, Schottky barrier modulation, or ion intercalation. No universal optimum exists, as enhancing one performance metric typically compromises another. Based on the comparative analysis presented in this review, three near-term levers emerge as particularly relevant for translating selective material advantages into reproducible system-level gains: standardized pulsed benchmarking, scalable chemical vapor deposition growth with controlled defects and interfaces, and device–circuit co-design.
Van der Waals integration of 2D materials for advanced intelligent computing
Chaehyeon Kwak
et al
2025
Neuromorph. Comput. Eng.
042002
The increasing demand for faster, energy-efficient, and higher bandwidth semiconductor devices has pushed conventional Si-based scaling to its fundamental limits, including mobility degradation, short-channel effects, and high power consumption. To overcome these challenges, three-dimensional integration has emerged as a promising strategy, but wafer-based approaches like through-Si-via face critical limitations in stacking density, mechanical stress, and fabrication complexity. Two-dimensional materials provide a compelling alternative due to their atomically thin structure, superior electrical and mechanical properties, and ability to sustain performance at the atomic scale. Moreover, their van der Waals integration enables heterogeneous, high-density, and efficient assembly of functional layers. This review summarizes recent advances in the preparation and van der Waals integration of 2D materials, including growth, transfer, and direct integration. Their applications in intelligent computing that range from logic to sensor devices and their potential as next-generation electronics are discussed.
Retinomorphic devices beyond silicon for dynamic machine vision
Yuxin Xia
et al
2025
Neuromorph. Comput. Eng.
042001
The human visual system effectively senses optical information through the retina and processes it in the visual cortex. Compared with conventional machine vision, it demonstrates superior energy efficiency, adaptability, and accuracy. Retina-inspired machine vision systems can process information near or within the sensors at the front end, thereby compressing the raw sensory data and optimising the input to the back-end processor for high-level computing tasks. In recent years, amid a surge of interest in artificial intelligence technology, research on retinomorphic devices has achieved breakthroughs in both academic and industrial settings. Herein, we present a comprehensive review of this emerging field, organised around several materials classes: halide perovskites, two-dimensional materials, organic materials, and metal oxides. We discuss the steps taken towards achieving not only static pattern recognition but also dynamic motion tracking, and we identify the key challenges that the community must address to push this technology forward.
Neuromorphic computing for radar and radio systems: a survey
Hamrell et al
Taking inspiration from the brain on how to create energy-efficient, low-latency neuromorphic systems has the potential to open new opportunities for AI across many domains. First, it offers a way to mitigate the excessive digital signal-processing costs found in various technologies. Second, it enables the use of AI and machine-learning algorithms where energy constraints currently make them impossible. Recently, neuromorphic technology has been introduced to radio communication and radar applications. In this work, we highlight the advantages of applying energy-efficient, low-latency, and often lightweight neuromorphic computing to radar and radio signal processing. We perform a comprehensive review of the main current works on neuromorphic technology for radar applications, focusing on frequency-modulated continuous-wave and synthetic aperture radar. Additionally, we cover radio-frequency signal classification for both radar and radio signals. Our ambition is to facilitate research on neuromorphic computing for radar and radio systems, as well as to help bring researchers from these fields together.
Energy-efficient radar detection with spiking neural resonators via activity-gated sparsity on Intel Loihi 2
Reeb et al
Radar sensors are a cornerstone of autonomous driving, offering reliable perception under adverse weather and lighting conditions. However, the increasing resolution of modern automotive radar systems generates large data volumes that must be processed in real time, imposing significant computational and energy demands. This challenge is particularly acute in energy-constrained platforms such as electric vehicles and embedded devices, where power efficiency is critical. Neuromorphic computing offers a promising alternative by emulating the brain's event-driven and energy-efficient information processing. In this work, we extend existing resonate-and-fire neuron models, called spiking neural resonators (SpiNRs), into the Doppler domain to enable velocity estimation. We integrate SpiNRs with a spiking ordered-statistics constant false alarm rate (OS-CFAR) algorithm to realize fully neuromorphic peak detection. Crucially, we introduce a novel activity-gated sparsity mechanism that dynamically deactivates inactive resonators, substantially reducing energy consumption while preserving estimation fidelity. All neuromorphic algorithms are implemented on Intel's Loihi 2 neuromorphic processor, which allows us to exploit event-driven computation and benchmark against conventional digital implementations under realistic hardware constraints. Evaluation against the conventional fast Fourier transform and classical OS-CFAR pipeline demonstrates that SpiNR achieves competitive accuracy in range-velocity estimation. The proposed activity-gated sparsity mechanism yields additional energy savings and removes the need for a separate peak-detection stage, further simplifying the processing chain. These findings highlight the potential of neuromorphic radar processing as a power-efficient alternative to conventional methods and underscore the importance of developing next-generation neuromorphic substrates optimized for embedded signal processing.
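The resonate-and-fire idea behind such resonators can be illustrated with a minimal, generic sketch (not the article's SpiNR model; frequencies, damping, and threshold below are illustrative assumptions): a complex state decays and rotates at a resonant frequency, so input pulses arriving in tune accumulate coherently and trigger spikes, while detuned pulses do not.

```python
import numpy as np

def simulate_rf_neuron(inputs, omega=2 * np.pi / 20, damping=-0.05,
                       threshold=0.4, dt=1.0):
    """Generic resonate-and-fire neuron: complex state z decays and rotates
    at the resonant frequency; a spike is emitted when Im(z) crosses the
    threshold, after which the state resets."""
    mult = np.exp((damping + 1j * omega) * dt)  # exact decay-and-rotate step
    z = 0j
    spikes = []
    for t, drive in enumerate(inputs):
        z = z * mult + drive
        if z.imag > threshold:
            spikes.append(t)
            z = 0j  # reset after a spike
    return spikes

# Pulses in tune with the resonant period (20 steps) accumulate coherently
# and drive spiking; the same pulses at a mismatched interval do not.
steps = 400
resonant = np.zeros(steps)
resonant[::20] = 0.4
detuned = np.zeros(steps)
detuned[::7] = 0.4

n_res = len(simulate_rf_neuron(resonant))
n_det = len(simulate_rf_neuron(detuned))
```

Because each resonator responds selectively to one frequency, a bank of them acts as a spiking analogue of a Fourier analysis stage, which is what makes range-velocity estimation with such neurons plausible.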
Unsupervised feature learning in spiking neural networks using nonlinear interface dipole modulation-based synaptic devices
Miyata
Recently, a three-terminal interface dipole modulation field-effect transistor (IDM FET) memory device has been proposed that leverages electric-field-induced dipole modulation at oxide/oxide interfaces. This device has been reported to exhibit a double-pulse-induced response analogous to the spike-timing-dependent plasticity (STDP) observed in biological synapses. Although the STDP behavior of the IDM FET exhibits pronounced nonlinearity, previous simulation studies have suggested that it can still be applied to unsupervised feature learning in spiking neural networks (SNNs) when combined with an additional frequency-independent (FI) depression operation. In this study, we first briefly review the nonlinear IDM response based on experimental observations and clarify that the nonlinearity is intrinsic to the IDM interface, originating from changes in the interface dipole states. We then present the synaptic weight-update model of IDM FETs employed in our SNN simulations and analyze the weight-update dynamics during feature learning using a simple single-layer SNN. Based on this analysis, we examine the optimal update conditions in terms of the balance between potentiation and depression rates. Furthermore, we evaluate feature learning on the MNIST handwritten-digit dataset using a two-layer network. Based on frequency-dependent rate-equilibrium considerations, we propose a switching FI depression/potentiation algorithm to improve feature-learning performance, demonstrating enhanced robustness, improved classification accuracy, and reasonable tolerance to device-to-device variation.
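For readers unfamiliar with STDP, a minimal sketch of a pair-based rule with weight-dependent (soft-bound) scaling conveys the kind of nonlinear update being discussed; this is a textbook-style illustration, not the IDM FET's device model, and all parameter values are assumptions.

```python
import math

def stdp_update(w, dt, w_max=1.0, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP with soft bounds (a simple nonlinear,
    weight-dependent rule). dt = t_post - t_pre in ms.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        dw = a_plus * math.exp(-dt / tau_plus) * (w_max - w)  # saturates near w_max
    else:
        dw = -a_minus * math.exp(dt / tau_minus) * w          # saturates near 0
    return min(max(w + dw, 0.0), w_max)

w0 = 0.5
w_pot = stdp_update(w0, +10.0)   # causal pairing -> potentiation
w_dep = stdp_update(w0, -10.0)   # anti-causal pairing -> depression
```

The weight-dependent factors make the update magnitude shrink as the weight approaches its bounds, one simple source of the nonlinearity that device-based rules must accommodate.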
An adaptive filter for denoising brain-inspired complementary vision sensor
Wang et al
Recent advancements in brain-inspired complementary vision chips (CVS) with intensity, multi-bit temporal difference (TD) and spatial difference (SD) sensing capabilities offer a promising solution to the limitations of traditional image sensors by enabling high-speed, high-precision sensing with reduced bandwidth consumption. However, noise characterizations and denoising strategies for these sensors remain underexplored. In this study, leveraging a recently developed state-of-the-art CVS, Tianmouc, we present a theoretical analysis of its noise characteristics, revealing the main challenge for denoising: a distinctive distribution that varies with local illumination. Building on this analysis, we develop a suite of novel denoising algorithms named locally adaptive direction-aware filter (LADF). LADF implements multi-stage denoising algorithms consisting of preprocessing followed by an adaptive threshold filter that adjusts parameters locally to mitigate noise variability. Additionally, considering the distinct characteristics of SD, we develop a multi-directional and polarity-aware separation strategy, while for TD, we exploit the inherent time-space correlation between TD and SD to suppress noise further. To enable rigorous evaluation, we construct a large-scale paired dataset through a novel synthetic-real approach that combines accurately labeled synthetic images with real-world captured data. Experimental results demonstrate that LADF achieves an average signal-to-noise ratio (SNR) of 10.11 in SD, outperforming two baseline methods by factors of 1.54× and 2.73×, respectively, and an average SNR of 4.52 in TD, surpassing the baselines by 1.47× and 3.57×, respectively. Furthermore, our method reduces errors in motion estimation by 28.9%, and enhances the peak SNR in reconstruction by 3.35 dB, demonstrating its effectiveness in downstream tasks. Our algorithm establishes a new benchmark for CVS denoising and demonstrates significant potential to enhance the performance of application systems utilizing CVS.
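The core idea of an illumination-adaptive threshold can be sketched generically (this is not the article's LADF; the shot-noise-like model sigma ∝ sqrt(local mean) and all parameters are illustrative assumptions): estimate the local noise level from local intensity and suppress difference-map responses below it.

```python
import numpy as np

def locally_adaptive_threshold(diff, intensity, k=3.0, win=5):
    """Zero out difference-map responses below a per-pixel noise threshold
    estimated from the local mean intensity (assuming shot-noise-like
    statistics, sigma ~ sqrt(local mean))."""
    pad = win // 2
    padded = np.pad(intensity.astype(float), pad, mode="reflect")
    local_mean = np.zeros_like(intensity, dtype=float)
    for i in range(intensity.shape[0]):
        for j in range(intensity.shape[1]):
            local_mean[i, j] = padded[i:i + win, j:j + win].mean()
    thresh = k * np.sqrt(np.maximum(local_mean, 1e-6))
    return np.where(np.abs(diff) > thresh, diff, 0.0)

# Toy scene: a dark half and a bright half; uniform low-amplitude noise
# in the difference map plus one genuine high-contrast response.
intensity = np.full((10, 10), 100.0)
intensity[:, :5] = 4.0
diff = np.full((10, 10), 5.0)   # noise-level responses everywhere
diff[5, 7] = 50.0               # one true edge response in the bright half
out = locally_adaptive_threshold(diff, intensity)
```

Because the threshold scales with local brightness, the same 5-unit fluctuation is rejected in both halves, while a fixed global threshold would have to choose between over-smoothing the bright region and passing noise in the dark one.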
Trending on Altmetric
Hands-on reservoir computing: a tutorial for practical implementation
Matteo Cucchi
et al
2022
Neuromorph. Comput. Eng.
032002
This manuscript serves a specific purpose: to give readers from fields such as materials science, chemistry, or electronics an overview of implementing a reservoir computing (RC) experiment with their own material system. Introductory literature on the topic is rare, and the vast majority of reviews put forth the basics of RC while taking for granted concepts that may be nontrivial to someone unfamiliar with machine learning (see for example Lukoševičius 2012 Neural Networks: Tricks of the Trade (Berlin: Springer) pp 659–686). This is unfortunate considering the large pool of material systems that show nonlinear behavior and short-term memory and that may be harnessed to design novel computational paradigms. RC offers a framework for computing with material systems that circumvents typical problems arising when implementing traditional, fully fledged feedforward neural networks on hardware, such as the requirements of minimal device-to-device variability and of control over each unit/neuron and connection. Instead, one can use a random, untrained reservoir where only the output layer is optimized, for example with linear regression. In the following, we highlight the potential of RC for hardware-based neural networks, the advantages over more traditional approaches, and the obstacles to overcome for their implementation. Preparing a high-dimensional nonlinear system as a well-performing reservoir for a specific task is not as easy as it seems at first sight. We hope this tutorial will lower the barrier for scientists attempting to exploit their nonlinear systems for computational tasks typically carried out in the fields of machine learning and artificial intelligence. A simulation tool to accompany this paper is available online.
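The "random reservoir, trained readout" recipe can be sketched in a few lines with a software echo state network standing in for the material system (a minimal illustration under common assumptions: tanh units, spectral radius below one, ridge-regression readout; not the tutorial's accompanying tool).

```python
import numpy as np

rng = np.random.default_rng(42)

# Fixed random reservoir: only the linear readout is trained.
n_res = 100
W_in = rng.uniform(-0.5, 0.5, n_res)
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius to 0.9

def run_reservoir(u):
    x = np.zeros(n_res)
    states = np.zeros((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in * u_t)           # untrained reservoir update
        states[t] = x
    return states

# Task requiring short-term memory: reproduce the input from 3 steps ago.
u = rng.uniform(-0.5, 0.5, 2000)
target = np.roll(u, 3)
X = run_reservoir(u)

washout = 50                                      # discard initial transient
A, Y = X[washout:], target[washout:]
w_out = np.linalg.solve(A.T @ A + 1e-8 * np.eye(n_res), A.T @ Y)  # ridge readout
pred = A @ w_out
nmse = np.mean((pred - Y) ** 2) / np.var(Y)
```

In a hardware experiment the `run_reservoir` step is replaced by driving the physical system and recording its responses; only the final linear regression changes hands between software and lab.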
HfO2-based resistive switching memory devices for neuromorphic computing
S Brivio
et al
2022
Neuromorph. Comput. Eng.
042001
HfO2-based resistive switching memory (RRAM) combines several outstanding properties, such as high scalability, fast switching speed, low power, and compatibility with complementary metal-oxide-semiconductor technology, with possible high-density or three-dimensional integration. Therefore, HfO2 RRAMs have today attracted strong interest for applications in neuromorphic engineering, in particular for the development of artificial synapses in neural networks. This review provides an overview of the structure, the properties, and the applications of HfO2-based RRAM in neuromorphic computing. Both widely investigated applications of nonvolatile devices and pioneering works on volatile devices are reviewed. The RRAM device is first introduced, describing the switching mechanisms associated with filamentary paths of HfO2 defects such as oxygen vacancies. The RRAM programming algorithms are described for high-precision multilevel operation, analog weight update in synaptic applications, and for exploiting the resistance dynamics of volatile devices. Finally, the neuromorphic applications are presented, illustrating artificial neural networks with supervised training and with multilevel, binary, or stochastic weights. Spiking neural networks are then presented for applications ranging from unsupervised training to spatio-temporal recognition. From this overview, HfO2-based RRAM appears as a mature technology for a broad range of neuromorphic computing systems.
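Multilevel programming of such devices is commonly done with a closed-loop program-and-verify scheme; the sketch below pairs it with a toy saturating-conductance model (an assumption for illustration, not a physical HfO2 device model, and the SET/RESET update rates are arbitrary).

```python
class ToyRRAM:
    """Toy memristor with saturating, pulse-nonlinear conductance updates
    (arbitrary units); stands in for a real device under test."""
    def __init__(self, g=1.0, g_max=10.0, eta=0.1):
        self.g, self.g_max, self.eta = g, g_max, eta

    def pulse(self, polarity):
        if polarity > 0:                        # SET: potentiate, saturating
            self.g += self.eta * (self.g_max - self.g)
        else:                                   # RESET: depress, saturating
            self.g -= self.eta * self.g

def program_verify(device, target, tol=0.05, max_pulses=100):
    """Closed-loop multilevel programming: read the conductance, compare
    to the target level, apply a SET or RESET pulse, repeat until the
    device is within the tolerance band."""
    for n in range(max_pulses):
        if abs(device.g - target) <= tol * target:
            return n                            # number of pulses used
        device.pulse(+1 if device.g < target else -1)
    return max_pulses

dev = ToyRRAM()
pulses = program_verify(dev, target=5.0)
```

The verify step is what buys multilevel precision from an imprecise analog device, at the cost of extra read/write cycles per programmed weight.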
DYNAP-SE2: a scalable multi-core dynamic neuromorphic asynchronous spiking neural network processor
Ole Richter
et al
2024
Neuromorph. Comput. Eng.
014003
With the remarkable progress that technology has made, the need for processing data near the sensors at the edge has increased dramatically. The electronic systems used in these applications must process data continuously, in real time, and extract relevant information using the smallest possible energy budgets. A promising approach for implementing always-on processing of sensory signals that supports on-demand, sparse, edge computing is to take inspiration from the biological nervous system. Following this approach, we present a brain-inspired platform for prototyping real-time event-based spiking neural networks. The proposed system supports the direct emulation of dynamic and realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike-frequency adaptation, conductance-based dendritic compartments, and spike-transmission delays. The analog circuits that implement such primitives are paired with low-latency asynchronous digital circuits for routing and mapping events. This asynchronous infrastructure enables the definition of different network architectures and provides direct event-based interfaces to convert and encode data from event-based and continuous-signal sensors. Here we describe the overall system architecture, characterize the mixed-signal analog-digital circuits that emulate neural dynamics, demonstrate their features with experimental measurements, and present a low- and high-level software ecosystem that can be used to configure the system. The flexibility to emulate different biologically plausible neural networks, and the chip's ability to monitor both population and single-neuron signals in real time, make it possible to develop and validate complex models of neural processing for both basic research and edge-computing applications.
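One of the neural primitives mentioned, spike-frequency adaptation, is easy to sketch in software (a generic textbook-style model with assumed parameters, not the chip's analog circuit): each spike increments an adaptation variable that subtracts from the drive, so inter-spike intervals lengthen under constant input.

```python
import numpy as np

def lif_with_adaptation(drive, dt=1.0, tau_m=20.0, tau_w=200.0,
                        b=0.05, v_th=1.0):
    """Leaky integrate-and-fire neuron with an adaptation variable w:
    every spike increments w, which subtracts from the input drive, so
    the firing rate decays under constant stimulation
    (spike-frequency adaptation)."""
    v, w = 0.0, 0.0
    spikes = []
    for t, i_t in enumerate(drive):
        v += dt / tau_m * (-v + i_t - w)   # membrane integration
        w += dt / tau_w * (-w)             # slow decay of adaptation
        if v >= v_th:
            spikes.append(t)
            v = 0.0                        # reset membrane
            w += b                         # strengthen adaptation per spike
    return spikes

spikes = lif_with_adaptation([1.5] * 1000)  # constant suprathreshold drive
isis = np.diff(spikes)                      # inter-spike intervals
```

The slow variable `w` gives the neuron a second, longer timescale, which is exactly the kind of dynamic richness such mixed-signal platforms aim to emulate in silicon.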
Recent progress in optoelectronic memristors for neuromorphic and in-memory computation
Maria Elias Pereira
et al
2023
Neuromorph. Comput. Eng.
022002
Neuromorphic computing has been gaining momentum for the past decades and has been put forward as a replacement for the aging technology of conventional computing systems. Artificial neural networks (ANNs) can be composed of memristor crossbars in hardware and perform in-memory computing and storage in a power-, cost-, and area-efficient way. In optoelectronic memristors (OEMs), resistive switching (RS) can be controlled by both optical and electronic signals. Using light as a synaptic weight modulator provides a high-speed, non-destructive method that does not depend on electrical wires and thus avoids crosstalk issues. In particular, in artificial visual systems, OEMs can act as the artificial retina, combining optical sensing with high-level image processing. Therefore, several efforts have been made by the scientific community to develop OEMs that can meet the demands of each specific application. In this review, the recent advances in inorganic OEMs are summarized and discussed. Engineering the device structure provides the means to manipulate RS performance, and thus a comprehensive analysis is performed of the memristor material structures proposed so far and their specific characteristics. Moreover, their potential applications in logic gates, ANNs, and, in more detail, artificial visual systems are also assessed, taking into account the figures of merit described so far.
Ferroelectric-based synapses and neurons for neuromorphic computing
Erika Covi
et al
2022
Neuromorph. Comput. Eng.
012002
The shift towards a distributed computing paradigm, where multiple systems acquire and elaborate data in real-time, leads to challenges that must be met. In particular, it is becoming increasingly essential to compute on the edge of the network, close to the sensor collecting data. The requirements of a system operating on the edge are very tight: power efficiency, low area occupation, fast response times, and on-line learning. Brain-inspired architectures such as spiking neural networks (SNNs) use artificial neurons and synapses that simultaneously perform low-latency computation and internal-state storage with very low power consumption. Still, they mainly rely on standard complementary metal-oxide-semiconductor (CMOS) technologies, making SNNs unfit to meet the aforementioned constraints. Recently, emerging technologies such as memristive devices have been investigated to flank CMOS technology and overcome edge computing systems’ power and memory constraints. In this review, we will focus on ferroelectric technology. Thanks to its CMOS-compatible fabrication process and extreme energy efficiency, ferroelectric devices are rapidly affirming themselves as one of the most promising technologies for neuromorphic computing. Therefore, we will discuss their role in emulating neural and synaptic behaviors in an area and power-efficient way.
Reducing reservoir computer hyperparameter dependence by external timescale tailoring
Lina Jaurigue and Kathy Lüdge
2024
Neuromorph. Comput. Eng.
014001
Task-specific hyperparameter tuning in reservoir computing is an open issue and is of particular relevance for hardware-implemented reservoirs. We investigate the influence of directly including externally controllable, task-specific timescales on the performance and hyperparameter sensitivity of reservoir computing approaches. We show that the need for hyperparameter optimisation can be reduced if the timescales of the reservoir are tailored to the specific task. Our results are mainly relevant for temporal tasks requiring memory of past inputs, for example chaotic time-series prediction. We consider various methods of including task-specific timescales in the reservoir computing approach and demonstrate the universality of our message by examining both time-multiplexed and spatially multiplexed reservoir computing.
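The benefit of externally supplied timescales can be illustrated with a deliberately simple sketch (a generic illustration of the idea, not the article's method; the task and delay choices are assumptions): when the readout is given delayed input copies matched to the task's timescales, a plain linear regression solves a memory task that mismatched delays cannot.

```python
import numpy as np

rng = np.random.default_rng(7)

def delay_features(u, delays):
    """Stack externally chosen delayed copies of the input as readout
    features, so the required memory timescales are supplied directly
    rather than obtained by tuning internal reservoir hyperparameters."""
    X = np.stack([np.roll(u, d) for d in delays], axis=1)
    X[:max(delays)] = 0.0          # rows without enough history are invalid
    return X

u = rng.uniform(-1, 1, 1000)
y = 0.6 * np.roll(u, 3) + 0.3 * np.roll(u, 5)   # task mixes two past inputs

def readout_error(delays):
    X = delay_features(u, delays)
    A, Y = X[10:], y[10:]          # skip the invalid initial rows
    w, *_ = np.linalg.lstsq(A, Y, rcond=None)
    return float(np.mean((A @ w - Y) ** 2))

err_tailored = readout_error((3, 5))   # timescales matched to the task
err_generic = readout_error((1, 2))    # mismatched timescales
```

The matched delays reduce the error by orders of magnitude with no hyperparameter search at all, which is the spirit of tailoring external timescales to the task.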
2D materials and van der Waals heterojunctions for neuromorphic computing
Zirui Zhang
et al
2022
Neuromorph. Comput. Eng.
032004
Neuromorphic computing systems employing artificial synapses and neurons are expected to overcome the limitations of the present von Neumann computing architecture in terms of efficiency and bandwidth. Traditional neuromorphic devices have used 3D bulk materials, so the resulting device size is difficult to scale down further for the high-density integration required by highly parallel computing. The emergence of two-dimensional (2D) materials offers a promising solution, as evidenced by the surge of reported 2D materials functioning as neuromorphic devices for next-generation computing. In this review, we summarize the 2D materials and their heterostructures used for neuromorphic computing devices, classified by working mechanism and device geometry. We then survey neuromorphic device arrays and their applications, including artificial visual, tactile, and auditory functions. Finally, we discuss the current challenges 2D materials face in achieving practical neuromorphic devices, providing a perspective on improved device performance and system-level integration. This will deepen our understanding of 2D materials and their heterojunctions and provide a guide for designing high-performance memristors. At the same time, the challenges encountered in industry are discussed, providing guidance on the development direction of memristors.
Brain-inspired methods for achieving robust computation in heterogeneous mixed-signal neuromorphic processing systems
Dmitrii Zendrikov
et al
2023
Neuromorph. Comput. Eng.
034002
Neuromorphic processing systems implementing spiking neural networks with mixed-signal analog/digital electronic circuits and/or memristive devices represent a promising technology for edge-computing applications that require low power and low latency, and that cannot connect to the cloud for off-line processing, whether due to lack of connectivity or privacy concerns. However, these circuits are typically noisy and imprecise, because they are affected by device-to-device variability and operate with extremely small currents. Achieving reliable computation and high accuracy with this approach is thus still an open challenge that has hampered progress on the one hand and limited widespread adoption of this technology on the other. By construction, these hardware processing systems have many constraints that are biologically plausible, such as heterogeneity and non-negativity of parameters. More and more evidence is showing that applying such constraints to artificial neural networks, including those used in artificial intelligence, promotes robustness in learning and improves their reliability. Here we delve further into neuroscience and present network-level brain-inspired strategies that improve reliability and robustness in these neuromorphic systems: we quantify, with chip measurements, to what extent population averaging is effective in reducing variability in neural responses; we demonstrate experimentally how the neural coding strategies of cortical models allow silicon neurons to produce reliable signal representations; and we show how to robustly implement essential computational primitives, such as selective amplification, signal restoration, working memory, and relational networks, exploiting such strategies. We argue that these strategies can be instrumental in guiding the design of robust and reliable ultra-low-power electronic neural processing systems implemented using noisy and imprecise computing substrates such as subthreshold neuromorphic circuits and emerging memory technologies.
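The population-averaging strategy rests on a simple statistical fact that a toy simulation makes concrete (an illustration with assumed noise and mismatch levels, not the article's chip measurements): averaging N noisy, mismatched readouts shrinks the error roughly as 1/sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(1)

signal = np.sin(np.linspace(0, 2 * np.pi, 200))

def population_readout(n_neurons, noise_std=0.5, mismatch_std=0.1):
    """Each model neuron reports the signal plus independent per-sample
    noise and a fixed per-neuron offset (device mismatch); averaging
    across the population suppresses both error sources."""
    offsets = rng.normal(0.0, mismatch_std, (n_neurons, 1))
    noise = rng.normal(0.0, noise_std, (n_neurons, signal.size))
    return (signal + offsets + noise).mean(axis=0)

def rms_error(estimate):
    return float(np.sqrt(np.mean((estimate - signal) ** 2)))

err_single = rms_error(population_readout(1))
err_pop = rms_error(population_readout(100))
```

With 100 model neurons the readout error drops by roughly an order of magnitude relative to a single neuron, which is why redundancy is such an effective antidote to device variability in subthreshold circuits.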
Spike-based local synaptic plasticity: a survey of computational models and neuromorphic circuits
Lyes Khacef
et al
2023
Neuromorph. Comput. Eng.
042001
Understanding how biological neural networks carry out learning using spike-based local plasticity mechanisms can lead to the development of real-time, energy-efficient, and adaptive neuromorphic processing systems. A large number of spike-based learning models have recently been proposed following different approaches. However, it is difficult to assess if these models can be easily implemented in neuromorphic hardware, and to compare their features and ease of implementation. To this end, in this survey, we provide an overview of representative brain-inspired synaptic plasticity models and mixed-signal complementary metal–oxide–semiconductor neuromorphic circuits within a unified framework. We review historical, experimental, and theoretical approaches to modeling synaptic plasticity, and we identify computational primitives that can support low-latency and low-power hardware implementations of spike-based learning rules. We provide a common definition of a locality principle based on pre- and postsynaptic neural signals, which we propose as an important requirement for physical implementations of synaptic plasticity circuits. Based on this principle, we compare the properties of these models within the same framework, and describe a set of mixed-signal electronic circuits that can be used to implement their computing principles, and to build efficient on-chip and online learning in neuromorphic processing systems.
Journal information
2021-present
Neuromorphic Computing and Engineering
doi: 10.1088/issn.2634-4386
Online ISSN: 2634-4386