For the tutorial day on 27 March 2026, 6 tutorial suggestions have been accepted by the program committee (in title alphabetical order):
- BrainScaleS hands-on tutorial
- Introduction to Fugu: A Framework for Composing Neural Algorithms
- LiteVLA at the Edge: CPU-Only Vision–Language–Action Control as a Testbed for Neuro-Inspired Robotics
- N2A – neural programming language and workbench
- Simulation Tool for Asynchronous Cortical Streams (STACS) Tutorial
- SuperNeuro + NeuroCoreX Tutorial: Running Fast and Scalable Neuromorphic Simulations
BrainScaleS hands-on tutorial
Tutorial by: Johannes Schemmel, Yannik Stradtmann, Joscha Imberger, Jakob Kaiser and Björn Kindler from Heidelberg University (Germany)
In this tutorial, participants have the chance to explore BrainScaleS-2, one of the world’s most advanced analog platforms for neuromorphic computing.
For the tutorial, participants will use a web browser on their own laptop for remote access to BrainScaleS-2 systems via the EBRAINS Research Infrastructure. After a short introduction to neuromorphic computing and spiking neural networks, they will learn how to express and run experiments on the neuromorphic platform through either the PyTorch-based (machine-learning oriented) or the PyNN-based (neuroscience oriented) software interface. This will allow them to gain insights into the unique properties and challenges of analog computing and to exploit the versatility of the system by exploring user-defined learning rules. Each participant will have the opportunity to follow a prepared tutorial or branch off and implement their own project on the systems.
Participants can use their EBRAINS account (available free of charge at https://ebrains.eu/register) or a guest account during the tutorial. With their own account, participants can continue using the neuromorphic compute systems after the tutorial ends.
Introduction to Fugu: A Framework for Composing Neural Algorithms
Tutorial by: Michael Krygier from Sandia National Laboratories
In this tutorial, we will begin with an overview of the basic algorithm design in Fugu and typical workflows.
Fugu is an open-source, high-level Python programming framework designed for developing spiking algorithms in terms of computational graphs. It provides a hardware-independent mechanism for linking a variety of scalable spiking neural algorithms from different sources. Fugu is intended to be suitable for a wide range of neuromorphic applications, including machine learning, scientific computing, and more brain-inspired neural algorithms. Unlike other tools, Fugu separates the task of programming applications that may leverage neuromorphic hardware from the design of spiking neural algorithms and the specific details of neuromorphic hardware platforms. This allows users to focus on developing their applications without needing to be experts in neural computation or neuromorphic hardware.
To design Fugu bricks and run these algorithms on hardware, users first construct a computational graph of their application using Fugu’s API. The API provides a simple and intuitive way to define the graph, and Fugu takes care of the rest, including automating the construction of a graphical intermediate representation of the spiking neural algorithm. The output of Fugu is a single NetworkX graph that fully describes the spiking neural algorithm, which can then be compiled down to platform-specific code using one of Fugu’s hardware backends or run on Fugu’s reference simulator. By providing a high-level abstraction and automating the process of constructing and compiling spiking neural algorithms, Fugu makes it easier for users to develop and deploy neuromorphic applications, and enables the exploration of new and innovative uses for neuromorphic computing.
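As a rough illustration of this compose-then-compile idea, the sketch below flattens a graph of reusable "bricks" into a single neuron-level graph. The names (`Brick`, `compose`) are hypothetical stand-ins, not Fugu's actual API; the point is only that an application is a composition of bricks that lowers to one graph describing the whole spiking network.

```python
# Hypothetical sketch of composing "bricks" into one neuron-level graph.
# Not Fugu's real API; illustrative names and structure only.

class Brick:
    def __init__(self, name, neurons, edges):
        self.name = name
        self.neurons = neurons   # neuron ids local to this brick
        self.edges = edges       # (pre, post, weight) tuples within the brick

def compose(bricks, links):
    """Flatten bricks plus inter-brick links into one neuron-level graph."""
    graph = {"nodes": [], "edges": []}
    for b in bricks:
        # Namespace local neuron ids so bricks stay reusable.
        graph["nodes"] += [f"{b.name}/{n}" for n in b.neurons]
        graph["edges"] += [(f"{b.name}/{u}", f"{b.name}/{v}", w)
                           for u, v, w in b.edges]
    for (b1, n1), (b2, n2), w in links:
        graph["edges"].append((f"{b1}/{n1}", f"{b2}/{n2}", w))
    return graph

enc = Brick("encoder", ["in0", "in1"], [])
acc = Brick("adder", ["sum"], [])
net = compose([enc, acc],
              [(("encoder", "in0"), ("adder", "sum"), 1.0),
               (("encoder", "in1"), ("adder", "sum"), 1.0)])
```

In Fugu itself, the resulting graph is a NetworkX object that a hardware backend or the reference simulator then consumes.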
LiteVLA at the Edge: CPU-Only Vision–Language–Action Control as a Testbed for Neuro-Inspired Robotics
Tutorial by: Kishor Datta Gupta, Justin Williams and Mohd. Ariful Haque from Clark Atlanta University
This 2-hour tutorial presents LiteVLA, a lightweight vision–language–action pipeline that runs fully on a Raspberry Pi 4 / TurtleBot-class robot using only CPU resources. We show how its LoRA-adapted SmolVLM backbone and 4-bit NF4 quantization create a constrained, low-power control loop that mirrors many of the design pressures in neuromorphic and non-von-Neumann systems. Participants will walk through the full stack—RGB+action data collection, parameter-efficient fine-tuning,
hybrid-precision quantization, and ROS 2 integration with asynchronous Action Chunking—then discuss how such edge VLA controllers can benchmark algorithms, latency, and robustness before porting them to neuromorphic hardware or event-driven sensors. The tutorial targets NICE attendees interested in robotics and edge AI applications of neuro-inspired computing.
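The 4-bit quantization central to this kind of edge deployment can be sketched generically. The snippet below is a simplified uniform absmax quantizer, not the actual NF4 codebook (NF4 uses fixed normal-distribution quantiles), but it shows the same trade: 4-bit codes plus one scale per block in exchange for bounded reconstruction error.

```python
# Simplified blockwise 4-bit quantization (uniform levels, absmax scaling).
# Illustrative only; NF4 replaces the uniform grid with a 16-entry codebook
# of normal-distribution quantiles.

def quantize_4bit(weights, block_size=64):
    """Return (codes, scales): 4-bit integer codes plus one scale per block."""
    codes, scales = [], []
    for start in range(0, len(weights), block_size):
        block = weights[start:start + block_size]
        scale = max(abs(w) for w in block) or 1.0
        scales.append(scale)
        # Map each weight in [-scale, scale] to an integer code in 0..15.
        codes.append([round((w / scale + 1.0) * 7.5) for w in block])
    return codes, scales

def dequantize_4bit(codes, scales):
    """Invert the mapping: code -> approximate float weight."""
    out = []
    for block, scale in zip(codes, scales):
        out.extend((c / 7.5 - 1.0) * scale for c in block)
    return out

w = [0.8, -0.3, 0.05, -1.2, 0.0, 0.9]
codes, scales = quantize_4bit(w, block_size=3)
w_hat = dequantize_4bit(codes, scales)
```

Each block stores sixteen possible levels (4 bits) per weight plus a single float scale, which is where the roughly 4x memory reduction over float16 comes from.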
N2A – neural programming language and workbench
Tutorial by: Fred Rothganger from Sandia National Laboratories, Albuquerque, NM, USA
This tutorial will introduce the user to the N2A programming language and its associated IDE. Upon completion, the user will be able to create new neuron types, new applications, and run them on their local machine (or on SpiNNaker-2 if hardware is available). This will be a hands-on tutorial. N2A may be downloaded from https://github.com/sandialabs/n2a and run on your personal laptop.
Typically, neuromorphic machines execute a simple dynamical system called the Leaky Integrate and Fire (LIF) model, analogous to a logic gate in conventional machines, and communicate between these using single-bit events called “spikes”. However, neuromorphic device makers are moving away from simple LIF dynamics toward more general neurons. The SpiNNaker system has always been general-purpose programmable due to its use of ARM cores, and SpiNNaker-2 has the capacity to send up to four 32-bit floats with each event. Intel’s second-generation Loihi also supports graded spikes and assembly-level programming of neuron models. The future of neuromorphic computing will likely be neurons with complex dynamics combined with high-volume short-packet communication. Several frameworks exist for programming neuromorphic systems. The challenge is to enable general-purpose programming of neuron types while maintaining cross-platform portability. Remarkably, these are complementary goals. With an appropriate level of abstraction, it is possible to “write once, run anywhere”.
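The LIF dynamics mentioned above are simple enough to sketch in a few lines: the membrane potential leaks toward rest, integrates input current, and emits a spike (then resets) on crossing a threshold. All constants below are illustrative.

```python
# Minimal leaky integrate-and-fire (LIF) neuron; constants are illustrative.

def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=10.0, dt=1.0):
    """Integrate a current trace; return the membrane trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: decay toward rest, driven by input current.
        v += (dt / tau) * (v_rest - v) + dt * i_in
        if v >= v_thresh:       # threshold crossing emits a spike
            spikes.append(t)
            v = v_rest          # reset after the spike
        trace.append(v)
    return trace, spikes

# Constant drive above threshold produces regular spiking.
trace, spikes = simulate_lif([0.15] * 100)
```

A more general neuron model would replace the single first-order equation here with richer dynamics, which is exactly the flexibility N2A's equation language targets.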
N2A’s unique approach allows the user to specify the dynamics for each class of neuron by simply listing its equations. The tool then compiles these for a given target platform. The structure of the network and interactions between neurons are specified in the same equation language. Network structures can be arbitrarily deep and complex. The language supports component creation, extension, reuse and sharing. The tool comes with a base library that supplies common neuroscience models as well as components for specific neuromorphic devices.
Simulation Tool for Asynchronous Cortical Streams (STACS) Tutorial
Tutorial by: Felix Wang, Sandia National Laboratories
In this tutorial, we will explore how to define networks and take advantage of the parallel capabilities of the spiking neural network (SNN) simulator STACS (Simulation Tool for Asynchronous Cortical Streams, https://github.com/sandialabs/STACS). Developed to be parallel from the ground up, STACS leverages the highly portable Charm++ parallel programming framework, which expresses a paradigm of asynchronous message-driven parallel objects, and supports both large-scale and long-running simulations on high-performance computing systems. In addition to the parallel runtime, STACS also implements a memory-efficient distributed network data structure for network construction, simulation,
and serialization. In particular, STACS uses a distributed intermediate representation, an SNN extension to the distributed compressed sparse row format, which supports interoperability with graph partitioners to facilitate optimizing communication costs across compute resources. With respect to the neuromorphic computing software ecosystem, this has enabled toolchains for mapping networks onto neuromorphic platforms such as Loihi 2 and SpiNNaker 2.
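A plain (non-distributed) compressed sparse row layout, which the distributed format described above extends, can be sketched as follows. Names are illustrative; STACS's actual structure adds partition metadata and per-synapse state beyond the weight array shown here.

```python
# Compressed sparse row (CSR) adjacency for a synapse graph: row_ptr gives
# each presynaptic neuron's slice into col_idx (postsynaptic targets) and a
# parallel weights array. Illustrative sketch, not STACS's actual format.

def to_csr(n_neurons, synapses):
    """synapses: list of (pre, post, weight) -> (row_ptr, col_idx, weights)."""
    row_ptr = [0] * (n_neurons + 1)
    for pre, _, _ in synapses:          # count out-degree per neuron
        row_ptr[pre + 1] += 1
    for i in range(n_neurons):          # prefix-sum into row offsets
        row_ptr[i + 1] += row_ptr[i]
    col_idx = [0] * len(synapses)
    weights = [0.0] * len(synapses)
    cursor = row_ptr[:-1].copy()
    for pre, post, w in synapses:       # scatter into the compact arrays
        col_idx[cursor[pre]] = post
        weights[cursor[pre]] = w
        cursor[pre] += 1
    return row_ptr, col_idx, weights

def targets_of(pre, row_ptr, col_idx):
    """All postsynaptic targets of `pre`, found in O(out-degree)."""
    return col_idx[row_ptr[pre]:row_ptr[pre + 1]]

row_ptr, col_idx, weights = to_csr(3, [(0, 1, 0.5), (0, 2, 0.3), (2, 0, 1.0)])
```

Because each neuron's outgoing synapses occupy one contiguous slice, the rows can be handed to a graph partitioner and redistributed across ranks without changing the lookup logic.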
SuperNeuro + NeuroCoreX Tutorial: Running Fast and Scalable Neuromorphic Simulations
Tutorial by: Shruti R. Kulkarni, Ashish Gautam, Xi Zhang and Prasanna Date from Oak Ridge National Laboratory and Kevin Zhu from George Mason University Washington
A tutorial for two neuromorphic computing tools: SuperNeuro and NeuroCoreX.
SuperNeuro is a fast and scalable Python-based simulator for neuromorphic computing. It supports both homogeneous and heterogeneous neuromorphic simulations, as well as GPU acceleration. We will present a brief overview of the two evaluation modes within SuperNeuro: the Matrix Computation (MAT) mode and the Agent-Based Modeling (ABM) mode. Users will be guided through the process of installing SuperNeuro, setting up their networks using the SuperNeuro API, defining connectivity within their spiking neural networks (SNNs), and leveraging different hardware backends to accelerate the simulations. SuperNeuro is tightly integrated with
NeuroCoreX, which is an FPGA-based neuromorphic hardware platform that enables seamless translation of simulated SNNs from software to hardware execution. This integration allows users to
validate algorithmic concepts, learning rules, and timing dynamics in both simulated and physical environments, thereby promoting a unified neuromorphic co-design workflow. We will demonstrate
how a network written in SuperNeuro can be run on NeuroCoreX seamlessly.
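The matrix-computation style of evaluation mentioned above can be sketched generically: the entire network state advances with one matrix-vector product per synchronous timestep. This is not SuperNeuro's API, just an illustration of why MAT-mode updates vectorize well (and hence accelerate on GPUs).

```python
# Matrix-style SNN update: propagate last step's spikes through the weight
# matrix, apply leak, threshold, and reset. Pure-Python stand-in for the
# vectorized update a MAT-mode simulator would run; names are illustrative.

def step(v, spiked, W, v_thresh=1.0, decay=0.9):
    """One synchronous timestep over the whole network."""
    n = len(v)
    # Synaptic input: W[post][pre] times the previous spike vector.
    current = [sum(W[post][pre] * spiked[pre] for pre in range(n))
               for post in range(n)]
    v_next = [decay * v[i] + current[i] for i in range(n)]
    spiked_next = [1 if vi >= v_thresh else 0 for vi in v_next]
    v_next = [0.0 if s else vi for vi, s in zip(v_next, spiked_next)]
    return v_next, spiked_next

# Two neurons: neuron 0 drives neuron 1 strongly.
W = [[0.0, 0.0],
     [1.5, 0.0]]
v, spiked = [0.9, 0.0], [1, 0]   # neuron 0 just spiked
v, spiked = step(v, spiked, W)
```

In ABM mode, by contrast, each neuron would be an independent agent updated individually, which trades vectorization for heterogeneity.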