NICE 2025 Tutorials
At the NICE tutorial day on Friday, 28 March 2025, there will be (as usual) the possibility to attend various tutorials. We will have three slots of two hours each, with a coffee break / lunch in between, so each attendee can visit up to three tutorials. Each tutorial will typically be offered in only one slot, and each slot hosts multiple tutorials in parallel, so not all combinations will be possible.
- Accelerated Neuromorphic Computing on BrainScaleS
- Development and Deployment of SNNs on FPGA for Embedded Applications
- NEST Simulator as a neuromorphic prototyping platform
- NeuroBench
- Neuromorphic Control for Autonomous Driving
- Running SNNs on SpiNNaker
- SpiNNaker2 Tutorial: Beyond Neural Simulation
Tutorial details:
Tutorial: Accelerated Neuromorphic Computing on BrainScaleS
In this tutorial, participants will have the chance to explore BrainScaleS-2, one of the world’s most advanced analog platforms for neuromorphic computing. BrainScaleS-2 has primarily been designed to serve as a versatile computational substrate for the emulation of spiking neural networks. As such, each ASIC integrates 512 analog neuron circuits implementing the rich dynamics of the adaptive exponential leaky integrate-and-fire (AdEx) model. Each neuron receives input from 256 current- or conductance-based synapses with configurable sign and weight. Multi-compartment extensions allow the formation of complex, spatially distributed dendritic trees with active processing elements. Integrating thousands of ADC and DAC channels as well as two custom microprocessors with SIMD extensions, each ASIC represents a software-controlled analog computer that can be configured and probed at will.
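To give a feel for the AdEx dynamics that the analog circuits emulate, here is a minimal forward-Euler simulation of a single AdEx neuron in software. The parameter values are illustrative textbook defaults (Brette & Gerstner 2005), not the calibrated values of the BrainScaleS-2 circuits:

```python
import math

def simulate_adex(I_pA=1000.0, t_ms=200.0, dt=0.01):
    """Forward-Euler integration of one AdEx neuron; returns spike times (ms).

    Units are consistent: pF, nS, mV, ms, pA. Illustrative parameters only;
    the BrainScaleS-2 hardware emulates these dynamics in analog circuits.
    """
    C, g_L, E_L = 281.0, 30.0, -70.6      # capacitance, leak, resting potential
    V_T, Delta_T = -50.4, 2.0             # exponential threshold and slope
    a, tau_w, b = 4.0, 144.0, 80.5        # adaptation coupling, time constant, jump
    V_reset, V_cut = -70.6, 0.0           # reset voltage and spike cutoff
    V, w = E_L, 0.0
    spikes = []
    for i in range(int(t_ms / dt)):
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * math.exp((V - V_T) / Delta_T)
              - w + I_pA) / C
        dw = (a * (V - E_L) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_cut:                    # spike: reset membrane, bump adaptation
            spikes.append(i * dt)
            V = V_reset
            w += b
    return spikes
```

With a 1 nA step current the model fires tonically, with inter-spike intervals that lengthen as the adaptation current w builds up.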
For the tutorial, participants will use a web browser on their own laptop for remote access to BrainScaleS-2 systems via the EBRAINS Research Infrastructure. After a short introduction to neuromorphic computing and spiking neural networks, they will learn how to express and run experiments on the neuromorphic platform through either the PyTorch-based (targeting machine learning) or the PyNN-based (targeting neuroscience) software interfaces. This will allow them to gain insights into the unique properties and challenges of analog computing and to exploit the versatility of the system by exploring user-defined learning rules.
Each participant will have the opportunity to follow a prepared tutorial or branch off and implement their own project on the systems. Participants can use their EBRAINS account (available free of charge at https://ebrains.eu/register) or a guest account during the tutorial. With their own account, participants can continue using the neuromorphic compute systems after the tutorial ends.
Tutorial: Development and Deployment of SNNs on FPGA for Embedded Applications
This tutorial presents an in-depth introduction to a many-core near-memory-computing Spiking Neural Network (SNN) FPGA accelerator developed at the FZI Research Center for Information Technology. The accelerator is designed for embedded sensor processing applications in medical, industrial, and automotive contexts, with a focus on dataset evaluation and real-time processing of high data rate neuromorphic sensors. The hardware architecture is based on a pipelined SNN processing core, and the tutorial will delve into the numerous co-design decisions made to optimize its performance and versatility. Participants will gain insights into critical concepts such as quantization, the mapping of logical neurons onto physical processing elements (PEs), and the accelerator’s integration within a System-on-Chip (SoC) FPGA context running Linux on classical processors. The tutorial will also cover the current (work-in-progress) feature set of the accelerator and provide hands-on experience in developing and deploying SNNs using our toolchain. The accelerator is intended to be open-sourced to the neuromorphic community upon reaching maturity in its development and deployment framework. In the interim, this tutorial aims to gather valuable feedback from potential users, researchers, and experts in neuromorphic hardware implementation to refine and enhance the accelerator’s capabilities.
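Quantization, one of the co-design topics mentioned above, can be illustrated with a simple symmetric per-tensor scheme. This is a generic sketch, not the accelerator's actual quantizer:

```python
import numpy as np

def quantize_symmetric(weights, bits=8):
    """Map float weights to signed integers sharing one scale factor.

    Generic symmetric per-tensor scheme; a real toolchain may instead use
    per-channel scales, calibration data, or power-of-two scaling.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit weights
    scale = float(np.max(np.abs(weights))) / qmax
    if scale == 0.0:                      # all-zero tensor: avoid division by zero
        scale = 1.0
    q = np.round(weights / scale).clip(-qmax, qmax).astype(np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer representation."""
    return q.astype(np.float64) * scale
```

The round-trip error is bounded by half a quantization step, which is the kind of trade-off the tutorial's co-design discussion addresses when choosing bit widths for the processing elements.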
Necessary background
- Basic knowledge of SNNs and ML
- Basic knowledge of Python
- Basic knowledge of (neuromorphic) hardware
Tutorial materials
- Participants should bring: their laptop
- Participants can bring: their own Xilinx KR260/KV260 board
- Provided:
- Git repo with tutorial notebooks, libraries, FPGA bitstream
- A couple of Xilinx KR260/KV260 boards with a prepared Linux image
Tutorial: NEST Simulator as a neuromorphic prototyping platform
In the design of neuromorphic systems, it is vital to have a flexible and highly performant way of exploring system parameters. Using NEST Simulator [1] and the NESTML modeling language [2], spiking neural network models can be quickly prototyped and subjected to design constraints that mirror those of the intended neuromorphic platform. NEST has a proven track record on a large and diverse set of use cases and can run on anything from laptops to supercomputers, making it an ideal prototyping and research platform for neuromorphic systems. This also benefits reproducibility (obtaining the same numerical results across platforms), highlighting the value of NEST in the verification and validation of neuromorphic systems.
In this tutorial, participants will get hands-on experience creating neuron and synapse models in NESTML, and using them to build networks in NEST that perform various tasks, such as sequence learning and reinforcement learning. We will introduce several tools and front-ends to implement modeling ideas most effectively, such as the graphical user interface NEST Desktop [3]. Through the use of target-specific code generation options in NESTML, the same model can even be directly run on neuromorphic platforms.
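To give a flavor of what such a model definition looks like, here is a small integrate-and-fire sketch in the style of the NESTML language. Block names and syntax details differ between NESTML versions, so treat this as illustrative rather than copy-paste ready:

```
model tutorial_iaf_neuron:
    state:
        V_m mV = -70 mV

    parameters:
        E_L mV = -70 mV
        C_m pF = 250 pF
        tau_m ms = 10 ms
        V_th mV = -55 mV
        V_reset mV = -65 mV

    equations:
        V_m' = -(V_m - E_L) / tau_m + I_stim / C_m

    input:
        I_stim pA <- continuous

    output:
        spike

    update:
        integrate_odes()

    onCondition(V_m >= V_th):
        V_m = V_reset
        emit_spike()
```

From a definition like this, the NESTML toolchain generates code for a chosen target, which is what enables the same model to run in NEST or, via target-specific code generation, on neuromorphic platforms.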
Participants do not have to install software as all tools are accessible via the cloud. All parts of the tutorial are hands-on, and take place via Jupyter notebooks.
- [1] https://nest-simulator.readthedocs.org/
- [2] https://nestml.readthedocs.org/
- [3] https://nest-desktop.readthedocs.org/
Tutorial: NeuroBench
Benchmarking is an essential component of research: it involves measuring and comparing approaches in order to evaluate improvements and demonstrate objective benefits. Essentially, it aims to answer the questions: “How much better are my approaches now, and how can I make them even better?”
NeuroBench is a community-driven initiative towards providing a standardized framework for benchmarking neuromorphic solutions, unifying the field with straightforward, well-defined, and reproducible benchmark measurement. NeuroBench offers common tools and methodology that apply broadly across different models, tasks, and scenarios, allowing for comprehensive insights into the correctness and costs of execution. Recently, it was used to compare and score accurate, tiny-compute sequence models in the BioCAS 2024 Neural Decoding Grand Challenge.
In this tutorial, we provide a hands-on guide to using the open-source NeuroBench harness for profiling neuromorphic models, such as spiking neural networks and other efficiency-focused models. Participants will learn how to benchmark models, extracting meaningful metrics in order to have a comprehensive understanding of the cost profile associated with model execution. We will show how the harness interfaces can be used to connect with other popular software libraries and how users can easily extend the harness with their own custom tasks and metrics of interest, which will provide the most relevant information for their research.
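To give a flavor of the kind of workload metrics such a harness reports, here is a generic standalone sketch of two common cost measures for spiking workloads, activation sparsity and synaptic-operation counts. This is illustrative only and does not use the NeuroBench API itself:

```python
import numpy as np

def activation_sparsity(spikes):
    """Fraction of neuron-timestep slots with no spike.

    spikes: binary array of shape (timesteps, neurons).
    Higher sparsity generally means cheaper event-driven execution.
    """
    return 1.0 - float(spikes.mean())

def effective_synaptic_ops(spikes, fan_out):
    """Total synaptic events: each spike of neuron j triggers fan_out[j] updates.

    fan_out: outgoing synapse count per neuron, shape (neurons,).
    """
    return int(spikes.sum(axis=0) @ fan_out)
```

Metrics like these, computed uniformly across models and tasks, are what make the cost profiles of different neuromorphic solutions directly comparable.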
The hands-on examples will be offered through Python notebooks. Please bring your own laptop.
Tutorial: Neuromorphic Control for Autonomous Driving
This tutorial is based on 3 of our recent publications:
- Halaly, Raz, and Elishai Ezra Tsur. “Continuous adaptive nonlinear model predictive control using spiking neural networks and real-time learning.” Neuromorphic Computing and Engineering 4.2 (2024): 024006.
- Halaly, Raz, and Elishai Ezra Tsur. “Autonomous driving controllers with neuromorphic spiking neural networks.” Frontiers in Neurorobotics 17 (2023): 1234962.
- Shalumov, Albert, Raz Halaly, and Elishai Ezra Tsur. “Lidar-driven spiking neural network for collision avoidance in autonomous driving.” Bioinspiration & Biomimetics 16.6 (2021): 066016.
Autonomous driving is one of the hallmarks of artificial intelligence. Neuromorphic control is poised to contribute significantly to autonomous behavior by leveraging energy-efficient computational frameworks based on spiking neural networks. In this tutorial, we will explore neuromorphic implementations of four prominent controllers for autonomous driving – pure pursuit, Stanley, PID, and MPC – using a physics-aware simulation framework (CARLA). We will showcase these controllers with various vehicle models (from a Tesla Model 3 to an Ambulance) and compare their performance with conventional CPU-based implementations.
We will demonstrate how neuromorphic models, despite being neural approximations, can perform competitively with their conventional counterparts. In particular, we will show that neuromorphic models can converge to their optimal performance with merely 100–1,000 neurons while providing state-of-the-art response dynamics to unforeseen situations. For example, we will showcase realistic driving scenarios in which vehicles experience malfunctions and swift steering maneuvers. We will demonstrate significant improvements in dynamic error rate compared with traditional control implementations, with up to 89.15% median prediction error reduction with 5 spiking neurons and up to 96.08% with 5,000 neurons. In this tutorial, we will provide guidelines for building neuromorphic architectures for control and describe the importance of their underlying tuning parameters and neuronal resources. We will also highlight the importance of hybrid – conventional and neuromorphic – designs, as well as the limitations of neuromorphic implementations, particularly at higher speeds, where they tend to degrade faster than conventional designs.
Tutorial: Running SNNs on SpiNNaker
SpiNNaker is a highly programmable neuromorphic platform, designed to simulate large spiking neural networks in real-time. It uses many conventional low-power ARM processors executing customizable software in parallel, coupled with a specialized multicast network enabling the transmission of many spikes to multiple target neurons.
In this tutorial, participants will be able to construct and simulate Spiking Neural Networks directly on the SpiNNaker hardware using the EBRAINS JupyterLab platform. They will learn how to program networks using the PyNN SNN language, and how the PyNN constructs work on the SpiNNaker platform. They will then get to try out these networks themselves and see the results from the simulations, as well as ask any other questions about SpiNNaker and how they might use it to explore SNNs in their own work.
Tutorial: SpiNNaker2: Beyond Neural Simulation
SpiNNaker2 is a scalable many-core architecture for flexible neuromorphic computing. It combines low-power ARM cores and dedicated accelerators for deep neural networks with a scalable, event-based communication infrastructure. This unique combination makes it possible to explore a wide range of applications on SpiNNaker2, including spiking neural network simulation, deep neural networks, and hybrid neural networks, as well as other event-based algorithms.
This tutorial complements the planned PyNN tutorial for SpiNNaker by the University of Manchester and focuses on applications that go beyond neural simulation and make use of SpiNNaker2’s features. We will bring single-chip SpiNNaker2 boards and offer remote access to 48-chip server boards. The first part of the tutorial will focus on deploying deep SNNs on SpiNNaker2 using the Neuromorphic Intermediate Representation (NIR). In the second part, we will showcase examples of our generic compute and/or deep learning software stacks.