NICE 2020 – invited speakers

The following invited speakers accepted the invitation to speak at NICE 2020:

  • Ryad Benosman (University of Pittsburgh, USA)
    “Why is Neuromorphic Event-based Engineering the future of AI?” (abstract)
  • Mike Davies (Intel)
  • Andrew Davison (CNRS, France)
    “Programming neuromorphic computers: PyNN and beyond” (abstract)
  • Markus Diesmann (FZ Jülich, Germany)
    “Natural density cortical models as benchmarks for universal neuromorphic computers”
  • Charlotte Frenkel (ETH Zürich, Switzerland)
    “Bottom-up and top-down neuromorphic processor design: Unveiling roads to embedded cognition” (abstract)
  • Wolfgang Maass (TU Graz, Austria)
  • Thomas Pfeil (Bosch, Germany)
    “Neuromorphic and AI research at BCAI” (abstract)
  • Titash Rakshit (Samsung)
  • Irina Rish (IBM) (abstract)
  • Johannes Schemmel (Heidelberg University, Germany)
  • Walter Senn (U Bern, Switzerland)
  • William Severa (Sandia, USA)
    “Platform-Agnostic Neural Algorithm Composition using Fugu” (abstract)
  • Luping Shi (Tsinghua, China)
  • Fabian Sinz (U Tübingen, Germany)
    “Inductive bias transfer between brains and machines” (abstract)
  • Jonathan Tapson (GrAI Matter Labs, USA) (abstract)

Talk abstracts

Info: the meeting agenda with all abstracts is available here.

Ryad Benosman

Why is Neuromorphic Event-based Engineering the future of AI?
Although neuromorphic vision sensors and processors are becoming more available and usable by non-specialists, and although they outperform existing devices especially for sensing, there are still no successful commercial applications that have allowed them to overtake conventional computation and sensing. In this presentation, I will provide insights into the key missing steps that are preventing this new computational revolution from happening. I will give an overview of neuromorphic, event-based approaches to image sensing and processing and how these have the potential to radically change current AI technologies and open new frontiers in building intelligent machines. I will focus on what is meant by event-based computation and on the need to process information in the time domain rather than recycling old concepts such as images, backpropagation and any form of frame-based approach. I will introduce new models of machine learning based on spike timing and show the importance of being compatible with neuroscience findings and recorded data. Finally, I will provide new insights into how to build neuromorphic neural processors able to run these new forms of AI, and argue for the need to move to new architectural concepts.

Andrew Davison

Programming neuromorphic computers: PyNN and beyond

PyNN is a Python API for describing spiking neuronal networks consisting of point neurons, with synaptic plasticity. The API is intended to be independent of the underlying simulator or hardware platform: PyNN models can run on traditional simulators such as NEST, NEURON and Brian, GPU-based simulators such as GeNN, and neuromorphic hardware systems such as BrainScaleS and SpiNNaker. In this talk I will present the current state of PyNN and forthcoming extensions, in particular support for multicompartmental models, intracellular calcium dynamics, and structural plasticity. I will also briefly discuss ideas for higher-level APIs/component libraries, built on PyNN, to support cognitive modelling and machine-learning-inspired networks.
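
For readers unfamiliar with the API, here is a minimal sketch of the simulator-independent style described above. The network, the parameters and the choice of the NEST backend are purely illustrative; the same script can target another simulator or a neuromorphic-hardware platform by changing only the import.

    # Minimal PyNN sketch (illustrative network; assumes the NEST backend is installed).
    # Swapping "pyNN.nest" for e.g. pyNN.neuron or a hardware-specific module
    # retargets the same description to another platform.
    import pyNN.nest as sim

    sim.setup(timestep=0.1)  # ms

    # Two populations of point neurons
    excitatory = sim.Population(80, sim.IF_cond_exp(), label="exc")
    inhibitory = sim.Population(20, sim.IF_cond_exp(), label="inh")

    # Poisson input driving the excitatory population
    noise = sim.Population(80, sim.SpikeSourcePoisson(rate=10.0))
    sim.Projection(noise, excitatory, sim.OneToOneConnector(),
                   sim.StaticSynapse(weight=0.01, delay=1.0))

    # Sparse recurrent connectivity
    sim.Projection(excitatory, inhibitory, sim.FixedProbabilityConnector(0.1),
                   sim.StaticSynapse(weight=0.005, delay=1.0))
    sim.Projection(inhibitory, excitatory, sim.FixedProbabilityConnector(0.1),
                   sim.StaticSynapse(weight=0.02, delay=1.0),
                   receptor_type="inhibitory")

    excitatory.record("spikes")
    sim.run(1000.0)  # ms

    spikes = excitatory.get_data().segments[0].spiketrains
    sim.end()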

Charlotte Frenkel

Bottom-up and top-down neuromorphic processor design: Unveiling roads to embedded cognition
While Moore’s law has driven exponential expectations of computing power, its nearing end calls for new roads to embedded cognition. The field of neuromorphic computing aims at a paradigm shift compared to conventional von Neumann computers, both in the architecture (i.e. co-location of memory and processing) and in the data representation (i.e. spike-based, event-driven encoding). However, it is unclear which of the bottom-up (neuroscience-driven) or top-down (application-driven) design approaches could unveil the most promising roads to embedded cognition. To clarify this question, this talk is divided into two parts.

The first part focuses on the bottom-up approach. From the building-block level to the silicon integration, we design two bottom-up neuromorphic processors: ODIN and MorphIC. We demonstrate with measurement results that hardware-aware neuroscience model design and selection allows reaching record neuron and synapse densities with low-power operation. However, the inherent difficulty for bottom-up designs lies in applying them to real-world problems beyond the scope of neuroscience applications.

The second part investigates the top-down approach. Starting from the application-driven problem of adaptive edge computing, we derive the direct random target projection (DRTP) algorithm for low-cost neural network training and design a top-down DRTP-enabled neuromorphic processor: SPOON. We demonstrate with pre-silicon implementation results that combining event-driven and frame-based processing with weight-transport-free, update-unlocked training supports low-cost adaptive edge computing with spike-based sensors. However, defining a suitable target for bio-inspiration in top-down designs is difficult, as it should ensure both the efficiency and the relevance of the resulting neuromorphic device.

Therefore, we claim that each of these two design approaches can act as a guide to address the shortcomings of the other.
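
To make the DRTP idea mentioned above more concrete, here is a rough NumPy sketch. It is a conceptual illustration rather than the SPOON implementation, and the layer sizes, nonlinearity, learning rate and sign conventions are arbitrary: each hidden layer is updated with a fixed random projection of the one-hot target instead of a backpropagated error, so no weight transport and no backward pass are needed.

    # Conceptual sketch of Direct Random Target Projection (DRTP); all
    # hyperparameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out, lr = 784, 100, 10, 0.01

    W1 = rng.normal(0.0, 0.1, (n_hid, n_in))    # trained, hidden layer
    W2 = rng.normal(0.0, 0.1, (n_out, n_hid))   # trained, output layer
    B1 = rng.normal(0.0, 0.1, (n_hid, n_out))   # fixed random target projection

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # One training step for a single (input, one-hot target) pair
    x = rng.random(n_in)
    y_target = np.eye(n_out)[3]

    # Forward pass
    a1 = sigmoid(W1 @ x)
    y_hat = sigmoid(W2 @ a1)

    # Hidden layer: the modulatory signal is a fixed random projection of the
    # target -- no transpose of W2 (weight-transport-free) and no dependence
    # on the output error (update-unlocked).
    delta1 = (B1 @ y_target) * a1 * (1.0 - a1)
    W1 -= lr * np.outer(delta1, x)

    # Output layer: ordinary delta rule on the prediction error
    delta2 = (y_hat - y_target) * y_hat * (1.0 - y_hat)
    W2 -= lr * np.outer(delta2, a1)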

Thomas Pfeil

Neuromorphic and AI research at BCAI

We will give an overview of current challenges and activities at the Bosch Center for Artificial Intelligence regarding neuromorphic computing, spiking neural networks and deep learning. This includes a short introduction to the publicly funded project ULPEC addressing ultra-low-power vision systems. In addition, we will give a summary of selected academic contributions in the field of spiking neural networks and hardware-aware compression of deep neural networks.

Irina Rish

Beyond Backprop: Different Approaches to Credit Assignment in Neural Nets

The backpropagation algorithm (backprop) has been the workhorse of neural network learning for several decades, and its practical effectiveness is demonstrated by the recent successes of deep learning in a wide range of applications. This approach uses chain-rule differentiation to compute gradients in state-of-the-art learning algorithms such as stochastic gradient descent (SGD) and its variations. However, backprop has several drawbacks as well, including the vanishing and exploding gradients issue, the inability to handle non-differentiable nonlinearities or to parallelize weight updates across layers, and biological implausibility. These limitations continue to motivate the exploration of alternative training algorithms, including several recently proposed auxiliary-variable methods which break the complex nested objective function into local subproblems. However, those techniques are mainly offline (batch), which limits their applicability to extremely large datasets, as well as to online, continual or reinforcement learning. The main contribution of our work is a novel online (stochastic/mini-batch) alternating minimization (AM) approach for training deep neural networks, together with the first theoretical convergence guarantees for AM in stochastic settings and promising empirical results on a variety of architectures and datasets.
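
As a rough sketch of the auxiliary-variable idea mentioned above (the notation here is illustrative and not necessarily that used in the talk), the nested training objective of an L-layer network can be relaxed by treating the per-layer activations a_l as free variables coupled to the weights through quadratic penalties:

    \min_{\{W_l\},\{a_l\}} \;
        \ell\bigl(f_L(W_L, a_{L-1}),\, y\bigr)
        \;+\; \frac{\mu}{2} \sum_{l=1}^{L-1} \bigl\lVert a_l - f_l(W_l, a_{l-1}) \bigr\rVert^2,
    \qquad a_0 = x.

Alternating minimization cycles between updating each W_l (a local, single-layer problem) and each a_l, so the layers decouple and no end-to-end chain-rule differentiation is required; the online variant described in the talk applies such updates over mini-batches rather than the full dataset.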

William Severa

Platform-Agnostic Neural Algorithm Composition using Fugu

Spiking neural networks and the corresponding neuromorphic hardware are undergoing an uptick in interest as key milestones are accomplished by industry, academic and government research groups. Unfortunately, from an end-user’s perspective, testing or deploying applications on a neuromorphic platform is very challenging and often infeasible. We hope to address two common and key challenges, portability and composition, through the creation of an overarching software framework called Fugu. Fugu allows spiking neural algorithms, created by independent designers, to be combined seamlessly in a scalable and target-platform-agnostic manner. The resulting intermediate representation is then translatable to multiple neuromorphic hardware backends.
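
To make the composition idea concrete, here is a deliberately simplified, hypothetical Python sketch; the class and method names are invented for illustration and are not Fugu’s actual API. Independently written algorithms are wrapped as bricks with named spike ports, a scaffold stitches them into a graph, and the resulting platform-agnostic representation would then be handed to a hardware-specific backend.

    # Hypothetical illustration of brick-style composition (names invented
    # for this sketch; NOT Fugu's actual API).
    from dataclasses import dataclass, field

    @dataclass
    class Brick:
        """A self-contained spiking algorithm with named spike ports."""
        name: str
        inputs: list
        outputs: list

    @dataclass
    class Scaffold:
        """Composes independently written bricks into one platform-agnostic graph."""
        bricks: list = field(default_factory=list)
        edges: list = field(default_factory=list)   # (src.port -> dst.port)

        def add(self, brick):
            self.bricks.append(brick)
            return brick

        def connect(self, src, src_port, dst, dst_port):
            assert src_port in src.outputs and dst_port in dst.inputs
            self.edges.append((f"{src.name}.{src_port}", f"{dst.name}.{dst_port}"))

        def build(self):
            """Return an intermediate representation a backend could translate."""
            return {"bricks": [b.name for b in self.bricks], "edges": self.edges}

    # Compose two hypothetical bricks and inspect the intermediate representation.
    scaffold = Scaffold()
    encoder = scaffold.add(Brick("spike_encoder", inputs=["pixels"], outputs=["spikes"]))
    classifier = scaffold.add(Brick("snn_classifier", inputs=["spikes"], outputs=["label"]))
    scaffold.connect(encoder, "spikes", classifier, "spikes")
    print(scaffold.build())  # hand this representation to a platform-specific backend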

Acknowledgements: Sandia National Laboratories is a multi-mission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International, Inc., for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-NA0003525.

Fabian Sinz

Inductive bias transfer between brains and machines

Machine learning, in particular computer vision, has made tremendous progress in recent years. On standardized datasets, deep networks now frequently achieve close-to-human or superhuman performance. However, despite this enormous progress, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as the network architecture and the learning rule. Their joint effect, called the “inductive bias,” determines how well any learning algorithm—or brain—generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. I will give an overview of some conceptual ideas and preliminary results on how the rapid increase in neuroscientific data could be used to transfer low-level inductive biases from the brain to learning machines.

Jonathan Tapson (GrAI Matter Labs)

Batch << 1: Why Neuromorphic Computing Architectures Suit Real-Time Workloads

As predicted by John Hennessy, there has been a “Cambrian explosion” of computing architectures as Moore’s Law scaling has broken down. This is most obvious in the new field of AI hardware, where the competition to develop and commercialize chips for deep learning training and inference is particularly strong. There is no consensus as to whether the same architectures will be appropriate for data-center computation and edge computation, although some practitioners are starting to differentiate architectures on the basis of whether inputs (typically, images or video frames) can be accumulated before processing (allowing for very large memory read and write blocks and large matrix multiplications); or whether the task demands that each frame must be processed in real time (so-called “Batch = 1” processing).

In this presentation we show that many real-world tasks are in fact “Batch << 1” operations. For example, in the case of a forward-facing video camera in a self-driving car application, the similarity between successive frames is very high, and increases as the frame rate and resolution of the video increase; a 240fps 1080p camera will typically have well over 99% of pixels unchanged between successive frames. The same high correlation between successive samples applies in other real-world workloads such as conversational audio processing.

Exploiting the correlation of input streams can lead to very efficient processing (as shown in video compression techniques such as H.264 / MPEG-4). However, it requires significantly different processing architectures, chief among which is the necessity to maintain system state in memory between inputs.
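
As a toy illustration of this point (threshold, frame size and noise level are arbitrary), the sketch below keeps the previous frame as persistent state and forwards only the pixels whose change exceeds a threshold, so per-frame work scales with the amount of change rather than with the frame size.

    # Toy sketch of state-holding, change-driven ("Batch << 1") processing.
    # Threshold and frame statistics are illustrative.
    import numpy as np

    class DeltaLayer:
        """Keeps the previous input as state and emits only significant changes."""
        def __init__(self, shape, threshold=0.02):
            self.state = np.zeros(shape)
            self.threshold = threshold

        def __call__(self, frame):
            delta = frame - self.state
            changed = np.abs(delta) > self.threshold
            self.state[changed] = frame[changed]   # sparse state update
            fraction = changed.mean()              # share of pixels needing work
            return delta * changed, fraction

    layer = DeltaLayer((1080, 1920))
    frame = np.random.rand(1080, 1920)             # first frame: full update
    for _ in range(3):
        frame = np.clip(frame + np.random.randn(1080, 1920) * 0.005, 0.0, 1.0)
        events, fraction = layer(frame)
        print(f"pixels to process this frame: {fraction:.1%}")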

We will show that neuromorphic architectures intrinsically implement the most important features of a “Batch << 1” architecture, and are very well suited to edge processing. We will describe a new architecture – NeuronFlow – which is optimized for this purpose, and present results from GrAIOne, the first chip manufactured to implement this architecture. Early results show a significant processing advantage in terms of both latency and power consumption.