NICE 2020 – invited speakers

For NICE 2020, the following invited speakers have accepted the invitation (as of 18 December 2019):

  • Ryad Benosman (University of Pittsburgh, USA)
    “Why is Neuromorphic Event-based Engineering the future of AI?” (abstract)
  • Mike Davies (Intel)
  • Andrew Davison (CNRS, France)
  • Markus Diesmann (FZ Jülich, Germany)
    “Natural density cortical models as benchmarks for universal neuromorphic computers”
  • Charlotte Frenkel
    “Bottom-up and top-down neuromorphic processor design: Unveiling roads to embedded cognition” (abstract)
  • Wolfgang Maass (TU Graz, Austria)
  • Thomas Pfeil (Bosch, Germany)
  • Titash Rakshit (Samsung)
  • Johannes Schemmel (Heidelberg University, Germany)
  • Walter Senn (U Bern, Switzerland)
  • William Severa (Sandia)
  • Luping Shi (Tsinghua, China)
  • Fabian Sinz (U Tübingen, Germany)

Talk abstracts

Ryad Benosman

Why is Neuromorphic Event-based Engineering the future of AI?
While neuromorphic vision sensors and processors are becoming more available and usable by non-experts, and although they outperform existing devices especially for sensing, there are still no successful commercial applications that have allowed them to overtake conventional computing and sensing. In this presentation, I will offer insights into the key missing steps that are preventing this computational revolution from happening. I will give an overview of neuromorphic, event-based approaches to image sensing and processing, and how these have the potential to radically change current AI technologies and open new frontiers in building intelligent machines. I will focus on what is meant by event-based computation and the need to process information in the time domain rather than recycling old concepts such as images, backpropagation, and other frame-based approaches. I will introduce new models of machine learning based on spike timing and show the importance of remaining compatible with neuroscience findings and recorded data. Finally, I will discuss how to build neuromorphic processors able to run these new forms of AI and why a move to new architectural concepts is needed.
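To make the contrast with frame-based processing concrete, the following toy NumPy sketch (not part of the talk) illustrates the address-event representation used by event-based vision sensors: instead of dense frames, pixels emit sparse (x, y, timestamp, polarity) events whenever their log-intensity changes by more than a contrast threshold. The frame data and threshold are illustrative placeholders, not a model of any particular sensor.

    # Toy sketch: converting a short frame sequence into sparse address-events.
    import numpy as np

    rng = np.random.default_rng(1)
    threshold = 0.2                       # contrast threshold (arbitrary)
    frames = rng.random((5, 8, 8))        # stand-in for a short video clip
    timestamps = np.arange(5) * 1e-3      # frame times in seconds

    events = []                           # list of (x, y, t, polarity)
    ref = np.log(frames[0] + 1e-6)        # per-pixel reference log-intensity
    for t, frame in zip(timestamps[1:], frames[1:]):
        log_i = np.log(frame + 1e-6)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for x, y in zip(xs, ys):
            events.append((x, y, t, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = log_i[y, x]       # update reference only where an event fired

    print(f"{len(events)} events instead of {frames[1:].size} pixel values")

The point of the representation is that information is carried by the timing and location of changes, so static regions produce no data at all.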

Charlotte Frenkel

Bottom-up and top-down neuromorphic processor design: Unveiling roads to embedded cognition
While Moore’s law has driven exponential expectations of computing power, its approaching end calls for new roads to embedded cognition. The field of neuromorphic computing aims at a paradigm shift from conventional von Neumann computers, both in architecture (i.e. co-location of memory and processing) and in data representation (i.e. spike-based, event-driven encoding). However, it is unclear whether the bottom-up (neuroscience-driven) or the top-down (application-driven) design approach offers the most promising road to embedded cognition. To clarify this question, this talk is divided into two parts.

The first part focuses on the bottom-up approach. From the building-block level to silicon integration, we design two bottom-up neuromorphic processors: ODIN and MorphIC. We demonstrate with measurement results that hardware-aware design and selection of neuroscience models enables record neuron and synapse densities with low-power operation. However, the inherent difficulty for bottom-up designs lies in applying them to real-world problems beyond the scope of neuroscience applications.

The second part investigates the top-down approach. Starting from the application-level problem of adaptive edge computing, we derive the direct random target projection (DRTP) algorithm for low-cost neural network training and design a top-down DRTP-enabled neuromorphic processor: SPOON. We demonstrate with pre-silicon implementation results that combining event-driven and frame-based processing with weight-transport-free, update-unlocked training supports low-cost adaptive edge computing with spike-based sensors. However, defining a suitable target for bio-inspiration in top-down designs is difficult, as it must ensure both the efficiency and the relevance of the resulting neuromorphic device.

Therefore, we claim that each of these two design approaches can act as a guide to address the shortcomings of the other.
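As a concrete illustration of the DRTP idea mentioned in the abstract, here is a minimal NumPy sketch of one training pass, based on the publicly described algorithm rather than the SPOON implementation: the one-hot target is projected through a fixed random matrix to produce the hidden-layer learning signal, so no symmetric backward weights (weight transport) and no backpropagated error are needed. Layer sizes, learning rate, and data are placeholders.

    # Minimal sketch of Direct Random Target Projection (DRTP) for a 2-layer net.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out, lr = 784, 100, 10, 0.01

    # Trainable forward weights of a small 2-layer network.
    W1 = rng.normal(0.0, 0.1, (n_hid, n_in))
    W2 = rng.normal(0.0, 0.1, (n_out, n_hid))

    # Fixed random matrix that projects the one-hot *target* onto the hidden
    # layer. It is never learned, so no symmetric backward weights are needed
    # (no weight transport) and the hidden update does not wait for the
    # output-layer error to propagate back (update-unlocked).
    B1 = rng.normal(0.0, 0.1, (n_hid, n_out))

    # Placeholder data standing in for a real labelled dataset.
    X = rng.normal(0.0, 1.0, (32, n_in, 1))
    labels = rng.integers(0, n_out, 32)
    Y = np.eye(n_out)[labels][..., None]      # one-hot column vectors

    for x, y in zip(X, Y):
        # Forward pass.
        h = np.tanh(W1 @ x)
        z = W2 @ h
        p = np.exp(z - z.max()); p /= p.sum() # softmax output

        # Output layer: usual local error for softmax + cross-entropy.
        e_out = p - y
        # Hidden layer: error signal replaced by a fixed random projection
        # of the target, gated by the local activation derivative.
        e_hid = (B1 @ y) * (1.0 - h ** 2)

        # Purely local, layer-wise weight updates.
        W2 -= lr * e_out @ h.T
        W1 -= lr * e_hid @ x.T

In this sketch the hidden-layer update depends only on the input, the local activation, and the fixed projection of the target, which is what makes the scheme attractive for low-cost on-chip learning.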