Mike Davies Director, Neuromorphic Computing Lab Intel Labs


Loihi at a Glance

Key Properties:
- Integrated memory + compute neuromorphic architecture: 128 neuromorphic cores supporting up to 128K neurons and 128M synapses with an advanced SNN feature set
- Scalable on-chip learning capabilities to support a range of learning paradigms (unsupervised, supervised, reinforcement-based, and others)
- Support for highly complex neural network topologies (up to 2,000-way fan-out between neurons)
- Fully digital, asynchronous implementation
- Fabricated in Intel's 14nm FinFET process technology

Chip Architecture

- Technology: 14nm
- Die area: 64 mm²
- Core area: 0.41 mm²
- Neuromorphic cores: 128
- x86 cores: 3 LMT cores
- Max neurons: 128K
- Max synapses: 128M
- Transistors: 2.07 billion

Neuromorphic core:
- LIF neuron model
- Programmable learning
- 128 KB synaptic memory
- Up to 1,024 neurons per core
- Asynchronous design

Neuromorphic mesh:
- Low-overhead NoC fabric: 8x16-core 2D mesh
- Scalable to 1000s of cores
- Dimension-order routed
- Two physical fabrics
- 8 GB/s per hop

Parallel off-chip interfaces:
- Two-phase asynchronous, single-ended signaling
- 100-200 MB/s bandwidth

Embedded x86 processors:
- Efficient spike-based communication with neuromorphic cores
- Data encoding/decoding
- Network configuration
- Synchronous design

Mesh Operation

1. Time step T begins: cores update dynamic neuron state and evaluate firing thresholds.
2. Above-threshold neurons send spike messages to their fan-out cores; all neurons that fire in time step T route their spike messages to all destination cores.
3. Barrier synchronization messages are exchanged between all cores; when complete, time advances to step T+1.
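The three phases above can be sketched in software. The following is a minimal, illustrative model of a barrier-synchronized time-step loop, not Intel's actual implementation: the `Core`/`run` names, integer membrane state, and reset-to-zero behavior are simplifying assumptions.

```python
# Hypothetical sketch of a barrier-synchronized mesh time step.
# Cores update neuron state, fire, and route spike messages; no core
# begins step T+1 until every core has finished step T.

class Core:
    def __init__(self, n_neurons, threshold=100):
        self.v = [0] * n_neurons        # membrane potentials
        self.inbox = [0] * n_neurons    # weighted spikes arriving this step
        self.threshold = threshold
        self.fanout = {}                # neuron -> [(dst_core, dst_neuron, weight)]

    def step(self):
        """Phase 1+2: integrate inputs, evaluate thresholds, emit spike messages."""
        messages = []
        for i in range(len(self.v)):
            self.v[i] += self.inbox[i]
            self.inbox[i] = 0
            if self.v[i] >= self.threshold:
                self.v[i] = 0           # reset after firing (simplified)
                messages.extend(self.fanout.get(i, []))
        return messages

def run(cores, n_steps):
    for t in range(n_steps):
        # Collect every core's outgoing spikes for this step...
        messages = [m for core in cores for m in core.step()]
        # ...then deliver them for integration at step t+1. Completing this
        # loop for all cores plays the role of the barrier synchronization.
        for dst_core, dst_neuron, weight in messages:
            dst_core.inbox[dst_neuron] += weight
```

A spike emitted in step T is integrated by its destination neurons in step T+1, matching the slide's ordering of update, routing, and barrier phases.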

Neuromorphic Core Architecture

- All synaptic connections pooled in 128 KB shared memory
- Sparse, dense, and hierarchical synaptic mapping representations
- Discrete-time LIF neuron model (CUBA)
- Multi-compartment dendritic trees, up to 1K compartments
- Intrinsic excitability homeostasis
- Synaptic delays, axon delays, and refractory delays (+ random)
- Synaptic eligibility traces
- Flexible 3-tuple synaptic variables (1-9b weight, 0-6b delay, 0-8b tag)
- Graded reward spikes
- Flexible synaptic plasticity with microcode-programmable rules
- Filtered spike-train traces
- Shared output routing table with 4K axon routes
- Sum-of-products rule semantics; plasticity rules can target any synaptic variable
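A discrete-time current-based (CUBA) LIF model like the one named above can be sketched in a few lines. The decay constants, threshold, and floating-point state here are illustrative assumptions; the chip uses fixed-point state and its own parameterization.

```python
# Minimal discrete-time CUBA LIF neuron sketch (illustrative parameters,
# not Loihi's actual fixed-point model). Input current u and membrane
# potential v each decay exponentially; a spike resets v and starts a
# refractory delay during which v is held at reset.

class CubaLifNeuron:
    def __init__(self, tau_u=4.0, tau_v=8.0, threshold=1.0, refractory=2):
        self.du = 1.0 - 1.0 / tau_u   # per-step synaptic current decay factor
        self.dv = 1.0 - 1.0 / tau_v   # per-step membrane voltage decay factor
        self.threshold = threshold
        self.refractory = refractory
        self.u = 0.0                   # synaptic current state
        self.v = 0.0                   # membrane potential state
        self.ref_count = 0

    def step(self, weighted_input):
        """Advance one time step; returns True if the neuron spikes."""
        self.u = self.u * self.du + weighted_input
        if self.ref_count > 0:         # hold at reset during refractory delay
            self.ref_count -= 1
            self.v = 0.0
            return False
        self.v = self.v * self.dv + self.u
        if self.v >= self.threshold:
            self.v = 0.0
            self.ref_count = self.refractory
            return True
        return False
```

Driven with a constant input, the neuron integrates toward threshold and then fires periodically, with the refractory delay setting a floor on the inter-spike interval.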

Trace-Based Programmable Learning Rules

- Trace: an exponentially filtered spike train. Presynaptic spikes drive X traces; postsynaptic spikes drive Y traces.
- Short time-scale traces (X1(t), Y1(t), τ=20) capture spike-timing correlations => STDP regime
- Long time-scale traces (X2(t), Y2(t), τ=200) respond to correlations in activity rates
- Weight, delay, and tag learning rules are programmed as sum-of-products equations, e.g. for the weight:

  w ← w + Σ_{i=1}^{N_P} S_i · Π_{j=1}^{n_i} (V_{i,j} + C_{i,j})

- Traces are low precision (7-9b) and may decay stochastically for implementation efficiency
- Synaptic variables: weight, delay, tag (variable precision)
- Variable dependencies: X0, Y0, X1, Y1, X2, Y2, weight, delay, tag, etc.
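Classic pair-based STDP is one simple special case of this sum-of-products form. The sketch below uses one presynaptic and one postsynaptic trace; the τ, learning rates, and full-precision floats are illustrative assumptions, whereas on-chip the traces are low-precision (7-9 bit) and the rule is microcoded.

```python
import math

# Pair-based STDP from exponentially filtered spike traces: a simple
# special case of the sum-of-products learning rule. All constants are
# illustrative, not Loihi's.

class TraceStdpSynapse:
    def __init__(self, w=0.5, tau=20.0, a_plus=0.01, a_minus=0.012):
        self.w = w
        self.decay = math.exp(-1.0 / tau)  # per-step trace decay factor
        self.x = 0.0   # presynaptic trace X1(t)
        self.y = 0.0   # postsynaptic trace Y1(t)
        self.a_plus = a_plus
        self.a_minus = a_minus

    def step(self, pre_spike, post_spike):
        """One time step: decay both traces, then apply spike-triggered updates."""
        self.x *= self.decay
        self.y *= self.decay
        if pre_spike:
            self.x += 1.0
            self.w -= self.a_minus * self.y  # post-before-pre: depression
        if post_spike:
            self.y += 1.0
            self.w += self.a_plus * self.x   # pre-before-post: potentiation
        return self.w
```

A pre spike followed a few steps later by a post spike potentiates the weight (the decayed X trace is still positive when the post spike samples it); reversing the order depresses it.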

Physical Implementation

- Core pipeline blocks: axon fanout, synaptic memory, dendrite accumulation, neuron model, learning engine
- Core dimensions: 860 µm x 480 µm
- Bundled-data asynchronous implementation: event-driven with integrated flow control
- Fully automated design flow from CSP
- Supports FPGA emulation
- Integrates with synchronous x86 CPUs

(Die plot highlights one asynchronous controller's associated pipeline logic.)

Application Results

More to come Thursday morning. Also see our posters and demos for more.

M. Davies et al., "Loihi: A Neuromorphic Manycore Processor with On-Chip Learning," IEEE Micro, vol. 38, no. 1, pp. 82-99, January/February 2018.

Legal Information

This presentation contains the general insights and opinions of Intel Corporation ("Intel"). The information in this presentation is provided for information only and is not to be relied upon for any purpose other than educational. Statements in this document that refer to Intel's plans and expectations for the quarter, the year, and the future are forward-looking statements that involve a number of risks and uncertainties. A detailed discussion of the factors that could affect Intel's results and plans is included in Intel's SEC filings, including the annual report on Form 10-K. Any forecasts of goods and services needed for Intel's operations are provided for discussion purposes only. Intel will have no liability to make any purchase in connection with forecasts published in this document. Intel accepts no duty to update this presentation based on more current information. Intel is not liable for any damages, direct or indirect, consequential or otherwise, that may arise, directly or indirectly, from the use or misuse of the information in this presentation. Intel technologies' features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Learn more at intel.com, or from the OEM or retailer. Copyright © 2018 Intel Corporation. Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries. *Other names and brands may be claimed as the property of others.