Neurons and neural networks II. Hopfield network


Perceptron recap
- key ingredient: adaptivity of the system
- unsupervised vs. supervised learning
- architecture for discrimination: the single-neuron perceptron
- error function & learning rule
- gradient-descent learning & divergence
- regularisation
- learning as inference

Interpreting learning as inference
So far, learning has been optimization with respect to an objective function: a data-error term plus a regularizer. What's this quirky regularizer, anyway?
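The objective function's formula did not survive the transcription. A plausible reconstruction, assuming MacKay-style notation in which G(w) is the data error and E_W(w) a weight-decay regularizer with strength α:

```latex
M(\mathbf{w}) = G(\mathbf{w}) + \alpha E_W(\mathbf{w}),
\qquad
E_W(\mathbf{w}) = \tfrac{1}{2} \sum_i w_i^2 .
```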

Interpreting learning as inference
Let's interpret y(x, w) as a probability. Then:
- in compact form, the likelihood of the input data can be expressed with the original error function
- the regularizer has the form of a prior!
- what we get in the objective function M(w) is the posterior distribution of w
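The slide's equations were lost in extraction. A reconstruction, assuming a single neuron with logistic output y(x, w) = σ(wᵀx) and binary targets t ∈ {0, 1}:

```latex
P(t=1 \mid \mathbf{x}, \mathbf{w}) = y(\mathbf{x}, \mathbf{w}),
\qquad
P(t \mid \mathbf{x}, \mathbf{w}) = y^{t} (1-y)^{1-t}
\quad \text{(compact form)} .
```

The likelihood of the data set D is then exp(−G(w)) with the original cross-entropy error, the regularizer supplies the prior, and their product gives the posterior:

```latex
P(D \mid \mathbf{w}) = e^{-G(\mathbf{w})},
\qquad
P(\mathbf{w}) \propto e^{-\alpha E_W(\mathbf{w})},
\qquad
P(\mathbf{w} \mid D) \propto e^{-M(\mathbf{w})} .
```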

Interpreting learning as inference
Relationship between M(w) and the posterior interpretation: minimizing M(w) amounts to finding the maximum a posteriori estimate w_MP. The log-probability interpretation of the objective function retains the additivity of errors while keeping the multiplicativity of probabilities.

Interpreting learning as inference
Properties of the Bayesian estimate: the probabilistic interpretation makes our assumptions explicit. With the regularizer we imposed a soft constraint on the learned parameters, which expresses our prior expectations. An additional plus: beyond getting w_MP, we get a measure of uncertainty in the learned parameters.

Interpreting learning as inference
Demo

Interpreting learning as inference
Making predictions: up to this point the goal was optimization. Are we equally confident in the two predictions? The Bayesian answer exploits the probabilistic interpretation:

Interpreting learning as inference
Calculating Bayesian predictions: the predictive probability averages the likelihood over the weight posterior, which is normalized by the partition function.
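Reconstructed forms of the quantities named on the slide (same assumptions as above):

```latex
P(t=1 \mid \mathbf{x}, D)
= \int y(\mathbf{x}, \mathbf{w})\, P(\mathbf{w} \mid D)\, d\mathbf{w},
\qquad
P(\mathbf{w} \mid D) = \frac{e^{-M(\mathbf{w})}}{Z_M},
\qquad
Z_M = \int e^{-M(\mathbf{w})}\, d\mathbf{w} .
```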

Interpreting learning as inference
Calculating Bayesian predictions: how to solve the integral? Bad news: Monte Carlo integration is needed.
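A minimal, runnable sketch of such a Monte Carlo prediction for a logistic single neuron; the toy data set, step size, and sample counts are illustrative assumptions, not the lecture's demo:

```python
# Bayesian prediction via Metropolis sampling of the weight posterior
# P(w|D) ∝ exp(-M(w)), for a logistic single neuron.
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 2-D inputs with a bias column, binary targets.
X = np.array([[1.0, 2.0, 1.0], [2.0, 1.0, 1.0],
              [-1.0, -2.0, 1.0], [-2.0, -1.0, 1.0]])
t = np.array([1.0, 1.0, 0.0, 0.0])
alpha = 0.1  # regularizer strength (prior precision)

def y(x, w):
    return 1.0 / (1.0 + np.exp(-x @ w))

def M(w):
    """Objective = cross-entropy error G(w) + alpha * E_W(w)."""
    p = np.clip(y(X, w), 1e-12, 1 - 1e-12)
    G = -np.sum(t * np.log(p) + (1 - t) * np.log(1 - p))
    return G + alpha * 0.5 * w @ w

# Metropolis sampling from exp(-M(w)).
w = np.zeros(3)
samples = []
for step in range(20000):
    w_new = w + 0.2 * rng.standard_normal(3)
    # Accept with probability min(1, exp(M(w) - M(w_new))).
    if np.log(rng.random()) < M(w) - M(w_new):
        w = w_new
    if step > 5000 and step % 10 == 0:  # discard burn-in, then thin
        samples.append(w.copy())

# Predictive probability: average y(x, w) over posterior samples.
x_test = np.array([1.5, 1.5, 1.0])
p_bayes = np.mean([y(x_test, ws) for ws in samples])
print(f"P(t=1 | x, D) ≈ {p_bayes:.3f}")
```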


Interpreting learning as inference
Calculating Bayesian predictions: comparing the original (MAP) estimate with the Bayesian estimate.

Interpreting learning as inference
Gaussian approximation: a Taylor expansion of M(w) around the MAP estimate yields the Gaussian approximation:
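The expansion itself was lost in transcription; the standard Laplace (Gaussian) approximation it describes:

```latex
M(\mathbf{w}) \simeq M(\mathbf{w}_{\mathrm{MP}})
+ \tfrac{1}{2} (\mathbf{w} - \mathbf{w}_{\mathrm{MP}})^{\top} \mathbf{A}\, (\mathbf{w} - \mathbf{w}_{\mathrm{MP}}),
\qquad
\mathbf{A} = \nabla \nabla M(\mathbf{w}) \big|_{\mathbf{w}_{\mathrm{MP}}},
```

giving the Gaussian approximation to the posterior,

```latex
P(\mathbf{w} \mid D) \approx Q(\mathbf{w})
= \mathcal{N}\big(\mathbf{w};\, \mathbf{w}_{\mathrm{MP}},\, \mathbf{A}^{-1}\big),
```

under which the predictive integral becomes tractable.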

Neural networks
Unsupervised learning
- The capacity of a single neuron is limited: some data (e.g., not linearly separable) cannot be learned.
- So far we used a supervised learning paradigm: a teacher was necessary to teach an input-output relation.
- Hopfield networks try to cure both.
- Unsupervised learning: what is it about? The Hebb rule is an enlightening example. Assuming 2 neurons and a weight-modification process (see the sketch below), this simple rule realizes an associative memory!
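The weight-modification equation is missing from the transcription; the classic Hebb rule it refers to, in one common form (η is a learning rate, x_1 and x_2 the two neurons' activities):

```latex
\Delta w_{12} = \eta\, x_1 x_2 .
```

Correlated firing strengthens the connection ("neurons that fire together wire together"), so a stored pattern can later re-excite itself from a partial cue.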

Neural networks
The Hopfield network
- Architecture: a set of I neurons connected by symmetric synapses of weight w_ij; no self-connections (w_ii = 0); output of neuron i: x_i.
- Activity rule and learning rule (reconstructed below), with synchronous or asynchronous update.
- Alternatively, a continuous network can be defined.
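The rules on the slide were lost in extraction; the standard discrete Hopfield rules, assuming binary states x_i ∈ {−1, +1} and stored patterns x^{(n)}:

```latex
\text{Activity rule:}\quad
a_i = \sum_j w_{ij} x_j,
\qquad
x_i \leftarrow \begin{cases} +1 & \text{if } a_i \ge 0 \\ -1 & \text{if } a_i < 0 \end{cases}
```

```latex
\text{Learning rule (Hebbian):}\quad
w_{ij} = \eta \sum_n x_i^{(n)} x_j^{(n)}, \qquad w_{ii} = 0 .
```

For the continuous variant mentioned on the slide, a common choice replaces the threshold by a smooth nonlinearity, x_i = tanh(a_i).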

Neural networks
Stability of the Hopfield network
- Are the memories stable? Necessary conditions: symmetric weights and asynchronous update.
- The network is robust against perturbation of a subset of weights.
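To make the stability claim concrete: with symmetric weights, the energy E(x) = −½ Σ_ij w_ij x_i x_j never increases under asynchronous updates, so the dynamics settle into fixed points, the stored memories. A minimal runnable sketch (network size, pattern count, and noise level are illustrative assumptions):

```python
# Discrete Hopfield network: Hebbian storage, asynchronous recall,
# and the energy function that guarantees stability.
import numpy as np

rng = np.random.default_rng(1)
I = 64                                   # number of neurons

# Store a few random binary (+/-1) patterns with the Hebb rule.
patterns = rng.choice([-1, 1], size=(3, I))
W = np.zeros((I, I))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)                   # no self-connections: w_ii = 0

def energy(x):
    # Lyapunov function: asynchronous updates never increase it.
    return -0.5 * x @ W @ x

def recall(x, sweeps=10):
    x = x.copy()
    for _ in range(sweeps):
        for i in rng.permutation(I):     # asynchronous: one neuron at a time
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

# Corrupt a stored pattern, then let the network clean it up.
noisy = patterns[0].copy()
flip = rng.choice(I, size=10, replace=False)
noisy[flip] *= -1
restored = recall(noisy)
print("overlap with stored pattern:", (restored @ patterns[0]) / I)
print("energy: noisy", energy(noisy), "-> restored", energy(restored))
```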

Neural networks
Capacity of the Hopfield network
How many memory traces can be memorized by a network of I neurons? For random binary patterns, the standard result is that reliable recall breaks down beyond roughly 0.138 I stored patterns.