Recognition of English Characters Using Spiking Neural Networks


Amjad J. Humaidi #1, Thaer M. Kadhim *2
Control and System Engineering, University of Technology, Baghdad, Iraq
1 601116@uotechnology.edu.iq
2 cse.61012@uotechnology.edu

Abstract: Text recognition is one of the active and challenging research areas in the field of pattern recognition. It covers many applications, such as automatic number-plate recognition, bank cheque processing, reading aids for the blind, and the conversion of hand-written documents into structured text. In the present work, a spiking neural network (SNN) model is used to identify the characters of a character set. The network consists of two layers of Izhikevich neurons, and the Remote Supervised Method (ReSuMe) is used as the learning rule for training. Applied to the recognition of English characters, the network could successfully recognize a set of 48 characters.

Keywords: Spiking Neural Network (SNN), Remote Supervised Method (ReSuMe), Artificial Neural Networks (ANN)

I. INTRODUCTION

Spiking neuron models gave rise to the third generation of artificial neural networks (ANNs) [1]. Such models make neural simulation more realistic by incorporating the concept of time. It has been shown that a considerable improvement in image-recognition speed can be achieved by using pulse coding, rather than rate coding, in biological neurons of the cortex [2]. Rate coding generally requires a longer processing time, since a neuron can fire only after waiting for several spikes to arrive; with pulse coding, information can be encoded in the timing order of incoming spikes at a faster rate. The researchers in [3] and [4] used spiking neural networks based on pulse coding for image-recognition applications. The model developed by Gupta could recognize a set of 48 characters of 5×3 pixel size; it used a network of integrate-and-fire spiking neurons for recognition and spike-timing-dependent plasticity (STDP) for training [5]. Thorpe used the same neuron model to examine recognition with images of larger size [6]. Buonomano et al. proposed a spiking neural model that codes the intensity of the input in relative firing times, applied to position-invariant character recognition [7].

Researchers in the field of character recognition have shown a strong interest in models that mimic the nature of biological neural networks. In other words, it is worth investigating, for image-recognition systems, neuron models that are closer to biology than the integrate-and-fire model. Izhikevich stated that the integrate-and-fire model is far from the true biological behaviour of spiking neurons; he noted that it cannot reproduce the most essential features of cortical spiking neurons and should therefore be avoided [8]. Izhikevich also compared eleven spiking neuron models with respect to biological accuracy and computational cost [8]. Sorted by biological accuracy, the five most accurate models are the Hodgkin-Huxley, Izhikevich, Wilson, Hindmarsh-Rose, and Morris-Lecar models.
He also showed that the Hodgkin-Huxley model is the most computationally intensive of these, while the Izhikevich model is the most computationally efficient. In the present work, a spiking neural network for character recognition is presented. The network is a two-layer spiking neural network based on the biologically plausible Izhikevich neuron model, the Remote Supervised Method (ReSuMe) is used for training, and the objective is to recognize a set of 48 characters of 5×3 pixel size.

II. NEURON MODEL

In 2003, a new spiking neuron model was proposed by Izhikevich. The model is characterized by a two-dimensional system of ordinary differential equations [9]. The computations required by the Izhikevich model (13 flops per neuron update) are significantly fewer than those needed by the Hodgkin-Huxley model (1200 flops per neuron update) [8]. In spite of this, the Izhikevich model can reproduce the responses of most types of neurons used in biological experiments. As was done by Izhikevich, a time step of 1 ms is used [9]. The membrane potential evolves as

dv/dt = 0.04v² + 5v + 140 − u + I    (1)

with the recovery variable governed by

du/dt = a(bv − u)    (2)

subject to the after-spike reset condition

if v ≥ 30 mV, then v ← c and u ← u + d    (3)

The variables v and u denote the membrane potential of the neuron and its recovery variable, respectively, and I is the total synaptic current. The expression 0.04v² + 5v + 140 was obtained by fitting the spike-initiation dynamics of cortical neurons, such that v is measured in mV and time in ms. By suitable adjustment of the fixed parameters (a, b, c, and d), the Izhikevich model can reproduce the responses of most types of neurons observed in biological tests. The values of the constants are listed in the Appendix. The spikes produced by this model are depicted in Fig. 1.

Fig. 1. Spikes generated by the Izhikevich model
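To make Eqs. (1)-(3) concrete, the following is a minimal simulation sketch using the 1 ms forward-Euler step adopted in the paper and the parameter values listed in the Appendix (a = 0.02, b = 0.2, c = -65, d = 8, threshold 30 mV). The constant input current and the simulation length are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Parameters from the Appendix (regular-spiking Izhikevich neuron).
A, B, C, D = 0.02, 0.2, -65.0, 8.0
THRESHOLD = 30.0   # mV
DT = 1.0           # ms, the time step used in the paper

def simulate_izhikevich(I, t_max=200.0):
    """Integrate Eqs. (1)-(3) with forward Euler for a constant input current I."""
    v, u = C, B * C                 # start from a typical resting state
    trace, spike_times = [], []
    for step in range(int(t_max / DT)):
        v += DT * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)   # Eq. (1)
        u += DT * A * (B * v - u)                           # Eq. (2)
        if v >= THRESHOLD:                                  # Eq. (3): spike and reset
            trace.append(THRESHOLD)                         # clip the peak for plotting
            v, u = C, u + D
            spike_times.append(step * DT)
        else:
            trace.append(v)
    return np.array(trace), spike_times

# Example: an illustrative constant current produces regular spiking.
trace, spikes = simulate_izhikevich(I=10.0)
print(spikes)
```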

III. CHARACTER RECOGNITION ALGORITHM

In this algorithm, a two-layer spiking neural network based on the Izhikevich model is used for character recognition. The first layer of the network serves as the input neurons, while the second layer holds the output neurons. The network is trained to recognize 48 different input images corresponding to 48 different characters, as shown in Fig. 2. These 48 images comprise the upper-case letters (A-Z), eight Greek letters, the ten numerals (0-9), and four symbols. Each input image is presented to the first layer of neurons (designated level 1) such that each pixel of the image feeds its own input neuron; the number of pixels in the input image is therefore equal to the number of neurons in level 1. Likewise, the number of training images is equal to the number of neurons in the second layer, referred to as level 2, because each level-2 neuron is encoded to fire only when it recognizes its corresponding image. Each neuron of level 1 is connected to every neuron of level 2. A prototype of this network structure is depicted in Fig. 3.

Fig. 2. Initial character set

Fig. 3. The network structure and the connections between the neurons of the two levels

The membrane potential v of each neuron is driven by its input current I; if the membrane potential reaches the threshold during a cycle, the neuron fires immediately. For a level-1 neuron, the input current I is set to zero if the corresponding pixel in the input image is "off", and to a constant value if the pixel is "on". The input current to a level-2 neuron is the sum of the individual currents coming from the connected level-1 neurons. The input current I_j of neuron j at level 2 is given by

I_j = Σ_i w_i,j f(i)    (4)

where w is the weight matrix whose entry w_i,j is the weight from neuron i (at level 1) to neuron j (at level 2), and f is a firing vector whose i-th entry is zero if the i-th level-1 neuron does not fire and one if it does. The training process determines the entries of the weight matrix w as the set of training images is driven sequentially to the input neurons; the resulting weight matrix can then be used to evaluate the input current of each output neuron. During the recognition phase, an input image is delivered to the level-1 neurons, and after a certain number of cycles exactly one output neuron fires, indicating that the input image has been identified. In each cycle, the level-1 neurons are evaluated from the input image and the firing vector is updated to record which neurons fired in that cycle. In the same cycle, the firing vector from the previous cycle is used to compute the input current I of each level-2 neuron, from which the membrane potentials v of the level-2 neurons are determined.
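The cycle just described can be sketched in code. The snippet below combines the pixel-to-current rule for level 1 with the Eq. (4) current for level 2; the constant "on" current, the image shape, and all function names are illustrative assumptions, and izhikevich_step repeats the single-neuron Euler update from the sketch above.

```python
import numpy as np

I_ON = 10.0  # assumed constant current for an "on" pixel (value not given in the paper)

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, thr=30.0, dt=1.0):
    """One 1 ms Euler update of Eqs. (1)-(3); returns the new state and a fired flag."""
    v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= thr:
        return c, u + d, True
    return v, u, False

def recognition_cycle(image, W, state1, state2, prev_firing):
    """One recognition cycle: pixels drive level 1; Eq. (4) drives level 2.

    image       : binary array, e.g. shape (5, 3) for a 5x3 character
    W           : (n_in, n_out) weight matrix produced by training
    state1/2    : lists of (v, u) pairs for level-1 / level-2 neurons
    prev_firing : (n_in,) firing vector f from the previous cycle
    """
    pixels = image.ravel()
    firing = np.zeros(len(pixels))
    for i, (v, u) in enumerate(state1):
        I_in = I_ON if pixels[i] else 0.0          # "on" pixel -> constant current
        v, u, fired = izhikevich_step(v, u, I_in)
        state1[i] = (v, u)
        firing[i] = 1.0 if fired else 0.0
    I_level2 = W.T @ prev_firing                   # Eq. (4): I_j = sum_i w_ij * f(i)
    winners = []
    for j, (v, u) in enumerate(state2):
        v, u, fired = izhikevich_step(v, u, I_level2[j])
        state2[j] = (v, u)
        if fired:
            winners.append(j)                      # index of the recognized character
    return firing, winners
```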

IV. LEARNING METHOD

In supervised training algorithms, the synaptic weights of the network are updated iteratively until the desired input/output mapping of the SNN is obtained [11]. The present work adopts the Remote Supervised Method (ReSuMe) rule for updating the weight of each synapse i. In artificial neural networks, the basic principle of supervised learning is synaptic weight adaptation Δw driven by the error e = y_d − y_out between the desired output y_d and the actual neuron output y_out [12]:

Δw_i = γ x_i (y_d − y_out)    (5)

where γ is a positive real-valued learning rate and x_i is the input passing through synapse i. The update rule of Eq. (5) is commonly used for traditional neural networks. In a spiking neural network, however, information is exchanged as series of spikes transferred among neurons, so the quantities in Eq. (5) must be reinterpreted.

The supervised learning process involves three types of participating neurons: input, learning, and teacher neurons [12]. The input neurons N^in = (N_1^in, N_2^in, ...) are the neurons that activate the learning synapses. The learning neurons N^o = (N_1^o, N_2^o, ...) are the neurons that receive signals from the learning synapses and try to produce the target signals S_d(t). The target signals S_d(t) are supplied to the network by the set of teacher neurons N^d = (N_1^d, N_2^d, ...). Fig. 4 shows the three types of neurons.

Fig. 4. The three types of neurons

In the Izhikevich model, neurons are stimulated by increasing the input current, so the currents of the teacher neuron and the learning neuron play the roles of y_d and y_out, respectively. Eq. (5) can then be written as

Δw = γ I_i (I_j^d − I_j^o)    (6)

The change in the synaptic strength between neurons N_i^in and N_j^o uses information about the synapse between neurons N_i^in and N_k^d:

w_ij ← w_ij + Δw_ik for excitatory synapses
w_ij ← w_ij − Δw_ik for inhibitory synapses
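A sketch of this current-based ReSuMe update is given below. It applies Eq. (6) as an outer product over all input/output pairs; the batched form, the variable names, and the sign convention for excitatory versus inhibitory synapses (as reconstructed above) are assumptions rather than details stated in the paper.

```python
import numpy as np

GAMMA = 0.1  # learning rate, as listed in the Appendix

def resume_update(W, I_in, I_teacher, I_learning, excitatory=True):
    """Apply Eq. (6) to a whole weight matrix at once.

    W          : (n_in, n_out) weight matrix
    I_in       : (n_in,) input currents I_i of the level-1 neurons
    I_teacher  : (n_out,) teacher-neuron currents I^d (desired)
    I_learning : (n_out,) learning-neuron currents I^o (actual)
    """
    dW = GAMMA * np.outer(I_in, I_teacher - I_learning)   # Eq. (6), one entry per synapse
    sign = 1.0 if excitatory else -1.0                    # reconstructed sign rule
    return W + sign * dW

# Training proceeds by presenting each character in turn and applying
# resume_update until I_teacher - I_learning vanishes, i.e. until the
# teacher and learning neurons produce identical outputs.
```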

V. RESULTS

This work uses the same set of characters as the work referenced in [13], which used a traditional neural network structure trained with back-propagation on a massively parallel computer. The suggested Izhikevich model has been simulated and tested in MATLAB. The spiking neural network was initialized and trained on a character set of only 12 characters (A, B, C, D, F, L, U, T, +, -, /, and a twelfth symbol), as shown in Fig. 5. For this case, the network consists of 15 input neurons and 12 output neurons. Table I in the Appendix lists the running parameters used in this work. The initial weights are set to random values between 0 and 1, so that all output neurons spike at the first iteration of training. During the training process, each character is presented to the spiking network sequentially, one at a time. Through the application of the ReSuMe learning rule, the elements of the weight matrix change continuously until the outputs of the teacher neuron and the learning neuron are identical, at which point the network is said to be fully trained.

Fig. 5. Initial character set (the 12 training characters)

The responses of the twelve trained neurons when fed with the characters A, B, C, D, F, L, U, T, +, -, /, and the twelfth symbol are illustrated in Figs. 6 to 17. The figures show that each character is recognized by exactly one neuron, which spikes continuously with a spike-time interval of 100 ms. The other neurons show no spikes, as their membrane potentials remain below the 30 mV threshold. In particular, neurons 1 through 12 respond to the characters A, B, C, D, F, L, U, T, +, -, /, and the twelfth symbol, respectively.

Fig. 6. Behaviour of membrane voltages for character 'A'

Fig. 7. Behaviour of membrane voltages for character 'B'
Fig. 8. Behaviour of membrane voltages for character 'C'
Fig. 9. Behaviour of membrane voltages for character 'D'
Fig. 10. Behaviour of membrane voltages for character 'F'
Fig. 11. Behaviour of membrane voltages for character 'L'
Fig. 12. Behaviour of membrane voltages for character 'U'
Fig. 13. Behaviour of membrane voltages for character 'T'
Fig. 14. Behaviour of membrane voltages for character '+'
Fig. 15. Behaviour of membrane voltages for character '-'
Fig. 16. Behaviour of membrane voltages for character '/'
Fig. 17. Behaviour of membrane voltages for the twelfth symbol

REFERENCES

[1] W. Maass, "Networks of spiking neurons: the third generation of neural network models," Neural Networks, vol. 10, no. 9, pp. 1659-1671, 1997.
[2] A. R. Baig, "Spatial-temporal artificial neurons applied to online cursive handwritten character recognition," in European Symposium on Artificial Neural Networks, Bruges, Belgium, 2004, pp. 561-566.
[3] A. Delorme and S. J. Thorpe, "SpikeNET: an event-driven simulation package for modelling large networks of spiking neurons," Network: Computation in Neural Systems, vol. 14, no. 4, pp. 613-627, 2003.
[4] A. Gupta and L. Long, "Character recognition using spiking neural networks," in International Joint Conference on Neural Networks, Orlando, Florida, USA, 2007.
[5] Y. Dan and M. Poo, "Spike timing-dependent plasticity of neural circuits," Neuron, vol. 44, pp. 23-30, 2004.
[6] S. Thorpe, A. Delorme and R. Van Rullen, "Spike-based strategies for rapid processing," Neural Networks, vol. 14, pp. 715-725, 2001.
[7] D. V. Buonomano and M. M. Merzenich, "A neural network model of temporal code generation and position invariant pattern recognition," Neural Computation, vol. 11, pp. 103-116, 1999.
[8] E. Izhikevich, "Which model to use for cortical spiking neurons?," IEEE Transactions on Neural Networks, vol. 15, no. 5, pp. 1063-1070, 2004.
[9] E. Izhikevich, "Simple model of spiking neurons," IEEE Transactions on Neural Networks, vol. 14, no. 6, pp. 1569-1572, 2003.
[10] K. Rice, "Accelerating pattern recognition algorithms on parallel computing architectures," Ph.D. dissertation, Computer Engineering, Clemson University, 2011.
[11] A. Mohemmed, S. Schliebs, S. Matsuda and N. Kasabov, "Method for training a spiking neuron to associate input-output spike trains," in IFIP International Federation for Information Processing, Japan, 2011, pp. 219-228.

[12] F. Ponulak, "Supervised learning in spiking neural networks with ReSuMe method," Ph.D. dissertation, Control and Information Engineering, Poznań University of Technology, 2006.
[13] R. Brette et al., "Simulation of networks of spiking neurons: a review of tools and strategies," Journal of Computational Neuroscience, 2007.

APPENDIX

Table I below lists the important running parameters.

TABLE I. Running parameters

    Parameter   Value
    a           0.02
    b           0.2
    c           -65
    d           8
    Threshold   30 mV
    Time step   1 ms
    γ           0.1