Artificial Neural Networks (Ref: Negnevitsky, M. Artificial Intelligence, Chapter 6) BPNN in Practice Week 3 Lecture Notes page 1 of 1
The Hopfield Network

This network was designed by analogy with the brain's memory, which works by association. For example, we can recognise a familiar face, even in an unfamiliar environment, within 100-200 ms. We can also recall a complete sensory experience, including sounds and scenes, when we hear only a few bars of music. The brain routinely associates one thing with another.

Multilayer neural networks trained with the backpropagation algorithm are used for pattern recognition problems. To emulate the human memory's associative characteristics, we use a recurrent neural network.
A recurrent neural network has feedback loops from its outputs to its inputs. The presence of such loops has a profound impact on the learning capability of the network.

Single-layer n-neuron Hopfield network

The Hopfield network uses McCulloch and Pitts neurons with the sign activation function as its computing element. The current state of the Hopfield network is determined by the current outputs of all neurons, y_1, y_2, ..., y_n.
Hopfield Learning Algorithm:

Step 1: Assign weights
o Assign random connection weights w_ij = +1 or -1 for all i ≠ j, and w_ij = 0 for i = j.

Step 2: Initialisation
o Initialise the network with an unknown pattern:

    O_i(0) = x_i,  0 ≤ i ≤ N-1

  where O_i(k) is the output of node i at time t = k, and x_i (+1 or -1) is element i of the input pattern.

Step 3: Convergence
o Iterate until convergence is reached, using the relation:

    O_j(k+1) = f( Σ_{i=0}^{N-1} w_ij O_i(k) ),  0 ≤ j ≤ N-1

  where the function f(.) is a hard-limiting nonlinearity.
o Repeat the process until the node outputs remain unchanged.
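The initialisation and convergence steps above can be sketched in Python. This is a minimal illustration, not taken from the notes: the exemplar pattern is stored with Hebbian weights w_ij = x_i * x_j (zero diagonal), an assumed storage rule, and `sign` plays the role of the hard-limiting nonlinearity f(.).

```python
# Sketch of Hopfield recall; weight storage rule is an assumption for the demo.

def sign(x):
    """Hard-limiting nonlinearity f(.): returns +1 or -1."""
    return 1 if x >= 0 else -1

def recall(weights, pattern, max_iters=100):
    """Iterate O_j(k+1) = f(sum_i w_ij O_i(k)) until outputs stop changing."""
    outputs = list(pattern)                      # Step 2: O_i(0) = x_i
    n = len(outputs)
    for _ in range(max_iters):                   # Step 3: iterate to convergence
        new = [sign(sum(weights[i][j] * outputs[i] for i in range(n)))
               for j in range(n)]
        if new == outputs:                       # outputs unchanged -> converged
            return new
        outputs = new
    return outputs

# Store one exemplar with assumed Hebbian weights w_ij = x_i * x_j, w_ii = 0
exemplar = [1, -1, 1, -1]
W = [[0 if i == j else exemplar[i] * exemplar[j] for j in range(4)]
     for i in range(4)]

# A noisy version of the exemplar (one flipped bit) is corrected
print(recall(W, [1, 1, 1, -1]))    # -> [1, -1, 1, -1]
```

Running the network on a corrupted input settles on the stored exemplar, which is the error-correction behaviour described below.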
o The node outputs then represent the exemplar pattern that best matches the unknown input.

Step 4: Repeat for Next Pattern
o Go back to Step 2 and repeat for the next input pattern, and so on.

The Hopfield network can act as an error-correction network.

Types of Learning

Supervised Learning
o the input vectors and the corresponding output vectors are given
o the ANN learns to approximate the function from the inputs to the outputs
Reinforcement Learning
o the input vectors and a reinforcement signal are given
o the reinforcement signal tells how good the produced output was

Unsupervised Learning
o only input vectors are given
o the ANN learns to form internal representations or codes for the input data that can then be used, e.g. for clustering

From now on, we will look at unsupervised learning neural networks.
Hebbian Learning

In 1949, Donald Hebb proposed one of the key ideas in biological learning, commonly known as Hebb's Law. Hebb's Law states that if neuron i is near enough to excite neuron j and repeatedly participates in its activation, the synaptic connection between these two neurons is strengthened and neuron j becomes more sensitive to stimuli from neuron i.

Hebb's Law can be represented in the form of two rules:
o If two neurons on either side of a connection are activated synchronously, then the weight of that connection is increased.
o If two neurons on either side of a connection are activated asynchronously, then the weight of that connection is decreased.
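The first rule corresponds to the basic Hebb update Δw = α x y. A minimal sketch, with an assumed learning rate and activations, shows the effect of repeated synchronous activation:

```python
# Plain Hebb rule: delta_w = alpha * x * y (all values here are illustrative)
alpha = 0.1          # learning rate (assumed value)
w = 0.5              # initial synaptic weight
x, y = 1.0, 1.0      # pre- and postsynaptic activations, firing together

for step in range(5):
    w += alpha * x * y      # weight strengthened on every co-activation
    print(round(w, 2))      # 0.6, 0.7, 0.8, 0.9, 1.0 - grows without bound
```

Because x and y stay positive, the weight increases on every step; this unbounded growth is the problem the forgetting factor below is introduced to fix.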
Hebbian learning implies that weights can only increase. To resolve this problem, we might impose a limit on the growth of synaptic weights. This can be implemented by introducing a non-linear forgetting factor into Hebb's Law:

    Δw_ij(p) = α y_j(p) x_i(p) - φ y_j(p) w_ij(p)

where φ is the forgetting factor. The forgetting factor usually falls in the interval between 0 and 1, typically between 0.01 and 0.1, to allow only a little forgetting while limiting the weight growth.
Hebbian Learning Algorithm:

Step 1: Initialisation
o Set initial synaptic weights and thresholds to small random values, say in an interval [0, 1].

Step 2: Activation
o Compute the neuron output at iteration p:

    y_j(p) = sign( Σ_{i=1}^{n} x_i(p) w_ij(p) - θ_j )

  where n is the number of neuron inputs, and θ_j is the threshold value of neuron j.

Step 3: Learning
o Update the weights in the network:

    w_ij(p+1) = w_ij(p) + Δw_ij(p)

  where Δw_ij(p) is the weight correction at iteration p.
o The weight correction is determined by the generalised activity product rule:

    Δw_ij(p) = φ y_j(p) [ λ x_i(p) - w_ij(p) ]

  where λ = α/φ.

Step 4: Iteration
o Increase iteration p by one, go back to Step 2.

Competitive Learning

In competitive learning, neurons compete among themselves to be activated. While in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time. The output neuron that wins the competition is called the winner-takes-all neuron.

In the late 1980s, Kohonen introduced a special class of ANN called self-organising maps. These maps are based on competitive learning.
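Before moving on, the Hebbian learning algorithm above can be sketched end to end. The learning rate, forgetting factor, input pattern, and iteration count below are illustrative assumptions:

```python
import random

# Sketch of Hebbian learning with a forgetting factor.
# alpha, phi, the input pattern, and the iteration count are assumed values.
random.seed(0)

alpha, phi = 0.1, 0.02                   # learning rate and forgetting factor
n = 3                                    # number of inputs to neuron j
w = [random.random() for _ in range(n)]  # Step 1: small random weights in [0, 1]
theta = random.random()                  # Step 1: threshold of neuron j

def sign(s):
    return 1 if s >= 0 else -1

x = [1.0, 0.0, 1.0]                      # an input pattern (assumed)
for p in range(50):
    # Step 2: activation with threshold theta
    y = sign(sum(x[i] * w[i] for i in range(n)) - theta)
    # Step 3: generalised activity product rule, with lambda = alpha/phi
    lam = alpha / phi
    for i in range(n):
        w[i] += phi * y * (lam * x[i] - w[i])
    # Step 4: next iteration

print([round(v, 2) for v in w])
```

Weights on the active inputs grow but stay bounded by λ = α/φ, while the weight on the idle input decays toward zero: the forgetting factor prevents the unbounded growth of the plain Hebb rule.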
Self-organising Map

Our brain is dominated by the cerebral cortex, a very complex structure of billions of neurons and hundreds of billions of synapses. The cortex includes areas that are responsible for different human activities (motor, visual, auditory, etc.), each associated with a different sensory input. Each sensory input is mapped into a corresponding area of the cerebral cortex. The cortex is a self-organising computational map in the human brain.
Feature-mapping Kohonen model

The Kohonen Network

The Kohonen model provides a topological mapping. It places a fixed number of input patterns from the input layer into a higher-dimensional output, or Kohonen, layer.
Training in the Kohonen network begins with the winner's neighbourhood of a fairly large size. Then, as training proceeds, the neighbourhood size gradually decreases.

The lateral connections are used to create competition between neurons. The neuron with the largest activation level among all neurons in the output layer becomes the winner. The winning neuron is the only neuron that produces an output signal; the activity of all other neurons is suppressed in the competition.
The lateral feedback connections produce excitatory or inhibitory effects, depending on the distance from the winning neuron. This is achieved by the use of a Mexican hat function, which describes the synaptic weights between neurons in the Kohonen layer.

In the Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighbourhood are allowed to learn.
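The notes do not give a formula for the Mexican hat function; one common way to model it is as a difference of Gaussians, which is the assumption made in this sketch (the widths and amplitudes are also assumed):

```python
import math

# Mexican-hat lateral interaction modelled as a difference of Gaussians.
# sigma_e, sigma_i, and the 0.5 inhibitory amplitude are assumed values.
def mexican_hat(d, sigma_e=1.0, sigma_i=3.0):
    """Excitation for small distances d, inhibition farther out, fading to zero."""
    return (math.exp(-d**2 / (2 * sigma_e**2))
            - 0.5 * math.exp(-d**2 / (2 * sigma_i**2)))

for d in range(7):
    print(d, round(mexican_hat(d), 3))
```

Evaluating it at increasing distances shows the shape the notes describe: positive (excitatory) near the winner, negative (inhibitory) at intermediate distances, and negligible far away.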
Competitive Learning Algorithm

Step 1: Initialisation
o Set initial synaptic weights to small random values, say in an interval [0, 1], and assign a small positive value to the learning rate parameter α.

Step 2: Activation and Similarity Matching
o Activate the Kohonen network by applying the input vector X, and find the winner-takes-all (best matching) neuron j_X at iteration p, using the minimum-distance Euclidean criterion:

    j_X(p) = min_j || X - W_j(p) || = min_j [ Σ_{i=1}^{n} (x_i - w_ij(p))^2 ]^{1/2},  j = 1, 2, ..., m

  where n is the number of neurons in the input layer, and m is the number of neurons in the Kohonen layer.
Step 3: Learning
o Update the synaptic weights:

    w_ij(p+1) = w_ij(p) + Δw_ij(p)

  where Δw_ij(p) is the weight correction at iteration p.
o The weight correction is determined by the competitive learning rule:

    Δw_ij(p) = α [ x_i - w_ij(p) ]   for j ∈ Λ_j(p)
    Δw_ij(p) = 0                     for j ∉ Λ_j(p)

  where α is the learning rate parameter, and Λ_j(p) is the neighbourhood function centred around the winner-takes-all neuron j_X at iteration p.

Step 4: Iteration
o Increase iteration p by one, go back to Step 2 and continue until the minimum-distance Euclidean criterion is satisfied, or no noticeable changes occur in the feature map.
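The competitive learning algorithm above can be sketched as follows. The training data, learning rate, iteration count, and the simplification of shrinking the neighbourhood Λ_j(p) to the winner alone are all assumptions made for illustration:

```python
import math
import random

# Sketch of the competitive (Kohonen) learning algorithm.
# Data, alpha, iteration count, and the winner-only neighbourhood are assumptions.
random.seed(1)

n, m = 2, 3                              # input neurons, Kohonen-layer neurons
# Step 1: small random weights in [0, 1], small positive learning rate
W = [[random.random() for _ in range(n)] for _ in range(m)]
alpha = 0.2

def euclidean(x, w):
    return math.sqrt(sum((xi - wi) ** 2 for xi, wi in zip(x, w)))

# Two well-separated input clusters (assumed training data)
data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.95], [0.85, 0.9]]

for p in range(100):
    x = data[p % len(data)]
    # Step 2: winner-takes-all neuron by minimum Euclidean distance
    jX = min(range(m), key=lambda j: euclidean(x, W[j]))
    # Step 3: move only the winner's weights toward x
    # (neighbourhood Lambda_j(p) shrunk to the winner alone for simplicity)
    for i in range(n):
        W[jX][i] += alpha * (x[i] - W[jX][i])
    # Step 4: next iteration

print([[round(v, 2) for v in w] for w in W])
```

After training, one neuron's weight vector sits near each cluster. Note that with a winner-only neighbourhood a neuron that never wins stays untrained (a "dead" neuron); the large initial neighbourhood that shrinks over time, described earlier for the Kohonen network, is one way to mitigate this.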