The Re(de)fined Neuron

Kieran Greer, Distributed Computing Systems, Belfast, UK.
http://distributedcomputingsystems.co.uk

Version 1.0

Abstract

This paper describes a more biologically-oriented process that could be used to build the sort of networks we associate with the human brain. A refined neuron is proposed that can process a more variable set of values, but with the same level of control and reliability that a binary neuron would have. When modelling the human brain on a computer, it would be normal to consider binary units, yet there is a sense that our intelligence should derive from something more complex than this. There is nothing particularly new in this paper with regard to basic structures, but after reading it, it should be possible to think of an essentially binary system in terms of a more variable set of values that can be created and controlled reliably. A plausible automatic construction of such a network is also proposed. This allows more complex formulae or equations to be considered as normal.

Keywords: neuron, neural network, self-organise, analogue, cognitive.

1 Introduction

This paper describes a more biologically-oriented process that could be used to build the sort of networks we associate with the human brain. The biological background is no more than what a typical AI researcher would know, but the proposed process might offer a new way of looking at the problem. So while a computer model or simulator is the goal, the processes map more closely to a real human brain, at least in theory.

The classic paper [9] gives the original research on looking at the human brain as a collection of binary operating neurons. It describes a set of theorems under which these neurons behave, including synaptic links with excitatory or inhibitory connections. The authors argue that if there is a definite set of rules or theorems under which the neurons behave, this can be described by propositional logic. A layered or hierarchical structure is implicit in all of the descriptions and is used to explain how a temporal factor can be added, and possibly measured, through a spatial structure. They then focus more on behaviour and states, rather than precise mathematical values.

So while they show how the neurons might fire, the idea of controlling and measuring the signal value accurately is not really addressed. This paper considers how a more refined processing unit can be constructed from similar neuronal components. While only an artificial neural network is considered, the binary nature of each individual component allows for a closer comparison with real neurons in a human brain.

When studying this topic, it is easy to think in binary terms only, but the question of how our sophisticated intelligence derives from that is difficult to answer. The thought process might try to find a particularly clever or complicated function that works inside of a neuron, to compensate for its apparent simplicity. The computer scientist would also try to use a distributed architecture, combining several units for one operation, again to make the whole process more complex and variable. That is the direction of this paper, but it will attempt to show that the modeller's thinking process can begin to look at a collection of neurons as a single unit that can deliver a complex set of values reliably. It can be constructed relatively easily, and to some degree of accuracy, by a mechanical process that the simple binary neuron might not be thought to accommodate.

The rest of this paper is structured as follows: section 2 describes some background work that can help to explain this paper. Section 3 describes the refined neuronal component. Section 4 describes a plausible process for linking these neurons, while section 5 gives a final discussion on the work.

2 Background to the Proposal

The superficially simplistic nature of neurons is also written about in [10], which discusses how a binary signal is limiting. It explains that adding more states to a single unit comes at a price, and that it has been worked out mathematically that the binary unit with 2 states is the most efficient, because the signal also needs to be decoded, which is expensive. Therefore, if more states can be achieved through different connection sets instead, this would be desirable. Chapter 2 of [10] also explains how any logic function can be built from McCulloch-Pitts neurons, and hints at the architecture of that book by pointing out that any weighted neuron can be replaced by an unweighted one with additional redundant connections. A weight value of 3, for example, can be replaced with a single connection that is split into 3 inputs, although that is clearly a different design.
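To make that equivalence concrete, here is a minimal Python sketch (the function name mcp_unweighted is hypothetical; this is not code from [10]) that checks the weight-3 example exhaustively: the weighted unit and the unweighted unit with duplicated connections fire identically.

```python
from itertools import product

def mcp_unweighted(connections, threshold):
    """Unweighted McCulloch-Pitts unit: fires (1) if enough unit-strength
    connections carry a signal. A weight w is simulated by duplicating the
    same input across w parallel connections."""
    return 1 if sum(connections) >= threshold else 0

# Check all input combinations: weight 3 on x (and weight 1 on y) behaves
# exactly like three duplicated unit connections carrying x.
for x, y in product((0, 1), repeat=2):
    weighted = 1 if 3 * x + 1 * y >= 3 else 0      # weighted unit
    unweighted = mcp_unweighted([x, x, x, y], 3)   # x split into 3 unit inputs
    assert weighted == unweighted
```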

The most interesting of the author's own papers here are probably [3] and [5], with a small related section in [4]. These suggested reasons for duplicating neurons, considering patterns of them, timing, and grouping them, which are of course also the established theories. Duplication was argued for ([4], section 4.2) not only for safety reasons, but also to reinforce a signal from different sources. Considering collections of neurons can then allow for more complex patterns and signals than would be derived from a single neuron.

There has been a lot of research on modelling the brain neuron more closely, through binary units with thresholds and excitatory or inhibitory inputs. The paper [11] is one example, giving an equation for the firing rate that includes external sensory inputs as well as inputs from other neurons. It also states that for the firing to be sustained, that is, to be relevant, sufficient feedback is required from the firing neurons to maintain the level of excitation. Once the process starts, it can excite and bring in other firing neurons, at which point inhibitory inputs also need to fire, to stop the process from saturating. A weighted equation is given to describe how the process can self-stabilise if enough inhibitory inputs fire. The model described here is even simpler and would not require the fine tuning of weight values. The paper [8] shows evidence of a chemospecific steering and aligning process for synaptic connections. It notes however that this is only the case for certain types of local connection, and other researchers have produced slightly different theories there.

The paper [1] also contains information that is useful here. Its main point is the argument that a mechanical process, more than an electro-chemical one, is responsible for moving the signals around the brain. This includes mechanical pressure to generate impulses. In [10], only an ion pump is described, where the activity results solely from ions moving through a fluid environment. It is argued in [1] however that this is not sufficient to explain the forces involved. One might think that fluid pressure plays a part in the construction of the synapse and also in keeping pathways open, for example. It is also argued in [1] that a steady pressure state must be maintained throughout the brain and that, as the brain needs to solve differential equations, it needs to be analogue in that respect.

A hierarchical network structure for the human brain is well known and is written about in many texts, [5] or [6] for example. In the hierarchy, some hidden or middle-layer neurons do not represent anything specific, but exist more for structural or grouping purposes. In [6] it is also noted that there are abstract structures or points in a hierarchy, representing largely how the structure has formed. This can also include lateral connections between hierarchies. There are also higher and lower levels of the hierarchy, as more basic concepts are grouped into higher-level ones. It is then explained that feedback is more common than signals travelling forwards only. This feedback helps to make predictions, or to confirm a certain pattern of firing, as part of an auto-associative memory-prediction structure. The brain needs to remember the past to predict what will happen in the present, to allow you to reason about it. This is a different purpose for the brain structures than will be described in this paper, which is more concerned with the mechanical processes than the intelligence reasons behind them.

3 Refined Neuronal Unit

This section makes the argument for an artificial neural network only, but because the neuronal unit is essentially binary, close comparisons with a biological neuron can be made. To explain the theory, a simplified model will be used, with neurons connected by synapses. Dendrites or axons, for example, will not be included. Consider a single neuron with a number of inputs from other neurons and a single output. This neuron will be called the main neuron here. Each input transmits a signal with a strength value of 1. The activation function is stepwise: if the threshold value is met, the neuron outputs a signal with a strength value of 1 and if not, it outputs nothing, or 0. This is essentially a binary function and such a unit is called a coarse neuron here. All neurons are therefore the same and operate in the same simplistic way.

Consider structures that then use coarse neurons, as shown in Figure 1. The most basic is a single neuron with a number of inputs, 5 for example, and its own output (1a). If the threshold value is 4, then 4 of the 5 inputs must fire for the neuron itself to fire. Consider a second scenario where the neuron has 25 other neurons that wish to connect with it through their related synapses (1b).

If all of the input neurons connect directly, then only 4 of the 25 neurons need to fire to trigger the main neuron. This is a very low number of inputs and might even cause the concept that the neuron represents to lose most of its meaning.

[Figure 1. From Coarse to Refined Neuronal Unit Structure: (1a) single neuron; (1b) single neuron with more inputs; (1c) refined neuron with intermediary layer.]

What if there are now a number of units in a middle or intermediary layer, still made from the same neuronal type (1c)? Say each group of 5 input neurons connects to one middle-layer neuron, and the 5 middle-layer neurons then connect with the main neuron. Then 4 of the 5 intermediary neurons must fire, and each of those will only fire if 4 of its related 5 input neurons fire. Therefore, for the main concept neuron to fire now requires 16 of the original input set to fire, which is more meaningful in itself.
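As a minimal sketch of this arithmetic (the names coarse_neuron and refined_neuron are assumptions, not the paper's), the following Python fragment builds the structure of Figure 1c from identical binary threshold units and confirms the 16-of-25 requirement:

```python
def coarse_neuron(inputs, threshold=4):
    """Binary step unit: outputs 1 if at least `threshold` unit-strength inputs fire."""
    return 1 if sum(inputs) >= threshold else 0

def refined_neuron(inputs, group_size=5, threshold=4):
    """Main neuron fed through an intermediary layer, one unit per input group.
    Every unit, middle or main, is the same coarse neuron; only the wiring differs."""
    groups = [inputs[i:i + group_size] for i in range(0, len(inputs), group_size)]
    middle = [coarse_neuron(g, threshold) for g in groups]
    return coarse_neuron(middle, threshold)

# 4 of the 5 groups each have 4 firing inputs: 16 firing inputs in total suffice.
assert refined_neuron([1, 1, 1, 1, 0] * 4 + [0] * 5) == 1
# Connected directly, any 4 of the 25 would have fired the unit; here even
# 15 firing inputs bunched into only 3 groups are not enough.
assert refined_neuron([1] * 15 + [0] * 10) == 0
```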

It has also turned the whole unit into a more analogue process, as each input neuron is now represented by a fraction of its true signal strength. In effect, each input neuron now sends a signal represented by one quarter of its true value. This new entity will be called a refined neuron (ReN). The term does not relate specifically to a single neuronal component or to a specific set of them. It relates, slightly abstractly, to the main neuron plus its set of inputs that have been converted into a more analogue set of values. The inputs could have derived from any number of other neurons. The process can be further modified by allowing some neurons to connect directly, while others go through one or more intermediary layers. The changing of a neuron's signal strength is then performed by adding more intermediary units, while the operation of the neuron itself is unchanged. Even things like the same input concept connecting more than once, possibly from different original sources, will make the process more variable and therefore more analogue. Each input can thus have a very different weight or influence over whether another neuron fires, and it can all be controlled using the same simple units, but with more complex connections.

While a hierarchical structure is included in most research work on this topic, and the input/output value sets must therefore also be implicit, it is difficult to recall a paper that suggests looking at a group of neurons in this way. The importance shifts more to the synaptic connections and structures, with the operation of the neuron itself regarded as constant. The paper [9] notes that synapses can vary in size and transmission speed, and even that the threshold of a neuron can be changed. This could lead to different firing rates for the neuron as well, but for a reliable system that works over and over again, the simplest configuration is the best. This would be to change the relatively static synaptic structure and have the neurons firing mostly under one state or condition, without additional moving parts that might be prone to damage over time. In [6] it is described how feedback to the same region is more common than signals in one direction only. This is done however by sending a neuron's output signal back into the input layers again, the idea being that this confirms what the correct patterns are. This paper will suggest a slightly different form of feedback, back up the same synapse channel.

3.1 Potential Equations

This theory is intended for a computer simulation and it should be possible to create some mathematical equations to model the main process relatively easily. For example:

Let N be the number of direct input synapses from other neurons.
Let T_m be the threshold for the main neuron.
Let I_s be the input signal from a single direct synapse.
Let I_sn be the total input signal from all direct synapses.
Let AE_sn be the average excess input value for each input synapse.

Then:

I_sn = Σ I_s  (summed over the N direct synapses)
AE_sn = (I_sn - T_m) / N

Each input synapse can be assigned the default value of 1, for example. The main neuron threshold can also be assigned some value; set it to 5 in this case. Therefore, after the main neuron itself fires and collapses, rejecting any other signals during that time, it will reject for each synapse the calculated AE_sn value. This makes sense, as it is proportional to the total number of inputs N, where more inputs will give a larger average rejection value. It is easy to see that as the rejection value becomes larger, it will counter any signal flowing forwards with a force that might cause a disturbance and possible subsequent events. For example, if there are 10 inputs, then the excess is (10 - 5) / 10 = 0.5, but if there are 50 inputs, then the excess is (50 - 5) / 50 = 0.9.

This calculation is therefore very easy, but a more complete system would need to include other factors, more specifically for modelling the synapse. As well as a neuron threshold value, the amount of rejected signal required, or the amount of time it is consistently rejected for, to cause a sufficient disturbance and subsequent events (see section 5), needs to be considered. So while the neuron itself does not require a complex set of weights and can be modelled simply, other factors now have to be included.
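A short check of these quantities (a sketch with assumed names, set to the paper's default values) reproduces the two worked figures:

```python
def average_excess(n, t_m=5, i_s=1.0):
    """AE_sn = (I_sn - T_m) / N, where I_sn sums N unit-strength input signals."""
    i_sn = n * i_s           # I_sn: total input over the N direct synapses
    return (i_sn - t_m) / n  # average excess rejected back per synapse

print(average_excess(10))  # (10 - 5) / 10 = 0.5
print(average_excess(50))  # (50 - 5) / 50 = 0.9
```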

As the goal is an equilibrium state, it would be possible to test different sets of values, representing different neuron firing configurations or inputs, and measure the time and amounts before the whole system reaches a balanced state. The whole concept is still relatively simple to model.

4 Automatic Network Construction

This section describes a plausible automatic process for constructing a network based on the new neuronal model. It is from the computer science perspective and suggests mechanical processes that can be simulated in a computer program. It is not necessarily 100% correct biologically, but does show a clear comparison.

4.1 Mechanical process

There is a main neuron with a number of synapse inputs from other similar neuronal units. The inputs send signals whose collective strength is too strong. The main neuron's threshold is met, so it fires and then simulates the refractory period by rejecting the excess input that it receives. This excess rejected input counters more input travelling to the main neuron, where a set of equations relating to section 3.1 determines that it should stimulate a sideways growth or extension to the synapse. The extensions from different synapses meet and join. They also form a new neuron, so that their combined input only produces an output value of 1. The new intermediary neuron then links with the main neuron again and the original connection paths are closed. Note that not all inputs need to fire at the same time, so it is only the group currently firing that would receive the negative feedback. Time is therefore a critical element in the whole process as well. This would lead to a more balanced system that would not upset the overall energy that causes signals to flow.

There are one or two obvious problems with this simple mechanical process. Firstly, two synapses, representing two neurons, may join, but the neurons do not always fire at the same time. Neuron A, for example, fires only with the set that produces excess input. Neuron B also fires every time there is excess input, but also at other times.

When these join, the intermediary neuron will only fire when they are both firing, which is fine for neuron A, but not for neuron B. Neuron B might send a signal at some other time and it will not then reach the main neuron. The answer to this is to make the rejection procedure primarily dependent on the closing of the original path. If the original path is diverted, it must be closing first, where the ejected signal would cause stagnation or loss of pressure, allowing it to do that. If the rule requires the amount of path closure to determine the amount of rejection, then neurons firing only in excess will always receive the negative feedback and close together. Neurons also firing at other times will be able to complete the process occasionally and will be less likely to have their original path closed. The additional firing of some neurons would put them slightly out-of-sync with the other ones. Another feature of real neuronal networks is the idea of resonance, where neurons in sync with each other do the same thing. This paper supports that as well; it is also the fundamental idea behind the Adaptive Resonance Theory neural networks [2]. There is also the other extreme, where a neuron fires less often than the excess set, but the excess set always fires when it does. In that case, its feedback will be less and the excess set will have grown new links more quickly, possibly relieving the excess pressure before the less frequent neuron is forced to complete a new path link. Each synapse growth is therefore attracted to other neuron sets firing at the same time and tries to join up with the closest resonating set. If the neurons fire at the same time, then they probably represent the same global concept in some sense [5].

For an accurate computer model, there is still no specification of what the correct synapse connections would be, which is critical. With incorrect connections, everything will still be wrong and not make sense. Therefore, some theory on how the synapse is guided would be useful, especially one that can be practically modelled. On the other hand, the relationship between the connecting synapse and the neuron is completely flexible, meaning that any sort of chaotic structure can be built. The main basic principles are therefore:

1. The system will try to self-organise itself through more intermediary units.
2. The intermediary units create a more balanced system, add meaning, and produce more analogue input values.
3. Neurons firing together can represent a common concept or goal.

4. Synapses firing at the same time will try to link up with each other, or with related neurons.
5. Resonance is important, where out-of-sync neurons will be able to perform differently.

Clustering algorithms would typically combine two groups if they overlap. The paper [5] suggests keeping them separate and creating the intermediary or hidden-layer units based more on time events. The exact model that paper gave is inaccurate, in that the group could probably not exist exactly as it was encountered during one moment in time, but would need to be a bit fuzzy, with some margin for differences. The general idea however appears to agree with this paper.

4.2 Biological process

There are real biological processes that this can be mapped to. The main neuron firing and then collapsing, when a refractory period prevents it from firing again, is a real biological process. This can help to prevent cycles, or attractors [12], from occurring. If the synapses are still sending input signals, maybe the neuron cannot absorb the signal then and will instead eject or block the new input. In [10] it is explained that selective channels need to be opened and can be closed after depolarisation (firing), but [1] also argues for a mechanical process, with a pressure loss. Therefore, a physical block does take place. It is unclear whether a signal would be forcibly ejected back again, but if it is blocked and has some momentum, that must cause some sort of reverse disruption. If the ejected signal meets a signal travelling in the correct direction, this disruption might stimulate growth in that area. This growth would necessarily be sideways and would help to relieve pressure. If two sideways synapse growths met, they would then need to form a new neuron, and that would also need to find the main neuron again, to complete the process. That is admittedly fanciful, but these last two tasks are required even without this new process. Neurons need to form and synapses need to find other neurons, as part of any brain construction. Hebb's law [7], that neurons that fire together wire together, would also suggest that the closer neurons would combine into a single unit. The electrically charged chemical impulses could also play a part in the synapse construction, providing a stimulus or pressure. They might even control the direction, if attracted to certain other fields around them [8]. So there is a clear set of biological processes that could, at least in theory, map to the mechanical one.
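To show that the mechanical process of section 4.1 can be simulated, here is a speculative Python sketch. It assumes, as an illustrative trigger, that growth happens whenever the AE_sn value of section 3.1 stays above a fixed level, and it grows one intermediary unit per firing cycle from the currently co-firing direct inputs. The class and parameter names are hypothetical, and the grouping policy is a simplification of the paper's pressure-based account, not its specification.

```python
class MainNeuron:
    """One main neuron that re-wires its over-strong direct inputs into
    intermediary units: a toy version of the section 4.1 growth rule."""

    def __init__(self, threshold=5, excess_trigger=0.5, group_size=5, mid_threshold=4):
        self.threshold = threshold            # T_m, firing threshold of the main neuron
        self.excess_trigger = excess_trigger  # AE_sn level assumed to provoke growth
        self.group_size = group_size          # inputs gathered under one intermediary
        self.mid_threshold = mid_threshold    # firing threshold of a grown intermediary
        self.direct = set()                   # ids of directly connected input neurons
        self.groups = []                      # input sets routed via intermediary units

    def connect(self, input_ids):
        self.direct.update(input_ids)

    def step(self, firing):
        """One firing cycle: returns 0/1 and, on sustained excess, grows one
        intermediary unit from the currently co-firing direct inputs."""
        signal = len(self.direct & firing)
        signal += sum(1 for g in self.groups if len(g & firing) >= self.mid_threshold)
        out = 1 if signal >= self.threshold else 0
        n = len(self.direct) + len(self.groups)
        if out and n:
            ae_sn = (signal - self.threshold) / n          # section 3.1 excess value
            cofiring = sorted(self.direct & firing)
            if ae_sn > self.excess_trigger and len(cofiring) >= self.group_size:
                new_group = set(cofiring[:self.group_size])
                self.groups.append(new_group)              # grow an intermediary unit
                self.direct -= new_group                   # and close the direct paths
        return out

main = MainNeuron()
main.connect(range(25))
for _ in range(4):
    main.step(set(range(20)))                # a persistently over-strong firing set
print(len(main.groups), len(main.direct))    # 2 15: excess relieved, growth stops
```

In this toy run the excess set is progressively grouped until AE_sn falls below the trigger, which matches the self-stabilising behaviour the section describes.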

5 Discussion

Looking at a cross-section, or magnified image, of a real brain shows how completely random the structure is. A computer model would struggle to create this sort of structure and prefers a much more ordered one. However, this paper might give some more confidence to the neuronal unit itself. There can be added confidence that the same type of neuron can behave reliably, including threshold and output signal strength, and can give a desirable variety of input signal strengths through a layered hierarchical structure that can be created naturally, as part of any random process.

If a neuron is receiving inputs from a number of other ones, duplication of the inputs for the same concept can add to the variable or analogue values. It is thought that neuron groups are duplicated for safety reasons, so that when a number of neurons die naturally, a concept or idea is not removed completely. If a new sensation enters the brain however, it might be more natural for it to form or grow in more than one area. If the brain is not made of particularly intelligent stuff, then it would be very specific for a stimulus to cause a reaction in only one particular area. So this could be an example of nature at its best: the (accidental) result of duplication allows for safety and also for the desired variety in the signal. This then poses a question: do these duplicated neurons try to join up, or is the larger pattern that they belong to also duplicated? That is for another day, but this paper would like them to join up, at least in some cases.

The middle or intermediary units are useful for different purposes. If a neuron has a large number of direct inputs, much larger than the required threshold, they would lose some of their meaning. If these are grouped by intermediary units first, then each intermediary unit must represent some sort of abstract concept by itself, even if it is not a real-world one. The excess feedback would almost force the creation of these sub-concepts, as part of a self-organising system. The other reason is the original point of this paper: to produce that apparently analogue set of values from binary-operating neurons.

The complete system for growing the connections might be fanciful biologically, but does look plausible. You could imagine a computer simulator being built on these principles, and the model would also be able to give accurate feedback.

So another question is: even if the process is successfully able to self-organise, what would it be used for? That would probably depend on external entities, such as sensory organs. As far as the brain is concerned, when the appropriate stimulus is encountered, all of the neurons that are related to it can fire together, representing in some way the same concept. This would be regarded as what they are supposed to do. It could also lead to better models of how neurons fire, if more sophisticated sets of values can be considered. Maybe some sensors, or input/output connectors with the real world, could be connected to the computer simulator to test the theory.

6 References

[1] Ales, G. (2008). Electronic, Chemical and Mechanical Processes in the Human Brain, 12th WSEAS International Conference on Systems, July 22-24, Heraklion, Greece, pp. 637-642, ISBN 978-960-6766-83-1, ISSN 1790-2769.
[2] Carpenter, G., Grossberg, S. and Rosen, D. (1991). Fuzzy ART: Fast Stable Learning and Categorization of Analog Patterns by an Adaptive Resonance System, Neural Networks, Vol. 4, pp. 759-771.
[3] Greer, K. (2012). Turing: Then, Now and Still Key, book chapter in: Artificial Intelligence, Evolutionary Computation and Metaheuristics (AIECM) - Turing 2012, Ed. X-S. Yang, Studies in Computational Intelligence, Vol. 427, pp. 43-62, DOI: 10.1007/978-3-642-29694-9_3, Springer-Verlag, Berlin Heidelberg.
[4] Greer, K. (2012). A Stochastic Hyper-Heuristic for Matching Partial Information, Advances in Artificial Intelligence, Vol. 2012, Article ID 790485, 8 pages, doi:10.1155/2012/790485, Hindawi.
[5] Greer, K. (2011). Symbolic Neural Networks for Clustering Higher-Level Concepts, NAUN International Journal of Computers, Issue 3, Vol. 5, pp. 378-386, extended version of the WSEAS/EUROPMENT International Conference on Computers and Computing (ICCC 11).

[6] Hawkins, J. and Blakeslee, S. (2004). On Intelligence, Times Books.
[7] Hebb, D.O. (1949). The Organization of Behavior, Wiley.
[8] Hill, S.L., Wang, Y., Riachi, I., Schürmann, F. and Markram, H. (2012). Statistical Connectivity Provides a Sufficient Foundation for Specific Functional Connectivity in Neocortical Neural Microcircuits, Proceedings of the National Academy of Sciences.
[9] McCulloch, W.S. and Pitts, W. (1943). A Logical Calculus of the Ideas Immanent in Nervous Activity, Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133.
[10] Rojas, R. (1996). Neural Networks: A Systematic Introduction, Springer-Verlag, Berlin, also online at books.google.com.
[11] Vogels, T.P., Rajan, K. and Abbott, L.F. (2005). Neural Network Dynamics, Annual Review of Neuroscience, Vol. 28, pp. 357-376.
[12] Weisbuch, G. (1999). The Complex Adaptive Systems Approach to Biology, Evolution and Cognition, Vol. 5, No. 1, pp. 1-11.