Genetically Generated Neural Networks I: Representational Effects

Boston University OpenBU: Cognitive & Neural Systems (http://open.bu.edu)
CAS/CNS Technical Reports, 1992-02
Genetically Generated Neural Networks I: Representational Effects
Marti, Leonardo
Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems
https://hdl.handle.net/2144/2094

GENETICALLY GENERATED NEURAL NETWORKS I: REPRESENTATIONAL EFFECTS

Leonardo Marti
February, 1992
Technical Report CAS/CNS-92-014

Permission to copy without fee all or part of this material is granted provided that: 1. the copies are not made or distributed for direct commercial advantage, 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of the BOSTON UNIVERSITY CENTER FOR ADAPTIVE SYSTEMS AND DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS. To copy otherwise, or to republish, requires a fee and/or special permission.

Copyright © 1992 Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems, 111 Cummington Street, Boston, MA 02215

Genetically Generated Neural Networks I: Representational Effects

Leonardo Marti
Boston University Center for Adaptive Systems
111 Cummington Street, Boston, MA 02215
lmarti@cns.bu.edu

Abstract

This paper studies several applications of genetic algorithms (GAs) within the neural networks field. After generating a robust GA engine, the system was used to generate neural network circuit architectures. This was accomplished by using the GA to determine the weights in a fully interconnected network. The importance of the internal genetic representation was shown by testing different approaches. The effects on optimization speed of varying the constraints imposed upon the desired network were also studied. It was observed that relatively loose constraints provided results comparable to a fully constrained system. The type of neural network circuits generated were recurrent competitive fields as described by Grossberg (1982).

I. Introduction

Genetic Algorithms (GAs) have a lot in common with neural networks. While used in engineering applications, neural networks are noted for their neurobiological foundations. GAs are also based on biological foundations, although not all known natural genetic functions have been incorporated into GAs. GAs have been used mainly as search and optimization procedures. Natural genetics performs some of these tasks, but more importantly, genetic material contains the "program" for life-building. Although recent discoveries (Ho & Fox, 1988) have changed our view of this "program", it is still undisputed that the genetic material (DNA, RNA) contains enough information to generate an organism.

A genetic system consists of a population of organisms, or individuals, where each member is composed of a gene. A gene is composed of a string of alleles; in this particular case, alleles are represented as single-bit binary values. An initial population of an arbitrary number of members is created by assigning random values to each allele.
Once the population is established, each individual's gene is tested against some metric. This can be seen as their "life" performance, generating a probability of reproduction. Once all individuals have been tested (have "lived"), the individuals with higher performance will be more likely to procreate and pass on their genes.
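As a minimal sketch of such a genetic system (the function names and the roulette-wheel selection details are illustrative assumptions, not specifics from the report), the population setup and fitness-proportional reproduction might look like:

```python
import random

def random_population(n_members, gene_length):
    """Create an initial population of bit-string genes with random alleles."""
    return [[random.randint(0, 1) for _ in range(gene_length)]
            for _ in range(n_members)]

def select_parent(population, fitnesses):
    """Fitness-proportional ('roulette wheel') selection: individuals with
    higher measured performance are more likely to be chosen to reproduce."""
    total = sum(fitnesses)
    pick = random.uniform(0, total)
    acc = 0.0
    for member, fit in zip(population, fitnesses):
        acc += fit
        if acc >= pick:
            return member
    return population[-1]
```

A caller would evaluate every member's gene against the metric first, then draw parents from `select_parent` until the next generation is full.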

This paper uses GAs to search for the parameters that describe a neural network. These parameters will be used to generate such a network and to analyze its behavior when required to perform a specific task. The aim here is not only to use GAs as a parameter search tool, but as a code-building tool. Garis (1990), Miller, Todd, & Hegde (1989), and Harp, Samad, & Guha (1989) have done some work in this direction. The system of Miller, Todd, & Hegde (1989) cannot be considered strictly a network-building model, but rather a network design or configuration system: it basically determines the weight values in a fully interconnected network where the number of nodes is predetermined. Harp, Samad, & Guha (1989), on the other hand, determine the number of nodes and their connectivity in a fairly complete way. Here, the type of network searched for will be of a more biologically based type.

The internal genetic representation is critical to the speed and optimization level of a genetic system. This was shown by testing different approaches to the genome representation. In order to further accelerate the optimization process, the effects of varying the constraints imposed upon the desired network were also studied.

In this paper, the formation of subsequent generations is based on two genetic operators: mutation and crossover. Mutation is performed by switching each allele to its complementary value with a certain probability. Crossover is performed by selecting two individuals from the population for reproduction: a crossover point is randomly selected somewhere along the extent of the gene, and two offspring are generated by exchanging the genetic material of the two parents after the chosen point. In the simulations carried out here, the probability of mutation was 0.03 and the probability of crossover was 1.00. In addition, the genome of the best individual of each generation was copied unchanged into the next generation.
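The generation step just described can be sketched as follows. The mutation probability of 0.03, the crossover probability of 1.0 (so crossover is always applied), and the elitist copy of the best individual come from the report; all function names and the weighted-selection call are illustrative:

```python
import random

P_MUTATION = 0.03  # per-allele flip probability used in the report

def mutate(gene, p=P_MUTATION):
    """Switch each allele to its complementary value with probability p."""
    return [1 - a if random.random() < p else a for a in gene]

def crossover(parent_a, parent_b):
    """Single-point crossover: pick a random point along the gene and
    exchange the material of the two parents after that point."""
    point = random.randrange(1, len(parent_a))
    return (parent_a[:point] + parent_b[point:],
            parent_b[:point] + parent_a[point:])

def next_generation(population, fitnesses):
    """Form the next generation: the best member is copied unchanged
    (elitism); the rest come from fitness-proportional selection,
    crossover (applied with probability 1.0), and mutation."""
    best = max(zip(fitnesses, population), key=lambda t: t[0])[1]
    new_pop = [list(best)]
    while len(new_pop) < len(population):
        a, b = random.choices(population, weights=fitnesses, k=2)
        for child in crossover(a, b):
            if len(new_pop) < len(population):
                new_pop.append(mutate(child))
    return new_pop
```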
For further reference on genetic operations and implementation details, see Goldberg (1989).

II. System Description

An approach similar to that of Miller, Todd, & Hegde (1989) was used. A network of fixed node size was implemented. The connections between nodes were represented by a 4 by 4 matrix. The GA was used to find which connections should exist and whether these should be inhibitory or excitatory. The activation equation was of the form:

    dx_i/dt = -A x_i + (B - x_i)(I_i + Σ_g f(x_g)) - (x_i + D) Σ_h f(x_h)

where x_i is the activation of node i (i = 1 to 4); A, B, and D are constants set at 6.0, 5.0, and 5.0 respectively; f(x) is the neuron's feedback function (f(x) = x if x > 0, otherwise f(x) = 0); g is the set of excitatory nodes; and h is the set of inhibitory nodes. The sets of excitatory and inhibitory nodes are determined by the contents of the genome. For the target circuit, g was the node itself (f(x_i)), and h consisted of every other node (Σ_{h≠i} f(x_h)).

The representation of the matrix in the genome was implemented by allocating two alleles to each connection. Thus, locations 1 and 2 specify the type of connectivity between node 1 and itself, locations 3 and 4 specify the type of connectivity between node 1 and node 2, and so on. The meaning of each pair of alleles is shown in Table 1. In the current experiment, the exact resulting circuit and its response curve were known a priori, so a measure of the difference from this curve was used as the function to be maximized.
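As an illustrative sketch (the function names, the row-major genome layout reading, the Euler integration scheme, and the time step are my assumptions, not details given in the report), decoding a 32-allele genome into a connection matrix and simulating the activation equation might look like:

```python
N = 4                     # fixed network size used in the report
A, B, D = 6.0, 5.0, 5.0   # constants from the activation equation

def f(x):
    """Neuron feedback function: f(x) = x if x > 0, else 0."""
    return x if x > 0 else 0.0

def decode(genome):
    """Decode a 32-allele genome into a 4x4 connection matrix using the
    Table 1 mapping: 00/01 disconnected (0), 10 inhibitory (-1),
    11 excitatory (+1).  Two alleles per connection, row-major order
    (entry [i][j] taken as the connection from node i to node j)."""
    table = {(0, 0): 0, (0, 1): 0, (1, 0): -1, (1, 1): 1}
    pairs = [(genome[2 * k], genome[2 * k + 1]) for k in range(N * N)]
    return [[table[pairs[i * N + j]] for j in range(N)] for i in range(N)]

def step(x, inputs, conn, dt=0.01):
    """One forward-Euler step of
    dx_i/dt = -A x_i + (B - x_i)(I_i + sum_g f(x_g)) - (x_i + D) sum_h f(x_h),
    where g and h are the excitatory and inhibitory afferents of node i."""
    new_x = []
    for i in range(N):
        exc = sum(f(x[j]) for j in range(N) if conn[j][i] == 1)
        inh = sum(f(x[j]) for j in range(N) if conn[j][i] == -1)
        dx = -A * x[i] + (B - x[i]) * (inputs[i] + exc) - (x[i] + D) * inh
        new_x.append(x[i] + dt * dx)
    return new_x
```

With the target circuit (self-excitation, inhibition of every other node), iterating `step` produces the contrast-enhancing competition the report looks for: the node receiving the larger input suppresses the others.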

    Allele pair   Connection
    00            Disconnected
    01            Disconnected
    10            Inhibitory
    11            Excitatory

Table 1: Allele representation of a connection

The problem studied with this setup was a network of feedback nodes. The target configuration was a recurrent competitive feedback circuit (Grossberg, 1982), as shown in Figure 1.

Figure 1: Competitive recurrent circuit. For clarity, only node 2 is shown with all its efferent connections.

III. Representation Modifications

The setup just described did not converge to an optimal result within a reasonable time (400 generations). An analysis of the schemata (similarity templates) involved in the representation of connections reveals that it is more likely for the system to change genetic material from one state to a state with lower fitness than to a state with higher fitness. This is due to the allele representation, in conjunction with the metric used and the manner in which crossover and mutation affect genes. For example, let us assume that a given connection was initially set as not connected (00) when the optimal setting is excitatory (11). Since the probability of mutation is quite low (0.03) compared with the probability of crossover (1.0), it is quite unlikely that both alleles will be mutated during the same generation in the same individual, making crossover the more likely candidate for improving performance. This means that a population member with values of 10 must be combined with another member with values of 01. But a setting of 10 is of lower fitness than the original setting of 00, so the member with the lower fitness is quite unlikely to survive and reproduce, in effect slowing the improvement of genetic material. In order to avoid this problem, the representation must allow for stepwise improvements in performance through the combination of short-length schemata. Crossover should be equally likely to move the schemata to any possible state.
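The reachability argument above can be checked concretely. In this small sketch, the two mappings come from Table 1 and from the revised coding introduced below (Table 2); the encoding of connection types as -1/0/+1 and all names are illustrative:

```python
# Connection types encoded as: -1 inhibitory, 0 disconnected, +1 excitatory.
ORIGINAL = {(0, 0): 0, (0, 1): 0, (1, 0): -1, (1, 1): 1}   # Table 1
REVISED  = {(0, 0): -1, (0, 1): 0, (1, 0): 0, (1, 1): 1}   # Table 2

def one_flip_neighbours(pair):
    """All allele pairs reachable from `pair` by mutating a single allele."""
    a, b = pair
    return [(1 - a, b), (a, 1 - b)]

# Original coding: from disconnected (00), a single flip reaches only 10
# (inhibitory) or 01 (still disconnected) -- never the excitatory 11.
assert {ORIGINAL[p] for p in one_flip_neighbours((0, 0))} == {0, -1}

# Revised coding: from a disconnected state (01), a single flip reaches
# either connected type, allowing stepwise improvement in fitness.
assert {REVISED[p] for p in one_flip_neighbours((0, 1))} == {-1, 1}
```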
This can be achieved in a number of ways. One possible solution would be to use a tri-valued allele, where mutation would be equally likely to switch to any state. This option would avoid the problem of crossover effects on schemata by not allowing crossover to modify the type of connection used. The solution chosen still maintains bi-valued alleles, but the meaning of the alleles has been altered. Table 2 shows the mapping for a connection under the new configuration. Here, it is quite likely that an excitatory connection will eventually move to a disconnected state, and from a disconnected state it is possible to move to either an excitatory or an inhibitory state.

    Allele pair   Connection
    00            Inhibitory
    01            Disconnected
    10            Disconnected
    11            Excitatory

Table 2: New allele representation of a connection

Figure 2: Left: Input sequence. Right: Output of the recurrent competitive circuit.

A characteristic response curve similar to the one shown in Figure 2 was requested, given the shown inputs. Since this curve does not contain all possible combinations of inputs, the optimal circuit may respond unexpectedly to inputs not tested. The resulting network matched exactly that of Figure 1. As desired, the network contains both positive feedback within all the nodes and inhibitory connections to all other nodes. This shows that all possible inputs need not be tested in order to provide sufficient constraints for a unique system. The improvement in fitness across generations is shown in Figure 3. The fitness function used was:

    O(x) = 100 / (1 + Σ_t (K_t - y_t))

where K_t is the optimal output value at time t, and y_t is the actual output from the network at time t.

Figure 3: Best, worst, and average population members of the search for a recurrent competitive network over 150 generations.

IV. Constraint Modification

At this point, the fitness function was simplified in order to provide feedback only when the nodes' activations had settled after each input change. The output activations were then compared with two threshold levels, giving three possible states: inactive, inhibited, and excited. The discrete result was then compared with a table of the desired network, and the disparity from the table was used as the metric to be optimized. This simpler method of network specification was similarly robust in guiding the GA towards the desired network specification. Since the calculation of the metric is now simpler, the system executed a similar number of generations in less time (about half).

V. Conclusions

The design and use of neural sub-systems is a complex area that merits further research. In the present study, only small, fixed-size networks were treated. How these networks can increase in size, and how they are maintained, modified, and coupled to form more complex systems, is an important area that must be investigated to better understand the evolutionary processes. The present study shows the important interaction between schemata and internal representation. As a genetic system grows more complex, a methodical testing of the effects of genetic operators is necessary. If novel genetic operations are to prove their usefulness, work of the type performed by De Jong (1975) is required. Similarly, other network specification representations should be studied, to observe effects such as the one described here. The present study also shows how partially constraining a system may be enough to orient the search in the proper direction. This cannot be generalized to all problem areas, but it can be used to simplify a genetic system when it becomes too complex.

References

De Jong, K. A. (1975). An Analysis of the Behavior of a Class of Genetic Adaptive Systems. Doctoral dissertation, University of Michigan. Dissertation Abstracts International, 36(10), 5140B. (University Microfilms No. 76-9381)

de Garis, H. (1990). Genetic Programming. Machine Learning: Proceedings of the Seventh International Conference, 132-139.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading.

Grossberg, S. (1982). Studies of Mind and Brain: Neural Principles of Learning, Perception, Development, Cognition, and Motor Control. Reidel Press, Boston.

Harp, S. A., Samad, T. & Guha, A. (1989). Towards the Genetic Synthesis of Neural Networks. Proceedings of the Third Conference on Genetic Algorithms. Morgan Kaufmann, San Mateo.

Ho, M.-W. & Fox, S. W. (1988). Processes and Metaphors in Evolution. In Ho, M.-W. & Fox, S. W. (Eds.), Evolutionary Processes and Metaphors. John Wiley & Sons, Chichester.

Miller, G. F., Todd, P. M. & Hegde, S. U. (1989). Designing Neural Networks using Genetic Algorithms. Proceedings of the Third Conference on Genetic Algorithms. Morgan Kaufmann, San Mateo.