Literature Review

2.1 Genetic Algorithms: Precursor to Genetic Programs

A genetic algorithm is a search algorithm based on the observation that sexual reproduction and the principle of survival of the fittest enable biological species to adapt to their environment and compete effectively for its resources. While relatively straightforward (Appendix I), the algorithm is an effective stochastic search method that has proven a robust problem solving technique [5] and produces better than random results [24].

This observation was first mathematically formulated by John Holland in 1975 in his book, "Adaptation in Natural and Artificial Systems" [13]. Usually the algorithm breeds a predetermined number of generations, each populated with a predetermined number of fixed-length binary strings. These binary strings are then translated (decoded) into a format that represents suitable parameters, either for some controller or as output.
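
The basic generational loop can be sketched as follows. This is a minimal illustration only, assuming fitness proportionate selection (section 2.4.3.1) and, for brevity, omitting the crossover and mutation operators of section 2.4; POP_SIZE, STRING_LEN, GENERATIONS and the toy fitness() measure are illustrative choices, not values from the literature.

    import random

    # Illustrative control parameters (see section 2.3 on tuning).
    POP_SIZE, STRING_LEN, GENERATIONS = 100, 16, 50

    def fitness(bits):
        # Toy measure standing in for problem-specific decoding and
        # evaluation: reward strings containing many 1s.
        return 1 + sum(bits)

    population = [[random.randint(0, 1) for _ in range(STRING_LEN)]
                  for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        weights = [fitness(s) for s in population]
        # Fitness proportionate selection of the next generation;
        # crossover and mutation are omitted from this sketch.
        population = random.choices(population, weights=weights, k=POP_SIZE)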

The product resulting from evolution (whether natural or simulated) is not simply discovered by a random search through the problem state space, but by a directed search from random positions in that space. In fact, according to Goldberg, the simulated evolution of a solution through genetic algorithms is, in some cases, more efficient and robust than random search, enumerative or calculus-based techniques. The main reasons Goldberg gives are the probability of a multi-modal problem state space in non-linear problems, and the fact that random or enumerative searches become exhaustive, and therefore impractical, when the dimensions of the state space are too great [10].

An additional advantage of the genetic algorithm is that the problem solving strategy involves using “the strings’ fitness to direct the search; therefore they do not require any problem-specific knowledge of the search space, and they can operate well on search spaces that have gaps, jumps, or noise” [11, 46]. As each individual string within a population directs the search, the genetic algorithm searches numerous points in the problem state space in parallel, following numerous search directions.

According to Koza, “the fact that the genetic algorithm operates on a population of individuals, rather than a single point in the search space of the problem, is an essential aspect of the algorithm. The advantage conferred by the existence of a population is not merely the obvious benefit of dropping 1,000 [i.e., population size] parachutists, rather than one, on the landscape. The population serves as the reservoir of the probably-valuable genetic material that the crossover operation needs to create new individuals with probably-valuable new combinations of characteristics” [19, 27].

Genetic algorithms also exhibit “inherent parallelism”, the term Holland gave to the characteristic of genetic algorithms (arising primarily from fitness proportionate reproduction) whereby they implicitly explore unseen Boolean hyperplanes; the relation of each hyperplane to the average fitness of the population determines the extent to which it will be explored. In other words, the information available to the system is greater than that explicitly present in the population: this additional information is carried by traits in the population, so that individuals not present in the population contribute to the search [13]. According to Schaffer, “since there are many more than N hyperplanes represented in a population of N strings, this constitutes the only known example of the combinatorial explosion working to advantage instead of disadvantage” [26].

2.2 A Closer Look at Genetic Programming

Genetic programming is effectively a ‘class’ of genetic algorithm. For this reason, the algorithm for genetic programming is much the same as that of the genetic algorithm. According to Kinnear, “while GP doesn’t mimic nature as closely as do GAs, it does offer the opportunity to directly evolve programs of unusual complexity, without having to define the structure or the size of the program in advance” [14, 287].

The conventional genetic algorithm, however, manipulates a population of fixed-length binary strings. Koza’s ‘genetic programming paradigm’ [16] begins with a population of randomly generated programs (or functions) that are, conventionally, represented by Lisp S-expressions, or a hierarchy of S-expressions. They are, in effect, parse trees, and their easy representation as S-expressions makes Lisp a language commonly used to implement the genetic programming paradigm.

It is these parse trees that genetic programming manipulates. The fundamental differences between the genetic algorithm and the genetic program are therefore the ‘genotype’ and the structure of the chromosomes. Both differences contribute to the major advantages of genetic programming: the dynamic structural complexity of the chromosome, and the variety of genotypes and their position within the chromosome.

The problems solved (or approximated) by the genetic program are, in some ways, symbolic regression problems. In order to perform its function the genetic program can be required to discover the independent variables and their relationship with each other and with any dependent variables. In other words, the genetic program can find “the correct functional form that fits the data and the discovery of the appropriate coefficients” [16, 37]. Solving this type of problem is significantly more complex than solving linear regression or polynomial regression problems.

Using the same Darwinian processes, the parse trees in a genetic program can be bred in much the same way as the character strings of genetic algorithms. Instead of searching the problem space for binary representations, the genetic program searches the program space for the ‘terminals’ and the ‘atoms’ to find a function representation that solves (or approximately solves) the problem [16].

The terminals include all the variable and constant values available to the program, i.e., the numerical values. The atoms are primitive functions that may include mathematical functions and operators, Boolean functions or conditional statements. The expected final output of a genetic program is a program that can solve the problem.
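
As a concrete sketch, a parse tree can be represented as a nested list in which the first element of each list is an atom and the remaining elements are its arguments, mirroring a Lisp S-expression. The ATOMS and TERMINALS sets below are illustrative, not drawn from any particular study.

    import operator

    # Illustrative function set (atoms) and terminal set.
    ATOMS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
    TERMINALS = ['x', 1.0, 2.0]

    def evaluate(tree, env):
        # Recursively evaluate a parse tree against the bindings in env.
        if isinstance(tree, list):          # an atom applied to its arguments
            function = ATOMS[tree[0]]
            return function(*(evaluate(arg, env) for arg in tree[1:]))
        if isinstance(tree, str):           # a variable terminal
            return env[tree]
        return tree                         # a constant terminal

    # The S-expression (* (+ x 1.0) 2.0) written as a nested list:
    program = ['*', ['+', 'x', 1.0], 2.0]
    print(evaluate(program, {'x': 3.0}))    # prints 8.0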

2.3 Tuning

The term tuning, when applied to genetic systems, is the process of selecting the algorithm’s ‘control parameters’. These parameters include population size, number of generations, how the algorithm determines completion of its run, mutation rates, inversion rates, permutation rates, selection rates, fitness function, crossover probabilities, reproduction probabilities and decoding methods. According to Davis, tuning the algorithm is “problematic”: he asserts that “Selecting values for these parameters is itself a difficult CO [combinatorial optimisation] problem ... [and] although ... automatic tuning has performed better than users’ guesses of control parameters in some problems, ... it requires long computer runs” [5, 120].

The problem of tuning a genetic system will, to a great extent, affect that system’s ability to find a solution to a given problem. Noting the performance variations of genetic algorithms across a spectrum of applications, Randall states “genetic algorithms can be used as a tool for finding good solutions to a problem, in which total optimality (i.e., the perfect solution) is not necessary” [24].

In genetic programming, tuning also involves the selection of the terminals and the atoms that comprise the terminal and function sets. Improper definition of the function and terminal sets can at best result in an inefficient system. At worst, poorly chosen terminals and atoms will result in the genetic program being incapable of performing its task.

In effect, tuning is responsible for defining the relative importance of the genetic operators in the system. This responsibility is what makes tuning difficult, especially when the relative task suitability of operators (particularly the selection operators) is not clearly defined ([5], [16]).

2.4 Genetic Operators

The term ‘genetic operators’ is applied to the methods used to simulate nature in computer-based evolutionary systems. The genetic operators that form the basis of simulated genetic systems are crossover, fitness proportionate reproduction, and mutation [5]. The translation of these genetic operations to computers is surprisingly straightforward; their choice and implementation are somewhat less straightforward.

2.4.1 Crossover

Crossover occurs in the genetic algorithm or genetic program when two members of a population (chromosomes) are selected for reproduction, usually on the basis of their relative fitness in solving the problem. Sometimes called recombination, crossover is the process of combining the attributes of two chromosomes to produce two new chromosomes that inherit some (or all) of their attributes from one (or both) of their parents.

Figure 1: Two Point Crossover of Binary Strings
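
A minimal sketch of the two point crossover illustrated in figure 1, assuming the parents are equal-length lists of bits; the two cut points are chosen at random:

    import random

    def two_point_crossover(parent_a, parent_b):
        # Choose two distinct cut points strictly inside the strings.
        i, j = sorted(random.sample(range(1, len(parent_a)), 2))
        child_a = parent_a[:i] + parent_b[i:j] + parent_a[j:]
        child_b = parent_b[:i] + parent_a[i:j] + parent_b[j:]
        return child_a, child_b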

The crossover technique most commonly used in genetic programming is single point crossover (Appendix III). There has been some recent interest in multiple point crossover in genetic programming [15]; however, “this is relatively difficult to code as selection of a cut point within the same sub-branch of the genetic program is not a simplistic process. Also the improvement such a technique can have to the genetic programming paradigm is not immediately apparent” [6].
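
In genetic programming, single point crossover exchanges subtrees rather than substrings. A sketch over the nested-list parse trees introduced in section 2.2 follows; the helper names are illustrative, and the crossover point in each parent is chosen uniformly from all of its nodes:

    import copy, random

    def node_paths(tree, path=()):
        # Yield the path (a sequence of child indices) to every node.
        yield path
        if isinstance(tree, list):
            for i in range(1, len(tree)):
                yield from node_paths(tree[i], path + (i,))

    def get_subtree(tree, path):
        for i in path:
            tree = tree[i]
        return tree

    def set_subtree(tree, path, new):
        if not path:
            return new                      # replacing the root
        node = tree
        for i in path[:-1]:
            node = node[i]
        node[path[-1]] = new
        return tree

    def subtree_crossover(a, b):
        a, b = copy.deepcopy(a), copy.deepcopy(b)
        path_a = random.choice(list(node_paths(a)))
        path_b = random.choice(list(node_paths(b)))
        sub_a = copy.deepcopy(get_subtree(a, path_a))
        sub_b = copy.deepcopy(get_subtree(b, path_b))
        return set_subtree(a, path_a, sub_b), set_subtree(b, path_b, sub_a)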

2.4.2 Fitness

Fitness is probably the main concept in Darwinian evolution; it is the ‘direction’ the genetic algorithm takes in its pursuit of improvement [5]. Fitness refers to an individual’s ability to compete within an environment for available resources. Goldberg describes the fitness function as “some measure of profit, utility, or goodness that we want to maximise” [10, 10].

In the genetic algorithm, this competition is based on the chromosome’s performance within the problem domain. A nominal scale suitable to the task is determined, such as ‘time to failure’ [25] or ‘time to balance’ [16]. After a chromosome is applied to the problem, it is awarded a fitness value to reflect its performance. In this way, when the entire population has been tested, the relative ability of each chromosome can be identified.

To avoid ‘verbose’ functions as solutions, parsimony can be included as a fitness criterion. As shown by Koza, this technique can be used to reduce the complexity of the individual chromosome being evaluated [17]. In fact, in the application of genetic programming to feature discrimination “it was observed that a high degree of correlation exists between tree size and performance: among the set of ‘pretty good’ solution trees, those with the highest performance were usually the smallest within that set. It was also observed that most runs achieved a point where the size and complexity of trees eventually began to grow increasingly larger, while performance tapered off to lower values” [25, 309].
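
A simple way to include parsimony, assuming a fitness measure that is being minimised (an error score, say), is to charge each chromosome for its size; the coefficient below is an arbitrary illustrative weight:

    def tree_size(tree):
        # Count every atom and terminal in a nested-list parse tree.
        if isinstance(tree, list):
            return 1 + sum(tree_size(child) for child in tree[1:])
        return 1

    def parsimonious_fitness(raw_fitness, tree, coefficient=0.01):
        # Larger trees are penalised in proportion to their size.
        return raw_fitness + coefficient * tree_size(tree)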

In the genetic algorithm, the fitness function determines the extent to which the system must deal with the ‘credit assignment problem’.

2.4.3 Selection

The choice of a selection technique is a major aspect of tuning a genetic algorithm [5]. With the added complexity and variety of genes employed by the genetic programming paradigm, the selection operators would reasonably be expected to have a more noticeable effect on the efficiency of this problem solving method.

As the name suggests, selection is the process of choosing the breeding stock from a population. The standard selection techniques in genetic programming are stochastic sampling with replacement (roulette wheel), and tournament selection. These techniques are by no means the only methods employed in genetic algorithms; they are simply the most commonly employed [17]. The primary task of all selection methods is to measure the relative fitness of each chromosome.

2.4.3.1 Stochastic Sampling with Replacement (Roulette Wheel)

Stochastic sampling with replacement is considered to be the standard method of selection in genetic programming, primarily because it is historically accepted as a versatile technique that has been widely implemented [6]. In addition, this selection method has also proven successful in a variety of problem domains ([5],[10],[15],[16],[25]) and the selection algorithm itself is relatively straightforward (Appendix II.a).

Also known as fitness proportionate and roulette wheel selection, stochastic sampling with replacement operates on the concept that the proportionate fitness of each chromosome should be reflected in that chromosome’s incidence in the mating pool. Thus, each chromosome in the population has a probability of selection for crossover based on its relative fitness. This technique provides the greatest probability of selection to the most fit members of the population.

To evaluate the expected occurrence (e) of a chromosome (i) in the mating pool, the fitness of that chromosome (f) is divided by the sum of the fitness of all n chromosomes in the population, giving a probability of selection (p); the expected occurrence is then n times this probability [10], i.e.:

Equation 1: Probability of Selection

    p_i = f_i / (f_1 + f_2 + ... + f_n)

Equation 2: Expected Occurrence in the Mating Pool

    e_i = n · p_i

The name ‘roulette wheel’ has been given to this technique (stochastic sampling with replacement) because the process of selection can be likened to spinning a roulette wheel. Each member of the population is allocated the amount of space on the wheel that reflects its relative fitness. The fitter the individual, the greater the area on the roulette wheel that is occupied by the individual. Each time a parent is needed for reproduction, the selection is based on a ‘spin’ of the wheel.

When an individual is chosen, a replica of that individual is included in the mating pool. The term ‘with replacement’ is used since the original chromosome is available for further selection as additional chromosomes are needed: the system maintains no memory of prior selection. This technique is intended to provide a direct relationship between the replication frequency of an individual in the mating pool and the relative fitness of that individual. The fitter the chromosome, the more copies of the chromosome can be expected in the mating pool. The weighted selection is continued until there is no need for additional chromosomes. In this way, if a chromosome has a fitness equal to 15 per cent of the total fitness and the population size is 100, the mating pool can be expected to have 15 copies of that individual. A problem with roulette wheel selection stems from the difference between the observed and the expected presence of chromosomes in the mating pool.
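
A sketch of the selection loop, assuming non-negative fitness values: each ‘spin’ lands in a region of the wheel proportional to a chromosome’s share of the total fitness, and a selected chromosome remains available for later spins.

    import random

    def roulette_wheel(population, fitnesses, pool_size):
        total = sum(fitnesses)
        mating_pool = []
        for _ in range(pool_size):
            spin, accumulated = random.uniform(0, total), 0.0
            for chromosome, f in zip(population, fitnesses):
                accumulated += f
                if accumulated >= spin:
                    mating_pool.append(chromosome)  # a replica joins the pool
                    break
        return mating_pool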

A variation of roulette wheel selection is known as ‘remainder stochastic sampling with replacement’. In this technique, after each chromosome is selected to the extent of the integer part of its expected occurrence (the e value), the remaining vacancies in the mating pool are made up by taking the fractional part of the e value (equation 2) and using that value to calculate the relative weights of the individuals. Roulette wheel selection is then repeated until the mating pool is full. According to Goldberg, remainder stochastic sampling with replacement gives a greater probability of diversity in the population than the roulette wheel technique [10].

2.4.3.2 Stochastic Sampling Without Replacement

Often referred to as ‘the expected value model’, stochastic selection without replacement was developed by De Jong to minimise the stochastic errors that can result from roulette wheel selection [10]. De Jong identified two main problems with roulette wheel selection. “First, since we cannot practically calculate actual schema average fitnesses, we are forced to estimate them through sequential finite sampling. Second, the selection scheme (roulette wheel selection) is itself a high-variance process with a fair amount of scatter between expected and actual numbers of copies” [10, 115].

The expected value model monitors the number of times a chromosome is selected for either reproduction or crossover. The expected incidence of the chromosome, the e value (equations 1 and 2), is stored (an offspring count) and decremented each time the chromosome is selected. If the chromosome is selected for reproduction (budding), the offspring count is reduced by 1. If the chromosome is selected for crossover, the offspring count is reduced by 0.5. When the offspring count falls below zero, the chromosome is no longer eligible for selection (Appendix II.b).

Another, similar, technique is known as ‘remainder stochastic sampling without replacement’. In this technique, the individual is still guaranteed representation in the mating pool to the extent of the integer value of its expected occurrence (e value). However, after each individual has been allocated space according to the integer portion of its e value, available space in the mating pool is filled using a “weighted coin toss” based on the fractional component of the individual’s expected incidence in the mating pool. This selection continues with additional Bernoulli trials until the mating pool is full [10].
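
A sketch of this remainder scheme, assuming the e values of equations 1 and 2: the integer parts guarantee copies, the fractional parts drive the weighted coin tosses, and a successful individual is removed from further trials.

    import random

    def remainder_without_replacement(population, fitnesses, pool_size):
        total = sum(fitnesses)
        expected = [pool_size * f / total for f in fitnesses]  # the e values
        pool = []
        for chromosome, e in zip(population, expected):
            pool.extend([chromosome] * int(e))                 # guaranteed copies
        fractions = [e - int(e) for e in expected]
        while len(pool) < pool_size:
            i = random.randrange(len(population))
            if random.random() < fractions[i]:                 # weighted coin toss
                pool.append(population[i])
                fractions[i] = 0.0   # without replacement: at most one extra copy
        return pool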

2.4.3.3 Tournament Selection

Tournament selection can be likened to the natural process of individuals competing with each other in order to mate [5]. When moose rut, the competition between the bulls determines which will mate. So too, the chromosomes in the population are selected for competition, usually randomly, with the victor increasing its expected incidence in the mating pool.

Tournament selection should not be confused with tournament fitness. According to Angeline and Pollack, “rather than exhaustively testing each member of the population, in tournament fitness a single elimination, binary tournament is run to determine a relative fitness ranking. Initially the entire population is in the tournament. Two members are selected at random to compete against each other with only the winner of the competition progressing to the next level of the tournament” [1, 266]. When the tournament is over (i.e. only one individual is left), the relative fitness of each member of the population is awarded according to the level of the tournament it has reached.

The actual tournament most commonly used in tournament selection is the same as that described above for tournament fitness (figure 2). However, as is the case with all selection techniques, tournament selection is used to decide the composition of the mating pool (Appendix II.c). The choice of chromosomes to be paired for competition need not be a random selection from those chromosomes that remain in the tournament. In some cases, the chromosomes are paired for competition in a similar way to roulette wheel selection (stochastic tournament selection). This technique selects unique individuals based on their relative fitness [10].

In either case, the final victor of the competition will always be the fittest chromosome in the population. The more successful the chromosome is in competition, the more often that chromosome is expected to appear in the mating pool.
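
A sketch of a simple form of tournament selection, assuming binary tournaments drawn uniformly at random: two chromosomes compete for each place in the mating pool, and the fitter one wins.

    import random

    def tournament_selection(population, fitnesses, pool_size):
        mating_pool = []
        for _ in range(pool_size):
            i = random.randrange(len(population))
            j = random.randrange(len(population))
            winner = i if fitnesses[i] >= fitnesses[j] else j
            mating_pool.append(population[winner])
        return mating_pool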

Figure 2: Tournament Competition between Chromosomes

A situation that can be expected in tournament selection is the pairing of two chromosomes that display relatively equal fitness. Competition between well matched, fit individuals in the later stages of the tournament is beneficial to the selection process. In the early stages of the tournament, this ‘equal’ pairing can be detrimental. The most extreme case would occur if the two fittest chromosomes were paired in the first round of the tournament. If the two chromosomes are identical, the resulting competition will help avoid convergence around this chromosome. If, however, the two chromosomes represent distinctly different solutions, their early pairing will result in the less fit competitor (the loser) being awarded no representation in the mating pool: the extinction of a species.

2.4.3.4 Deterministic Sampling

Deterministic sampling is a selection technique that guarantees at least fitness proportionate representation in the mating pool. Using equations 1 and 2, the e value is determined for each individual. The integer value of the expected occurrence is then copied directly into the mating pool. The fractional portion of the e value is then used as a sort order for the individuals in the population, i.e., the individuals with the greatest fractional component head a list. The remaining individuals required to make up the balance of the mating pool are then selected from the top of the list until the mating pool is full.
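
A sketch of deterministic sampling, assuming the e values of equations 1 and 2: the integer parts are copied directly, and the remaining places are filled from a list sorted by descending fractional part.

    def deterministic_sampling(population, fitnesses, pool_size):
        total = sum(fitnesses)
        expected = [pool_size * f / total for f in fitnesses]  # the e values
        pool = []
        for chromosome, e in zip(population, expected):
            pool.extend([chromosome] * int(e))                 # integer parts
        # Rank individuals by the fractional part of their e value.
        ranked = sorted(zip(population, expected),
                        key=lambda pair: pair[1] - int(pair[1]), reverse=True)
        for chromosome, _ in ranked:
            if len(pool) >= pool_size:
                break
            pool.append(chromosome)
        return pool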

2.4.4 Fitness Proportionate Reproduction

Reproduction in genetic programming is asexual, thus imitating the process of budding in biology. Through reproduction, an identical copy of the individual selected is carried over into the next generation: survival of the fittest. Fitness proportionate reproduction is the asexual reproduction of chromosomes selected stochastically from the population. According to Koza, “the operation of fitness proportionate reproduction for the genetic programming paradigm is the basic engine of Darwinian reproduction and survival of the fittest” [16, 11]. In other words, an individual chromosome is selected with a probability proportionate to its relative fitness within its population.

2.4.5 Demetic Grouping

Demetic grouping involves subdividing the population into smaller populations (demes) that evolve independently. All the genetic operators are applied to each deme as though it were a separate genetic program. The logic behind this technique is that populations will be allowed to evolve along different paths to avoid concentration (convergence) of the search on a particular area in the problem space. Demetic grouping is expected to achieve a similar result to speciation in nature: separate evolutionary paths are promoted in the competition for like resources.

A diagram that displays the process of demetic grouping is shown below in figure 3.

Figure 3: Wanderers in Demetic Grouping

Based on a predefined parameter (probabilistic wanderer selection), there is a probability of a ‘migrator’ jumping from one deme to another, and so increasing that deme’s chromosomal diversity.
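
A sketch of demetic grouping, assuming at least two demes that evolve independently and one migration attempt per deme per generation; the helper names and the wanderer probability are illustrative:

    import random

    def split_into_demes(population, n_demes):
        # Deal the population out into n_demes sub-populations.
        return [population[i::n_demes] for i in range(n_demes)]

    def migrate(demes, wanderer_probability=0.01):
        # Each deme may, with small probability, send one wanderer
        # to a randomly chosen other deme.
        for source in demes:
            if source and random.random() < wanderer_probability:
                wanderer = source.pop(random.randrange(len(source)))
                destination = random.choice([d for d in demes if d is not source])
                destination.append(wanderer)
        return demes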

2.4.6 Mutation

Mutation in genetic programming is essentially the same as that occurring in the genetic algorithm (figure 4). It is a random alteration of a gene, or genes, in a chromosome. The primary difference between mutation in genetic algorithms and in genetic programming is the mutant gene’s type. In the genetic algorithm, mutation involves selecting a random point, or points, in the chromosome and substituting the value found at that point with an appropriate value that has been randomly generated.

Figure 4: Mutation in Binary Strings
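
A sketch of this point mutation, assuming a binary string and a small, fixed per-gene mutation rate:

    import random

    def point_mutation(bits, rate=0.01):
        # Flip each bit independently with probability `rate`.
        return [(1 - b) if random.random() < rate else b for b in bits]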

In genetic programming, any atom can be replaced by any other atom. Any terminal can be replaced by either another terminal or a subfunction (i.e., a sub-tree of atoms and terminals). Unlike mutation in the genetic algorithm, mutation in genetic programming must consider the genotype, the range of appropriate values for that genotype, and the number of arguments the new function receives if the function is to remain valid (figure 5).

Figure 5: Mutation in Genetic Programming

Like demetic grouping, the main purpose of mutation is to ensure diversity in the population and avoid premature convergence. Convergence, by another name, is allele loss. As was shown by De Jong, the rate of allele loss is inversely proportional to the mutation rate of a genetic system [10]. The type of mutation most commonly used in genetic programming is allele mutation [7].

In biology, the term allele refers to the range of values a particular gene can assume. In the genetic algorithm, an allele is known as the feature value [10]: the value, usually binary, of a gene (or genes). In genetic programming, allele mutation is the process of replacing (or adding) nodes or branches of the parse tree with suitable values so that the final function is still valid [7]. So, although the term ‘allele mutation’ may imply that this type of mutation intentionally diversifies the gene pool by favouring genes not present (or rare) in the current population, this is not the case. The mutation points and the selection of genes are random. The actual feature values of genes are only important to the extent of maintaining the structural integrity of the genetic program (its closure property).

One problem with mutation in genetic programs is the tendency of the solution to increase in structural complexity [16]. Recently a technique known as ‘shrink mutation’ has been put forward [27]. Shrink mutation is intended not only to stop this growth, but to reduce the structural complexity of the parse tree. Because “any random change to a program qualifies as mutation” [27, 174], in shrink mutation a node is randomly selected and moved ‘up’ the tree to the position of its parent. In this way, mutation occurs and complexity is reduced.
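
A sketch of shrink mutation over the nested-list parse trees of section 2.2: a randomly chosen argument is promoted into its parent’s position, so the tree mutates and loses structural complexity at the same time. The traversal helper is illustrative.

    import copy, random

    def shrink_mutation(tree):
        tree = copy.deepcopy(tree)
        if not isinstance(tree, list):
            return tree                      # a lone terminal cannot shrink
        # Collect a (parent, index) pair for every argument position.
        slots = []
        def walk(node):
            if isinstance(node, list):
                for i in range(1, len(node)):
                    slots.append((node, i))
                    walk(node[i])
        walk(tree)
        if not slots:
            return tree                      # an atom with no arguments
        parent, i = random.choice(slots)
        child = parent[i]
        if parent is tree:
            return child                     # the child becomes the new root
        for node, j in slots:                # find the slot holding `parent`
            if node[j] is parent:
                node[j] = child
                break
        return tree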

There is some argument as to the importance of mutation in genetic systems. The argument in favour of mutation states that the mutation operator prevents premature convergence of the population [6]. On the other hand, in tests using only fitness proportionate reproduction and mutation (i.e., no crossover), the genetic system failed to find solutions to its problems ([13], [10], [16]). In fact, the search strategy of the genetic algorithm has been shown to approach that of a random search in proportion to the mutation rate of that genetic algorithm [10]. For this reason, mutation is sometimes called a background operation, considered to be of ‘relative unimportance’ to the outcome of the search [16].

2.4.7 Permutation

Inversion, in the genetic algorithm, is the process of selecting two points within a chromosome and reversing the order of the genes that lie within those two points. Permutation is an extension of inversion that can be applied to genetic programming. In permutation, a single parse tree is manipulated during the operation (i.e., it is an asexual operation). The selection of that tree is made on the basis of its proportional fitness.

The operation requires the random selection of an internal point in the tree, an atom, and the manipulation of the order in which that atom receives its arguments. Unlike inversion, the new order of the arguments to the atom is based on a random selection from the set of possible permutations of the arguments.
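
A sketch of permutation over the nested-list parse trees of section 2.2: an internal point is chosen at random and its arguments are reordered by a random permutation.

    import copy, random

    def permutation(tree):
        tree = copy.deepcopy(tree)
        internal = []
        def walk(node):
            if isinstance(node, list):       # an atom with arguments
                internal.append(node)
                for child in node[1:]:
                    walk(child)
        walk(tree)
        if internal:
            node = random.choice(internal)
            args = node[1:]
            random.shuffle(args)             # a random permutation of the arguments
            node[1:] = args
        return tree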

According to Koza, “The permutation operation can potentially bring closer together elements of a relatively high fitness individual so that they are less subject to later disruption due to crossover. However, like the mutation operation, our [Koza’s] experience, after including the permutation operation in numerous runs of the various problems described herein, is that the benefits of this operation are purely potential and have yet to be observed” [16, 100].

2.4.8 Steady State Genetic Programming

The standard generational genetic system uses two populations at the selection stage of the genetic algorithm. One population of the system contains the parents to be selected and a second population is generated to hold their progeny. The steady state genetic system uses the same population for both parents and their progeny.

Steady state genetic programming writes the next generation of the genetic algorithm to the same population from which the parents were selected. When the genetic operation on the parents is completed, the new offspring take the place of members of the previous generation within that population. In other words, the new members of the population are merged with the old members of the population. The process of generating the new population continues until no members of the previous generation remain or a generational equivalent has been reached [14].
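
A sketch of one steady state step, assuming select(), breed() and evaluate() are supplied by the surrounding system: each offspring is written straight back into the population from which its parents were drawn. Replacing the least fit member, as here, is one common policy among several.

    def steady_state_step(population, fitnesses, select, breed, evaluate):
        parent_a = select(population, fitnesses)
        parent_b = select(population, fitnesses)
        child = breed(parent_a, parent_b)
        # The new offspring immediately takes the place of the least
        # fit member of the same population.
        worst = min(range(len(population)), key=lambda i: fitnesses[i])
        population[worst] = child
        fitnesses[worst] = evaluate(child)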

Although the steady state technique was introduced in genetic algorithms in 1975 [5], the application of this technique to genetic programming (steady state genetic programming) is a recent development. And, although recent, the technique is receiving increasing attention in both the field of genetic algorithms and that of genetic programming [14].

This interest is partly attributable to the fact that steady state techniques can offer a substantial reduction in the memory requirements of a system [7]: the technique abolishes the need to maintain two populations during the evolutionary process. In this way, genetic systems have greater portability to a variety of computer environments because of the reduced memory overhead.

Another reason for the increased interest in steady state techniques is that, in some cases, steady state genetic programming has been shown to be more effective than generational (or generational replacement) genetic programming. Although the steady state techniques vary somewhat, this improved performance can be attributed to factors such as diversity of the population [14] and the immediate availability of superior individuals [5]. However, the benefits of steady state genetic programming appear to be a function of population size. In smaller populations, the advantages of the steady state method are outweighed by “the negative effects of genetic drift” [14, 288].

2.4.9 Automatic Function Definition

The idea of automatic function definition was recently put forward by Koza as a means to increase the efficiency of the genetic programming paradigm [19]. This process is designed to identify those subroutines that are of value to the outcome of the system and keep them as additional members of the gene pool.

In short, automatic function definition takes a hierarchical approach to problem solving. Koza identifies three main steps in the problem solving process of automatic function definition: the main problem is divided into subproblems; the solutions to these subproblems are then discovered; these solutions are then combined to form a solution to the problem ([18], [19]).

According to Fraser, “Rather than using a single root branch of the genetic program the system allows further branches to be added. Each branch has a separate function and terminal set with the root branch being used as a simple function call for the other branches ... Recombination is constrained to act within particular branches only as the differing function and terminal sets could cause conflict of the closure property. The genetic programs can be considered as having co-evolving branches” [7, 4].

Koza compared genetic programming with and without automatic function definition against a number of problems including the ‘San Mateo trail’ problem. The results of this research show some advantages of using automatic function definition. In particular, Koza observed a 50 per cent reduction in the computational effort (number of runs) when automatic function definition was used. In addition, the average structural complexity of the solution produced without automatic function definition was approximately 21 per cent greater [18].

2.5 The Cart-Pole Problem

The nature of the problem is simple. The base of a pole is hinged to a cart that travels on a horizontal plane, leaving the other end of the pole free to swing in the vertical plane along the longitudinal axis of the cart. In order to balance the pole, the controller is allowed to apply a predefined force to the cart at each time step. Known as the "bang-bang" control method, this force can only be applied from either side of the cart along the horizontal plane at a predefined and constant magnitude (figure 6).

Wieland states, “the prototypical control problem that has been used with neural networks is pole balancing (also known as the cart-pole, broom balancer, or inverted pendulum problem) ... This problem is of interest because it describes an inherently unstable system and is representative of a wide class of problems” [29, II-669]. For this reason, the cart-pole problem has become the “de facto benchmark” [8].

In 1963, Widrow and Smith used the cart-pole problem to show that a single McCulloch-Pitts type neuron can be trained to balance a pole [9]. In 1983 Barto, Sutton and Anderson, testing their Adaptive Heuristic Critic, described what have become the most commonly used parameters for the problem [3] (table 2).

Figure 6, below, illustrates the cart-pole problem. A description of the symbols used in this diagram is shown in tables 4 and 5.

Figure 6: The Cart-Pole Problem

According to Davis, “From the viewpoint of optimisation, an important question to ask is how hard is the problem? We consider a problem hard if it cannot be solved by standard linear search techniques” [5, 179]. According to Koza, “the problems of centring a cart and balancing a broom (inverted pendulum) by applying a ‘bang bang’ force are well known problems in control theory.” Further, “there is no known [optimal] solution to this problem” [16, 57-65]. However, the cart-pole problem can be solved when a force F is applied to the cart as [9]:

Equation 3: Linear Dynamic Equation to Solve the Cart-Pole Problem

    F = F_max · sgn(w_1 x + w_2 v + w_3 θ + w_4 ω)

The linear equation above can also represent a four-input McCulloch-Pitts neuron [25].
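
A sketch of this control law in code, assuming the bang-bang scheme described above: the sign of a weighted sum of the four state variables selects the direction of the fixed-magnitude force. The weights w[0]..w[3] are whatever the learning system arrives at, and F_MAG is an illustrative magnitude.

    F_MAG = 10.0    # constant force magnitude in newtons (illustrative)

    def bang_bang_force(x, v, theta, omega, w):
        # A four-input threshold (McCulloch-Pitts) unit: the sign of the
        # weighted sum of the state selects the direction of the push.
        activation = w[0] * x + w[1] * v + w[2] * theta + w[3] * omega
        return F_MAG if activation >= 0 else -F_MAG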

Geva and Sitte also state that “comparison of the results of past research on the cart-pole control [problem] is very difficult, if not outright impossible, because researchers did not adhere to a uniform specification of the learning task” [9, 42]. Such is the case in the application of genetic programming to this problem. Presently, the applications of genetic programming to the cart-pole problem have been designed primarily to prove that this paradigm can be applied to successfully balance the pole or to centre the cart ([16], [20], [21]).

Geva and Sitte recommend conforming to specific criteria when conducting the cart-pole experiment as a benchmarking tool [9].

In their recently published work, "The Cart-Pole experiment as a benchmark for trainable controllers", Geva and Sitte analysed the cart-pole problem to test its effectiveness in proving and evaluating the ability of unsupervised training methods. As a result of their research, Geva and Sitte recommend that the controller be started from a difficult position and be required to centre the cart [9]. The reasoning behind this conclusion is that this starting configuration and success criterion were shown to make the problem non-trivial and minimise the effect of randomness in the search for a solution.

The starting configuration recommended by Geva and Sitte was arrived at after testing 10,000 random weight controllers against the cart-pole problem (table 1). The successful random controllers represent random weights applied to a linear control law (equation 3). In the more trivial configurations, the results of this test show a high incidence of randomly generated solutions to the cart-pole problem. In such cases it is difficult to argue that the solution was derived by machine learning rather than by a random search. Although the results given by Geva and Sitte in table 1 do not show the controllers found for the starting configuration suggested (table 5), the results do show that the complexity of the problem is affected by the starting configuration [9].

Table 1: Random Controllers Applied to the Cart-Pole Problem

    Start State (x, v, θ, ω)    Random Controllers Found (of 10,000)
    0, 0, 0, 0                  906
    1, 0, 0, 0                  344
    0, 1, 0, 0                  241
    0, 0, 0.17, 0               352
    0, 0, 0, 1                  182
    1, 1, 0.17, 1               52

The genetic programming research conducted on the cart-pole problem to date has conformed with neither the configuration parameters nor the reporting recommendations of the benchmark version described here ([16], [20], [21]). Randall et al. took the additional step of identifying the random controller that represented performance at the 95th percentile mark, over 10,000 runs, to establish confidence in the significance of any performance difference in the evaluation of the various strategies in their comparison [25].

With a standard experiment for testing the ability of an adaptive system to conduct unsupervised learning, Koza's genetic programming paradigm can be tested against the cart-pole experiment. The results of the test conducted by Randall et al. showed that the genetic algorithm controlling a McCulloch-Pitts neuron had the greatest success in both balancing the pole and centring the cart [25]. The success of the genetic algorithm promotes confidence that Koza's genetic programming paradigm can perform the more difficult task of the cart-pole problem as specified by Geva and Sitte [9]. To date, the genetic programming paradigm appears to have been applied only to less demanding versions of the problem ([16], [17], [20], [21]).

Although the cart-pole problem may seem simple, it is a non-linear and therefore complex problem for an adaptive system. That this problem is often applied in robotics to test balancing strategy also suggests that it is a non-trivial problem. For adaptive systems, the problem is designed to demonstrate an adaptive system's ability to conduct unsupervised learning, particularly in exploring a noisy, non-linear problem space.

2.6 Advantages of Genetic Systems
 

Genetic algorithms are an important problem solving technique. The genetic programming paradigm is a recent designation of a new class of genetic algorithm. The algorithm uses a strategy of a directed search through a problem state space from a variety of points in that space [10]. For this reason, Davis has identified three main advantages of the genetic algorithm in optimisation:

“First, they generally find nearly global optima in complex spaces. This is important because the search spaces for our problems are highly multimodal, a property that leads hill-climbing algorithms to get stuck in local optima. Second, genetic algorithms do not require any form of smoothness [i.e., the problem state space need not be continuous] ... Third, considering their ability to find global optima, genetic algorithms are fast, especially when tuned to the domain on which they are operating” [5, 287]. Another advantage of genetic algorithms is their inherently parallel nature, i.e., the evaluation of individuals within a population can be conducted simultaneously, as in nature.

One of the major advantages of genetic programming over genetic algorithms is their applicability to symbolic regression problems ([16], [17]). According to Kinnear, “while GP doesn’t mimic nature as closely as do GAs, it does offer the opportunity to directly evolve programs of unusual complexity, without having to define the structure or size of the program in advance” [14, 287]. Thus, the genetic program is suited to problems requiring unsupervised learning, and symbolic regression.

2.7 Applications of Genetic Programming

The genetic programming paradigm has been successfully applied to applications such as pole balancing ([16], [20], [21]), cart centring ([16], [20]), the Boolean 11-multiplexer, neural network design [16], game playing [1], the artificial ant problem ([16], [18], [6]), the Fibonacci sequence, feature discovery and image discrimination [28], sorting ([14], [15]), and a detector for α-helices in protein sequences [12]. The problems to which genetic programming has been applied are, however, mostly research problems.

As yet, if genetic programming has been applied to the cart-pole problem as set out by Geva and Sitte [9], the results of that research have not been published, and work-in-progress on this problem has not been advertised. In particular, most of the work in this field has been confined to applying the standard genetic program to prove ‘satisfactory’ performance of the paradigm in a variety of problem domains. In fact, much of the research available in this area seems to be concerned with testing whether genetic programming can perform the same tasks that genetic algorithms have already been shown to perform.

The interest that motivated this research can be attributed to two main factors: that genetic programming is a more distant approximation of nature’s (Darwinian) problem solving strategy than genetic algorithms [14]; and that the field of genetic programming is relatively new and less proven than genetic algorithms.

With these factors in mind, while there has been some comparison of various selection techniques ([1], [7], [15], [28]) conducted in a number of problem domains, the suitability of selection techniques can be domain specific [5]. Until selection techniques have been compared across a variety of domains, no pattern is likely (or can be expected) to emerge that will result in a separate paradigm for identifying the most suitable of these operators for a given task.