
Self-Adaptive Parameters in Genetic Algorithms

Eric Pellerin*a, Luc Pigeona, Sylvain Delisleb a: Defence R&D Canada (DRDC Valcartier), Information and Knowledge Management Section,

Val-Bélair (Québec), Canada, G3J 1X5; b: Département de mathématiques et d’informatique,

Université du Québec à Trois-Rivières, Québec, Canada, G9A 5H7; [email protected]; [email protected]; [email protected]

ABSTRACT

Genetic algorithms are powerful search algorithms that can be applied to a wide range of problems. Generally, parameter setting is accomplished prior to running a Genetic Algorithm (GA) and this setting remains unchanged during execution. The problem of interest to us here is the self-adaptive adjustment of a GA's parameters. In this research, we propose an approach in which the control of a genetic algorithm's parameters can be encoded within the chromosome of each individual. The parameters' values are entirely dependent on the evolution mechanism and on the problem context. Our preliminary results show that a GA is able to learn and evaluate the quality of self-set parameters according to their degree of contribution to the resolution of the problem. These results are indicative of a promising approach to the development of GAs with self-adaptive parameter settings that do not require the user to pre-adjust parameters at the outset.

Keywords: Genetic algorithms, self-adaptive parameters, learning system, parameter settings, adaptation, evolution mechanism.

1. INTRODUCTION

Based on Charles Darwin's theory of the evolution of species [1], genetic algorithms (GAs) are powerful search algorithms often used to solve optimization problems [2, 3, 4]. John Holland's initial work goes back to the 1960s, although his best-known contribution was made in 1975 with the publication of Adaptation In Natural And Artificial Systems [5]. However, it is Goldberg's seminal work that largely contributed to the development of genetic algorithms [6] as we know them nowadays. The GA's basic mechanism depends on the choice of several key parameters such as crossover operators, mutation operators, crossover probability, mutation probability, selection mechanism, and population size. All these parameters have a great impact on the GA's performance [7, 8, 9, 10]. However, the parameter settings must be adapted to each type of problem, which constitutes a significant burden on the user's part. To facilitate the task of finding the most appropriate parameter settings, some studies have shown that the parameters of a genetic algorithm can be determined and optimized by another genetic algorithm [3, 11, 12, 13, 14].

In practice, GA parameters are often set to standard values frequently used to test algorithms' optimization performance. Indeed, the work of [15] led to the development of standard parameters for GAs. One can thus define the parameters quickly and, thereafter, adjust them more precisely according to the particular problem at hand. These standard parameters are general and not adapted to any specific problem. What are the best parameter settings? What are the best genetic operators and what are their associated rates? A high rate of crossover and a low rate of mutation might be very good for the exploration of new solutions in the first generations produced by the algorithm. On the other hand, the same rates are unfavourable when the algorithm is close to the optimal solution. One potential solution to determine the best set of parameters resides in the use of learning. This is the problem we consider here: the learning-based self-adjustment of a GA's parameters.

In recent years, Pigeon proposed a GA-based learning approach for the optimization of a multi-agent information fusion system [16]. This work is a precursor to the present research and to the design of an artificial intelligence component intended for urban operations command and control [17, 18]. For this purpose, robustness is required, involving an enhancement of the GA's performance. We suggest that this enhancement should be based on self-adaptive parameters in the GA.


In this study, this hypothesis was demonstrated by introducing the concept of self-adaptive parameters into a GA, with dependence on the problem context. A comparison with simple genetic algorithms established a clear advantage in using the adaptive-parameters version. In this paper, we present a self-adaptive approach to GA parameter settings that takes into account the particulars of the problem being solved. Subsequent sections are organized as follows. Section 2 presents parameter adaptation in genetic algorithms and some adaptive methods. Section 3 describes the proposed self-adaptive parameter approach. Section 4 presents experimental results and their analysis. Finally, section 5 presents some conclusions.

2. BACKGROUND

2.1. Self-adaptive parameters in genetic algorithms

In this section, we briefly review the classification of the different self-adaptive methods proposed in the literature. Although some works offer a different standpoint, the most frequently used classification is based on two major approaches: empirical and adaptive [10, 19, 20]. The empirical approaches are based on experimentation and observation only [10, 21]. They measure the performance of GAs by trial and error, while varying the algorithm's parameters. It is thus necessary to find good values for the parameters before executing the algorithm, and these values remain constant during execution. On the other hand, the approaches by adaptation use an initial parameter setting that is modified during execution of the algorithm. This initial "parameterization" does not require user participation. Within the adaptive approach, there are two sub-categories: limited adaptive parameters and self-adaptive parameters. The limited adaptive approaches analyze the isolated effects of one or two parameters without taking the others into account [3, 22, 23, 24]. Parameter self-adaptation refers to techniques that allow evolution or adaptation of the various GA parameters throughout its execution. We next review some studies in more detail.

Grefenstette, 1986 [13]

This work shows how to select the parameters’ values by using meta-learning based on genetic algorithms. The meta-learning is characterized by two levels of genetic algorithms. The meta-level evolves a population of sets of parameters, while the basic level operates on the best set of parameters obtained by the meta-level. The best set of parameters found with meta-learning is slightly better in performance than the sets of standard parameters found by De Jong [15].

Pham, 1995 [25]

Pham used a competitive strategy to find the best choice of operators and parameter values. The competitive evolution method is applied between several subpopulations that use various sets of parameters. Several independent populations evolve in the same environment by using their own sets of parameters. These populations are in competition for a single processor. Thus, the populations that have better parameter settings than the others receive more processor time to evolve further.

Tongchim & Chulalongkorn, 2002 [11]

The authors proposed an adaptive algorithm for dynamically adjusting the parameters of a GA during execution. Two levels of GA are used. Large subpopulations evolve in parallel by using different parameter sets in the lower-level GA. The higher-level GA is applied to the evolution of the parameter sets. Evolution of a parameter set occurs if the fitness of another parameter setting is better than that of the current best parameter set. Their results show that reliability and the lowest number of generations are crucial in finding the optimal solution.

These last three studies used meta-learning to adjust a GA's parameters. The same concept of meta-learning is at the center of the approach we propose here. However, we suggest using an autonomous individual to evolve the parameters. This differs from the work of Tongchim & Chongstitvatana [11] and Pham [25], since they concentrate on the evolution of parameter sets. In our approach, each parameter within an individual is independent and evolves in interaction with the problem context.


This strong dependence on the problem improves the meta-learning of the GA. The next section describes our approach in more detail.

3. PROPOSED APPROACH TO SELF-ADAPTIVE PARAMETERS

3.1. Proposed approach

We propose a self-adaptive approach as a solution to the problem of learning the adjustment of a GA's parameters (see Figure 1 below). We suggest that the control of the parameters of a genetic algorithm can be encoded within the chromosome of each individual. The individual that is best adapted to the environment will transmit its genetic parameters to the next generation. The values of the parameters are entirely dependent on the evolution mechanism. Self-adaptation of the parameters is applied to the design of an autonomous individual. This individual is able to learn and to make decisions according to the constraints of the environment and the fitness function.
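As a rough illustration of this encoding idea (a sketch only; the names below are hypothetical, and the concrete parameter genes used in this work are introduced in Section 3.2), a chromosome can simply carry its control parameters next to the solution genes:

```python
from dataclasses import dataclass, field
from typing import List
import random

@dataclass
class SelfAdaptiveIndividual:
    """A candidate solution whose chromosome also encodes its own GA parameters."""
    genes: List[int]                                   # solution part of the chromosome
    crossover_operator: int = 0                        # which crossover operator to apply
    mutation_operator: int = 0                         # which mutation operator to apply
    crossover_probability: float = field(default_factory=random.random)
    mutation_probability: float = field(default_factory=random.random)

# Example: a randomly initialized individual for a 10-gene problem.
ind = SelfAdaptiveIndividual(genes=random.sample(range(10), 10))
```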

(Figure 1 is a block diagram whose components are: Population, GA / Fitness Evaluation, Reinforcement GA, a table of parameter sets (set 1 to set 5), Performance evaluator, and Performance effector.)

Figure 1. Self-Adaptive Parameters of a Genetic Algorithm

The individual is based on two learning levels. On the first learning level, a genetic algorithm is applied to the learning of new sets of parameters. This results in an increase of the individual's adaptation to the problem to be solved. In the second learning level, reinforcement learning is used to learn the best parameter settings. This learning occurs in interaction with the problem context. Finally, the individual evaluates the quality of learned parameter settings according to their degree of contribution to the solution of the problem at hand. An algorithm of the self-adaptive parameter approach is defined in Figure 2 below; it is a modified version of the basic genetic algorithm structure [26].


1. [Start] Generate a random population.
2. [Fitness] Evaluate the fitness of each individual in the population.
3. [New population] Create a new population by repeating the following steps until the end condition is met:
   A. [Reinforcement] Reinforcement learning is used to learn the best parameter settings in each individual within the context of the problem.
   B. [Selection] Select two parents from the population according to their fitness.
   C. [Best crossover] Search the space of genetic crossover to find the best parameter for the individual according to the fitness and the problem.
   D. [Crossover] Cross the two individuals, using the parameter of the individual with the best fitness, to form new offspring.
   E. [Best mutation] Search the space of genetic mutation to find the best parameter for the individual according to the fitness and the problem.
   F. [Mutation] Mutate the new offspring at each position in the chromosome using the best parameter.
   G. [Accepting] Place the new offspring in the new population.
   H. [Parameter evolution] Create a new set of parameters for each individual in the population:
      i. [Selection] Select two parameter settings from the population according to their reinforcement position.
      ii. [Crossover] Cross the two parameter sets using the parameter of the individual.
      iii. [Mutation] Mutate the new offspring using the parameter of the individual.
      iv. [Accepting] Place the new offspring in the new population of parameter sets.
4. [Replace] Use the newly generated population for a further run of the algorithm.
5. [Test] If the end condition is satisfied, stop and return the best solution in the current population.
6. [Loop] Go to step 2.

Figure 2. Structure of the Self-Adaptive Parameters Algorithm

The proposed approach is motivated in part by the principle of an autonomous individual. Several authors have suggested that the control of GAs' parameters could be encoded in the chromosome of each individual of the population [27, 28, 29, 30]. This suggests the inclusion of a mechanism that inserts the parameters into the individual's chromosome; the value changes of the parameters are thus entirely dependent on the evolution mechanism. Another part of the approach we propose here is the meta-learning, or self-adaptation, of the parameters. This differs from the work done by Grefenstette [13]. We focus on the learning of the GA, whereas Grefenstette's work tries to find the best parameter settings. There are no best parameter settings for problems in general. We suggest that the introduction of a learning-system concept into the GA will allow the algorithm to learn the best parameter settings adapted to a specific problem.

3.2. Structure of individuals

In nature, the structure of eukaryotic genes is characterized by an alternating sequence of introns and exons. Exons constitute the sections of a gene that encode the protein sequence. Exons are separated by non-coding regions named introns. In recent years, the concepts of introns and gene expression have been incorporated into several studies on genetic algorithms [31, 32, 33]. We also use these concepts here: the structure of the individual is separated into two parts, the intron and the exon. The exon encodes the solution part; the intron is the portion of the chromosome that contains the parameter settings (see Figure 3). The parameters involved in the intron are the crossover operator (Oc), the mutation operator (Om), the crossover probability (Pc) and the mutation probability (Pm).

X = ( X1, X2, X3, ..., Oc, Pc, Om, Pm ), where ( X1, X2, X3, ... ) is the EXON and ( Oc, Pc, Om, Pm ) is the INTRON.

Figure 3. The Structure of the Individual
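To make the loop of Figure 2 more concrete, the sketch below evolves exon/intron individuals of the kind shown in Figure 3. This is a loose illustration only: the reinforcement step (3.A) and the explicit searches over the operator spaces (steps C and E) are collapsed into inheriting the fitter parent's intron, and every helper name is hypothetical.

```python
import random
from typing import Callable, Dict, List, Tuple

Exon = List[int]                               # solution genes
Intron = Tuple[int, float, int, float]         # (Oc, Pc, Om, Pm), as in Figure 3
Individual = Tuple[Exon, Intron]

def clamp(x: float) -> float:
    return min(1.0, max(0.0, x))

def evolve(population: List[Individual],
           fitness: Callable[[Exon], float],
           crossover_ops: Dict[int, Callable[[Exon, Exon], Exon]],
           mutation_ops: Dict[int, Callable[[Exon], Exon]],
           generations: int = 100) -> Individual:
    for _ in range(generations):
        ranked = sorted(population, key=lambda ind: fitness(ind[0]), reverse=True)
        new_population: List[Individual] = []
        while len(new_population) < len(population):
            # [Selection] two parents chosen according to fitness (here: from the top half)
            p1, p2 = random.sample(ranked[: max(2, len(ranked) // 2)], 2)
            best, other = (p1, p2) if fitness(p1[0]) >= fitness(p2[0]) else (p2, p1)
            oc, pc, om, pm = best[1]
            # [Crossover] use the intron of the fitter parent to recombine the exons
            child_exon = crossover_ops[oc](best[0], other[0]) if random.random() < pc else list(best[0])
            # [Mutation] apply the inherited mutation operator with its inherited rate
            if random.random() < pm:
                child_exon = mutation_ops[om](child_exon)
            # [Parameter evolution] the intron itself is recombined and slightly perturbed
            child_intron = (random.choice((p1[1][0], p2[1][0])),
                            clamp(pc + random.uniform(-0.05, 0.05)),
                            random.choice((p1[1][2], p2[1][2])),
                            clamp(pm + random.uniform(-0.05, 0.05)))
            new_population.append((child_exon, child_intron))
        population = new_population
    return max(population, key=lambda ind: fitness(ind[0]))
```

In the full algorithm, steps C and E evaluate candidate operators against the fitness and the problem before crossover and mutation are applied, and step H maintains the evolving population of parameter sets ranked by reinforcement.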


3.3. Adaptive crossover

In the lower level GA (see Figure 1), the crossover evolution mechanism is performed in two steps:
1. Search the space of genetic crossover to find the best parameter for an individual, relative to the fitness and the problem.
2. Cross two individuals using the parameter of the individual with the best fitness.

The only set of parameters that survives is the one that gives good individuals. A good parameter setting of crossover prevents the algorithm from converging too fast. If the convergence of the algorithm is too quick, all individuals tend to have the same genetic code, a situation that must be avoided in general.

3.4. Adaptive mutation

The mutation operation in the lower level GA is performed as follows:
1. Search the space of genetic mutation to find the best parameter for an individual, relative to the fitness and the problem.
2. Mutate the new offspring at each position in the chromosome using the best parameter.

The parameters that survive a long time result from many successful successive mutations. Only the intron with the best parameters can survive. A good parameter setting of mutation increases the genetic diversity of the population.

3.5. Adaptive crossover and mutation probabilities

The adaptive crossover and mutation probabilities depend on the genetic diversity of the population. In a study on improvements to genetic algorithms [9], a procedure for the dynamic adaptation of the probabilities is based on the notion of a genetic diversity measure (gdm). The gdm is the ratio between the mean and maximum values of the fitness for each generation. The value of the gdm is in the range [0, 1]. The more the gdm tends toward 1, the more the individuals in the population all tend to have the same genetic code and, consequently, the GA tends to converge too rapidly. In this case, the learning rate prevents a too quick convergence by reducing the crossover probability and increasing the rate of mutation.

3.6. Experimental model

To test the learning capabilities of our approach, we applied it to the traveling salesman problem (TSP). It is the best-known classical optimization problem, in which a salesman must travel through many cities using the shortest route among successive choices of different roads. Several studies have successfully applied GAs to the TSP (see, for instance, [3, 34]).

The following genetic parameters were employed in the TSP experiments, as shown in Table 1 below. In this experiment, we chose five crossover operators and five mutation operators. The uniform crossover is an operator that decides, based on probabilities, which parent will contribute each gene value to the offspring's chromosome. Generally, the probability is 0.5, so half of the genes in the offspring will come from one parent and the other half will come from the other parent. The probability of the uniform crossover may be different from 0.5 [11]. The uniform mutation is a basic operator, where each gene has a probability of being mutated. Some mutation operators have been modified according to a notion of improvement. In this case, the mutation occurs only if it improves the individual. This notion of improvement is based on the difference between the fitness of an individual before mutation and the fitness of the same individual after mutation.
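As an illustration of the "only improving" mutation operators listed in Table 1, the sketch below (assuming a permutation-encoded tour and a precomputed distance matrix; the paper does not give its exact implementation) keeps a mutated tour only when it shortens the route:

```python
import random

def tour_length(tour, dist):
    """Total length of a closed TSP tour; 'dist' is a city-by-city distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swap_mutation(tour):
    """One mutation at a random position: swap two randomly chosen cities."""
    a, b = random.sample(range(len(tour)), 2)
    child = list(tour)
    child[a], child[b] = child[b], child[a]
    return child

def improving_mutation(tour, dist):
    """'Only improving' variant: keep the mutated tour only if its fitness
    (here, the tour length to be minimised) is better than before."""
    candidate = swap_mutation(tour)
    return candidate if tour_length(candidate, dist) < tour_length(tour, dist) else list(tour)
```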

Crossover operators (Oc):
  1. One point crossover
  2. Two points crossover
  3. Uniform crossover with a probability of 0.5
  4. Uniform crossover with a probability of 0.1
  5. Uniform crossover with a probability of 0.2
Mutation operators (Om):
  1. One mutation at a random position
  2. Two mutations at a random position
  3. Two mutations at a random position, only improving
  4. Uniform mutation
  5. Uniform mutation, only improving
Crossover probability (Pc): random number in the range [0.0, 1.0]
Mutation probability (Pm): random number in the range [0.0, 1.0]

Table 1. Genetic Parameters for GA with Self-Adaptive Parameters

The crossover and mutation probabilities are set randomly in the algorithm. The values of the random numbers are in the range [0.0, 1.0]. The adaptation of the crossover and mutation probabilities depends on the gdm; the values of the parameters in the computation of the gdm are the same as those of Vasconcelos [9] (a sketch of this gdm-based adaptation is given after Table 2). In the traveling salesman problem, the population size is set to 50 individuals and the number of cities is variable. To test the performance of our approach, the genetic operators applied to the simple GA are described in Table 2 below.

Crossover operator: Two points crossover
Mutation operator: One mutation at a random position
Crossover probability: 0.75
Mutation probability: 0.01

Table 2. Genetic Parameters for simple GA
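The gdm-based adaptation mentioned above can be sketched as follows (the ratio itself is the one defined in Section 3.5; the thresholds and step size are illustrative assumptions, not the values used by Vasconcelos et al. [9]):

```python
from statistics import mean

def genetic_diversity_measure(fitnesses):
    """gdm = mean fitness / max fitness of the current generation (a value in [0, 1])."""
    return mean(fitnesses) / max(fitnesses)

def adapt_probabilities(pc, pm, fitnesses, step=0.05, high=0.9, low=0.6):
    """Qualitative rule from Section 3.5: as the gdm tends toward 1 (the population is
    becoming uniform), reduce the crossover probability and increase the mutation
    probability; when diversity is ample, drift back the other way. The thresholds
    'high' and 'low' and the step size are illustrative assumptions."""
    gdm = genetic_diversity_measure(fitnesses)
    if gdm > high:        # population converging too quickly
        pc, pm = pc - step, pm + step
    elif gdm < low:       # plenty of diversity: favour recombination again
        pc, pm = pc + step, pm - step
    # keep both rates inside [0.0, 1.0], the range used in Table 1
    return min(1.0, max(0.0, pc)), min(1.0, max(0.0, pm))
```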

4. RESULTS

Preliminary results were obtained from experimentation with the TSP and confirm that the GA, modified as explained above, is able to learn the best set of parameters for this specific problem. The choice between various genetic operators increases the performance of the GA when compared to the simple genetic algorithm. Below, Table 3 shows the comparison over 10 runs between a simple genetic algorithm and a GA with self-adaptive parameters. The results clearly indicate that the GA with self-adaptation outperforms the other when trying to find the best solution. Similar results are reported in Mernik et al. [3] on combinations of operators that may outperform a single operator. They suggest that different crossover operators preserve different properties, which could be very useful for searching in different directions in the problem space.

Simple GA:
  Distance: 32, 32, 32, 33, 29, 31, 31, 33, 32, 35
  Nbgen:    367, 274, 273, 225, 401, 229, 224, 236, 202, 211
GA with self-adaptive parameters:
  Distance: 29, 30, 29, 29, 29, 29, 29, 29, 30, 29
  Nbgen:    213, 362, 299, 226, 303, 243, 226, 313, 209, 208

Table 3. Comparison Between Simple Genetic Algorithm and GA with Self-Adaptive Parameters (note: 'Distance' is the solution to the traveling salesman problem (TSP) and 'Nbgen' indicates the number of generations required to obtain this solution.)



The next table, Table 4, shows the results obtained from the GA with self-adaptive parameters after three executions on the same problem. The results give different sets of parameters for each case, but all of them offer the very same solution. This shows that there is not necessarily only one parameter setting that solves a given problem, at least not in the case of the TSP. In most cases, we can see that the genetic algorithm finds one set of parameters with a low variation in the population of parameters. However, in some cases (Case 3 in Table 4), we can observe some fluctuation in the evolution of the parameter settings and, at the end of the run, the GA does not converge to one set of parameters. Further results on the evolution of genetic operators during execution are presented in the next subsections.

Case 1: Adaptive crossover: One point crossover, rate = 0.99. Adaptive mutation: Uniform mutation (only improving), rate = 0.07.
Case 2: Adaptive crossover: Uniform crossover with a probability of 0.5, rate = 0.95. Adaptive mutation: One mutation at a random position, rate = 0.60.
Case 3: Adaptive crossover: Two points crossover / Uniform crossover with a probability of 0.5, rate in [0.60, 0.80]. Adaptive mutation: Uniform mutation, rate in [0.40, 0.50].

Table 4. Three Different Executions of the GA with Self-Adaptive Parameters on the Same Problem

4.1. Adaptive crossover and mutation operators

Figure 4 shows the evolution of the crossover operator during the successive generations. Each point on the graph indicates one or more operators present in the parameter population at the corresponding generation. The selection of only one operator by the GA appears after 20 generations. This convergence to one operator suggests that the GA, in interaction with the problem-solving component, learns that this operator is beneficial. At any time during the execution of the algorithm, the choice of other operators remains possible, and the evaluation of the operators takes place at each generation.



(Figure 4 plots, for generations 0 to 20, the crossover operators present in the parameter population: 1. One point crossover, 2. Two points crossover, 3. Uniform crossover with a probability of 0.5, 4. Uniform crossover with a probability of 0.1, 5. Uniform crossover with a probability of 0.2.)

Figure 4. Evolution of the Crossover Operator

In Figure 5 below, we can see the evolution of the different mutation operators at each generation. The results are similar to those of Figure 4, and this evolution converges to the choice of a mutation operator that has a more positive impact on the resolution of the problem.

(Figure 5 plots, for generations 0 to 20, the mutation operators present in the parameter population: 1. One mutation at a random position, 2. Two mutations at a random position, 3. Two mutations at a random position, only improving, 4. Uniform mutation, 5. Uniform mutation, only improving.)

Figure 5. Evolution of the Mutation Operator


4.2. Adaptive crossover and mutation probabilities

The evolution of the two operator probabilities is illustrated in Figure 6 below. Each point of this graph was obtained by computing the mean of the probabilities at each generation. Clearly, we can see that parameter stabilization appears after only 10 generations. The behaviour of the curves indicates that the evolution of the crossover rate (the upper curve in Figure 6) and of the mutation rate (the lower curve in Figure 6) are dependent on each other. This behaviour was expected, since these operators are defined to maintain the genetic diversity of the population. When the value of one probability changes, the value of the other must also change to obtain a good gdm ratio. Thus, the learning of the crossover and mutation probabilities is dependent on the genetic diversity involved in the problem considered.

(Figure 6 plots the mean crossover and mutation probabilities, in [0, 1], for generations 1 to 20.)

Figure 6. Evolution of the Crossover and Mutation Probabilities

In this experimentation, the problem remains stable during execution and the GA finds the best set of parameters for this situation. If the problem were to change, we suppose that the GA would be able to adapt itself and change the parameter settings accordingly. Indeed, our results indicate that there is no specific parameter selected more often than the others. The choice of a set of parameters is performed by the combination of parameters with other parameters and in interaction with the problem and the evolution mechanism.

5. CONCLUSION

This paper shows that a GA can learn by itself the adjustment of its parameters. The proposed self-adaptive parameters approach is based on individuals and on interaction with the problem context. Our preliminary results show that the GA is able to learn and evaluate the quality of learned parameter settings according to their degree of contribution to the solution of the problem. The results also indicate that the self-made choice between various genetic operators increases the performance of the GA. This choice of parameters is performed by the combination of parameters with other parameters and by interaction with the problem and the evolution mechanism. The approach we put forward could be very valuable for constructing self-adaptive parameter-setting GAs that do not require the user to adjust parameters a priori. Genetic algorithms could thus embed the capacity to adapt to problems that vary over the course of time.


Our experimentation was made only with the traveling salesman problem and on a relatively small scale. In this sense, we consider our results preliminary, and further work is required to reach more definitive conclusions. However, we think these preliminary results are very encouraging.

REFERENCES

1. Darwin, C., L'origine des espèces, Petite Collection Maspero, Paris, 1980.
2. Zhu, K.Q., "A Diversity-Controlling Adaptive Genetic Algorithm For The Vehicle Routing Problem With Time Windows", Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence, 176-183, 2003.
3. Ahn, C.W. and Ramakrishna, R.S., "A Genetic Algorithm For Shortest Path Routing Problems And Sizing Of Populations", IEEE Transactions on Evolutionary Computation, 6(6):566-579, 2002.
4. Mernik, M., Crepinsek, M., Zumer, V., "A Metaevolutionary Approach In Searching Of The Best Combination Of Crossover Operators For TSP", in Hamza, M.H. (ed.), Proceedings of the IASTED International Conference on Neural Networks (NN'2000), 32-36, 2000.
5. Holland, J.H., Adaptation In Natural And Artificial Systems: An Introductory Analysis With Applications To Biology, Control, And Artificial Intelligence, MIT Press, Cambridge, Mass., 1992.
6. Goldberg, D.E., Genetic Algorithms In Search, Optimization, And Machine Learning, Addison-Wesley, 1989.
7. Gao, Y., "Population Size and Sampling Complexity in Genetic Algorithms", Proceedings of the Bird of a Feather Workshops (GECCO 2003), Learning, Adaptation, and Approximation in Evolutionary Computation, 178-181, 2003.
8. Gomez, J. and Dasgupta, D., "Using Competitive Operators And A Local Selection Scheme In Genetic Search", Late-Breaking Papers, GECCO 2002, 2002.
9. Vasconcelos, J.A., Ramirez, J.A., Takahashi, R.H.C., Saldanha, R.R., "Improvements in Genetic Algorithms", IEEE Trans. on AP, 37(1):3414-3417, 2001.
10. Eiben, A.E., Hinterding, R., Michalewicz, Z., "Parameter Control In Evolutionary Algorithms", IEEE Transactions on Evolutionary Computation, 3(2):124-141, 1999.
11. Tongchim, S., Chongstitvatana, P., "Parallel Genetic Algorithm With Parameter Adaptation", Information Processing Letters, 82(1):47-54, 2002.
12. Freisleben, B. and Hartfelder, M., "Optimisation Of Genetic Algorithms By Genetic Algorithms", in Albrecht, R., Reeves, C., and Steele, N. (eds), Artificial Neural Networks and Genetic Algorithms, 392-399, Springer-Verlag, 1993.
13. Grefenstette, J.J., "Optimization Of Control Parameters For Genetic Algorithms", IEEE Transactions on Systems, Man, and Cybernetics, SMC-16(1):122-128, 1986.
14. Mercer, R.E. and Sampson, J.R., "Adaptive Search Using A Reproductive Meta-Plan", Kybernetes, 7:215-228, 1978.
15. De Jong, K.A., An Analysis Of Behavior Of A Class Of Genetic Adaptive Systems, PhD thesis, University of Michigan, Ann Arbor, 1975.
16. Pigeon, L., Inglada, J., Solaiman, B., "Genetic Algorithms For Multi-Agent Fusion System Learning", Proceedings of SPIE, Sensor Fusion: Architectures, Algorithms, and Applications V, International Symposium on Aerospace/Defense Sensing, Simulation, and Controls, Orlando, 4385:87-95, 2001.
17. Pigeon, L., "An Advanced C2 Concept For Urban Operations", Proceedings of the 7th International Command and Control Research and Technology Symposium, Quebec City, September 2002.
18. Pigeon, L., "A Conceptual Approach For Military Data Fusion", Proceedings of the 7th International Command and Control Research and Technology Symposium, Quebec City, September 2002.
19. Lobo, F.G., The Parameter-Less Genetic Algorithm: Rational And Automated Parameter Selection For Simplified Genetic Algorithm Operation, PhD thesis, University of Lisbon, Portugal, 2000.
20. Smith, J.E. and Fogarty, T.C., "Operator and Parameter Adaptation in Genetic Algorithms", Soft Computing, 1(2):81-87, 1997.
21. Schaffer, J.D., Caruana, R.A., Eshelman, L.J., Das, R., "A Study Of Control Parameters Affecting Online Performance Of Genetic Algorithms For Function Optimization", Proceedings of the Third International Conference on Genetic Algorithms, 51-60, 1989.
22. Lis, J., "Parallel Genetic Algorithm With The Dynamic Control Parameter", Proceedings of the IEEE International Conference on Evolutionary Computation, 324-329, 1996.
23. Bäck, T., "Self-Adaptation In Genetic Algorithms", in Varela, F.J. and Bourgine, P. (eds), Toward a Practice of Autonomous Systems: Proceedings of the First European Conference on Artificial Life, 263-271, MIT Press, Cambridge, MA, 1992.
24. Mühlenbein, H., "How Genetic Algorithms Really Work: 1. Mutation And Hill Climbing", in Männer, R. and Manderick, B. (eds), Proceedings of the Second Conference on Parallel Problem Solving from Nature, 2:15-25, Elsevier Science, 1992.
25. Pham, Q.T., "Competitive Evolution: A Natural Approach To Operator Selection", in Yao, X. (ed.), Progress in Evolutionary Computation, Lecture Notes in Artificial Intelligence 956, 49-60, Springer-Verlag, Heidelberg, 1995.
26. Obitko, M. and Slavik, P., "Visualization Of Genetic Algorithms In A Learning Environment", Spring Conference on Computer Graphics (SCCG'99), Comenius University, Bratislava, 101-106, 1999.
27. Gómez, J., Dasgupta, D., González, F.A., "Using Adaptive Operators In Genetic Search", GECCO 2003, 1580-1581, 2003.
28. Hinterding, R., "Self-Adaptation Using Multi-Chromosomes", Proceedings of the 4th IEEE International Conference on Evolutionary Computation, 1997.
29. Spears, W.M., "Adapting Crossover In Evolutionary Algorithms", in McDonnell, J.R., Reynolds, R.G., and Fogel, D.B. (eds), Proceedings of the Fourth Annual Conference on Evolutionary Programming, 367-384, MIT Press, 1995.
30. Bagley, J.D., The Behavior Of Adaptive Systems Which Employ Genetic And Correlation Algorithms, Doctoral dissertation, University of Michigan (University Microfilms No. 68-7556), 1967.
31. Chen, Y. and Goldberg, D.E., "Introducing Start Expression Genes To The Linkage Learning Genetic Algorithm", University of Illinois at Urbana-Champaign, 2002.
32. O'Neill, M. and Ryan, C., "Incorporating Gene Expression Models Into Evolutionary Algorithms", Proceedings of the Workshops of GECCO 2000, 2nd Genetic and Evolutionary Computation Conference, 2000.
33. Lobo, F.G., Deb, K., Goldberg, D.E., Harik, G.R., "Compressed Introns In A Linkage Learning Genetic Algorithm", University of Illinois at Urbana-Champaign, 1998.
34. Freisleben, B. and Merz, P., "A Genetic Local Search Algorithm For Solving Symmetric And Asymmetric Traveling Salesman Problems", Proceedings of the 1996 IEEE International Conference on Evolutionary Computation, IEEE Press, 616-621, 1996.