2 The principle of the new evolutionary algorithm
2.1 The structure of the algorithm
Hybrid EAs are frequently used for solving combinatorial problems. These methods improve the quality of the descendent solutions, for example by applying a local search procedure, simulated annealing (SA), or tabu search (TS). Such systems are structured as an extension of an EA: for instance, a local search procedure is applied at every step of the EA cycle.
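As a rough illustration of this pattern, the sketch below shows one cycle of a generic hybrid EA in which local search is applied to every new descendent. The operator arguments (recombine, mutate, local_search) and the replacement rule are placeholders for illustration, not a specific published method.

    import random

    def hybrid_ea_step(population, fitness, recombine, mutate, local_search):
        """One cycle of a generic hybrid EA: select two parents, recombine,
        mutate, then improve the descendent with local search before
        reinserting it into the population."""
        parents = random.sample(population, 2)
        child = local_search(mutate(recombine(*parents)))
        # replace the weakest individual if the new descendent is better
        worst = min(range(len(population)), key=lambda i: fitness(population[i]))
        if fitness(child) > fitness(population[worst]):
            population[worst] = child
        return population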
Unlike earlier hybrid EAs based on a single stage, the new EA uses a two-stage algorithm structure in order to speed up convergence and to produce higher-quality results. The first stage is a quick "preparatory" stage designed to improve the quality of the initial population. The second stage is a hybrid EA with some special operators.
Let us discuss the two EAs (stages) in more detail:
1. The first stage generates some solutions at random and then tries to improve them by randomly generating descendents. A descendent may replace the most similar of the existing solutions.
2. The second stage is a hybrid EA. The algorithm uses two different recombination operations and concatenated, complex neighbourhood structures for the mutations. The recombination operation is either a uniform or a single-point recombination, or otherwise simple copy-making (see the sketch after this list).
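The following sketch illustrates the two stages under an assumed binary-string encoding; the names N_BITS and fitness, and the rule of replacing the most similar solution only when the descendent is at least as fit, are illustrative assumptions rather than the authors' implementation.

    import random

    N_BITS = 32
    fitness = lambda s: sum(s)          # placeholder objective, maximised

    def hamming(a, b):
        """Number of positions in which two binary solutions differ."""
        return sum(x != y for x, y in zip(a, b))

    def stage_one_step(population):
        """Stage 1: a randomly generated descendent may replace the most
        similar existing solution (here: only if it is at least as fit)."""
        descendent = [random.randint(0, 1) for _ in range(N_BITS)]
        i = min(range(len(population)),
                key=lambda j: hamming(population[j], descendent))
        if fitness(descendent) >= fitness(population[i]):
            population[i] = descendent

    def recombine(p1, p2):
        """Stage 2: uniform or single-point recombination, or simple copy-making."""
        variant = random.choice(("uniform", "single_point", "copy"))
        if variant == "uniform":
            return [random.choice(pair) for pair in zip(p1, p2)]
        if variant == "single_point":
            cut = random.randint(1, len(p1) - 1)
            return p1[:cut] + p2[cut:]
        return list(p1)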
When selecting the parents, priority is given to the best solution, i.e. the one with the highest objective/fitness function value: the algorithm selects the fittest solution with probability 0.5 and another solution with probability 0.5/t (where t is the size of the population).
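A small sketch of one possible reading of the stated probabilities (the fittest solution with probability 0.5, otherwise a uniform choice so that each solution receives probability 0.5/t):

    import random

    def select_parent(population, fitness):
        """Select one parent: the fittest solution with probability 0.5,
        otherwise a uniformly chosen solution (each with probability 0.5/t)."""
        ranked = sorted(population, key=fitness, reverse=True)
        if random.random() < 0.5:
            return ranked[0]
        return random.choice(ranked)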
For mutation we apply a varying number of bit-flips and a special bit-flip operation (bit-flip-flop). The neighbourhood structure is formed as: some bit-flip-flops + some bit-flips.
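The excerpt does not define the bit-flip-flop operator; in the sketch below it is interpreted, purely as a labelled placeholder, as a paired move that turns one 1-bit into 0 and one 0-bit into 1, while the bit-flip operator flips a varying number of randomly chosen bits.

    import random

    def bit_flip(solution, count):
        """Flip `count` randomly chosen bits."""
        s = list(solution)
        for i in random.sample(range(len(s)), count):
            s[i] ^= 1
        return s

    def bit_flip_flop(solution):
        """Placeholder interpretation: set one randomly chosen 1-bit to 0 and
        one randomly chosen 0-bit to 1 (the set-bit count stays unchanged)."""
        s = list(solution)
        ones = [i for i, b in enumerate(s) if b == 1]
        zeros = [i for i, b in enumerate(s) if b == 0]
        if ones and zeros:
            s[random.choice(ones)], s[random.choice(zeros)] = 0, 1
        return s

    def mutate(solution, max_flip_flops=2, max_flips=3):
        """Neighbourhood structure: some bit-flip-flops followed by some
        bit-flips (the counts vary; the ranges here are illustrative)."""
        for _ in range(random.randint(1, max_flip_flops)):
            solution = bit_flip_flop(solution)
        return bit_flip(solution, random.randint(1, max_flips))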
The quality of the solutions is improved with a local search procedure; we apply the randomized k-opt local search (Merz and Katayama 2001). Finally, in order to preserve the diversity of the population, we use a filter and a restart procedure. The filter keeps only the best of solutions that lie close to each other; the others are deleted.
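A sketch of such a filter, assuming that closeness between binary solutions is measured by Hamming distance with an illustrative threshold:

    def hamming(a, b):
        """Number of positions in which two binary solutions differ."""
        return sum(x != y for x, y in zip(a, b))

    def diversity_filter(population, fitness, min_distance=3):
        """Keep only the best of solutions that lie close to each other;
        the threshold min_distance is an assumed parameter."""
        kept = []
        for sol in sorted(population, key=fitness, reverse=True):
            if all(hamming(sol, other) >= min_distance for other in kept):
                kept.append(sol)
        return kept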
The restart procedure begins the second stage again if the fittest solution has not changed over the last generations. It replaces the weakest solutions (70% of the population) with new ones and applies the local search procedure to a part of the new individuals.
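A sketch of the restart step; the stagnation test is omitted here, and the fraction of new individuals that receive local search is an assumption:

    import random

    def restart(population, fitness, n_bits, local_search, ls_fraction=0.5):
        """Replace the weakest 70% of the population with new random
        individuals and apply local search to a part of them, before
        resuming the second stage."""
        population.sort(key=fitness, reverse=True)
        n_keep = max(1, round(0.3 * len(population)))   # keep the strongest 30%
        survivors = population[:n_keep]
        newcomers = [[random.randint(0, 1) for _ in range(n_bits)]
                     for _ in range(len(population) - n_keep)]
        n_ls = int(ls_fraction * len(newcomers))        # assumed fraction
        newcomers = [local_search(ind) for ind in newcomers[:n_ls]] + newcomers[n_ls:]
        return survivors + newcomers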
3 The new algorithm
3.1 The characteristics of the EAs
The main functions and characteristics of the EAs are the following:
Initial population. The same population and individuals are used in all stages. The first individuals of the population P are randomly generated from the search space S. These are the first "solutions".
Fitness function. The algorithm uses the objective function f(x) as the fitness function.
Selection operator. In the first stage, descendents are randomly selected from S, without the application of any further operators (recombination, mutation). In the second stage the algorithm selects two different parents from the population: the first of them is the fittest solution, chosen with probability 0.5.
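A small sketch tying these characteristics together, assuming S is the set of binary strings of length N_BITS and using a placeholder objective f(x); the names and sizes are illustrative assumptions.

    import random

    N_BITS = 32
    POP_SIZE = 20

    def f(x):
        """Placeholder objective function f(x); the real problem defines this."""
        return sum(x)

    def initial_population():
        """Randomly generated individuals from S (here assumed {0,1}^N_BITS);
        the same population is used by both stages."""
        return [[random.randint(0, 1) for _ in range(N_BITS)]
                for _ in range(POP_SIZE)]

    population = initial_population()
    fitness_values = [f(ind) for ind in population]   # f(x) used directly as fitness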