A Non-disruptive Crossover Operator for Multi-layer Perceptron Networks using Parametric Similarity
Summary
Functionally equivalent multi-layer perceptron networks (MLPs) can be written in many different forms, for example with their hidden neurons in any order. This makes it difficult to recombine such networks using a crossover operator.
This thesis aims to find a method for identifying similar neurons in different MLPs based on their parameters. This allows crossover operators to be defined that do not depend on the particular form in which a network is written.
Two new crossover operators are presented that use the similarity between the parameters of different neurons to perform non-disruptive crossover. Both operators were found to be less disruptive than uniform and one-point crossover.
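The core idea of matching neurons by parameter similarity can be sketched as follows. This is an illustrative assumption, not the thesis's actual operator: it pairs hidden neurons of two parent networks greedily by Euclidean distance between their incoming weight vectors, then applies uniform crossover over the matched pairs. The function names, distance measure, and greedy assignment are all hypothetical choices made here for the sketch.

```python
import numpy as np

def match_neurons(w_a, w_b):
    """Greedily pair the hidden neurons of two parent layers by similarity.

    w_a, w_b: weight matrices of shape (n_neurons, n_inputs), one row per
    neuron (its incoming weights). Returns a permutation `perm` so that
    neuron i of parent A is paired with neuron perm[i] of parent B.
    """
    # Pairwise Euclidean distances between every A-neuron and B-neuron.
    dists = np.linalg.norm(w_a[:, None, :] - w_b[None, :, :], axis=2)
    perm = np.full(len(w_a), -1)
    taken = set()
    # Assign the most confident (closest) matches first.
    for i in np.argsort(dists.min(axis=1)):
        j = next(j for j in np.argsort(dists[i]) if j not in taken)
        perm[i] = j
        taken.add(j)
    return perm

def similarity_crossover(w_a, w_b, rng):
    """Uniform crossover over matched neuron pairs: each child neuron is
    drawn from parent A or from its matched neuron in parent B."""
    perm = match_neurons(w_a, w_b)
    mask = rng.random(len(w_a)) < 0.5
    return np.where(mask[:, None], w_a, w_b[perm])

# Demo: parent B is a neuron-permuted, slightly perturbed copy of parent A,
# i.e. a differently written but nearly equivalent layer.
rng = np.random.default_rng(0)
w_a = rng.normal(size=(4, 3))
w_b = w_a[[2, 0, 3, 1]] + 0.01 * rng.normal(size=(4, 3))
perm = match_neurons(w_a, w_b)
child = similarity_crossover(w_a, w_b, rng)
```

Because crossover exchanges matched (nearly identical) neurons rather than neurons at arbitrary positions, the offspring stays close to both parents, which is what makes the operator comparatively non-disruptive.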