Evolutionary acquisition of neural topologies

    Evolutionary acquisition of neural topologies (EANT/EANT2) is an evolutionary reinforcement learning method that evolves both the topology and the weights of artificial neural networks. It is closely related to the works of Angeline et al. and of Stanley and Miikkulainen. Like the work of Angeline et al., the method uses a type of parametric mutation taken from evolution strategies and evolutionary programming (in EANT2, the covariance matrix adaptation evolution strategy, CMA-ES), in which adaptive step sizes are used for optimizing the weights of the neural networks. Like NEAT, the method of Stanley and Miikkulainen, it starts with minimal structures that gain complexity along the evolution path.

    Contribution of EANT to neuroevolution

    Despite sharing these two properties, the method has the following important features which distinguish it from previous works in neuroevolution.

    It introduces a genetic encoding called common genetic encoding (CGE) that handles both direct and indirect encoding of neural networks within the same theoretical framework. The encoding has important properties that make it suitable for evolving neural networks:

    1. It is complete in that it is able to represent all types of valid phenotype networks.
    2. It is closed, i.e. every valid genotype represents a valid phenotype. (Similarly, the encoding is closed under genetic operators such as structural mutation and crossover.)

    These properties have been formally proven.
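    As an illustration of the idea behind CGE, the sketch below encodes a network as a flat list of neuron, input, and recurrent-jumper genes and evaluates it with a stack, in the spirit of a prefix expression. The gene fields, the tanh activation, and the omission of forward jumpers are simplifying assumptions made for this sketch; it is not the published CGE specification.

    # Hypothetical, simplified CGE-style linear genome (not the published spec):
    # genes are stored in prefix order and evaluated right-to-left with a stack.
    import math
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class Gene:
        kind: str       # "neuron", "input", or "jumper" (recurrent link)
        weight: float   # connection weight carried by this gene
        arity: int = 0  # number of operands consumed (neuron genes only)
        ref: int = 0    # input index or referenced neuron id
        nid: int = 0    # id of this neuron (neuron genes only)

    def evaluate(genome: List[Gene], x: List[float],
                 prev: Dict[int, float]) -> Tuple[List[float], Dict[int, float]]:
        """Evaluate the genome for one time step; `prev` holds the previous step's neuron outputs."""
        stack: List[float] = []
        outputs: Dict[int, float] = {}
        for g in reversed(genome):
            if g.kind == "input":
                stack.append(g.weight * x[g.ref])
            elif g.kind == "jumper":                 # recurrent link: previous output
                stack.append(g.weight * prev.get(g.ref, 0.0))
            else:                                    # neuron: consume `arity` operands
                s = sum(stack.pop() for _ in range(g.arity))
                outputs[g.nid] = math.tanh(s)
                stack.append(g.weight * outputs[g.nid])
        return stack[::-1], outputs                  # leftover values are the network outputs

    # One output neuron reading two weighted inputs: y = tanh(0.5*x0 - 0.3*x1)
    genome = [Gene("neuron", 1.0, arity=2, nid=0),
              Gene("input", 0.5, ref=0),
              Gene("input", -0.3, ref=1)]
    y, state = evaluate(genome, [1.0, 2.0], prev={})

    Because the genome is a flat list, adding a new neuron or connection amounts to splicing new genes into it, leaving the rest of the genome valid.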

    For evolving the structure and weights of neural networks, an evolutionary process is used in which the exploration of structures takes place on a larger timescale (structural exploration) and the exploitation of existing structures on a smaller timescale (structural exploitation). In the structural exploration phase, new neural structures are developed by gradually adding new neurons and connections to an initially minimal network that serves as the starting point. In the structural exploitation phase, the weights of the currently available structures are optimized using an evolution strategy.
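    A minimal sketch of this two-timescale loop is given below. It is only an illustration: the fitness function and the structural mutation operator are user-supplied placeholders, the structure is represented here only through the length of its weight vector, and the inner loop uses a simple (1+λ)-ES with a crude step-size rule where EANT2 would use CMA-ES.

    import random

    def exploit_weights(weights, fitness, generations=50, offspring=8, sigma=0.5):
        """Structural exploitation: optimize the weights of a fixed structure
        with a simple (1+lambda)-ES (EANT2 uses CMA-ES for this step)."""
        best, best_fit = list(weights), fitness(weights)
        for _ in range(generations):
            improved = False
            for _ in range(offspring):
                child = [w + random.gauss(0.0, sigma) for w in best]
                f = fitness(child)
                if f > best_fit:
                    best, best_fit, improved = child, f, True
            sigma *= 1.2 if improved else 0.8        # crude adaptive step size
        return best, best_fit

    def evolve(initial_weights, mutate_structure, fitness, structural_steps=10):
        """Structural exploration: occasionally grow the network, then re-optimize
        the enlarged weight vector in the (much more frequent) inner loop."""
        weights, fit = exploit_weights(initial_weights, fitness)
        for _ in range(structural_steps):
            candidate = mutate_structure(weights)     # e.g. append weights for a new neuron
            cand_w, cand_fit = exploit_weights(candidate, fitness)
            if cand_fit >= fit:                       # keep the larger structure only if it helps
                weights, fit = cand_w, cand_fit
        return weights, fit

    # Toy usage: maximize -(sum of squared weights); structural mutation appends one weight.
    best_w, best_f = evolve(
        initial_weights=[0.0, 0.0],
        mutate_structure=lambda w: w + [random.gauss(0.0, 0.5)],
        fitness=lambda w: -sum(v * v for v in w),
    )

    Most of the fitness evaluations are spent in the inner weight-optimization loop; new structure is proposed only occasionally and is retained only if it improves on the best network found so far.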

    Performance

    EANT has been tested on benchmark problems such as the double-pole balancing problem and the RoboCup keepaway benchmark. In all the tests, EANT was found to perform very well. Moreover, a newer version of EANT, called EANT2, was tested on a visual servoing task and found to outperform NEAT and the traditional iterative Gauss–Newton method. Further experiments include results on a classification problem.
