Co-evolution of Fitness Predictors and Deep Neural Networks
Published in | Parallel Processing and Applied Mathematics, Vol. 10777, pp. 555-564
Main Authors | ,
Format | Book Chapter
Language | English
Published | Switzerland: Springer International Publishing AG, 2018
Series | Lecture Notes in Computer Science
Subjects |
ISBN | 9783319780238; 3319780239
ISSN | 0302-9743; 1611-3349
DOI | 10.1007/978-3-319-78024-5_48
Summary | Deep neural networks have proved to be a very useful and powerful tool with many applications. To achieve good learning results, however, the network architecture has to be carefully designed, which requires a lot of experience and knowledge. Using an evolutionary process to develop new network topologies can facilitate this task. The limiting factor is the speed of evaluating a single specimen (a single network architecture), which involves training on a large dataset. In this paper we propose a new approach which uses subsets of the original training set to approximate the fitness. We describe a co-evolutionary algorithm and discuss its key elements. Finally, we draw conclusions from experiments and outline plans for future work.
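To make the summarized idea of co-evolving fitness predictors more concrete, the sketch below shows one possible form such a loop could take in Python. It is not the authors' implementation: the encoding of an architecture as a (depth, width) pair, the predictor as a fixed-size list of training-example indices, the population sizes, and the `train_and_score` stub that stands in for actual network training are all illustrative assumptions. The intent it illustrates is the one stated in the abstract: architectures are ranked by a cheap fitness computed on a predictor's subset of the data, while predictors are rewarded for ranking a few reference architectures the same way the expensive full-dataset evaluation does.

```python
# Hypothetical sketch of co-evolving fitness predictors (training-subset indices)
# with candidate network architectures. All names, sizes, and the scoring stub
# are illustrative assumptions, not taken from the paper.
import random

random.seed(0)

DATASET_SIZE = 1000          # assumed size of the full training set
PREDICTOR_SIZE = 32          # number of training examples a predictor selects


def train_and_score(architecture, sample_indices):
    """Stub for 'train the network on these examples and return a score'.
    A real implementation would build and train the network here."""
    depth, width = architecture
    # Synthetic quality surface: prefer moderate depth/width; subset adds noise.
    base = -abs(depth - 6) - abs(width - 128) / 32
    noise = sum(i % 7 for i in sample_indices) / (7 * len(sample_indices))
    return base + noise


def exact_fitness(architecture):
    """Expensive reference evaluation on the full dataset (used sparingly)."""
    return train_and_score(architecture, range(DATASET_SIZE))


def predicted_fitness(architecture, predictor):
    """Cheap approximation: evaluate only on the predictor's subset."""
    return train_and_score(architecture, predictor)


def mutate_architecture(arch):
    depth, width = arch
    return (max(1, depth + random.choice([-1, 0, 1])),
            max(8, width + random.choice([-16, 0, 16])))


def mutate_predictor(pred):
    pred = list(pred)
    pred[random.randrange(len(pred))] = random.randrange(DATASET_SIZE)
    return pred


# Initial populations of architectures and fitness predictors.
architectures = [(random.randint(1, 12), random.choice(range(8, 257, 8)))
                 for _ in range(10)]
predictors = [[random.randrange(DATASET_SIZE) for _ in range(PREDICTOR_SIZE)]
              for _ in range(5)]

for generation in range(20):
    # 1. Score architectures with the current best predictor (cheap).
    best_predictor = predictors[0]
    architectures.sort(key=lambda a: predicted_fitness(a, best_predictor),
                       reverse=True)

    # 2. Evolve architectures: keep the top half, mutate it to refill.
    survivors = architectures[:5]
    architectures = survivors + [mutate_architecture(random.choice(survivors))
                                 for _ in range(5)]

    # 3. Evolve predictors: reward subsets that rank a few reference
    #    architectures the same way the exact (expensive) fitness does.
    references = survivors[:3]
    exact_order = sorted(references, key=exact_fitness, reverse=True)

    def predictor_quality(pred):
        approx_order = sorted(references,
                              key=lambda a: predicted_fitness(a, pred),
                              reverse=True)
        return sum(a == b for a, b in zip(exact_order, approx_order))

    predictors.sort(key=predictor_quality, reverse=True)
    predictors = predictors[:3] + [mutate_predictor(random.choice(predictors[:3]))
                                   for _ in range(2)]

print("best architecture (depth, width):", architectures[0])
```

The design choice this sketch highlights is the asymmetry in cost: the expensive full-dataset evaluation is applied only to a handful of reference architectures per generation, while all other comparisons run on the small subsets maintained by the predictor population.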