Further Reading

Genetic Algorithms and its use-cases in Machine Learning

This article by Srivignesh Rajan covers the basics of genetic algorithms and their applications in machine learning. It was published on Analytics Vidhya, a blog dedicated to data science and related fields. Rajan illustrates the fundamentals with a classic example, the array-of-bits problem, and connects it clearly to each step of a genetic algorithm: initialization, fitness evaluation, selection, crossover, and mutation. He then discusses the advantages and the pitfalls of using genetic algorithms to optimize machine learning models, and closes with example code in Python showing a genetic algorithm in action. Although the density of information can be a little overwhelming, the article is very informative on the subject of genetic algorithms for machine learning.
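
To make those steps concrete, here is a minimal sketch of a bit-string genetic algorithm in Python (an illustration of the technique Rajan describes, not his code); the population size, mutation rate, and string length are arbitrary choices, and fitness is simply the number of 1-bits.

    import random

    STRING_LENGTH = 20       # each individual is a string of 20 bits
    POPULATION_SIZE = 50
    MUTATION_RATE = 0.01
    GENERATIONS = 100

    def fitness(individual):
        # Fitness is the number of 1-bits (the classic "all ones" goal).
        return sum(individual)

    def select(population):
        # Tournament selection: pick the fitter of two random individuals.
        a, b = random.sample(population, 2)
        return a if fitness(a) >= fitness(b) else b

    def crossover(parent1, parent2):
        # Single-point crossover.
        point = random.randint(1, STRING_LENGTH - 1)
        return parent1[:point] + parent2[point:]

    def mutate(individual):
        # Flip each bit with a small probability.
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit
                for bit in individual]

    population = [[random.randint(0, 1) for _ in range(STRING_LENGTH)]
                  for _ in range(POPULATION_SIZE)]

    for generation in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POPULATION_SIZE)]
        best = max(population, key=fitness)
        if fitness(best) == STRING_LENGTH:
            print(f"All ones reached in generation {generation}")
            break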

Introduction to Genetic Algorithms — Including Example Code

Vijini Mallawaarachchi's article, Introduction to Genetic Algorithms — Including Example Code, is very similar to the previous one, with a few key differences. Both articles illustrate genetic algorithms using the array-of-bits example, but this one is considerably easier to read: it uses less technical terminology while providing the same level of detail and clarity. It does not, however, cover the applications of genetic algorithms to machine learning, and its example code is written in Java rather than Python, which some readers may find easier to interpret. Read this article to understand genetic algorithms, and the previous one to understand their applications to machine learning.

Dota 2

OpenAI is an artificial intelligence research company co-founded by Elon Musk. It built a system that plays the MOBA (Multiplayer Online Battle Arena) game Dota 2. The bot was trained with reinforcement learning techniques until it could beat top Dota 2 players in a 1v1 match. Dota 2, like all MOBA games, is incredibly complex and demands a substantial rate of actions to succeed. The bot's success provides an exciting example of the kind of complex task that evolutionary approaches such as NeuroEvolution aim to tackle.

Evolving Neural Networks through Augmenting Topologies

This paper introduces a new method of creating machine learning models using genetic algorithms: NeuroEvolution of Augmenting Topologies (NEAT). It was written by Kenneth O. Stanley and Risto Miikkulainen, researchers from the University of Texas at Austin. Stanley and Miikkulainen propose NEAT as an improved method of artificial NeuroEvolution because its optimization process is more sophisticated than that of a traditional genetic algorithm and because it takes a dynamic approach to the structure of the models. While traditional NeuroEvolution optimizes a fixed model structure over time, NEAT models start simple and grow in complexity over time. For these reasons, NEAT is more analogous to biological evolution, which is why the researchers believe it performs better than simpler methods. The paper shows how improving on existing methods can drastically increase performance.
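
To make the "start simple, grow over time" idea concrete, the sketch below shows a NEAT-style genome in Python that begins with only input-to-output connections and gains nodes and connections through structural mutation. It is an illustration of the idea, not the authors' implementation, and it omits NEAT's innovation numbers, speciation, and crossover.

    import random

    class Genome:
        """A minimal NEAT-style genome: nodes plus weighted connections."""

        def __init__(self, num_inputs, num_outputs):
            # Start with the simplest possible topology: inputs wired
            # directly to outputs, with random weights.
            self.nodes = list(range(num_inputs + num_outputs))
            self.connections = {
                (i, num_inputs + o): random.uniform(-1, 1)
                for i in range(num_inputs)
                for o in range(num_outputs)
            }

        def mutate_add_connection(self):
            # Structural mutation 1: connect two previously unconnected nodes
            # (a full implementation would also check direction and cycles).
            src, dst = random.sample(self.nodes, 2)
            if (src, dst) not in self.connections:
                self.connections[(src, dst)] = random.uniform(-1, 1)

        def mutate_add_node(self):
            # Structural mutation 2: split an existing connection by inserting
            # a new node, so the network grows in complexity over generations.
            (src, dst), weight = random.choice(list(self.connections.items()))
            new_node = max(self.nodes) + 1
            self.nodes.append(new_node)
            del self.connections[(src, dst)]
            self.connections[(src, new_node)] = 1.0
            self.connections[(new_node, dst)] = weight

    genome = Genome(num_inputs=2, num_outputs=1)
    genome.mutate_add_node()        # the topology, not just the weights, evolves
    genome.mutate_add_connection()

In full NEAT, every new connection also receives a historical innovation number, which is what allows genomes with different topologies to be crossed over and grouped into species.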

The NeuroEvolution of Augmenting Topologies (NEAT) Users Page

This website, published by Kenneth Stanley, an assistant professor at the University of Central Florida, gives developers a guide to the NEAT method and its implementation. While the previous paper describes NEAT and presents experimental results to demonstrate its effectiveness, Stanley's page gives comprehensive instructions on how to actually implement and use NEAT algorithms. It glosses over some of the finer details that the paper covers in depth, but it takes a practical approach to understanding NEAT. The page also links to other papers describing NEAT, to existing implementations, and to a NEAT users discussion group, all of which are helpful for aspiring NEAT developers.
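
For readers who want to try NEAT rather than re-implement it from scratch, here is a sketch of what using an off-the-shelf implementation can look like, based on the third-party neat-python package (one possible implementation, not necessarily one of those linked from the users page). It assumes a NEAT configuration file named config-feedforward in the working directory and evolves networks to approximate XOR.

    import neat

    def eval_genomes(genomes, config):
        # Assign each genome a fitness score; here, how well it approximates XOR.
        xor_cases = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
                     ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]
        for genome_id, genome in genomes:
            net = neat.nn.FeedForwardNetwork.create(genome, config)
            genome.fitness = 4.0
            for inputs, expected in xor_cases:
                output = net.activate(inputs)[0]
                genome.fitness -= (output - expected) ** 2

    # The config file specifies the population size, mutation rates, and so on.
    config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                         neat.DefaultSpeciesSet, neat.DefaultStagnation,
                         "config-feedforward")
    population = neat.Population(config)
    winner = population.run(eval_genomes, 50)   # evolve for up to 50 generations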

Genetic Algorithms + Neural Networks = Best of Both Worlds

While the previous sources apply genetic algorithms to machine learning problems where training data isn't available (unsupervised or reinforcement learning), this article covers an application where training data is available (supervised learning). In supervised learning there is no need for trial and error: traditional methods such as gradient descent optimize the model directly on existing data, so no external method like a genetic algorithm is needed to adjust the model's parameters. However, a model also has higher-level settings that these traditional methods cannot easily optimize. These are called hyper-parameters: values such as the learning rate, the model structure, and other choices governing how the model is built and trained. The article proposes using a genetic algorithm to find the optimal hyper-parameters for a particular machine learning model. It is important to this topic because it extends the application of genetic algorithms to supervised learning methods as well.
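
A sketch of the idea in Python: each individual encodes one set of hyper-parameters, fitness is the validation score the model achieves with them, and the usual selection, crossover, and mutation loop searches the hyper-parameter space. The search space and the train_and_score stand-in below are hypothetical placeholders, not taken from the article.

    import random

    # Hypothetical search space; the real one depends on the model being tuned.
    SEARCH_SPACE = {
        "learning_rate": [0.1, 0.01, 0.001, 0.0001],
        "hidden_units": [16, 32, 64, 128],
        "batch_size": [16, 32, 64],
    }

    def train_and_score(learning_rate, hidden_units, batch_size):
        # Stand-in for real model training: replace with code that trains the
        # supervised model with these hyper-parameters and returns its
        # validation accuracy.
        return random.random()

    def random_individual():
        return {name: random.choice(values) for name, values in SEARCH_SPACE.items()}

    def crossover(a, b):
        # Take each hyper-parameter from one parent or the other.
        return {name: random.choice([a[name], b[name]]) for name in SEARCH_SPACE}

    def mutate(individual, rate=0.2):
        # Occasionally re-sample a hyper-parameter from the search space.
        return {name: (random.choice(SEARCH_SPACE[name])
                       if random.random() < rate else value)
                for name, value in individual.items()}

    def fitness(individual):
        return train_and_score(**individual)

    population = [random_individual() for _ in range(10)]
    for generation in range(20):
        population.sort(key=fitness, reverse=True)
        parents = population[:4]                      # keep the best performers
        children = [mutate(crossover(*random.sample(parents, 2)))
                    for _ in range(len(population) - len(parents))]
        population = parents + children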