Neural networks learning from the player

This week we played a little with the concept of neural networks as a potential solution for an artificial intelligence that learns from the player. For our testing we used a standard 3×3 Tic Tac Toe board.


If you are new here and want to read more about genetic algorithms, see our previous post.

We decided to split the experiment into two parts: a learning phase and a test phase. During the learning phase, the player would make a move and propose a response to that move in a specific situation. During the test phase, we would play plain Tic Tac Toe against the previously trained neural network.
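To make the learning phase concrete, each recorded sample pairs a board situation with the response the player proposes for it. The encoding below is our own illustration (the function names and the 1/-1/0 cell scheme are assumptions, not taken from the post), but it shows the general shape such training samples could take:

```python
# Hypothetical encoding of one training sample: board state -> proposed move.
# The mapping (X=1, O=-1, empty=0) and the one-hot target are our illustration.

def encode_board(board):
    """Flatten a 3x3 board into a 9-element input vector."""
    mapping = {'X': 1, 'O': -1, '.': 0}
    return [mapping[cell] for row in board for cell in row]

def encode_move(row, col):
    """One-hot vector over the 9 cells, marking the player's proposed response."""
    target = [0] * 9
    target[row * 3 + col] = 1
    return target

board = [['X', '.', '.'],
         ['.', 'O', '.'],
         ['.', '.', '.']]

# One (input, target) pair for the learning phase.
sample = (encode_board(board), encode_move(0, 2))
```

A short training session then simply collects a handful of such pairs as the player plays.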

The goal of this experiment is to test whether it is possible to achieve satisfying results from the very few samples created by the player.


For the neural network we used the recently published TensorFlow library, straight from Google. It takes a quite specific approach to machine learning problems, expressing them as a computation graph. Google has published quite nice documentation with examples, which you can find on the official TensorFlow site.
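The kind of model involved here can be sketched in a few lines. The version below is our own toy illustration in plain NumPy rather than TensorFlow, so it is self-contained: a single softmax layer mapping a 9-cell board vector to 9 move probabilities, trained by gradient descent on just two hand-made samples. It is a sketch of the idea, not the model actually used in the experiment:

```python
import numpy as np

# Toy sketch (our own, not the post's actual model): softmax regression
# from a 9-cell board encoding (X=1, O=-1, empty=0) to 9 move probabilities,
# trained on a handful of (board, move) pairs.

rng = np.random.default_rng(0)

X = np.array([
    [0, 0, 0, 0, 1, 0, 0, 0, 0],    # opponent took the centre ...
    [1, 0, 0, 0, -1, 0, 0, 0, 0],   # ... then we hold a corner
], dtype=float)
Y = np.array([
    [1, 0, 0, 0, 0, 0, 0, 0, 0],    # proposed response: top-left corner
    [0, 0, 0, 0, 0, 0, 0, 0, 1],    # proposed response: bottom-right corner
], dtype=float)

W = rng.normal(scale=0.1, size=(9, 9))
b = np.zeros(9)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Plain gradient descent on the cross-entropy loss.
for _ in range(500):
    P = softmax(X @ W + b)
    W -= 1.0 * (X.T @ (P - Y)) / len(X)
    b -= 1.0 * (P - Y).mean(axis=0)

P = softmax(X @ W + b)
predicted = P.argmax(axis=1)   # moves the trained model now proposes
```

With so few samples the model can only memorise the shown responses; it says nothing about unseen board states, which is exactly the limitation a short training session runs into.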

After a few trials we found out that teaching a neural network by hand is quite a chore. What's even worse, it does not actually produce very good results. Of course, with more samples and some doodling with the network parameters it would probably be possible to teach a neural network to play Tic Tac Toe and always tie, but our goal was to test whether that is possible within a short training session.

So, for now we need to look for another way to implement the AI 🙂

At least it was quite fun doodling with the TensorFlow library.

Do you guys know any other way to learn from a player's moves? Leave a comment below 🙂