
Self-Driving Cars in the Browser

Published May 1, 2017

This is a project I have been working on for quite some time now. These cars learned how to drive by themselves. They received feedback on which actions were good and which were bad, based on their current speed, as a form of reward. Powered by a neural network.

You can drag the mouse to draw obstacles, which the cars must avoid. Play around with this demo and get excited about machine learning!

The following is a more detailed description of how this works. You may stop reading here and just play with the demo if you’re not interested in the technical background!

Figure 1: You should see cars that race around and avoid obstacles. They learned to do that by themselves. To make the cars’ lives harder, you can draw obstacles with your pointer, try it out! Source code available here [1].

Sensors

The state of the agent consists of two time-steps: the current time-step $t$ and the previous time-step $t-1$. This helps the agent make decisions based on how things have moved over time. For each time-step $t$ the agent receives information about its environment. This includes 19 distance sensors $\vec{d}_t$, arranged at different angles. You can think of these sensors as beams that stop when they hit an object. The shorter the beam, the higher the input to the agent: 0 for no hit, 1 for an object directly in front of the car. In addition, a time-step contains the current speed of the agent $v_t$. In total, the input to the neural network is 158-dimensional.
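To make this concrete, here is a minimal sketch (not the actual neurojs code) of how such a state vector could be assembled. The function names and field names are hypothetical, and this sketch only keeps one distance value per sensor plus the speed; the real demo packs more information per time-step to reach the 158-dimensional input described above.

```js
// Minimal sketch (not the actual neurojs code) of assembling the agent's
// input from two time-steps. Sensor readings are assumed to be normalised:
// 0 means "no hit", values near 1 mean "obstacle directly in front".

function timeStepFeatures(step) {
  // one time-step = all distance sensor readings plus the current speed
  return [...step.sensors, step.speed];
}

function buildObservation(current, previous) {
  // the network input is the concatenation of time-step t and time-step t-1
  return [...timeStepFeatures(current), ...timeStepFeatures(previous)];
}

// usage with made-up numbers: 19 sensor readings and one speed per time-step
const now  = { sensors: new Array(19).fill(0), speed: 0.7 };
const past = { sensors: new Array(19).fill(0), speed: 0.6 };
console.log(buildObservation(now, past).length); // 40 values in this simplified sketch
```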

Imagine sitting in a room with a computer, looking at 158 numbers on the screen and having to press left or right in order to increase some kind of number, namely the reward. That is what this agent is doing. Isn’t that crazy?

Exploration

A major issue with DDPG (deep deterministic policy gradient) is exploration. In regular DQN (deep Q‑networks) you have a set of discrete actions from which you can choose. So you can easily explore the state-action space by epsilon-greedily randomising actions. In continuous action spaces (as is the case with DDPG) this is not as easy. In this project I used dropout as a way to explore: randomly dropping some neurons of the last layer of the actor network, thereby obtaining some variation in actions.
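As a rough sketch of that idea (assuming the actor’s last-layer activations are available as a plain array; the names here are illustrative, not the neurojs API), dropout-based exploration could look like this:

```js
// Sketch of dropout-based exploration. Instead of picking a random discrete
// action (epsilon-greedy), some units of the actor's last layer are zeroed
// at random, so the continuous action comes out slightly different each time.

function dropoutExplore(lastLayerActivations, keepProb = 0.85) {
  return lastLayerActivations.map(a =>
    Math.random() < keepProb ? a / keepProb : 0   // inverted dropout scaling
  );
}

// usage with made-up activations
console.log(dropoutExplore([0.4, -0.2, 0.9]));
```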

Multi-agent learning

In addition to applying dropout to the actor network, I put four agents into the virtual environment at the same time. All of these agents share the same value network, but each has its own actor and therefore responds differently to the same state. As a result, every agent explores a different area of the state-action space. All in all this resulted in better and faster convergence.
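A minimal sketch of that setup (the network objects are placeholders, not the real neurojs classes) might look like this:

```js
// Sketch of the multi-agent setup: four agents, each with its own actor
// (policy) network, all sharing a single critic (value) network.
// makeNetwork is a placeholder, not the neurojs API.

function makeNetwork(name) {
  return { name };   // stand-in for a real neural network
}

const NUM_AGENTS = 4;
const sharedCritic = makeNetwork('critic');

const agents = Array.from({ length: NUM_AGENTS }, (_, i) => ({
  actor: makeNetwork(`actor-${i}`),  // own policy -> own exploration behaviour
  critic: sharedCritic,              // shared value estimate across all agents
}));

console.log(agents.every(agent => agent.critic === sharedCritic)); // true
```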

If you want to hear more about the progress of the project as I add new features, I encourage you to follow me on Twitter @janhuenermann! Additionally, feel free to share the project on social media, so more people can get excited about AI!


  1. The code for the demo above, along with the JavaScript library neurojs I specifically made for this project, is available on GitHub.