In any case, I couldn't help feeling inspired, so yesterday I started writing my very own brain simulator. No, it doesn't really do all that much. But here's what it does do:
- Implements a brain using a neural network. The neural network is actually just a fully connected, weighted graph, where the weights represent the strengths of the synapses. The network is currently static (it remains unchanged during the lifetime of the organism), and the model is extremely simple: it supports two types of neurones, excitatory and inhibitory, but that's about it.
- Implements an environment for the brain, i.e. a body and a physical world with which the creature can interact. This particular environment consists of an empty space (no gravity) and a target which, when touched by the creature, regenerates in a different position. The body of the creature has two sensors (relative position, i.e. the distance to the target, and relative velocity, i.e. the speed at which the creature is moving towards the target) and four outputs. Each of the four outputs can be thought of as a jet pack that accelerates the creature in a particular direction (up, down, left, right).
This creature's purpose in life is therefore, as you might have guessed, to reach targets.
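To make the setup above concrete, here's a minimal sketch of one brain update in Python. This is my own guess at the details -- the post doesn't give the actual update rule, and the function name, the `tanh` squashing, and the index arguments are all assumptions. The idea is just that the brain is an NxN weight matrix, sensor values are clamped onto two neurones, and four other neurones are read out as jet-pack thrusts:

```python
import numpy as np

def step(weights, activations, sensor_values, sensor_idx, output_idx):
    """One update of a fully connected network stored as an N x N weight
    matrix (hypothetical sketch). Inhibitory neurones would simply be
    rows with negative outgoing weights."""
    activations = activations.copy()
    activations[sensor_idx] = sensor_values       # clamp sensor neurones
    activations = np.tanh(weights @ activations)  # propagate one step
    return activations, activations[output_idx]   # thrusts for the 4 jets
```

In this sketch the network is naturally recurrent: every neurone feeds back into every other on the next call to `step`.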
I've been using two programs to simulate my worlds:
- The "evolver": This program is responsible for developing the "genes" of the creature. Those genes are actually just the NxN matrix that represents the neural network graph (where N is the number of neurones, or nodes in the graph). It works quite simply by randomizing the genes and running a simulation with a creature that has this brain. By checking the number of times that the creature reaches the target (each simulation runs only a fixed number of steps), we have a measure of how successful that particular set of genes is.
When we have found a reasonable brain configuration (e.g. a creature that reaches the target in around 2-3% of the simulations), we start duplicating it and perturbing each copy slightly. In this way, each generation consists of a set of brain configurations: the original brain plus nine variations of it. Each configuration is then given its chance to reach the targets; the most successful one becomes the parent of the next generation of creatures. And we have evolution!
It actually took about 8 hours to find the configuration of the creature that is shown in the video below.
- The "player": This program will run a simulation, but also display it on the screen. By default, it uses the most successful brain configuration found so far. This program is what generated the video below. There's not much more to it; the simulation is exactly the same as in the "evolver". (But without visualization, the evolver can run much faster.)
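The evolver's generation loop can be sketched in a few lines. This is not the actual implementation -- `evaluate` and `mutate` here are hypothetical stand-ins for the real simulation (which counts targets reached) and the real perturbation of the weight matrix:

```python
import random

def evolve(parent, evaluate, mutate, generations=100, offspring=9):
    """Hedged sketch of the evolver: each generation keeps the parent
    plus `offspring` perturbed copies, scores them all, and the best
    one becomes the parent of the next generation."""
    for _ in range(generations):
        candidates = [parent] + [mutate(parent) for _ in range(offspring)]
        parent = max(candidates, key=evaluate)  # most targets reached wins
    return parent
```

Because the parent is always kept in the candidate pool, the best score can never decrease from one generation to the next.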
So without further ado, here's the result:
Notice how it often overshoots a little. A problem with an earlier brain configuration was that it would start circling around the target in bigger and bigger orbits, eventually disappearing from the screen altogether. I think it's really interesting, though, to see that it actually works. It surprised me.
The source code for these programs can be found at the GitHub project website. I'm not really planning to make a project out of this (other than what you see here), but patches are welcome, as always :-) Source code is GPL version 2.
That's pretty damn awesome ;).
Found this talk, might interest you http://www.ted.com/index.php/talks/michael_merzenich_on_the_elastic_brain.html
Thanks. Just watched it, was very interesting indeed. It seems that the moral of the story is just to rinse and repeat, and also to keep learning new things :-)
Nice :) There is a small glitch, though, I think. I remember taking a course on neural networks, and the professor said that if you don't have loops in the network, then the system cannot be Turing-complete. This was, in fact, one of the reasons research on neural networks stalled for about 20 years -- some scientific bigwig proved such a theorem and killed off the research, and only later did it emerge that with feedback, that restriction can be avoided.
So, I think you are missing this feedback. The problem with the feedback, though, is that it can create an unstable system (also see: mad people -- sometimes it can be demonstrated that parts of their brain have reached a sort of unstable equilibrium). So, this makes the problem harder, but also potentially leads to a much better algorithm.
Yep, there are two kinds of feedback here:
1. There can be loops in the graph (where neurons are nodes and synapses are edges).
2. The output of a neuron can change the synapse strengths (or edge weights).
Only the first is done by my very simple implementation, thus my creatures are unable to rewire themselves -- to learn. I suppose they are a bit like very simple insects.
I haven't really studied neural networks, but it sounds reasonable that with #2 you also get more computational power.
What I did try to implement was a system where active synapses would have their strength increased and inactive synapses would have their strength decreased. But this was a bad idea -- the positive feedback loop would eventually turn all weights into either 0 or 1 (the minimum and maximum strengths, respectively).
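The runaway behaviour described above is easy to reproduce. Here's a sketch of such a rule -- my own guess at the details, not the original code: co-active synapses are strengthened, the rest weakened, with weights clipped to [0, 1]. Repeated application drives every weight to one of the extremes:

```python
import numpy as np

def hebbian_step(weights, active, rate=0.1):
    """Hypothetical strengthen-active/weaken-inactive rule. The positive
    feedback saturates every weight at 0 or 1 after enough steps."""
    co_active = np.outer(active, active)            # 1 where both ends fire
    weights = weights + rate * (2 * co_active - 1)  # +rate or -rate
    return np.clip(weights, 0.0, 1.0)
```

With a fixed activity pattern, fifty or so iterations leave no intermediate weight values at all, which is exactly the collapse described above.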
This is definitely something I'd look into, but time is limited for now :-) Metaplasticity is also something I would try to implement.