Building a (Rodent) Brain
There has been a great deal of effort in recent years towards the simulation of a fully working brain. The drive to understand the workings of the human brain has never been greater, with working neuroprosthetic devices already in existence to drive research. We are a long, long way from recreating a human brain, with its billions of neurons and trillions of connections, but that does not mean we are incapable of building a brain.
Babies learn to stand before they run. In the same way, rather than diving in and immediately trying to replicate the most advanced processing device we know - although some are doing just that - it makes sense to start with a similar structure of lesser complexity.
Enter Mus musculus, otherwise known as the humble laboratory mouse. With a brain half the size of a small peanut, the mouse brain is mammalian and thus has the same general divisions and specialisations as ours. However, it has only around sixteen million neurons in total - the barest fraction of the complexity of a human brain.
However, because the mouse brain is a mammalian brain, it has the distinct advantage of possessing a hippocampus, a cerebellum, a prefrontal cortex and a brainstem: in short, most of the major divisions the human brain has, just at far reduced size and complexity. Likewise, the mouse body, whilst diminutive and quadrupedal, is not so different in general layout from a human's: it has four limbs, internal organs in broadly similar configuration, and general areas of the body performing similar functions.
There are profound differences, of course. However, the relative simplicity, together with the similarities of structure, makes the mouse brain a good place to start.
Replicating sixteen million neurons
Sixteen million neurons is still a vast number by modern computing standards. And it is not just the number of neurons: each communicates directly with thousands of its neighbours, forming a vast web of connections which must be simulated exactly. In fact, the network is so complex that the most capable supercomputing system of 2007 could only simulate half of it.
Containing eight million neurons, each of which can have up to 8,000 synapses, or connections, with other neurons, even one half of a mouse brain is no mean feat to simulate. To model this vast network, an IBM Blue Gene/L had to be used.
The Blue Gene/L is IBM's latest offering in its supercomputer family, and operates at an average speed of about 200 teraflops (200 trillion floating-point operations per second), making it the fastest type of supercomputer on the planet at this time.
The Blue Gene/L used to simulate half a mouse brain had 4,096 processors, each with 256MB of memory. Even with this enormous processing power, it was unable to run the simulation in real time, instead taking ten seconds of computation for every second of simulated brain activity.
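The figures above can be put into perspective with a back-of-envelope calculation. This sketch uses only the numbers quoted in the article (eight million neurons, up to 8,000 synapses each, 4,096 processors with 256MB apiece); the resulting per-synapse memory budget is an illustrative estimate, not a figure from the actual simulation.

```python
# Back-of-envelope scale of the half-mouse-brain run described above.
# All inputs are the article's figures; the per-synapse budget derived
# at the end is purely illustrative.

neurons = 8_000_000          # neurons simulated (half a mouse brain)
synapses_per_neuron = 8_000  # up to 8,000 connections per neuron
processors = 4_096           # Blue Gene/L processors used
memory_per_cpu_mb = 256      # MB of memory per processor

total_synapses = neurons * synapses_per_neuron
total_memory_mb = processors * memory_per_cpu_mb

print(f"Synapses to track: {total_synapses:,}")            # 64,000,000,000
print(f"Total memory: {total_memory_mb / 1024:.0f} GB")    # 1024 GB
budget = total_memory_mb * 1024 ** 2 / total_synapses
print(f"Rough memory budget per synapse: ~{budget:.0f} bytes")  # ~17 bytes
```

Even a terabyte of memory leaves only a handful of bytes per connection, which goes some way to explaining why the machine could not keep up with real time.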
Rate of Progress
Whilst this disparity does highlight the distance we have yet to go in recreating even a simple mouse brain, it is worth considering just how far we have come, as well as the rate of progress.
Taking a brief look at a selection of developments that have occurred over the years, we can see just how quickly progress is being made.
Looking at the growth in complexity of brain simulations over just the past few years, it swiftly becomes apparent that our understanding of the underlying principles - and thus our ability to model them effectively - is increasing at a rate which far exceeds Moore's law, the trend governing the computer hardware itself.
Moore's law and its Effect
Moore's Law is the 'law' of computer hardware growth coined by Gordon Moore, co-founder of Intel. First stated in 1965, and remaining unbroken in the 42 years since, it holds that the number of transistors per integrated circuit doubles every 18 months to two years.
As the number of transistors doubles, so too does the computational power of the circuit. In layman's terms, this means that computing power itself, the basis for all simulations, will practically double every 18 months to two years.
Thus, if a Blue Gene/L using the latest processors available can only render half a mouse brain with 4,096 processors now, then in 18 months' time a similar supercomputer, with the latest processors available then, should be able to simulate a whole mouse brain at the same speed. This of course assumes we don't learn, in the meantime, how to simulate the connections between neurons in a less processor-intensive, more accurate way. If that happens, as is likely, the amount we will be able to accurately simulate in 18 months increases still further.
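The doubling argument above can be sketched as a simple model. This assumes a clean doubling every 18 months and ignores algorithmic improvements, so it is a lower bound on the scaling the article describes, not a prediction.

```python
# A minimal sketch of the Moore's-law scaling argument: start from half
# a mouse brain (the 2007 result) and double capability every 18 months.
# The doubling period and starting point are the article's figures; the
# smooth exponential model itself is a simplifying assumption.

def simulable_fraction(years_elapsed, doubling_period_years=1.5, base=0.5):
    """Fraction of a mouse brain simulable after `years_elapsed`,
    starting at `base` and doubling every `doubling_period_years`."""
    return base * 2 ** (years_elapsed / doubling_period_years)

print(simulable_fraction(0))    # 0.5 -> half a mouse brain today
print(simulable_fraction(1.5))  # 1.0 -> a whole mouse brain in 18 months
print(simulable_fraction(3.0))  # 2.0 -> two mouse brains' worth in 3 years
```

Any algorithmic gain in how neurons and synapses are modelled multiplies these figures further, which is why the article treats this projection as conservative.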
The ultimate goal, of course, is to fully simulate a human brain, with its billions of neurons, in order to accurately and fully understand how it functions, so that we may truly repair, and interface with, the human nervous system directly.
At current rates of progress, this should be achievable within a handful of decades.
Simulating Brains and AI Neural Nets
On a final note, it is worth pointing out that the development of brain simulations is largely separate from the development of neural nets for AI usage. AI does not, in general, require the full complexity of detailed simulation that a brain model requires. Whilst the AI field stands to benefit like many others, an AI intelligence, even one equivalent to a human mind, is not going to require the same degree of fine simulation. Indeed, there will be massive chunks of the brain - autonomic regulation and image processing, for example - which are completely unnecessary for an entirely VR-based entity.