
 Computational neuroscience: Memory-making is all about the connection

This story is from the category Artificial Intelligence



Date posted: 12/11/2012

Exactly how memories are stored and accessed in the brain is unclear. Neuroscientists do know, however, that a primitive structure buried in the center of the brain, the hippocampus, is a pivotal region for memory formation. There, changes in the strength of the connections between neurons, called synapses, underlie memory formation. Networks of neurons linking up in the hippocampus are likely to encode specific memories.

Since direct tests cannot be performed in the brain, experimental evidence for this process of memory formation is difficult to obtain, but mathematical and computational models can provide insight. To this end, Eng Yeow Cheu and co-workers at the A*STAR Institute for Infocomm Research, Singapore, have developed a model that sheds light on the exact synaptic conditions required for memory formation.

Their work builds on a previously proposed model of auto-associative memory, a process whereby a memory is retrieved or completed after partial activation of its constituent neural network (see image). The earlier model proposed that neural networks encoding short-term memories are activated at specific points during oscillations of brain activity. Changes in the strengths of synapses, and therefore the abilities of neurons in the network to activate each other, lead to an auto-associative long-term memory.
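The general principle of auto-associative memory can be illustrated with a classic Hopfield-style network. This is a minimal sketch of the idea only, not the authors' hippocampal model: a pattern is stored by strengthening the synapses (weights) between co-active neurons, and a partial or corrupted cue is then completed by letting the network settle against those stored weights.

```python
import numpy as np

def store(patterns):
    """Build a synaptic weight matrix from +/-1 patterns via the Hebb rule."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)       # strengthen connections between co-active neurons
    np.fill_diagonal(w, 0)        # no self-connections
    return w / n

def recall(w, cue, steps=10):
    """Iteratively update neuron states until the network settles."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(w @ state)
        state[state == 0] = 1     # break ties consistently
    return state

# Store one 8-neuron pattern, then present a corrupted (partial) cue.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
w = store(pattern[None, :])
cue = pattern.copy()
cue[:3] = -cue[:3]                # corrupt three of the eight neurons
print(np.array_equal(recall(w, cue), pattern))  # True: the partial cue completes the memory
```

The toy model also mirrors the failure modes described below: if the weights are rescaled far outside a workable range, retrieval of the stored pattern degrades rather than completing cleanly.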

Cheu and his team then adapted a mathematical model that describes the activity of a single neuron to incorporate specific characteristics of cells in the hippocampus, including their inhibitory activity. This allowed them to model neural networks in the hippocampus that encode short-term memories. They showed that for successful formation of auto-associative memories, the strength of synapses needs to be within a certain range: if synapses become too strong, the associated neurons are activated at the wrong time and networks become muddled, destroying the memories. If they are not strong enough, however, activation of some neurons in the network is not enough to activate the rest, and memory retrieval fails.

As well as providing insight into how memories may be stored and retrieved in the brain, Cheu thinks this work also has practical applications. “This study has significant implications in the construction of artificial cognitive computers in the future,” he says. “It helps with developing artificial cognitive memory, in which memory sequences can be retrieved by the presentation of a partial query.” According to Cheu, one can compare it to a single image being used to retrieve a sequence of images from a video clip.

See the full Story via external site: www.research.a-star.edu.sg
