Our current computers usually perform pre-programmed actions; our brains, in contrast, are highly adaptive. This adaptability relies heavily on synaptic plasticity, the ability of synapses, the connection points between neurons, to change in strength. Neuroscientists are deeply intrigued by synaptic plasticity because it is key to learning and memory.
Researchers in neuroscience and artificial intelligence (AI) develop models of the mechanisms underlying these processes in order to better understand the brain. These models offer insight into biological information processing, and they are also key to helping machines learn faster.
Researchers at the Institute of Physiology at the University of Bern have now developed a new approach based on “evolutionary algorithms,” computer programs that search for solutions by mimicking the process of biological evolution.
The research team was led by Dr. Mihai Petrovici of the Institute of Physiology at the University of Bern and Kirchhoff Institute for Physics at the University of Heidelberg.
The study was published in the journal eLife.
Biological fitness, the degree to which an organism is adapted to its environment, serves as the model for evolutionary algorithms. In these algorithms, the “fitness” of a candidate solution is determined by how well it solves the underlying problem.
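The selection principle described above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the researchers' actual algorithm: candidates are scored by a fitness function, the fitter half survives each generation, and survivors produce mutated offspring. The toy task (matching a hidden target vector) stands in for whatever problem the algorithm is asked to solve.

```python
import random

random.seed(0)

# Toy problem: find a vector close to a hidden target.
# (Illustrative only; any measure of problem-solving quality could serve as fitness.)
TARGET = [0.5, -0.2, 0.8]

def fitness(candidate):
    """Higher is better: negative squared distance to the target."""
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, scale=0.1):
    """Produce an offspring by adding small random perturbations."""
    return [c + random.gauss(0.0, scale) for c in candidate]

def evolve(pop_size=20, generations=100):
    # Start from a random population of candidate solutions.
    population = [[random.uniform(-1, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Variation: refill the population with mutated offspring.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in survivors]
    return max(population, key=fitness)

best = evolve()
```

Over repeated rounds of selection and mutation, the population drifts toward high-fitness solutions, which is the same pressure the team applies to candidate plasticity rules.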
Three Learning Scenarios
The new approach is called “evolving-to-learn” (E2L) or “becoming adaptive.” The team focused on three typical learning scenarios, the first of which involved a computer having to detect a repeating pattern in a continuous stream of input without receiving any feedback about its performance.
The second scenario involved the computer receiving virtual rewards when carrying out a desired behavior.
The third scenario involved “guided learning” where the computer was told exactly how far its behavior deviated from the desired one.
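The three scenarios differ in what feedback signal a candidate plasticity rule can use. A hypothetical sketch (these update rules are illustrative textbook forms, not the rules discovered in the study) makes the distinction concrete:

```python
def unsupervised_update(pre, post):
    # Scenario 1: no feedback at all; only local pre- and postsynaptic
    # activity is available, e.g. a Hebbian-style correlation term.
    return pre * post

def reward_update(pre, post, reward):
    # Scenario 2: a scalar virtual reward modulates the correlation term,
    # signalling whether the behavior was desirable.
    return reward * pre * post

def supervised_update(pre, post, target):
    # Scenario 3 ("guided learning"): the exact deviation from the
    # desired output is known and drives the weight change.
    error = target - post
    return error * pre
```

An evolutionary search over such rules can only combine the signals each scenario makes available, which is why the discovered plasticity mechanisms differ across the three settings.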
Dr. Jakob Jordan is corresponding and co-first author from the Institute of Physiology at the University of Bern.
“In all these scenarios, the evolutionary algorithms were able to discover mechanisms of synaptic plasticity, and thereby successfully solved a new task,” Dr. Jordan said.
The algorithms demonstrated strong creativity.
Dr. Maximilian Schmidt is co-first author of the study.
“For example, the algorithm found a new plasticity model in which signals we defined are combined to form a new signal. In fact, we observe that networks using this new signal learn faster than with previously known rules,” Dr. Schmidt said.
“We see E2L as a promising approach to gain deep insights into biological learning principles and accelerate progress towards powerful artificial learning machines,” said Petrovici.
“We hope it will accelerate the research on synaptic plasticity in the nervous system,” Dr. Jordan commented.
The team says the new findings will provide deeper insight into how healthy and diseased brains work, and they could aid in the development of intelligent machines that can adapt to users.