Neuron Bursts Could Imitate a Popular AI Learning Strategy


But for this instructional signal to solve the credit assignment problem without a “pause” in sensory processing, their model needs another key piece. Naud and Richards’ team suggested that neurons have separate compartments at their tops and bottoms that process the neural code in completely different ways.

“[Our model] indicates that you have two signals, one going up and one going down, and they can pass each other,” Naud said.

To make this possible, their model posits that the tree-like branches at the tops of neurons that receive inputs listen only to bursts, the internal teaching signal, in order to tune their connections and reduce error. The tuning happens from the top down, as in backpropagation, because in their model the neurons at the top regulate the likelihood that the neurons below them will send out a burst. The researchers showed that when a network has more bursts, neurons tend to strengthen their connections, whereas connection strengths tend to weaken when burst signals are less frequent. The idea is that the burst signal tells neurons that they should be active during the task, strengthening their connections, if doing so reduces error. The absence of bursts tells neurons that they should be inactive and may need to weaken their connections.
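As a rough illustration of that rule (a minimal sketch, not the authors' published equations; the function name, baseline, and learning rate below are assumptions for illustration), the update can be pictured as a local rule in which active synapses strengthen when the burst fraction is above a baseline and weaken when it is below:

```python
import numpy as np

def burst_plasticity_update(w, pre_activity, burst_prob, baseline=0.2, lr=0.05):
    """Toy burst-dependent plasticity rule (illustrative only).

    w            -- synaptic weights onto one neuron, shape (n_inputs,)
    pre_activity -- presynaptic firing rates, shape (n_inputs,)
    burst_prob   -- fraction of this neuron's output events that were bursts (0..1)
    baseline     -- burst fraction at which no change occurs (assumed value)

    More bursts than baseline -> strengthen the active synapses (the neuron
    "should have been active"); fewer bursts -> weaken them.
    """
    return w + lr * (burst_prob - baseline) * pre_activity

w = np.array([0.5, 0.1, -0.3])
pre = np.array([1.0, 0.0, 0.8])

print(burst_plasticity_update(w, pre, burst_prob=0.6))  # bursts above baseline: active inputs strengthened
print(burst_plasticity_update(w, pre, burst_prob=0.0))  # no bursts: active inputs weakened
```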

At the same time, the branches at the bottom of the neuron treat bursts as if they were single spikes (the normal, external signal of the world), allowing them to keep passing sensory information up the circuit without interruption.
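One way to picture this multiplexing (a toy sketch under assumed encodings, not the paper's implementation): treat a neuron's output as a train of events, each either a single spike or a burst. A "bottom" compartment that simply counts events reads out the sensory signal, while a "top" compartment that looks at the fraction of events that are bursts reads out the teaching signal, so both signals travel in the same spike train.

```python
# Toy illustration of multiplexed coding: one event train, two readouts.
# The event labels and the two readouts are assumptions chosen for clarity.
events = ["spike", "burst", "spike", "spike", "burst", "burst", "spike", "spike"]

event_count = len(events)                                          # bottom-compartment readout
burst_fraction = sum(e == "burst" for e in events) / len(events)   # top-compartment readout

print(f"sensory signal (event count): {event_count}")
print(f"teaching signal (burst fraction): {burst_fraction:.2f}")
```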

“In retrospect, the idea presented seems reasonable, and I think that speaks to its beauty,” said João Sacramento, a computational neuroscientist at the University of Zurich and ETH Zurich. “I think it’s obvious.”

Others have followed similar logic in the past. Two decades ago, Konrad Kording of the University of Pennsylvania and Peter König of Osnabrück University in Germany proposed a learning framework with two-compartment neurons. But their proposal lacked many of the specific, biologically relevant details of the newer model, and it was only a proposal — they couldn’t prove that it could actually solve the credit assignment problem.

“Back then, we lacked the ability to test these ideas,” Kording said. He considered the new paper “great work” and is following up on it in his own lab.

With today’s computing power, Naud, Richards, and their collaborators successfully simulated their model, with bursting neurons supplying the learning rule. They showed that it solves the credit assignment problem on a classic task known as XOR, which requires learning to respond when one of two inputs (but not both) is 1. They also showed that a deep neural network built with their burst rule could approximate the performance of backpropagation algorithms on challenging image classification tasks. But there is still room for improvement: backpropagation remains more accurate, and neither fully matches human abilities.
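For reference, XOR is the task whose target output is 1 exactly when the two binary inputs differ; a single layer of weights cannot learn it, which is why solving it requires assigning credit to a hidden layer. The sketch below trains a tiny hidden-layer network on XOR using ordinary backpropagation as a stand-in for the gradient-like signal that, in the authors' model, would instead be carried by burst probabilities; the layer sizes, learning rate, and iteration count are arbitrary choices for illustration.

```python
import numpy as np

# XOR: output is 1 when exactly one input is 1.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # hidden layer of 4 units (arbitrary size)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                     # forward pass through the hidden layer
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)           # credit assigned to hidden units
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(0)

print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]
```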

“There have to be details that we don’t have, and we have to make the model better,” Naud said. “The main point of the paper is to say that the kind of learning that machines do can be approximated by physiological processes.”


