How Computationally Complex is a Neuron?

Our brains may seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: "We are not interested in the fact that the brain has the consistency of cold porridge." In other words, the medium doesn't matter, only the capacity for computation.

Today, the most powerful artificial intelligence systems use a type of machine learning called deep learning. Their algorithms learn by processing massive amounts of data through hidden layers of interconnected nodes, known as deep neural networks. As their name suggests, deep neural networks were inspired by the real neural networks in the brain, with nodes modeled after real neurons, or at least after what neuroscientists knew about neurons back in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has grown dramatically, and biological neurons are known to be more complex than artificial ones. But by how much?

To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected "neurons" to represent the complexity of a single biological neuron.

Even the authors did not anticipate such complexity. "I thought it would be simpler and smaller," said Beniaguev. He expected that three or four layers would be enough to capture the computations performed within the cell.

Timothy Lillicrap, who designs decision-making algorithms at the Google-owned AI company DeepMind, said the new result suggests that it might be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in machine learning. "This paper really helps force the issue of thinking about that more carefully, and grappling with how far you can push those comparisons," he said.

The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send their own signal to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron's long treelike branches, called dendrites, and the neuron's decision to send out a signal.
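That "simple calculation" can be sketched in a few lines: a classic perceptron-style artificial neuron takes a weighted sum of its inputs and fires only if the sum crosses a threshold. The weights and inputs below are illustrative values, not taken from the study.

```python
def artificial_neuron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs exceeds zero."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Three incoming signals with different connection strengths (made-up values):
print(artificial_neuron([0.5, 1.0, 0.2], [0.4, 0.6, -0.3], bias=-0.5))  # → 1
print(artificial_neuron([0.1, 0.1, 0.1], [0.4, 0.6, -0.3], bias=-0.5))  # → 0
```

A biological neuron's input-output function, by contrast, depends on where along the dendritic tree each signal arrives and how those signals interact, which is what makes it so much harder to approximate.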

This function is what the authors of the new work taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron with distinct treelike branches at its top and bottom, known as a pyramidal neuron, from a rat's cortex. Then they fed the simulation into a deep neural network that had up to 256 artificial neurons in each layer. They kept increasing the number of layers until the network could predict, with 99 percent accuracy at the millisecond level, the output of the simulated neuron given its input. The deep neural network successfully predicted the behavior of the neuron's input-output function with at least five, but no more than eight, artificial layers. In most of the networks, that equated to about 1,000 artificial neurons for just one biological neuron.
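The search procedure described above can be sketched as a simple loop: grow the network's depth until the fit reaches the 99 percent target. In this sketch, `evaluate_fit` is a hypothetical placeholder returning a toy accuracy curve; in the real study that step would mean training a full deep network on the simulated input-output data and measuring its spike-prediction accuracy.

```python
def evaluate_fit(depth, width=256):
    """Placeholder: pretend accuracy improves with depth and saturates near 1.0.
    This toy curve stands in for actually training and testing a network."""
    return 1.0 - 0.5 ** depth  # hypothetical, for illustration only


def minimal_depth(target=0.99, max_depth=12):
    """Return the smallest number of hidden layers reaching the target accuracy."""
    for depth in range(1, max_depth + 1):
        if evaluate_fit(depth) >= target:
            return depth
    return None  # target never reached within the depth budget


depth = minimal_depth()
print(f"smallest sufficient depth (toy curve): {depth} layers")
print(f"artificial neurons at width 256: {depth * 256}")
```

With this made-up accuracy curve the loop happens to stop at seven layers, which at 256 neurons per layer would already be well over 1,000 artificial neurons; the real numbers come from the authors' trained networks, not from this sketch.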

Neuroscientists now know that the computational complexity of a single neuron, like the pyramidal neuron at left, relies on its treelike dendritic branches, which are bombarded with incoming signals. These result in local voltage changes, represented by the neuron's changing colors (red means high voltage, blue means low voltage), before the neuron decides whether to send its own signal, called a "spike." It spikes three times, as shown by the traces of individual branches at right, where the colors represent the locations of the dendrites from top (red) to bottom (blue).

Video: David Beniaguev

"[The result] builds a bridge from biological neurons to artificial neurons," added Andreas Tolias, a computational neuroscientist at Baylor College of Medicine.

However, the study's authors caution that it's not a straightforward correspondence yet. "The relationship between how many layers a neural network has and the complexity of the network is not obvious," said London. So we can't really say how much more complexity is gained by moving from, say, four layers to five. Nor can we say that the need for 1,000 artificial neurons means that a biological neuron is exactly 1,000 times as complex. Ultimately, it's possible that using vastly more artificial neurons within each layer would eventually yield a deep neural network with a single layer, but it would likely require much more data and time for the algorithm to learn.
