Why Tesla Is Designing Chips to Train Its Self-Driving Technology
At an event last month, Tesla revealed details of a custom AI chip called D1 for training the machine-learning algorithm behind its Autopilot self-driving system. The event focused on Tesla's AI work and featured a dancing human dressed as the humanoid robot the company intends to build.
Tesla is the latest nontraditional chipmaker to design its own silicon. As AI has become more important and more expensive to deploy, other companies that have invested heavily in the technology, including Google, Amazon, and Microsoft, are also now making their own chips.
At the event, Tesla CEO Elon Musk said that wringing more performance from the computer system used to train the company's neural network is key to progress in autonomous driving. "If it takes a couple of days for a model to train versus a couple of hours, it's a big deal," he said.
Tesla already designs the chips that interpret sensor input in its cars, having moved away from Nvidia hardware in 2019. But creating the kind of powerful, complex chip needed to train AI algorithms is far more expensive and challenging.
"If you believe that the solution to autonomous driving is training a large neural network, then what follows is exactly the kind of vertically integrated strategy you need," says Chris Gerdes, director of the Stanford Center for Automotive Research, who attended the Tesla event.
Many automotive companies use neural networks to detect objects on the road, but Tesla relies on the technology more heavily, with a single giant neural network known as a "transformer" receiving input from eight cameras at once.
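Tesla has not published the network's internals, but the core idea of attention-based fusion across cameras can be sketched in a few lines of plain Python. Everything below, from the feature dimensions to the query vector, is hypothetical and stands in for learned quantities, not Tesla's actual architecture:

```python
import math

def softmax(xs):
    # numerically stable softmax: turns raw scores into weights summing to 1
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fuse_cameras(camera_features, query):
    # score each camera's feature vector against a (here, made-up) query vector,
    # then return the attention-weighted average: one fused vector for 8 inputs
    scores = [sum(q * f for q, f in zip(query, feats)) for feats in camera_features]
    weights = softmax(scores)
    dim = len(camera_features[0])
    return [sum(w * feats[i] for w, feats in zip(weights, camera_features))
            for i in range(dim)]

# eight cameras, each reduced to a toy 4-dimensional feature vector
cameras = [[0.1 * c + 0.01 * i for i in range(4)] for c in range(8)]
query = [1.0, 0.0, 0.0, 0.0]
fused = fuse_cameras(cameras, query)
print(len(fused))  # one fused vector with the same dimension as each camera's
```

In a real transformer the scoring and mixing happen per token across many layers; the point here is only that eight camera streams collapse into one shared representation the rest of the network can reason over.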
"We are effectively building a synthetic animal from the ground up," Tesla's AI chief Andrej Karpathy said at the August event. "The car can be thought of as an animal. It moves around, senses the environment, and acts autonomously."
Transformer models have produced big advances in areas such as language understanding in recent years, but the gains have come from making the models ever larger and more data-hungry. Training the most powerful AI programs can require millions of dollars' worth of cloud computing power.
David Kanter, a chip analyst at Real World Technologies, says Musk is betting that by speeding up training, "I can make this whole machine, the self-driving program, go faster" than "the Cruises and the Waymos of the world," referring to two of Tesla's rivals in autonomous driving.
Gerdes, of Stanford, says Tesla's approach is built around its neural network. Unlike many self-driving car companies, Tesla does not use lidar, a more expensive kind of sensor that sees the world in 3D. Instead it relies on neural network algorithms to interpret scenes by parsing input from its cameras and radar. This is more computationally demanding, because the algorithms must reconstruct a map of the surroundings from camera feeds rather than rely on sensors that capture that picture directly.
But Tesla also gathers more training data than other car companies. Each of the more than 1 million Teslas on the road sends back to the company the video feeds from its eight cameras. Tesla says it employs 1,000 people to label those images, marking cars, trucks, traffic signs, lane markings, and other features, to help train the large transformer. At the August event, Tesla also said it can automatically select which images to prioritize for labeling, to make the process more efficient.
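Tesla has not said how that automatic selection works, but it resembles a standard active-learning loop: rank unlabeled frames by how uncertain the model is about them, and send the most uncertain ones to human labelers first. A minimal sketch, with made-up probabilities standing in for real model outputs:

```python
import math

def entropy(probs):
    # Shannon entropy of a predicted class distribution; higher = less certain
    return -sum(p * math.log(p) for p in probs if p > 0)

def prioritize_for_labeling(predictions, k):
    # predictions: {frame_id: class-probability list}; pick the k most uncertain
    ranked = sorted(predictions, key=lambda f: entropy(predictions[f]), reverse=True)
    return ranked[:k]

# toy model outputs for four frames (say, car / truck / sign probabilities)
preds = {
    "frame_a": [0.98, 0.01, 0.01],  # confident: low labeling priority
    "frame_b": [0.34, 0.33, 0.33],  # near-uniform: high labeling priority
    "frame_c": [0.70, 0.20, 0.10],
    "frame_d": [0.50, 0.45, 0.05],
}
print(prioritize_for_labeling(preds, 2))  # the two most uncertain frames first
```

The payoff is that the 1,000 human labelers spend their time on frames the model finds ambiguous rather than on the millions it already handles confidently.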
Gerdes says one danger with Tesla's approach is that, at a certain point, adding more data may not improve the system. "Is it just a question of more data?" he says. "Or do the neural networks' capabilities plateau at a level lower than you'd hoped?"
Answering that question is likely to prove expensive either way.
The rise of large, expensive AI models has not only prompted some big companies to make their own chips; it has also fueled a wave of well-funded startups working on specialized AI silicon.