AI is reinventing what computers are


Autumn 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures of the consumer tech calendar no longer inspire the surprise and wonder they did in the early days. But behind all the marketing glitz, there is something remarkable going on.

Google’s latest offering, the Pixel 6, is the first phone to have a separate AI-dedicated chip sitting next to its standard processor. And the chip that runs the iPhone has for the last few years included what Apple calls a “neural engine,” also dedicated to AI. Both chips are better suited to the types of computation involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without us noticing, AI has become part of our daily lives. And it is changing how we think about computing.

What does that mean? Computers have not changed much in 40 or 50 years. They are smaller and faster, but they are still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they are programmed, and how they are used. Ultimately, it will change what they are for.

“The core of computing is changing from number-crunching to decision-making,” said Pradeep Dubey, director of Intel’s parallel computing lab. Or, as MIT CSAIL director Daniela Rus said, AI is freeing computers from their boxes.

Faster, less precise

The first change concerns how computers, and the chips that control them, are made. Traditional computing gains came from making machines faster at carrying out one calculation after another. For decades the world has benefited from chip speed-ups that arrived with metronomic regularity as chipmakers kept pace with Moore’s Law.

The deep-learning models behind today’s AI applications, however, require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That calls for a new type of chip: one that can move data around as quickly as possible, making sure it is available when and where it is needed. When deep learning exploded onto the scene a decade or so ago, there were already specialized computer chips that were pretty good at this: graphics processing units, or GPUs, designed to refresh a full screen of pixels dozens of times a second.
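To make that difference concrete, here is a minimal sketch in Python with NumPy (the library, sizes, and values are illustrative assumptions, not tied to any particular chip): the same layer-style computation expressed once as a single batch of low-precision multiply-adds, the kind of work a GPU or TPU spreads across thousands of parallel units, and once in the traditional style of one precise calculation after another.

```python
import numpy as np

# Toy stand-in for one neural-network layer: multiply a batch of inputs
# by a weight matrix. Sizes and values are illustrative only.
batch = np.random.rand(64, 128).astype(np.float16)    # low-precision inputs
weights = np.random.rand(128, 32).astype(np.float16)  # low-precision weights

# One call expresses roughly 260,000 multiply-adds at reduced precision;
# a GPU or TPU can run enormous numbers of these in parallel.
activations = batch @ weights

# The traditional style: one precise calculation after another,
# expressed as a serial loop over exact float64 values.
serial = np.zeros((64, 32), dtype=np.float64)
for i in range(64):
    for j in range(32):
        for k in range(128):
            serial[i, j] += float(batch[i, k]) * float(weights[k, j])

# The two answers agree to within deep learning's tolerance for error.
print(np.max(np.abs(activations.astype(np.float64) - serial)))
```

The batched form is what AI accelerators are built for; the serial loop is what conventional processors spent decades getting faster at.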

Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers like Intel and Arm and Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an edge in AI through hardware.

For example, the chip inside the Pixel 6 is a new mobile version of Google’s tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has been using these chips internally since 2015: they process people’s photos and natural-language search queries. Google’s sister company DeepMind uses them to train its AIs.

Over the past two years, Google has made TPUs available to other companies, and these chips, as well as similar ones developed by others, are becoming the default inside the world’s data centers.

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm, a type of AI that learns how to solve a task through trial and error, to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would have thought of, but they worked. This kind of AI could one day develop better, more efficient chips.
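The article does not describe Google’s method in detail, so what follows is only a toy Python sketch of the trial-and-error idea: a crude random-search stand-in (the grid size, block connections, and scoring function are all invented for illustration) that improves a “floorplan” by trying random moves and keeping the ones that shorten the wiring. It is not the actual reinforcement-learning system used for the TPU.

```python
import random

# Toy "floorplanning" problem, invented for illustration: place 6 blocks
# on a 4x4 grid so that connected blocks end up close together.
GRID = 4
CONNECTIONS = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5)]  # made-up netlist

def wirelength(placement):
    # Score to minimize: total Manhattan distance between connected blocks.
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1])
               for a, b in CONNECTIONS)

def random_placement():
    cells = random.sample([(x, y) for x in range(GRID) for y in range(GRID)], 6)
    return dict(enumerate(cells))

# Trial and error: propose a random tweak, keep it only if the layout improves.
best = random_placement()
best_score = wirelength(best)
for _ in range(5000):
    candidate = dict(best)
    block = random.randrange(6)
    free = [(x, y) for x in range(GRID) for y in range(GRID)
            if (x, y) not in candidate.values()]
    candidate[block] = random.choice(free)
    score = wirelength(candidate)
    if score < best_score:
        best, best_score = candidate, score

print("best layout:", best, "total wirelength:", best_score)
```

A real reinforcement-learning agent would learn a policy for proposing moves rather than sampling them blindly, but the feedback loop of try, score, adjust is the same basic idea.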

Show, don’t tell

The second change concerns how computers are told what to do. For the past 40 years we have been programming computers; for the next 40 we will train them, said Chris Bishop, head of Microsoft Research in the UK.

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It is a fundamentally different way of thinking.
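Here is a tiny illustration of that shift, with an invented task and made-up data: first a rule a programmer writes by hand, then a one-neuron network in Python that is only shown labeled examples and works out an equivalent rule on its own.

```python
import numpy as np

# Task (invented for illustration): decide whether a 2-pixel "image" is bright.
# Rule-based programming: a human writes the rule explicitly.
def is_bright_by_rule(pixels):
    return pixels[0] + pixels[1] > 1.0  # hand-chosen threshold

# Machine learning: supply examples instead of rules, and let a tiny
# one-neuron network find the rule itself.
X = np.array([[0.1, 0.2], [0.9, 0.8], [0.4, 0.3],
              [0.7, 0.9], [0.2, 0.6], [0.8, 0.4]])
y = np.array([0, 1, 0, 1, 0, 1])  # labels a person provided, not rules

w, b = np.zeros(2), 0.0
for _ in range(100):                      # perceptron training loop
    for xi, target in zip(X, y):
        prediction = 1 if xi @ w + b > 0 else 0
        error = target - prediction
        w += 0.1 * error * xi             # nudge weights toward the right answer
        b += 0.1 * error

# The learned weights now encode a rule no one wrote explicitly.
print("learned weights:", w, "bias:", b)
print("prediction for [0.9, 0.7]:", 1 if np.array([0.9, 0.7]) @ w + b > 0 else 0)
```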


