How well can an AI mimic human behavior?


When experts first started raising the alarm a couple decades ago about AI misalignment – the danger of powerful artificial intelligence systems that don’t behave the way people expect – most of their concerns were hypothetical. In the early 2000s, AI research had still produced relatively limited returns, and even the most widely used AI systems failed at a variety of simple tasks.

But since then, AIs have gotten better and cheaper to build. The leaps and bounds have been most pronounced in language and text-generation AIs, which can be trained on enormous collections of text and then produce new text in the same style. Many startups and research teams are training these AIs for all sorts of tasks, from writing code to creating advertising copy.

Their rise hasn’t changed the basic argument for AI alignment concerns, but it has done one uniquely useful thing: It has made previously hypothetical concerns more concrete, allowing more people to experience them and more researchers to (hopefully) address them.

An AI oracle?

Take Delphi, a new AI text system from the Allen Institute for AI, a research institute founded by the late Microsoft co-founder Paul Allen.

The way Delphi works is extremely simple: The researchers trained a machine learning system on a large body of text from the internet, and then on a large database of responses from participants on Mechanical Turk (a paid crowdsourcing platform popular with researchers), to predict how humans would evaluate a wide range of ethical situations, from “cheating on your wife” to “shooting someone in self-defense.”
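To make that recipe concrete, here is a minimal sketch in Python of the same idea – fine-tuning a pretrained language model to predict annotators’ judgments. The model choice, hyperparameters, and training pairs below are illustrative assumptions, not Delphi’s actual setup:

    import torch
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    # Illustrative stand-ins; Delphi's real model and dataset are far larger.
    tokenizer = AutoTokenizer.from_pretrained("t5-small")
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

    # Hypothetical (situation, crowd judgment) pairs of the Mechanical Turk kind.
    examples = [
        ("cheating on your wife", "it's wrong"),
        ("shooting someone in self-defense", "it's okay"),
    ]

    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
    model.train()
    for situation, judgment in examples:
        inputs = tokenizer(situation, return_tensors="pt")
        labels = tokenizer(judgment, return_tensors="pt").input_ids
        loss = model(**inputs, labels=labels).loss  # standard seq2seq cross-entropy
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    # "Judging" is just predicting the likeliest annotator response to a prompt.
    model.eval()
    query = tokenizer("cheating on your wife", return_tensors="pt")
    output = model.generate(**query, max_new_tokens=8)
    print(tokenizer.decode(output[0], skip_special_tokens=True))

The shape of the pipeline is the point: nothing in it encodes ethics anywhere. The model is simply optimized to reproduce whatever the annotators said.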

The result is an AI that issues ethical judgments when prompted: Cheating on your wife, it tells me, is “wrong.” Shooting someone in self-defense? “It’s okay.” (See this good write-up on Delphi in The Verge, with many more examples of how the AI answers other questions.)

The catch here, of course, is that there is nothing “under the hood”: There is no deep sense in which the AI actually understands ethics and uses that understanding to make moral judgments. All it knows is how to predict the response a Mechanical Turk user would give.

And Delphi’s users quickly found that this leads to some glaring ethical oversights: Ask Delphi “should I commit genocide if it makes everyone happy” and it answers, “you should.”

What Delphi can teach us

For all its obvious flaws, I still think there is something useful about Delphi when thinking about the possible future trajectories of AI.

The approach of taking a lot of data from humans, and using it to predict what responses people would give, has proven to be a powerful tool in training AI systems.

For a long time, a background assumption in many parts of the AI field was that in order to build intelligence, researchers would have to explicitly build in the reasoning capacity and conceptual frameworks the AI would use to think about the world. Early AI language generators, for example, were hand-programmed with principles of syntax they could use to generate sentences.
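As a concrete illustration of that hand-built style, here is a toy Python sketch of a grammar-driven sentence generator; the rules and vocabulary are invented for this example, not taken from any real system:

    import random

    # Every rule here is written by a human; nothing is learned from data.
    grammar = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"]],
        "N":  [["system"], ["sentence"]],
        "V":  [["generates"], ["parses"]],
    }

    def expand(symbol):
        """Recursively rewrite a symbol until only words remain."""
        if symbol not in grammar:  # terminal word
            return [symbol]
        production = random.choice(grammar[symbol])
        return [word for part in production for word in expand(part)]

    print(" ".join(expand("S")))  # e.g. "the system generates the sentence"

The contrast with the Delphi-style sketch above is the point: here every rule is spelled out by hand, while there the “rules” are never written down anywhere – they are whatever the model absorbs from the annotators’ responses.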

Now, it is less clear that researchers need to build in reasoning to get reasoning out. It may be that a much more straightforward approach, like training AIs to predict what a Mechanical Turk user would say in response to a prompt, could get you quite powerful systems.

Any real capacity for ethical reasoning such systems display would be a kind of side effect – they are predictors of how humans answer questions, and they will use whatever approach they stumble on that has good predictive value. That may include, as they get bigger and more accurate, building a deeper understanding of human ethics in order to better predict how we will answer these questions.

Of course, there is a lot that can go wrong.

If we rely on AI systems to evaluate new inventions, make investment decisions that are then treated as signals of product quality, identify promising research, and more, there is the potential for the differences between what the AI measures and what humans actually care about to be magnified.

AI systems will get better – much better – and they will stop making the stupid mistakes that can still be found in Delphi. Telling us that genocide is fine as long as it “makes everyone happy” is so obviously, hilariously wrong. But when we can no longer spot their errors, it won’t mean they’re infallible; it will just mean those flaws are harder to detect.

A version of this story was first published in the Future Perfect newsletter. Sign up here to subscribe!


