A Stanford Proposal on the ‘Foundations’ of AI Ignites Controversy
Last month, Stanford researchers declared that a new era of artificial intelligence had arrived, one built atop massive neural networks and oceans of data. They said a new research center at Stanford would build, and study, these “foundation models” of AI.
Critics of the idea quickly emerged, including at a workshop organized to mark the launch of the new center. Some object to the limited abilities and sometimes erratic behavior of these models; others warn against focusing too heavily on a single way of making machines smarter.
Jitendra Malik, a professor at UC Berkeley, acknowledges that one class of model identified by the Stanford researchers, large language models that can answer questions or generate text from a prompt, has many practical uses. But he says evolutionary biology suggests that language builds on other aspects of intelligence, such as interaction with the physical world.
“These models are really castles in the air; they don’t have any foundation whatsoever,” Malik said. “The language in these models is not grounded; there is no real understanding.” He declined an interview request.
A research paper written by dozens of Stanford researchers describes “an emerging paradigm for building artificial intelligence systems” that it labels “foundation models.” Ever-larger AI models have produced some striking advances in recent years, in areas such as vision and robotics as well as language.
Large language models are also foundational to the businesses of big tech companies such as Google and Facebook, which use them in areas like search, advertising, and content moderation. Building and training large language models can require millions of dollars’ worth of cloud computing power; as a result, their development and use is limited to a handful of well-funded technology companies.
But large models are problematic, too. Language models inherit bias and offensive text from the data they are trained on, and they have zero grasp of common sense or of what is true or false. Given a prompt, a large language model may spit out unpleasant language or misinformation. There is also no guarantee that these large models will continue to produce advances in machine intelligence.
Stanford’s proposal has divided the research community. “Calling them ‘foundation models’ completely messes up the discourse,” says Subbarao Kambhampati, a professor at Arizona State University. There is no clear path from these models to more general forms of AI, according to Kambhampati.
Thomas Dietterich, a professor at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, says he has “great respect” for the researchers behind the new Stanford center, and he believes they are genuinely concerned about the problems posed by these models.
But Dietterich wonders whether the idea of foundation models isn’t partly about securing funding for the resources needed to build and work on them. “I was surprised that they gave these models such a grandiose name and created a center,” he said. “That does smack of planting a flag, which could have a lot of benefits on the fundraising side.”
Stanford has also proposed creating a National AI Cloud to make industry-scale computing resources available to academics working on AI research projects.
Emily M. Bender, a professor in the linguistics department at the University of Washington, says she worries that the idea of foundation models reflects a bias toward investing in the data-centric approach to AI favored by industry.
Bender says it is all the more important to study the risks posed by large AI models. She coauthored a paper, published in March, that drew attention to problems with large language models and contributed to the departure of two Google researchers. But she says scrutiny should come from multiple disciplines.
“There are all of these other adjacent, really important fields that are just starved for funding,” she said. “Before we throw money into the cloud, I would like to see money going into other disciplines.”