Soon Your Google Searches Will Be Able to Combine Text and Images


In May, Google executives revealed an experimental artificial intelligence trained on text and images that they said could make internet search much more intuitive. On Wednesday, Google offered a glimpse of how the technology will change the way people search the web.

Starting next year, the Multitask Unified Model, or MUM, will enable Google users to combine text and image searches using Lens, a smartphone app that is also built into Google search and other products. So you could, for example, take a picture of a shirt with Lens, then search for “socks with this pattern.” Searching “how to fix” on an image of a bike part will surface instructional videos or blog posts.

Google will also use MUM in search results to suggest additional directions for users to explore. If you ask Google how to paint, for example, MUM can surface step-by-step instructions, style tutorials, or suggestions for using homemade materials. In the coming weeks, Google also plans to bring MUM to YouTube videos in search, where the AI will surface search suggestions beneath videos based on their transcripts.

MUM is trained to make inferences about text and images. Integrating MUM into Google search results also represents a continued march toward the use of language models that rely on vast amounts of text scraped from the web and a type of neural network architecture called Transformer. One of the first such efforts came in 2019, when Google injected a language model called BERT into search results to change rankings and summarize the text displayed below results.

The new Google tech will run web searches that start as a photo or screenshot and continue as a text query.

Photo: Google

Google vice president Pandu Nayak said BERT represented the biggest change to search results in the better part of a decade, but that MUM takes the AI language understanding behind Google search results to the next level.

For example, MUM draws on data from 75 languages instead of English alone, and it is trained on images and text together rather than text alone. It is 1,000 times larger than BERT when measured by the number of parameters, or connections between artificial neurons, in a deep learning system.

While Nayak called MUM a major milestone in language understanding, he also acknowledged that large language models come with known challenges and risks.

BERT and other Transformer-based models have been shown to absorb bias found in the data used to train them. In some cases, researchers have found that the larger the language model, the worse the amplification of bias and toxic text. People working to detect and change the racist, sexist, and otherwise problematic output of large language models say that scrutinizing the text used to train these models is critical to minimizing harm, and that the way the data is filtered can have a negative impact. In April, the Allen Institute for AI reported that block lists used in a popular data set that Google used to train its T5 language model can lead to the exclusion of entire groups, such as people who identify as queer, making it harder for language models to understand text by or about those groups.

YouTube videos in search results will soon recommend additional search ideas based on the content of their transcripts.

Courtesy of Google

Last year, several AI researchers at Google, including former Ethical AI team co-leads Timnit Gebru and Margaret Mitchell, said they faced opposition from executives to their work showing that large language models can harm people. Among Google employees, Gebru’s dismissal, which followed a dispute over a paper critical of the environmental and social costs of large language models, led to allegations of racism, calls for unionization, and demands for stronger whistleblower protections for AI ethics researchers.

In June, five U.S. senators cited multiple incidents of algorithmic bias at Alphabet, along with Gebru’s dismissal, among the reasons to question whether Google products such as search, or Google’s workplace, are safe for Black people. In a letter to executives, the senators wrote, “We are concerned that algorithms will rely on data that reinforces negative stereotypes and may exclude people from seeing ads for housing, employment, credit, and education, or show them only predatory opportunities.”
