At a recent AI event in New York City, Google announced an ambitious new project to develop a single AI language model that supports the 1,000 most widely spoken languages in the world. The newly revealed project shows the scale of Google's ambition and promises benefits across the company's entire product ecosystem.
Why Google wants to develop this AI language model
According to Jeff Dean, a senior AI executive at Google, more than 7,000 languages are spoken worldwide, but only a small fraction of them are well represented online. Google therefore aims to expand its translation capabilities and its collection of data on these underrepresented languages.
As a first step in the project, Google will reportedly publish an AI model trained on more than 400 languages, described as "the largest language model at the current time".
Language and AI technology have always been at the heart of Google's products, but recent advances in machine learning – especially the development of large language models (LLMs) – have brought new focus to the field.
Despite criticism of how these systems behave, Google has begun integrating language models into its Google Search product. Existing language models still have significant flaws: they can reproduce harmful social biases such as racism and xenophobia, and they cannot analyze language with human sensitivity. Google itself has fired researchers shortly after they published papers raising these issues.
Even so, these models are capable of a wide variety of tasks, from text generation (as with OpenAI's GPT-3) to translation. Google's plan to support up to 1,000 languages does not target any single function; instead, the aim is a single system with broad knowledge of the world's languages.
Zoubin Ghahramani, vice president of Google AI research, said the company believes that building such a large-scale model will make it easier to bring AI features to languages that are less common online and to train models on their data.
Large-scale language models are a trend in the AI field
Large-scale language projects have become emblematic of technology companies' ambitions to lead AI research. One notable recent example is Meta's Universal Speech Translator, which aims to translate directly from speech rather than from text.
In Google's case, covering so many languages makes sourcing data more difficult. To support 1,000 languages, the company will have to fund the collection of data for less common ones, including audio recordings and written text.
Applications for Google's AI language model
For now, Google says it has no specific plans for which applications will use this translation capability. However, company representatives hope it will eventually be integrated across Google products, from Google Translate to YouTube subtitles and more.
In addition to the 1,000-language translation model, Google also shared new research on text-to-video models, an AI writing assistant called Wordcraft, and an update to its AI Test Kitchen app, which lets users try under-development AI models such as the text-to-image system Imagen.
Credit: Techie