Wu Dao 2.0

Wu Dao 2.0 is 10x larger than GPT-3

About Wu Dao 2.0

The Beijing Academy of Artificial Intelligence (styled as BAAI and known in Chinese as 北京智源人工智能研究院) launched the latest version of Wudao 悟道, a pre-trained deep learning model that the lab dubbed “China’s first” and “the world’s largest ever,” with a whopping 1.75 trillion parameters. It can simulate conversational speech, write poems, understand images and even generate recipes, according to the South China Morning Post.

(Numbers don’t tell the full story, but for the sake of comparison: Wudao has 150 billion more parameters than Google’s Switch Transformer, and 10 times as many as OpenAI’s GPT-3, which is widely regarded as the leading model for language generation.)
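A quick sanity check on those comparisons, using the parameter counts as reported in press coverage (taken here as given, not from official spec sheets):

```python
# Reported parameter counts (press figures, not official spec sheets).
wu_dao = 1.75e12              # Wu Dao 2.0: 1.75 trillion
switch_transformer = 1.6e12   # Google's largest published Switch Transformer variant
gpt3 = 175e9                  # GPT-3: 175 billion

print(f"Wu Dao vs Switch Transformer: {(wu_dao - switch_transformer) / 1e9:.0f} billion more")
print(f"Wu Dao vs GPT-3: {wu_dao / gpt3:.0f}x as many")
```

Both headline claims check out: 150 billion more than Switch Transformer, and a factor of 10 over GPT-3.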

The model acquired its skills in both Chinese and English by ‘studying’ 4.9 terabytes of images and text, including 1.2 terabytes of text in those two languages. Wu Dao 2.0 already has 22 partners, such as smartphone maker Xiaomi and short-video giant Kuaishou.

BAAI is betting on GPT-like multimodal, multitasking models as the path to AGI. Without a doubt, Wu Dao 2.0 — as GPT-3 before it — is an important step towards AGI. Yet, how much closer it will take us is debatable. Some experts argue we’ll need hybrid models to reach AGI. Others defend embodied AI, rejecting traditional bodiless paradigms, such as neural networks, entirely.

No one knows which is the right path. Even if larger pre-trained models are the logical trend today, we may be missing the forest for the trees, and we may end up hitting a less ambitious ceiling ahead. The only clear thing is that if getting there means environmental damage, harmful biases, or high economic costs, even reaching AGI might not be worth it.

Getting to know China's first AI-powered virtual student

Hua Zhibing, Wu Dao 2.0’s “child,” is the first Chinese virtual student. She can learn continuously, compose poetry, and draw pictures, and she will learn to code in the future. In contrast with GPT-3, Wu Dao 2.0 can learn different tasks over time without forgetting what it has learned previously. This feature brings AI yet closer to human memory and learning mechanisms.

Tang Jie went as far as to claim that Hua Zhibing has “some ability in reasoning and emotional interaction.” People’s Daily Online reported that Peng Shuang, a member of Tang’s research group, “hoped that the virtual girl will have a higher EQ and be able to communicate like a human.”

When people started playing with GPT-3, many went crazy with the results. “Sentient,” “general intelligence,” and capable of “understanding” were some of the attributes people ascribed to GPT-3. So far, there’s no proof any of this is true. Now, the ball is in Wu Dao 2.0’s court to show the world it’s capable of “reasoning and emotional interaction.” For now, I’d be prudent before jumping to conclusions.

Sources:

  • https://marketresearchtelecast.com/what-is-wudao-2-0-chinas-artificial-intelligence-model-capable-of-writing-poems-and-generating-recipes-that-surpassed-google-and-musks-openai/64523/
  • https://towardsdatascience.com/gpt-3-scared-you-meet-wu-dao-2-0-a-monster-of-1-75-trillion-parameters-832cd83db484
  • https://en.pingwest.com/a/8693

Wu Dao 2.0 features

Multimodality

Wu Dao 2.0 is multimodal. It can learn from text and images and tackle tasks that involve both types of data (something GPT-3 can’t do).

1.75 Trillion Parameters
