
Long a pioneer in artificial intelligence (AI) alongside the USA, China seems to have fallen behind in the second wave: Generative AI.

ODDO BHF AM would like to devote this Fund Insight to the strengths and weaknesses of China’s Generative AI model, distinguishing the communication work from the fundamentals.

THE COMMUNICATION WORK…

Last June, the Chinese authorities claimed that their generative AI capacity would surpass that of the USA, a month after announcing that the country already had at least 79 LLMs (Large Language Models of the ChatGPT type) and that this number was set to grow.

These Chinese initiatives are the fruit of three types of players: 1) the digital giants (Baidu, the leader in the field, plus Alibaba and Tencent, each of which has its own LLM); 2) computer vision players, such as SenseTime and Yuncong, which have launched their own LLMs; 3) startups whose founders come either from the digital giants (Meituan, Sogou) or from academia.

Assertions of Chinese dominance in this field seem to us to contradict the analysis that follows, which aims to show that China has little or, depending on the case, none of the four founding blocks required for LLM training.

Finally, China, like the rest of the world, will eventually have to regulate its Generative AI. The form this regulation might take in China could, once again, weaken its development model.

…AND THE FUNDAMENTALS

When it comes to Generative AI, it seems to us that all the founding blocks of the Chinese ecosystem are weakened:

  • Data sets are the raw material for large language models (LLMs) such as ChatGPT, and hence for generative AI. After years of China’s relative isolation from the rest of the world, the quality and completeness of these data sets are now questionable. As a result, Chinese LLMs may have a greater tendency to hallucinate (where the chatbot invents an answer when it does not know one) than those in the West.
  • Software for accessing data in the public cloud, such as Snowflake and Databricks, is the cornerstone of the “industrial”, optimized exploitation of the long data series required for LLM training. These software publishers, providers of state-of-the-art solutions born in the public cloud, are American and are not distributed in China. China has equipped itself with equivalent solutions (Oushu DB, for example), but it is clear that the Chinese software industry is several decades behind the USA, which in our view will considerably slow down the development of Generative AI in China.
  • The Chinese public cloud sector: in China, as in the United States, the public cloud is driven by the digital giants. Alibaba has its subsidiary Alicloud, while Tencent has its subsidiary Tencent Cloud. These digital giants are just emerging from several years of regulation, the most severe forms of which have led to the beginnings of dismantling, as in the case of Alibaba. The resulting financial fragility is compounded by the poor optimization of their public cloud subsidiaries, which suffer from a market that remains too heavily “on premise”, i.e. one in which corporate customers still own their own servers.
  • Leading-edge semiconductors or AI chips: Nvidia’s success, both on the stock market and at the fundamental level, has underscored the fact that Generative AI crowns the era of accelerated computing, built in this case on Nvidia’s graphics cards (also known as GPUs), which are arguably the founding block of Generative AI. Nvidia and its GPUs, as well as all the companies in the same family that are potential suppliers of AI chips (Broadcom, Marvell, AMD and Intel), are companies under the American flag. China does not have a leading-edge semiconductor industry capable of satisfying its needs for AI chips. In fact, the US administration is not allowing it to develop one, having banned American and European equipment suppliers (such as ASML and Applied Materials) from supplying China. The same administration has also prohibited Nvidia from selling China its accelerated computing chips, which have become the worldwide standard for LLM (Large Language Model) training.

Discover ODDO BHF AM’s full Fund Insight here.

Author: IT Topics
