Tesla, Intel and other giants "fight" over AI chips: GPUs on the "defensive", compute-in-memory "outflanks", and domestic chip players want a slice of Nvidia's cake

**Source:** Financial Associated Press

Editor: Ruoyu

Image source: Generated by Unbounded AI

Nvidia is "dominant" with its GPUs, and more and more companies are trying to seize a share of the AI chip "blue ocean". Musk recently said that Tesla is developing its own chip, which will not be called a GPU or given a name like H100, and that Dojo 2 will focus on large models. Earlier, Intel launched a "China special edition" Gaudi2 chip, billed as more cost-effective than the H100, and has jointly developed AI servers with Inspur Information.

The industry generally believes it will not be easy for Nvidia to keep this cake to itself. Hou Bin of Great Wall Securities pointed out in a July 13 research report that, compared with overseas markets, China's AI chip market will grow at a higher rate over the next three years, leaving large room for development and a broad market space. According to a July 18 research report by Zhang Xia of China Merchants Securities, China's AI chip market will reach 178 billion yuan in 2025, an increase of nearly 100% over 2022. From 2021 to 2025, the CAGR of China's AI chip market is 42.9%, faster than the global market's growth rate of 32.1% over the same period.
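As a quick illustration, the 42.9% CAGR figure above is just the standard compound-annual-growth-rate formula applied over a four-year window (2021 to 2025). A minimal sketch, using made-up start and end values rather than the reports' actual data:

```python
# Compound annual growth rate: the constant yearly rate that takes
# `start` to `end` over `years` years.  The numbers below are purely
# illustrative, not figures from the research reports cited above.

def cagr(start: float, end: float, years: int) -> float:
    return (end / start) ** (1.0 / years) - 1.0

# A market growing at 42.9% per year for 4 years multiplies ~4.17x:
start = 100.0
end = start * (1.429 ** 4)
print(f"{cagr(start, end, 4):.1%}")  # prints 42.9%
```

Running the formula forward and backward like this is a quick sanity check on any CAGR claim in a research report.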

In terms of market structure, there are currently three types of players in the AI chip field. The first is the old chip giants represented by Nvidia and AMD, which have made huge acquisitions in recent years to strengthen their artificial intelligence product lines; the second is cloud computing giants represented by Google, Baidu, and Huawei; the third is AI chip start-ups such as Cambrian and Suiyuan Technology. According to IDC data, China's AI accelerator card shipments in 2022 were about 1.09 million units, of which Nvidia held an 85% share of China's AI accelerator card market, Huawei 10%, Baidu 2%, and Cambrian and Suiyuan Technology 1% each.

▌AI chip market race for the top: Nvidia A800 prices soar, domestic companies double down on GPUs to find a way out; Cambrian carries the banner of domestic AI chips but remains stuck in consecutive years of losses

This year's AIGC boom has made GPUs hot commodities. Under strong demand, GPUs are in chronic short supply, and the tight supply has left many downstream companies overwhelmed. YouKede stated on an investor interaction platform on July 3 that the GPUs it ordered are now arriving in batches, though their contribution to the company remains limited, and the delivery time and quantity of the remaining GPUs are uncertain; Inspur Information disclosed its half-year earnings forecast last week.

"Everything is waiting for Nvidia." An executive at an AI company told a Financial Associated Press reporter that his company placed an order for server products in April, but because the server maker's GPUs have not yet arrived, there is still no firm delivery date.

Misfortunes never come singly: the GPU market has seen fresh turbulence. The price of Nvidia's A800 rose more than 30% in a single week, and even at those prices stock is hard to find. Lenovo Group said at the MWC Shanghai exhibition that high-end servers equipped with the A800 chip would take up to 10 months to deliver.

According to industry sources, besides strong demand for the A800 and policy factors, Nvidia's own "selfish" calculus is also at play: "Nvidia is currently reducing shipments of the A800 and pushing the more profitable H800." A single H800 GPU card is priced above 200,000 yuan, well above even the marked-up A800. Since June this year, the H800 has been officially promoted at scale.

Against this backdrop, many are watching whether domestic GPU companies will get a share in the future. Gai Lujiang, chairman of Tianshu Zhixin, said that regardless of whether Nvidia's products can be sold into China, domestic products are already usable. Shang Junman, an analyst at Xinmou Consulting, said he is broadly positive on the development of domestic GPUs, but a certain gap remains between domestic and foreign industrial chains in design, foundry, and ecosystem software platforms.

According to incomplete statistics from the Financial Associated Press, A-share listed companies with a layout in the GPU field include Jingjiawei, VeriSilicon, Hangjin Technology, Zowee Technology, Haoli Technology, Allwinner Technology and Tongfu Microelectronics, among others.

As the "first AI chip stock on the Science and Technology Innovation Board", Cambrian previously responded on the interactive platform that the smart chips it designs and develops are not GPUs, but chips purpose-built for artificial intelligence. Their performance and energy-efficiency advantages are concentrated in intelligent applications: within the AI field they can replace GPU chips, but they are not applicable to fields outside artificial intelligence.

It is worth noting that on May 25, Nvidia released its financial report for the first quarter of fiscal 2024: revenue of US$7.19 billion, down 13% year-on-year but still above the market expectation of US$6.52 billion. In stark contrast to Nvidia's performance, Cambrian posted a net loss of 255 million yuan in the first quarter of 2023, versus a loss of 287 million yuan in the same period last year.

In fact, Cambrian's net profit has been in the red every year since 2019; perhaps partly as a result, its share price has fallen by as much as 84.35% cumulatively since listing. The company stated in its 2022 annual report that high-quality R&D investment is the solid foundation of long-term development in the chip industry; for the full year 2022, Cambrian's R&D expenses reached 1.523 billion yuan, up 34.11% year-on-year.

▌Are domestic large-compute AI chips changing lanes? Intel, Huawei and other global players accelerate the push into compute-in-memory

As far as the large-compute AI chips required by the currently hot AIGC models are concerned, is it possible to develop AI chips that rival Nvidia's GPGPUs through alternative technical approaches? Unconventional routes under exploration include software-defined chips, chiplets, 3D stacking and advanced packaging, and compute-in-memory. Industry analysis holds that only by deeply integrating computing, storage, network, and software resources to accelerate data sharing and integration can computing be better supported and the value of data fully tapped.

On July 14, Huawei released OceanStor A310, a deep learning data lake storage product positioned as new AI storage for the era of large models. Oriented toward data lake scenarios for foundation and industry-specific large models, it manages massive data across the full AI pipeline, from data collection and preprocessing through model training to inference. It supports lossless multi-protocol interworking to simplify data collection, and uses near-memory computing for near-data preprocessing, reducing data migration and improving preprocessing efficiency by 30%.

The so-called near-memory computing (PNM) is one form of compute-in-memory, a field dubbed "the next pole of AI computing power"; Founder Securities believes it is expected to become the "third pole" of computing architecture after the CPU and GPU. Besides Huawei, many companies at home and abroad have carried out R&D on compute-in-memory technology: major manufacturers including Intel, IBM, SK Hynix, Micron, Samsung, TSMC and Alibaba are almost all deploying PNM, while start-ups such as Zhicun Technology, Yizhu Technology and Zhixinke are betting on more tightly coupled routes such as PIM (processing-in-memory) and CIM (computing-in-memory).

Against the backdrop that ASICs' weak versatility struggles to keep pace with fast-evolving downstream algorithms, while GPGPUs are constrained by high power consumption and low compute utilization, compute-in-memory chips are emerging as a rising star in the chip industry on the strength of low power consumption and a high energy-efficiency ratio. According to incomplete statistics from the Financial Associated Press, A-share companies involved in compute-in-memory include Dongxin Co., Ltd., Hengshuo Co., Ltd., Raput, Capital Online, Changdian Technology, Montage Technology, and Runxin Technology, among others.
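The energy argument behind compute-in-memory can be sketched with a back-of-envelope model: fetching operands from off-chip DRAM costs orders of magnitude more energy than the arithmetic itself, so moving multiply-accumulates next to (or into) the memory array cuts the dominant cost. The per-operation energy figures below are illustrative assumptions in the rough ballpark of commonly cited estimates, not measurements of any vendor's chip:

```python
# Toy energy model for a dot product: classic von Neumann (every operand
# fetched from off-chip DRAM) vs. compute-in-memory (operands read from
# local arrays beside the compute units).  All pJ values are assumed,
# order-of-magnitude illustrations only.

DRAM_READ_PJ = 640.0   # assumed energy to fetch one 32-bit word from DRAM
LOCAL_READ_PJ = 5.0    # assumed energy to read a word from a nearby array
MAC_PJ = 4.6           # assumed energy of one 32-bit multiply-accumulate

def von_neumann_energy_pj(n: int) -> float:
    """Both operands of each MAC come from DRAM, then compute."""
    return n * (2 * DRAM_READ_PJ + MAC_PJ)

def in_memory_energy_pj(n: int) -> float:
    """Operands are read locally, next to where the MAC happens."""
    return n * (2 * LOCAL_READ_PJ + MAC_PJ)

n = 1_000_000  # a one-million-element dot product
ratio = von_neumann_energy_pj(n) / in_memory_energy_pj(n)
print(f"energy ratio (DRAM-bound vs near-memory): {ratio:.1f}x")
```

Under these assumed numbers the data-movement-bound version burns roughly two orders of magnitude more energy, which is the intuition behind the "low power consumption, high energy-efficiency ratio" claim above.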

In the primary market, compute-in-memory has also been the hottest track for chip investment over the past two years. According to insights and statistics from SI Rui, seven compute-in-memory players, including Yizhu Technology and Zhicun Technology, are favored by capital. It is worth noting that four start-ups on this track (Yizhu Technology, Zhicun Technology, Pingxin Technology, and Houmo Intelligence) have secured financing for two consecutive years.

Analysts believe GPUs and compute-in-memory are more complementary than competitive: the GPU, as the most mature solution today, cannot be given up and needs a group of companies to carry it forward, while compute-in-memory is an outflanking maneuver that seeks to break through foreign technical barriers via a new technology route.

Looking ahead, industry observers point out that computing power has become an increasingly scarce resource in China, and to meet large models' demand for massive compute, computing-power clustering will be the trend. At the 2023 World Artificial Intelligence Conference, Huawei announced a full upgrade of its Ascend AI cluster, expanding the cluster scale from the initial 4,000 cards to 16,000 cards, with faster training and a stable training cycle of more than 30 days. More than 30 large models have been natively incubated and adapted on Ascend AI; so far, about half of China's large-model innovations are supported by Ascend AI.
