Chinese semiconductor industry

Status
Not open for further replies.

Blitzo

Lieutenant General
Staff member
Super Moderator
Registered Member
A leaked transcript of a recent interview with a CITIC Securities equity-research MD.


Lots of information (has now been deleted unfortunately), but some takeaways:

1. Big stimulus expected in Q2 2023.
2. MIIT is now in charge of China's semiconductor projects instead of MOST
3. A high-level working group on the ICT industry was founded in April and is headed by a Vice Premier and senior MIIT officials; selection of the expert team and enterprises began in Q4 2022
4. Big Fund will not be abolished but will prioritize technological breakthroughs over market returns
5. Projects will be enterprise-focused and led by enterprises instead of academia
6. 20% of SMIC's equipment is now domestic and is being used in production; YMTC's domestic equipment share is 30%
7. YMTC does not need a high-end lithography machine. Company is expected to build a de-Americanized, domestic equipment line in 2024 with a 1000-wafer production (per month?) capacity for testing purposes
8. Shenyang Fortune and Kunshan Kinglai, apparently, are semiconductor parts suppliers to Lam, AMAT, and TEL (50% of their revenue comes from US clients)
9. Huawei is in charge of lithography machine development, and a 28nm machine will take 2-3 years to develop
10. No need to worry about semiconductor materials and gases as there are many domestic alternatives
11. AI data centers are all required to use Chinese chips; domestic GPUs make up 3-5% of usage at companies like Baidu and Alibaba
12. Compatibility issues with LoongArch are being resolved: domestic companies are developing compatible software, and foreign software can be run through an emulator
13. All levels of government and SOEs will be required to use domestic equipment and CPU in the next few years so huge boost for domestic companies
14. MOF plans to establish a RMB 500 billion fund dedicated to equipment and materials but recipients will be classified for obvious reasons.
15. New strategy will focus on M&A and integration to create state champions
16. Chinese equipment companies let fabs trial-run equipment for free, vs. overseas equipment companies that require payment upfront

Take it with a grain of salt but more exciting news is coming for sure...

I'm not sure how 9. works out, given that SMEE and its SSA800 are a thing...
 

ChongqingHotPot92

Junior Member
Registered Member


U.S. Sanctions Drive Chinese Firms to Advance AI Without Latest Chips

Research in China on workarounds, such as using software to leverage less powerful chips, is accelerating


By WSJ correspondents in Hong Kong and Singapore
May 7, 2023 10:00 am ET



 

european_guy

Junior Member
Registered Member
It seems that the H800 is about one-third as powerful as the H100.


Industry is quickly realizing that the assumption that ever more computing power will be needed to train ever bigger models is not necessarily true.

Indeed, there is a new wave of fresh research today that aims at (1) bringing smaller models up to the performance of bigger ones and (2) training models with reduced compute resources.

There is an interesting leak of an internal Google document (the "no moat" memo) that is very illuminating in this regard.


Googlers are worried because they have realized they have no "moat": they cannot leverage their huge computational advantage to maintain a leading gap between themselves (along with a very few other big US corporations) and the rest of the industry.

In a nutshell, training big models is done in two steps. The first, called pre-training, is very compute-intensive and takes weeks or months on data centers with thousands of GPUs. The second, called finetuning, can still take days or weeks for big models.

Today the first wall has already crumbled: powerful new lightweight-finetuning techniques can fine-tune a big model in hours on a few GPUs, instead of weeks in big data centers (roughly a 100x reduction in compute!).
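For readers unfamiliar with why such finetuning is so cheap, here is a minimal sketch of one well-known family of these methods, LoRA-style low-rank adapters (an assumption about the kind of technique meant, not the specific one linked; all sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 1024, 1024, 8  # illustrative sizes, not from any real model

# Frozen pretrained weight: left untouched during finetuning.
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank adapter: only these two skinny matrices get gradients.
A = np.zeros((d_out, rank))             # zero-init so the adapter starts as a no-op
B = rng.standard_normal((rank, d_in))

def forward(x):
    # Effective weight is W + A @ B, but it is never materialised:
    # the adapter just adds two cheap matmuls alongside the frozen path.
    return W @ x + A @ (B @ x)

trainable = A.size + B.size   # 16,384 adapter parameters
total = W.size                # 1,048,576 frozen parameters
print(f"trainable fraction: {trainable / total:.4%}")
```

Training under 2% of the parameters per layer is where the order-of-magnitude savings in compute and memory come from.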

If the pre-training wall also falls, it will mean a paradigm change for the current AI oligopoly: there will be an explosion in the number of institutions and companies able to train a big model; in particular, a lot of Chinese firms will be able to do it with the hardware resources they already have.

The bottom line is that the current US policy of slowing down and crippling China's advance in AI may be based on technically unsound assumptions and may soon be rendered obsolete.
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
(Quoting european_guy's post above in full.)
The Chinese link and the WSJ link are the same ones I posted earlier.

I'm not sure how they measured H800 to be at 1/3 the performance of H100. Tencent's own press release just said it was quite the improvement and worked well in their HCC.

I just think this is certain people acknowledging that the sanctions so far haven't really stopped China's AI advances at all. Probably some people behind the scenes pushing for tougher sanctions.

That's all I get from the article.
 

huemens

Junior Member
Registered Member
I'm not sure how they measured H800 to be at 1/3 the performance of H100. Tencent's own press release just said it was quite the improvement and worked well in their HCC.
I think that calculation is more or less correct. The ban was meant to block anything from the A100 up, so the rules were keyed to the A100: anything that can do 600 TOPS of INT8 and has 600 GB/s of interconnect bandwidth. The A100 has 624 INT8 TOPS and a 600 GB/s interconnect, so in the A800 the interconnect was reduced to 400 GB/s. I don't know by how much compute was reduced.
For the H100, all the TOPS and FLOPS values are 3 times or more those of the A100, and the H100's interconnect speed is 900 GB/s. So at least on the compute front, the H800 should have about 1/3 the performance of the H100; on the bandwidth side it may be a little better than 1/3.

I think that for the A800, Nvidia reduced the spec by more than the US limits actually require (for example, they could have kept the interconnect speed just under 600 GB/s), so that they could later sell the H800 with a better spec than the A800 while still staying within the limits the US set to block the original A100.
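As a sanity check, the ratios above can be worked out directly from the numbers the post cites (the post's figures, not verified datasheet values):

```python
# Figures as cited in the post above -- assumptions for the arithmetic only.
a100 = {"int8_tops": 624, "link_gbps": 600}
h100 = {"int8_tops": 624 * 3, "link_gbps": 900}  # "3 times or more" that of A100
cap  = {"int8_tops": 600, "link_gbps": 600}      # thresholds keyed to the A100

# A part capped at the A100-level compute threshold would have about
# a third of the H100's compute:
compute_ratio = cap["int8_tops"] / h100["int8_tops"]
# A part kept just under the bandwidth cap keeps a larger share of
# the H100's 900 GB/s interconnect:
link_ratio = cap["link_gbps"] / h100["link_gbps"]

print(f"compute ratio vs H100: {compute_ratio:.2f}")  # ~0.32
print(f"link ratio vs H100:    {link_ratio:.2f}")     # ~0.67
```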
 

tphuang

Lieutenant General
Staff member
Super Moderator
VIP Professional
Registered Member
(Quoting huemens's post above in full.)
It doesn't work like that. The H800 has the same computational speed as the H100; the only difference is that the interconnect speed is about half. How they get to the H800 having 1/3 the performance of the H100, I have no idea.


Now, depending on the application and algorithm, you may never need to breach the H800's interconnect speed. We would need a real-world study to properly compare the H100 and H800 in applications.
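The point that some workloads never saturate the link can be illustrated with a toy back-of-envelope model; every number below is a made-up assumption, not a measured figure.

```python
# Toy model: compare per-step compute time with data-parallel
# all-reduce time at two interconnect speeds.

def step_times(grad_gb, flops_per_step, gpu_tflops, link_gbps):
    comm_s = 2 * grad_gb / link_gbps                 # ring all-reduce moves ~2x gradient volume
    compute_s = flops_per_step / (gpu_tflops * 1e12)
    return compute_s, comm_s

# Hypothetical workload: 20 GB of fp16 gradients, 3e15 FLOPs per step,
# 500 TFLOPS of usable GPU throughput.
compute_s, comm_fast = step_times(20, 3e15, 500, 900)  # full-speed link
_, comm_slow = step_times(20, 3e15, 500, 450)          # link cut in half

print(f"compute: {compute_s:.1f}s per step")
print(f"all-reduce: {comm_fast:.3f}s at 900 GB/s, {comm_slow:.3f}s at 450 GB/s")
# For this (made-up) workload the step stays compute-bound either way,
# so halving the link barely changes end-to-end training time.
```

Of course, other parallelism schemes (tensor or pipeline parallel, long sequences) stress the interconnect very differently, which is exactly why a real-world comparison is needed.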

Now, what we do know from what Tencent published is that the H800, with its new ultra-high-bandwidth interconnect between servers, cut an AI training run from 11 days to 4!
 

huemens

Junior Member
Registered Member
It doesn't work like that. H800 has the same computational speed as H100. The only difference is that the interconnect speed is about half. How do they get to H800 having 1/3 performance of H100, I have no idea.

You are right; that was off the top of my head. I remembered there being a cap on both computational speed (600 TOPS INT8) and interconnect speed (600 GB/s). I just re-read the official USG document, and the language they used is an "AND" rather than an "OR", which means the computational restriction only applies if the interconnect reaches 600.

So you are right: if they just keep the interconnect speed under 600 GB/s, they can throw any amount of computational power they want into the card.
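The difference between the two readings is easy to see in code. A sketch, using the figures from this discussion rather than the regulation's exact text:

```python
def restricted_and(int8_tops: float, link_gbps: float) -> bool:
    # "AND" reading: BOTH thresholds must be met to trip the rule.
    return int8_tops >= 600 and link_gbps >= 600

def restricted_or(int8_tops: float, link_gbps: float) -> bool:
    # "OR" reading: either threshold alone would trip it.
    return int8_tops >= 600 or link_gbps >= 600

print(restricted_and(624, 600))    # A100-class part: True
print(restricted_and(624, 400))    # A800-class part: False (link under the cap)
print(restricted_and(2000, 450))   # H800-style part: False despite far higher compute
print(restricted_or(2000, 450))    # under the "OR" reading it would have been True
```

Under the "AND" reading, capping the interconnect alone keeps any amount of compute exportable, which is exactly the loophole being described.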
 

tokenanalyst

Brigadier
Registered Member
Construction starts on the 5.5-billion-yuan Jingyin ultra-thin precision flexible-film packaging substrate project

Construction has started on the ultra-thin precision flexible-film packaging substrate production line invested in and built by Zhejiang Jingyin Electronic Technology Co., Ltd.

It is reported that the project was signed and landed in Lishui Economic Development Zone in February 2023, with the foundation-stone ceremony held in March and the construction permit obtained in April. With a total investment of 5.5 billion yuan on 250 mu of land, it will build an ultra-thin precision flexible-film packaging substrate production line and a COF research institute (including a quality-inspection, analysis and technology-certification center).
The project will be built in two phases. The first phase, with an investment of 2.1 billion yuan on about 94 mu, will mainly build an ultra-thin precision flexible-film packaging substrate production line with an annual capacity of 1.8 billion pieces; once completed and in production, it is expected to generate an annual output value of 3.4 billion yuan.

 

tokenanalyst

Brigadier
Registered Member

Maolai Optics: semiconductor revenue accounts for about 1/3


On March 9, 2023, Maolai Optics was listed on the Science and Technology Innovation Board (STAR Market). As a provider of comprehensive precision-optics solutions, Maolai's prospectus shows that it mainly produces customized industrial-grade precision optical products covering the full spectrum from deep-ultraviolet (DUV) and visible light to far infrared, in three categories: precision optical devices, optical lenses, and optical systems, used in semiconductors, life sciences, AR/VR inspection, and other fields.

Maolai disclosed operating income of 438.73 million yuan in 2022, up 32.36% year on year. Of that, domestic sales revenue was 93.79 million yuan (21.38% of the total), up 21.86% year on year, and overseas sales revenue was 344.93 million yuan (78.62%), up 35.54% year on year. Net profit was 59.01 million yuan, up 25.07% year on year.

According to the prospectus, Maolai's main customers include Camtek, KLA, Onto Innovation, CyberOptics, Shanghai Microelectronics, ALIGN, Thermo Fisher, Bio-Rad, MGI, Waymo (the autonomous-driving platform of Google's parent Alphabet), Microsoft, Facebook, IDEMIA, and the Beijing Institute of Space Mechatronics (508 Institute).

It is reported that the semiconductor-inspection optical modules Maolai develops for world-renowned inspection-equipment makers such as Camtek and KLA are used for chip inspection, supporting faster performance optimization and integration of chip-inspection equipment; the AR/VR optical test modules and optical inspection equipment it has developed are used by customers such as Facebook and Microsoft for optical-performance testing of their AR/VR wearables.

According to Frost & Sullivan, the global market for industrial-grade precision optics was 13.57 billion yuan in 2021; Zeiss, Nikon, Canon, Newport, Jenoptik, Leica, Olympus and other international companies hold more than 70% of the market, leading the industry. Also according to Frost & Sullivan, Maolai Optical's 2021 revenue of 331 million yuan represented about 2.4% of the global industrial-grade precision-optics market.


Regarding the composition of revenue in 2022, Fan Hao, chairman of Maolai Optical, said that the current revenue structure is stable, with revenue from the semiconductor field accounting for about one-third.

Maolai Optics pointed out that it has built up, and industrialized, nine key core technologies, including 3D digital optical module design and manufacturing, optics for high-throughput integrated-circuit test equipment, high-resolution fluorescence microscopy systems, human-eye bionic optical systems, space-borne aerospace optical design and manufacturing, and ultra-precision processing of optical elements for lithography-machine exposure objective lenses.

According to the disclosure, Maolai plans to invest 225 million yuan in a production project for high-end precision optical products, upgrading product technology and expanding production of precision optical devices, lenses, and complete systems; and 78.55 million yuan in an R&D project for high-end precision optical products, developing and improving some 30 technical subjects, including the principle and implementation of optical active-centering measurement systems, the principle and implementation of large-numerical-aperture objective-lens measurement, 200-300mm large-aperture interferometers, and 300mm spherical standard mirrors for interferometers.

 

pevade

Junior Member
Registered Member
Following on from the above post, did anyone see this little section about EUV optics on the MLOptics website?

The preparation of mirrors for extreme-ultraviolet lithography has two main steps: first, grind and polish the mirror substrate to the required surface figure and surface roughness; second, coat the polished mirror with multilayer optical films to meet the reflectivity requirements.


At present, MLOptic has the capability for ultra-smooth mirror surface processing and thin-film preparation. It can process coaxial and off-axis mirrors with apertures below 400mm, with a surface-figure RMS better than 5nm and component surface roughness better than 0.5nm.


[Two images of the mirrors were attached.]
 