Chinese semiconductor industry

Status
Not open for further replies.

9dashline

Senior Member
Registered Member
I'm still young enough to remember when the GTX 1080 back in 2016 was advertised as the first true "4K" gaming card, and two years ago when Nvidia tried to pass off the RTX 3080 as an "8K" card...

Fact of the matter is, even for the "next gen" RTX 4090 that is due out next month in October, playing at 4K resolution with RTX (ray tracing) turned on (of course you could turn RTX off, but that would defeat the whole point of buying a card with RTX literally in its name lol), the actual framerate is between 15 and 20 fps (unplayable) without the DLSS 3 "magic": using AI to cheat by not only upscaling from a native 1080p image, but also filling in the gaps with AI-generated frames, artificially inflating frames per second with guesswork and passing that off as a raw performance improvement...
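To make that accounting concrete, here is a toy sketch of how upscaling plus frame generation turns an unplayable native framerate into a big marketing number. The 18 fps baseline comes from the 15-20 fps claim above; the 2.5x upscaling speedup and the 2x frame-generation multiplier are rough assumptions for illustration, not Nvidia's actual pipeline.

```python
# Toy model of how upscaling + frame generation inflate reported FPS.
# The 18 fps baseline is from the claim above; the 2.5x and 2x factors
# are illustrative assumptions, not Nvidia's actual DLSS 3 numbers.

def displayed_fps(native_fps: float, upscale_speedup: float, frame_gen: bool) -> float:
    """Estimate displayed FPS when the GPU renders at a lower internal
    resolution (upscale_speedup > 1) and optionally inserts one
    AI-generated frame after every rendered frame."""
    rendered = native_fps * upscale_speedup          # cheaper internal render
    return rendered * 2 if frame_gen else rendered   # frame gen ~doubles output

native = 18.0                                                        # ~15-20 fps native 4K + RT
print(displayed_fps(native, upscale_speedup=1.0, frame_gen=False))   # 18.0 raw
print(displayed_fps(native, upscale_speedup=2.5, frame_gen=True))    # 90.0 "DLSS 3" fps
```

Under these assumptions only 45 of the 90 displayed frames were actually rendered, and all of them at reduced internal resolution; the rest are interpolated.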

Less than half a decade ago Nvidia flagship GPUs were in the $500 range... now they start at $1,600+.... A GPU used to take up a single slot on the motherboard; now the RTX 4090 takes up four slots of the computer case, if it even fits in a case at all... Not long ago the TDP of a flagship GPU (GTX 980, GTX 1080 Ti, etc.) topped out at "just" 250 watts, but the RTX 4090 is 450 watts.... almost twice the power, four times the space, and 3x+ the money.... and it still can't play at native 4K resolution at 60 fps.

In truth, setting the DLSS 3 cheating aside, in direct side-by-side comparisons of raw raster rendering performance the RTX 4090 is only about a 40% improvement over the RTX 3090, even though it got a die shrink from Samsung 8nm (RTX 3000 series) to TSMC 4nm (RTX 4000 series).... where is the expected 200% raw native improvement?
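Putting rough numbers on that: if the ~40% raster uplift holds, performance per transistor actually went down, since public transistor counts put GA102 (RTX 3090) at about 28.3 billion and AD102 (RTX 4090) at about 76.3 billion. A back-of-the-envelope sketch, using the poster's uplift figure as an assumption:

```python
# Back-of-the-envelope: performance gained per transistor added, using
# the ~40% raster uplift claimed above (an assumption, not a benchmark)
# and public transistor counts for GA102 and AD102.

ga102_transistors = 28.3e9   # RTX 3090 die, Samsung 8nm
ad102_transistors = 76.3e9   # RTX 4090 die, TSMC 4N
raster_uplift     = 1.40     # the ~40% figure claimed above

transistor_ratio = ad102_transistors / ga102_transistors   # ~2.70x
perf_per_transistor = raster_uplift / transistor_ratio     # ~0.52x

print(f"{transistor_ratio:.2f}x transistors -> {raster_uplift:.2f}x perf")
print(f"perf per transistor: {perf_per_transistor:.2f}x of last gen")
```

On those assumptions, each transistor in the new flagship delivers roughly half the performance of its predecessor's, which is the diminishing-returns point in miniature.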

So it seems the Nvidia CEO was right about one thing: Moore's Law truly IS dead. We knew it died long ago for CPUs, but now it appears obvious that even for GPUs it has flatlined and is in moribund condition, soon to undergo agonal breathing before it finally bites the dust.

We have passed the point of diminishing returns... and with global energy production past its peak and EROEI cratering, it appears obvious that true technological advancement has ground to a halt....

Now, without the CIAcoins to prop up GPU prices, Nvidia will fall on hard times once it runs out of gamers to scam... pretty hard to shell out thousands for a GPU when the target demographic will soon be out of a job, with those in the EU freezing to death.


No one can afford $1,600+ for a 20% perf/power improvement that takes over the entire motherboard and draws enough electricity to run a space heater, all for nonexistent next-gen showcases like the Portal mod and the racing demo Nvidia hatched up just to cherry-pick and exaggerate the magical "DLSS 3" performance of its latest cards.
 

tokenanalyst

Brigadier
Registered Member
So it seems the Nvidia CEO was right about one thing: Moore's Law truly IS dead. We knew it died long ago for CPUs, but now it appears obvious that even for GPUs it has flatlined and is in moribund condition, soon to undergo agonal breathing before it finally bites the dust.
If I am not wrong, I think Nvidia has yet to adopt the chiplet model for their GPUs; they just keep jamming transistors into ever-bigger monolithic dies, so they are dealing not just with Moore's Law but also with diminishing returns. Pretty unsustainable for them, since gamers, their main revenue source, do not have deep enough pockets to afford such expensive GPUs.
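For a sense of why chiplets matter here, a minimal yield sketch under the standard Poisson defect model, Y = exp(-A * D0): a defect scraps a whole big monolithic die but only one small chiplet, and since chiplets are tested individually, only known-good dice get packaged. The defect density (0.1 defects/cm²) and die areas below are illustrative assumptions, not Nvidia or TSMC figures.

```python
# Poisson die-yield model: Y = exp(-A * D0). D0 and die areas are
# illustrative assumptions, not real foundry or Nvidia figures.
from math import exp

def die_yield(area_mm2: float, d0_per_cm2: float = 0.1) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return exp(-(area_mm2 / 100.0) * d0_per_cm2)   # 100 mm^2 = 1 cm^2

mono_yield    = die_yield(600.0)   # one big ~600 mm^2 monolithic die
chiplet_yield = die_yield(150.0)   # one small 150 mm^2 chiplet

# Chiplets are tested before packaging, so wasted silicon tracks the
# per-chiplet yield rather than the much worse big-die yield.
print(f"monolithic: {mono_yield:.1%} of dies usable")   # ~54.9%
print(f"chiplet:    {chiplet_yield:.1%} of dies usable") # ~86.1%
```

Under these assumptions nearly half of all 600 mm² monolithic dies are scrap, versus about one in seven of the small chiplets, which is the economic pressure pushing AMD's approach.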
 

gelgoog

Brigadier
Registered Member
The AMD RDNA3 GPU designs seem a lot more realistic in that they do not have such massive die areas, so they should be a lot cheaper to manufacture. To me Nvidia seems to be repeating all the mistakes 3dfx made in its late years, from competing with its customers by selling its own branded cards, to shipping a hard-to-manufacture, hot-as-heck graphics chip; in 3dfx's case it was the Voodoo 5, which people used to joke required its own external power supply.
 

tokenanalyst

Brigadier
Registered Member

Jingce Electronics: Shanghai Jingce Semiconductor's New Optical Measurement Products Help the Development of the Domestic Semiconductor Equipment Industry


Recently, Shanghai Jingce Semiconductor, a subsidiary of Jingce Electronics, once again held a new product delivery ceremony, handing over the TG300IF, part of its TG series of optical topography measurement equipment, to a major customer in East China, one of the largest wafer fabs in China. On the strength of its performance, the TG300IF has successfully entered the field of silicon wafer topography measurement, filling the gap in such equipment in domestic semiconductor manufacturing and enhancing the autonomy of domestic equipment in this field.
According to reports, the TG300IF was developed over three years by the topography measurement team of Shanghai Jingce Semiconductor's Optical Division. Drawing on deep expertise in optical systems and in software and hardware development, the team overcame many technical challenges, passed a series of important milestones, and delivered the equipment to the customer on schedule.
As semiconductor device dimensions continue to shrink, differences in wafer warpage, flatness and surface topography have a particularly significant impact on the integrated circuit manufacturing process, especially lithography, so demand for wafer surface measurement has grown sharply. At the 28nm node, the depth of focus of an advanced lithography optical system shrinks to the scale of ~100nm. Such a small depth of focus places extremely strict requirements on wafer flatness and nano-topography tolerances: subtle differences in wafer flatness alone can consume up to 50% of the lithography depth-of-focus (DOF) budget, necessitating tighter control of wafer flatness and topography parameters.
In response to surging market demand, and to the country's call for domestic substitution in semiconductors, the TG300IF was launched. The machine offers nanometer-level flatness measurement accuracy and can measure tens of millions of points across an entire wafer, quickly and accurately capturing wafer warpage, flatness and nano-topography distribution. It provides a high-standard metrology tool for chip inspection in advanced manufacturing processes, helping chipmakers tackle the depth-of-focus challenge head-on.
In addition, the TG300IF is equipped with WaveLink, a silicon wafer topography and flatness data analysis and management system independently developed by Shanghai Jingce Semiconductor. WaveLink dynamically displays two-dimensional/three-dimensional wafer topography and flatness information through a graphical interface; provides classification, editing and management of measurement data; supports defining new recipes in offline mode for batch re-analysis of wafer topography; includes a Stress module for obtaining the stress distribution of wafers; and allows flexible configuration of the output category and type of test results. Based on a database shared with the whole machine, WaveLink can also update measured wafer results in real time and deliver measurement data promptly. It further provides an MSA (Measurement System Analysis) function to help customers quantitatively analyze data and optimize their production processes.
The TG300IF unit delivered this time was assembled and debugged at the new equipment manufacturing base of Shanghai Jingce Semiconductor's R&D headquarters. The newly completed headquarters covers more than 50 acres and consists of four Grade A office buildings and a high-cleanliness manufacturing base. It is expected to be put into use by the end of the year, meeting the development needs of the next stage of Shanghai Jingce Semiconductor's business.
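As a sanity check on the ~100nm depth-of-focus figure quoted in the article, the Rayleigh criterion gives roughly that scale for ArF immersion lithography. The parameter values below (λ = 193 nm, NA = 1.35, k₂ ≈ 1) are assumed typical values for 28nm-class immersion tools, not figures from the article:

```latex
% Rayleigh depth-of-focus estimate; lambda, NA and k2 are assumed
% typical ArF-immersion values, not figures from the article.
\[
\mathrm{DOF} \;=\; k_2 \, \frac{\lambda}{\mathrm{NA}^2}
\;\approx\; 1 \times \frac{193\,\mathrm{nm}}{(1.35)^2}
\;\approx\; 106\,\mathrm{nm}
\]
```

If wafer non-flatness really can eat up to 50% of that budget, only about 50nm is left for everything else (lens aberrations, stage error, resist thickness), which is why nanometer-level flatness metrology matters.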
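And for a concrete sense of the number-crunching a system like WaveLink does on those tens of millions of measured points, here is a minimal sketch of computing warp and a per-site flatness range from a height map. The synthetic data, tile sizes and metric definitions are simplified assumptions for illustration, not Jingce's algorithms.

```python
# Minimal sketch of wafer-flatness metrics from a measured height map:
# warp (height range after reference-plane removal) and a per-site
# SFQR-like range. Simplified assumptions, not Jingce's WaveLink code.
import numpy as np

rng = np.random.default_rng(0)
height = rng.normal(0.0, 20.0, (400, 400))   # synthetic height map, nm

def remove_best_fit_plane(z: np.ndarray) -> np.ndarray:
    """Subtract the least-squares reference plane from the height map."""
    y, x = np.mgrid[0:z.shape[0], 0:z.shape[1]]
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(z.size)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    return z - (A @ coeffs).reshape(z.shape)

residual = remove_best_fit_plane(height)
warp = residual.max() - residual.min()       # full-map height range, nm

# Per-site flatness: tile the map into 100x100-pixel "sites" and take
# the height range within each, loosely analogous to an SFQR metric.
tiles = residual.reshape(4, 100, 4, 100)
site_range = tiles.max(axis=(1, 3)) - tiles.min(axis=(1, 3))

print(f"warp: {warp:.1f} nm, worst site flatness: {site_range.max():.1f} nm")
```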


 

Fedupwithlies

Junior Member
Registered Member
Nvidia just keeps shooting itself in the foot. It's hilarious to me that the national champions the US govt has designated, Intel and Nvidia, first get shivved by that same govt with export bans to China, and then shoot themselves in the foot with greedy business practices that keep them from staying on the cutting edge.
 