Chinese semiconductor industry


tokenanalyst

Brigadier
Registered Member
Personally I would love for this thread to be more focused on technical stuff rather than political stuff.
I don't care about the political stuff as long as it is not repeated again and again and is about Chinese semiconductors. Posting it once is more than enough; if someone is not sure whether something has already been posted, or whether a post is redundant, just go back a few pages to check.
 

ansy1968

Brigadier
Registered Member
Specs of the Biren GPUs, from Tom's Hardware:

Chinese Biren's New GPUs Have 77 Billion Transistors, 2 PFLOPS of AI Performance

Published 1 day ago

Chinese GPU developer introduces a chip with 77 billion transistors.


(Image credit: Biren Technology)


Biren Technology has formally introduced its first GPUs designed primarily for artificial intelligence (AI) and high-performance computing (HPC). According to the company, the top-of-the-range BR100 GPU can challenge Nvidia's A100 and even H100 chips in certain workloads, yet its complexity is comparable to that of Nvidia's H100.


Biren's initial family of compute GPUs includes two chips. The BR100 promises up to 256 FP32 TFLOPS or 2 INT8 PetaFLOPS performance, whereas the BR104 is rated for up to 128 FP32 TFLOPS or 1 INT8 PetaFLOPS performance.

The top-of-the-range BR100 comes with 64GB of HBM2E memory on a 4096-bit interface (1.64 TB/s), while the midrange BR104 comes with 32GB of HBM2E memory on a 2048-bit interface (819 GB/s).
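As a sanity check on those bandwidth figures: peak HBM bandwidth is just bus width times per-pin data rate. Below is a minimal sketch, assuming the commonly quoted 3.2 Gb/s HBM2E pin speed (the article does not state which speed grade Biren actually uses), with an illustrative helper function name.

```python
# Peak HBM bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8 bits per byte.
# The 3.2 Gb/s pin rate is an assumption (a common HBM2E speed grade); the article
# does not state the speed grade Biren actually uses.

def hbm_peak_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float = 3.2) -> float:
    """Peak bandwidth in GB/s for a given HBM bus width and per-pin data rate."""
    return bus_width_bits * pin_rate_gbps / 8

print(hbm_peak_bandwidth_gbs(4096))  # BR100: 1638.4 GB/s, i.e. the quoted ~1.64 TB/s
print(hbm_peak_bandwidth_gbs(2048))  # BR104:  819.2 GB/s, i.e. the quoted 819 GB/s
```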



Spec                  | Biren BR104 | Biren BR100 | Nvidia A100  | Nvidia H100
Form factor           | FHFL card   | OAM module  | SXM4         | SXM5
Transistor count      | ?           | 77 billion  | 54.2 billion | 80 billion
Node                  | N7          | N7          | N7           | 4N
Power                 | 300W        | 550W        | 400W         | 700W
FP32 TFLOPS           | 128         | 256         | 19.5         | 60
TF32+ TFLOPS          | 256         | 512         | ?            | ?
TF32 TFLOPS           | ?           | ?           | 156/312*     | 500/1000*
FP16 TFLOPS           | ?           | ?           | 78           | 120
FP16 TFLOPS (Tensor)  | ?           | ?           | 312/624*     | 1000/2000*
BF16 TFLOPS           | 512         | 1024        | 39           | 120
BF16 TFLOPS (Tensor)  | ?           | ?           | 312/624*     | 1000/2000*
INT8 TOPS             | 1024        | 2048        | ?            | ?
INT8 TOPS (Tensor)    | ?           | ?           | 624/1248*    | 2000/4000*
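To read the table quantitatively, here is a small sketch that ratios BR100's peak numbers against the A100 and H100 figures above. Note the caveat in the comments: the Nvidia FP32 entries are non-tensor rates while the BF16/INT8 entries use the dense tensor-core numbers, and all of these are vendor peak figures, not measured performance.

```python
# Peak throughput taken from the table above (TFLOPS; TOPS for INT8).
# Nvidia FP32 values are non-tensor rates; BF16/INT8 use the dense tensor-core figures.
# These are vendor peak numbers, not measured performance.
peak = {
    "BR100": {"FP32": 256,  "BF16": 1024, "INT8": 2048},
    "A100":  {"FP32": 19.5, "BF16": 312,  "INT8": 624},
    "H100":  {"FP32": 60,   "BF16": 1000, "INT8": 2000},
}

for rival in ("A100", "H100"):
    for fmt in ("FP32", "BF16", "INT8"):
        print(f"BR100 vs {rival}, {fmt}: {peak['BR100'][fmt] / peak[rival][fmt]:.1f}x")
```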


Both chips support the INT8, FP16, BF16, FP32, and TF32+ data formats, so we're not talking about supercomputing formats (e.g., FP64) even though Biren says that its TF32+ format provides higher data precision than traditional TF32. Meanwhile, the BR100 and BR104 offer rather formidable peak performance numbers. In fact, if the company had incorporated GPU-specific functionality (texture units, render back ends, etc.) into its compute GPUs and had designed proper drivers, these chips would have been rather incredible GPUs (at least BR104, which is presumably a single-chip configuration).
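For context on those formats, the differences come down to how the bits are split between exponent (dynamic range) and mantissa (precision). The standard layouts are sketched below; TF32+ is Biren's proprietary variant and its exact layout is not public, so it is left out.

```python
# (sign, exponent, mantissa) bit splits for the standard formats mentioned above.
# TF32+ is Biren's own format; the company only says it is more precise than TF32,
# so it is not listed here.
formats = {
    "FP32": (1, 8, 23),  # IEEE single precision
    "TF32": (1, 8, 10),  # Nvidia's 19-bit tensor format, stored in a 32-bit container
    "FP16": (1, 5, 10),  # IEEE half precision
    "BF16": (1, 8, 7),   # bfloat16: FP32-like range, reduced precision
}

for name, (sign, exp, man) in formats.items():
    print(f"{name}: {sign}+{exp}+{man} bits, ~{man + 1} significand bits, "
          f"max exponent ~2^{2 ** (exp - 1) - 1}")
```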

In addition to the compute capabilities, Biren's GPUs can also support H.264 video encoding and decoding.

Biren's BR100 will be available in an OAM form factor and consume up to 550W of power. The chip supports the company's proprietary 8-way BLink technology, which allows up to eight BR100 GPUs per system. In contrast, the 300W BR104 will ship as an FHFL dual-wide PCIe card and support up to 3-way multi-GPU configurations. Both chips use a PCIe 5.0 x16 interface with the CXL protocol for accelerators layered on top.
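For scale, that host link is far slower than the on-package HBM: PCIe 5.0 runs at 32 GT/s per lane with 128b/130b encoding, so a x16 link tops out at roughly 63 GB/s per direction. A back-of-the-envelope sketch (CXL rides on the same physical PCIe link, so the raw numbers are the same):

```python
# Rough peak bandwidth of a PCIe 5.0 x16 link, the host interface both Biren cards use.
GT_PER_LANE = 32        # PCIe 5.0 transfer rate, GT/s per lane
LANES = 16
ENCODING = 128 / 130    # 128b/130b line coding overhead

peak_gbit = GT_PER_LANE * LANES * ENCODING        # gigabits per second, one direction
print(f"~{peak_gbit / 8:.0f} GB/s per direction")  # ~63 GB/s, vs ~1640 GB/s of HBM on BR100
```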


Biren says that both chips are made using TSMC's 7nm-class fabrication process (without elaborating whether it is N7, N7+, or N7P). The larger BR100 packs 77 billion transistors, exceeding the 54.2 billion of Nvidia's A100, which is also made on one of TSMC's N7 nodes. The company also says that to overcome the limits imposed by TSMC's reticle size, it had to use a chiplet design and the foundry's CoWoS 2.5D packaging, which is entirely logical: Nvidia's A100 was already approaching reticle size, and the BR100 should be even larger given its higher transistor count.

Given the specs, we can speculate that the BR100 essentially uses two BR104 dies, though the developer has not formally confirmed this.
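One reason that speculation is plausible: every BR100 figure in the announcement is exactly double the corresponding BR104 figure, which is what two BR104-class dies on one CoWoS package would give you. A quick check against the numbers quoted above (consistent with, but of course not proof of, a two-chiplet design):

```python
# BR104 vs BR100 specs as quoted in the article. Every BR100 value is exactly 2x the
# BR104 value, consistent with (but not proof of) a two-chiplet BR100.
br104 = {"FP32 TFLOPS": 128, "BF16 TFLOPS": 512, "INT8 TOPS": 1024,
         "HBM2E capacity (GB)": 32, "Memory bus (bits)": 2048}
br100 = {"FP32 TFLOPS": 256, "BF16 TFLOPS": 1024, "INT8 TOPS": 2048,
         "HBM2E capacity (GB)": 64, "Memory bus (bits)": 4096}

for key, value in br104.items():
    assert br100[key] == 2 * value, key
print("Every quoted BR100 figure is exactly twice the BR104 figure.")
```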

To commercialize its BR100 OAM accelerator, Biren worked with Inspur on an 8-way AI server that will start sampling in Q4 2022. Baidu and China Mobile will be among the first customers to use Biren's compute GPUs.
 

tokenanalyst

Brigadier
Registered Member

CETC ion implanters were used in this research on next-generation wide-bandgap semiconductor sensing devices.

Generation of Spin Defects by Ion Implantation in Hexagonal Boron Nitride

Optically addressable spin defects in wide-band-gap semiconductors as promising systems for quantum information and sensing applications have recently attracted increased attention. Spin defects in two-dimensional materials are expected to show superiority in quantum sensing due to their atomic thickness. Here, we demonstrate that an ensemble of negatively charged boron vacancies (VB–) with good spin properties in hexagonal boron nitride (hBN) can be generated by ion implantation. We carry out optically detected magnetic resonance measurements at room temperature to characterize the spin properties of ensembles of VB– defects, showing a zero-field splitting frequency of ∼3.47 GHz. We compare the photoluminescence intensity and spin properties of VB– defects generated using different implantation parameters, such as fluence, energy, and ion species. With the use of the proper parameters, we can successfully create VB– defects with a high probability. Our results provide a simple and practicable method to create spin defects in hBN, which is of great significance for realizing integrated hBN-based devices.
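To make the "optically detected magnetic resonance" part concrete: the VB– centre is a spin-1 defect, so to first order its two ODMR resonances sit at f± ≈ D ± γeB for a magnetic field B along the defect axis, where D ≈ 3.47 GHz is the zero-field splitting quoted in the abstract and γe ≈ 28 GHz/T is the electron gyromagnetic ratio. A minimal sketch that neglects the small strain (E) term and any off-axis field components:

```python
# First-order ODMR resonance frequencies of a spin-1 defect such as VB- in hBN:
#   f± ≈ D ± γ_e * B   for a magnetic field B along the defect's symmetry axis.
# D is taken from the abstract; the strain/E splitting term is neglected here.
D_GHZ = 3.47     # zero-field splitting (GHz), as reported in the abstract
GAMMA_E = 0.028  # electron gyromagnetic ratio, ~28 GHz/T = 0.028 GHz/mT

def odmr_frequencies(b_mt: float) -> tuple[float, float]:
    """Return the (lower, upper) ODMR resonance frequencies in GHz for a field b_mt in mT."""
    return D_GHZ - GAMMA_E * b_mt, D_GHZ + GAMMA_E * b_mt

for b in (0.0, 5.0, 10.0):
    lo, hi = odmr_frequencies(b)
    print(f"B = {b:4.1f} mT -> f- = {lo:.3f} GHz, f+ = {hi:.3f} GHz")
```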

 