News on China's scientific and technological development.

broadsword

Brigadier
Completion of China’s first quantum computer on the horizon

By Li Yan (People's Daily Online) April 11, 2017
CAS-Alibaba Quantum Computing Laboratory established in July 2015 (File photo)

Development of the nation's first quantum computer is moving steadily forward, putting China ahead of other countries in the quantum communications arena, according to an industry authority.

China’s first quantum computer is expected to be completed within the next several years, said Bai Chunli, president of the Chinese Academy of Sciences (CAS). Bai made his prediction in a just-released report about the advanced applications of China’s science and technology development.

Bai notes that Chinese scientists have already succeeded in regulating and controlling single particles. The quantum computer could potentially solve certain equations in 0.01 seconds, far faster than the domestically developed Tianhe-2 supercomputer.

Last year, China launched the world's first quantum communications satellite, proving that the country has become a leader in information technology.
 

antiterror13

Brigadier
New generation Chinese communication satellites launched


Launch of satellite marks new communications era
Technology makes transmission of high-resolution video possible
China launched its most advanced communications satellite at 7:04 pm on Wednesday, marking the start of the country's large-capacity communications network in space.
Shijian 13, which was developed by the China Academy of Space Technology and based on its DFH-3B communications satellite platform, was lifted into orbit atop a Long March 3B carrier rocket from the Xichang Satellite Launch Center in Sichuan province.
The satellite, which weighs 4.6 metric tons, is expected to stay for 15 years in a geostationary orbit about 36,000 kilometers above Earth, the academy said.
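The roughly 36,000-kilometer figure for geostationary orbit can be checked with Kepler's third law. A quick sketch, using standard textbook constants rather than values from the article:

```python
import math

# Sanity check of the ~36,000 km geostationary altitude quoted above.
# Constants are standard reference values, not from the article:
MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0    # Earth's equatorial radius, m
T_SIDEREAL = 86_164.1    # one sidereal day (Earth's rotation period), s

# Kepler's third law: T = 2*pi*sqrt(r^3 / mu)  =>  r = (mu * T^2 / (4*pi^2))^(1/3)
r = (MU * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000

print(f"geostationary altitude ≈ {altitude_km:,.0f} km")  # ≈ 35,786 km
```

The result of about 35,786 km above the equator matches the article's rounded "about 36,000 kilometers."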
The satellite features a Ka-band broadband communications system capable of transmitting 20 gigabits of data per second, making it the most powerful communications satellite the nation has developed.
Zhao Jian, a space program official at the State Administration of Science, Technology and Industry for National Defense, which oversees China's space programs, said the transmission capacity of Shijian 13 exceeds the total capacity of all of the country's previous communications satellites.
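For a sense of scale, a back-of-the-envelope calculation against the satellite's quoted throughput (taken here as 20 Gbps, the commonly reported figure; the per-stream video bitrate is an illustrative assumption, not a number from the article):

```python
# Rough illustration of what a 20 Gbps satellite link could carry.
# The per-stream bitrate is an assumption for illustration only.
total_capacity_gbps = 20.0
hd_stream_mbps = 8.0          # assumed bitrate of one high-definition video stream

concurrent_streams = total_capacity_gbps * 1000 / hd_stream_mbps
print(f"~{concurrent_streams:,.0f} concurrent {hd_stream_mbps:.0f} Mbps streams")
# ~2,500 streams
```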
He said that in addition to its communications missions, the satellite will also be tasked with conducting space-to-ground laser communications experiments.
Zhou Zhicheng, head of the China Academy of Space Technology's Institute of Telecommunication Satellite, said Shijian 13 will use electric propulsion after it enters orbit, which will greatly reduce the amount of chemical fuel the satellite has to carry.
Currently, most satellites rely on chemical propulsion, which requires a lot of fuel that occupies a large proportion of carrying capacity, and restricts the number and weight of scientific instruments a satellite can carry.
Zhou said Shijian 13 is able to cover most parts of China's land territory and will help to enable passengers on jetliners and high-speed trains to use the internet.
The use of Ka-band broadband allows Shijian 13 to transmit high-resolution video, a feature previous Chinese satellites did not have. In addition, Ka-band handheld devices are smaller and more portable than those based on other frequencies, making them more convenient to carry and use, Zhou said.
He added that developing nations have huge demand for DFH-3B-based communications satellites such as Shijian 13, not only because of its capability, but also because it can accommodate various kinds of equipment in accordance with clients' requirements.
The academy has made seven communications satellites for overseas users, including Pakistan, Venezuela and Laos, and is finalizing export contracts for more than 10 such satellites, he said.
However, Zhou said China still needs to catch up with the top developers of communications satellites in the United States and Europe, such as Boeing and Thales Alenia Space, when it comes to satellite technology and capacity.
In response, the country will develop several new types of advanced communications satellites, including multimedia broadcasting satellites and ultra-large-capacity broadband satellites, he said.
In June, the Shijian 18 communications satellite, the first to be developed based on China's new-generation DFH-5 satellite platform, will be launched by a Long March 5 heavy-lift rocket at the Wenchang Space Launch Center in Hainan province, according to Zhao.
He said the overall capability of the Shijian 18 will be better than that of all communications satellites currently used by other nations, and its service will improve internet connectivity and accessibility for Chinese users as well as reduce users' costs.
China now operates 16 communications satellites, according to Zhou.
 

delft

Brigadier
"Another advantage is that QLED emits light in a very narrow wavelength band, which means very pure and accurate color, once again because the particle size is fixed at the nanometer level. OLED cannot match that purity; its single colors still cover a relatively broad bandwidth."
The three or four types of sensors in our eyes are also broadband, just with peaks shifted in wavelength (women can have an extra sensor type). So how important can that "very pure and accurate colour" be?
 

taxiya

Brigadier
Registered Member
"Another advantage is that QLED emits light in a very narrow wavelength band, which means very pure and accurate color, once again because the particle size is fixed at the nanometer level. OLED cannot match that purity; its single colors still cover a relatively broad bandwidth."
The three or four types of sensors in our eyes are also broadband, just with peaks shifted in wavelength (women can have an extra sensor type). So how important can that "very pure and accurate colour" be?
delft, I think you are asking me? Since the first paragraph was part of my post.

I can answer with a rough analogy. If I want 6, I want 1 + 2 + 3, and QLED gives me exactly that. OLED may give me 1.01 + 1.98 + 3.02 = 6.01, which is close to 6, but not 6. The purity ensures fidelity of reproduction: if I want to show 7.123, I get exactly that and nothing else.

It does not matter that our eye sensors are broadband. The display's job is to accurately reproduce what it is supposed to (the original). If that reproduction is not accurate (OLED or others), we won't see the true original, regardless of how our eyes work.
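The purity argument above can also be framed in terms of emission linewidth: a narrower spectral line concentrates more of its energy close to the target wavelength. A minimal sketch, assuming illustrative (not measured) Gaussian linewidths for quantum-dot and organic emitters:

```python
import math

def gaussian_fraction_in_band(center_nm, fwhm_nm, half_band_nm):
    """Fraction of a Gaussian emission line's energy within ±half_band_nm of its peak."""
    # Convert full width at half maximum to the Gaussian's standard deviation.
    sigma = fwhm_nm / (2 * math.sqrt(2 * math.log(2)))
    # For a normal distribution, the fraction within ±x of the mean is erf(x / (sigma*sqrt(2))).
    return math.erf(half_band_nm / (sigma * math.sqrt(2)))

# Illustrative linewidths (assumptions for the sketch, not measured device values):
qled_fwhm, oled_fwhm = 25.0, 50.0   # nm
band = 15.0                          # ±15 nm around a target green peak at 530 nm

print(f"narrow (QLED-like) energy within ±{band:.0f} nm: "
      f"{gaussian_fraction_in_band(530, qled_fwhm, band):.1%}")
print(f"broad (OLED-like) energy within ±{band:.0f} nm: "
      f"{gaussian_fraction_in_band(530, oled_fwhm, band):.1%}")
```

With these assumed linewidths, the narrower emitter keeps a substantially larger share of its light inside the target band, which is the quantitative version of "purer color."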
 

N00813

Junior Member
Registered Member

(emphasis mine)
China Pushes Breadth-First Search Across Ten Million Cores
April 14, 2017
There is increasing interplay between the worlds of machine learning and high performance computing (HPC). This bond began because many supercomputing tricks of the trade play well into deep learning, and as we look to next generation machines, it keeps tightening.

Many supercomputing sites are figuring out how to work machine learning into their existing workflows, either as a pre- or post-processing step, while some research areas may eventually replace traditional supercomputing simulations altogether. While these massive machines were designed with simulations in mind, the strongest supers have architectures that parallel the unique requirements of training and inference workloads. One such system in the U.S. is the machine coming to Oak Ridge National Lab later this year, but many of the other architectures that are especially well suited to machine learning are in China and Japan, and they feature non-standard processing elements.

The Sunway TaihuLight supercomputer, the most powerful on the planet according to the Top 500 rankings of the world's top systems, might be a powerhouse in China for traditional scientific applications, but the machine is also primed to move the country along the bleeding edge of deep learning and machine learning.

Back in June 2016, we detailed the wide range of applications set to run on China's top system, noting significant progress in adding deep learning libraries and tooling. In the meantime, efforts to spur machine learning development on the system among Chinese researchers have cropped up, with similar emphasis on machine learning for current and next-generation systems elsewhere in Asia as well. In short, supercomputing in Asia has taken a turn toward AI, and as it turns out, the Sunway TaihuLight system might be the right tool for doing double duty on both scientific and machine learning applications [scientific likely meaning traditional physics simulations -- me].

Last summer, when we described the architecture of the Sunway TaihuLight supercomputer, we noticed a few interesting things. Its architecture turns out to be well suited (as large supercomputers go, anyway) to graph and irregular algorithms, something that makes it prime for the next generation of neural network and other non-traditional HPC applications. This observation was confirmed by the system's performance on the Graph 500 benchmark, which measures the performance and efficiency of graph traversals, something important for data-intensive HPC workloads.

While the results of the benchmark were published last year, researchers and system engineers have just released a detailed paper describing the approach used to achieve those results. In doing so, they also provided more insight into the architecture than we have seen to date, along with a rationale for why the machine performs well on workloads with irregular accesses and other features well aligned with machine learning.


The Sunway TaihuLight machine has a peak performance of 125.4 petaflops across 10,649,600 cores, and it sports 1.31 petabytes of main memory. To put the peak performance figure in context, recall that until TaihuLight's debut the top supercomputer (by far) had been Tianhe-2, with a 33.86-petaflop peak capability. One key difference, beyond the clear peak potential, is that TaihuLight came out of the gate with demonstrated high performance on real-world applications, some of which are able to utilize over 8 million of the machine's 10 million-plus cores.

As a refresher, the processors in the Sunway TaihuLight system have a highly heterogeneous manycore architecture and memory hierarchy. Every processor in TaihuLight contains four general purpose cores, each of which has 64 on-chip accelerator cores. Each accelerator core has 64 KB of on-chip scratchpad memory. The four general purpose cores and the 256 accelerator cores can both access 32 GB of shared off-chip main memory. The system designers say that, for chip design and manufacturing reasons, the machine forgoes cache coherence, opting instead to devote more chip area to computing. This creates challenges for implementing BFS across over ten million cores with accelerators in the mix, but it bodes well for future machine learning workloads on TaihuLight.
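The per-processor core counts above can be cross-checked against the system's total with simple arithmetic (the node count is taken from public Top 500 descriptions of the SW26010-based system):

```python
# Cross-checking TaihuLight's total core count from its per-chip layout.
nodes = 40_960                 # SW26010 processors in the system (public Top 500 figure)
mpe_per_node = 4               # general purpose ("management") cores per processor
cpe_per_mpe = 64               # accelerator cores attached to each general purpose core

cores_per_node = mpe_per_node * (1 + cpe_per_mpe)   # 4 * (1 + 64) = 260
total_cores = nodes * cores_per_node
print(total_cores)  # 10649600
```

This reproduces the 10,649,600-core total behind the "10 million-plus cores" figure.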

The researchers who implemented BFS across the system note that this is the best-performing heterogeneous architecture on the Graph 500, something that brings both benefits and challenges. The algorithm itself has many aspects that aren't well suited to traditional supercomputing approaches, including frequent and random data accesses that clog I/O, heavy data dependencies, and irregular data distribution. Even so, the system achieved 23,755.7 giga-traversed edges per second (GTEPS), the top result among accelerated systems on the list.
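For reference, the kernel that Graph 500 times is breadth-first search over a large random graph. A minimal serial version (nothing like the ten-million-core implementation, of course, but it shows the algorithm being benchmarked and why its access pattern is random and data-dependent):

```python
from collections import deque

def bfs(adj, source):
    """Level-by-level BFS over an adjacency-list graph; returns a parent map."""
    parent = {source: source}
    frontier = deque([source])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:          # neighbor lookups are the irregular accesses
            if v not in parent:   # first visit claims the vertex
                parent[v] = u
                frontier.append(v)
    return parent

# Tiny example graph (illustrative only):
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
parents = bfs(adj, 0)
print(sorted(parents))  # all five vertices reached: [0, 1, 2, 3, 4]
```

The benchmark's GTEPS metric is simply edges traversed divided by runtime, so 23,755.7 GTEPS means the machine traverses on the order of 2.4 x 10^13 edges every second.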

While the Graph 500 benchmark is not necessarily a measure of a system's capability to do large-scale machine learning, it is an indicator. The current top spot on the Graph 500 list belongs to the K Computer at RIKEN in Japan, which will be getting its own upgrade at a time when the system's leaders are talking a great deal about machine learning.

In the supercomputing world, the dominant benchmark is still the Top 500, which ranks the top machines based on double-precision floating point performance. A companion benchmark, which is getting more attention for its focus on how actual applications behave on these machines, is HPCG. The TaihuLight system performed well on HPCG in raw performance terms, but with lower efficiency relative to the other top-tier machines. In short, what the machine might offer in the way of exceptional machine learning performance on a unique supercomputing architecture, it might lack in terms of efficiency. Of course, this is true of almost all of the machines on all of the benchmarks listed: efficiency and utilization figures pale in comparison to the performance figures.

What might be needed is yet another benchmark for the supercomputing community. Dr. Jack Dongarra (behind the Top 500, HPCG, and an emerging ranking that looks at single precision performance) might weep at the thought of yet another metric, but with deep learning entering the HPC sphere in such a dramatic way, adding a machine learning-centric set of baselines could make sense. Such a metric could be based on a classification problem or an existing neural network benchmark and provide results for both training and inference. With so many GPU-accelerated machines on the Top 500 (and more coming that feature the latest Pascal and future Volta GPUs), it would be interesting to gauge their performance at scale and, more importantly, get a sense of how well these models can actually scale across some of the world's largest machines. Multi-GPU scaling is one problem the HPC world has figured out, but what about scaling across custom accelerators, as shown by the Chinese engineers for the TaihuLight system, or across ARM, SPARC, and other architectures?

The point is, Graph 500 has worked well for measuring data-intensive computing performance on top supercomputers to date. It arose during the wave of interest in "big data" a few years ago. That interest has now given way to machine learning as the next level of analytics, which should likewise have a benchmark that applies to the largest systems so their performance can be compared and assessed.
 

A.Man

Major


For decades, America lost factories and jobs to China but retained a coveted title: the world's leader in inventing and commercializing new products.

Now, even that status has been eroded, and it's hurting the economy.

While the United States is still at the top in total investment in research and development, spending $500 billion in 2015, a new study by Boston Consulting Group (BCG) has made a startling finding: A couple of years ago, China quietly surpassed the U.S. in spending on the later stage of R&D that turns discoveries into commercial products. And at its current rate of spending, China will invest up to twice as much as the U.S., or $658 billion, by 2018 on this critical late-stage research.

In other words, the U.S. is doing the hard work of inventing new technologies, and China, among other countries, is reaping the benefits by taking those ideas and turning them into commercial products, the report says.

“Other countries are free-riding on the U.S. investment," says Justin Rose, who co-authored the BCG study.


To be continued
 

A.Man

Major
The slippage is a significant blow for the U.S. economy, costing the country tens of billions of dollars a year in manufacturing output and hundreds of thousands of factory jobs over the past decade or so, BCG says. Companies that lead in commercializing ideas also typically build factories near their research centers so scientists can test products before making them.

The burgeoning commercial drone market is a prime example of the shift. The U.S. military developed drone technology throughout the 20th Century for reconnaissance and other purposes, adding microchips for better wireless control and longer-lasting batteries. But China’s Da-Jiang Innovations has refined the unmanned vehicles to better avoid obstacles and has become the world’s largest builder of commercial drones. It sells them to U.S. real estate and construction firms for applications such as aerial photography and mapping. DJI has three factories in Shenzhen.

The U.S. has also given birth to a Smithsonian-worthy collection of breakthrough technologies — including flat-panel displays, digital mobile handsets, notebook computers and solar panels — only to fumble away their development to other countries, particularly China and Japan.



The BCG study concludes the U.S. has the potential to reverse the trend through better collaboration among private industry, universities and research consortia. Such a shift would increase annual manufacturing output by 5%, or $100 billion, and add 700,000 factory jobs and another 1.9 million in other sectors through ripple effects.

Yet while President Trump is focused on narrowing the nation’s trade deficit, his proposed budget would slash federal funding for R&D, potentially snuffing out a significant source of U.S. manufacturing jobs that could help accomplish that goal. Last year, the U.S. had an $83 billion trade gap in advanced technology products, according to the Census Bureau.

The country is still the global leader in “basic and applied” R&D, which makes early discoveries and further refines them. About a third of the $500 billion the country spends on R&D is funneled to those activities. But while two-thirds of the total goes to later-stage “development” R&D, China invests 84% of its R&D money on advances that yield commercial products. For the past decade, “development” R&D has been growing 20% a year in China, versus 5% in the U.S., the BCG report says. As recently as 2004, the U.S. spent four times as much as China.
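Using only the growth figures quoted in the article, a quick compound-growth check shows why a fourfold lead evaporated in roughly a decade:

```python
import math

# How fast does 20%/yr growth overtake a 4x lead growing at 5%/yr?
# All three numbers come from the BCG figures quoted in the article.
us_growth, cn_growth = 1.05, 1.20
initial_ratio = 4.0   # the U.S. spent four times as much as China in 2004

# Solve initial_ratio = (cn_growth / us_growth) ** years for years.
years = math.log(initial_ratio) / math.log(cn_growth / us_growth)
print(f"gap closes in about {years:.0f} years")  # roughly a decade
```

Starting from 2004, that puts the crossover in the mid-2010s, consistent with the report's claim that China surpassed the U.S. "a couple of years ago."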

In China, many technology companies are state-owned, so they don't have to worry if massive R&D spending yields losses until a product is commercialized, and even the research of private firms is often subsidized by the government, says Robert Atkinson, president of the Information Technology and Innovation Foundation. The Chinese government, he says, also gives the private sector specific timetables for achieving dominance in areas such as solar, printers, robots and drones. And China routinely steals technology and fails to enforce patent laws, Atkinson says.

“They have huge advantages,” he says.


There’s ample opportunity for a U.S. turnaround, BCG says, with 75 of the world’s 200 highest-rated universities located in the U.S. But there’s little cooperation among the schools, which do the lion’s share of basic and applied research, largely through federal grants, and private companies, which do most of the development research.


Among the obstacles BCG identified:

• Schools do a poor job of promoting their latest research and putting it in a digestible form for manufacturers. And researchers are focused on securing tenure, while companies are seeking a return on their investment. When companies do partner with universities, it’s often for a limited, product-specific purpose rather than for developing industry-wide solutions that take longer to bring to fruition but can create many more jobs.

“Companies are being gun-shy and risk-averse and not wanting to make big bets on transformative technology,” Atkinson says. Instead, they're focused on quarterly profits, which typically determine executives' bonuses.

• U.S. manufacturers are reluctant to collaborate with other companies because they don't want to share the fruits of their research with competitors.

• Manufacturers are reluctant to work with suppliers to establish industry-wide standards that can reduce costs and speed implementation of technologies, fearing the suppliers would share the information with competitors.


The study says the government should set up a central repository for federally-funded university research; school research should be geared toward commercializing products; manufacturers should build long-term relationships with universities, such as Procter & Gamble’s link-up with the University of Cincinnati; and public-private research consortia should focus on developing industry-wide solutions. Since 2008, P&G has invested millions of dollars in a university computer simulation center to enhance its consumer household products, their packaging and manufacturing processes.

“If we want to be the leader in product development for things that matter in people’s lives, pushing money into developing important (products) is what we should be focused on,” BCG’s Rose says.
 

delft

Brigadier
To what extent is US R&D directed at military applications, and does it need additional translation to civilian uses? If China is directing its R&D toward more general use, in the way NACA demonstrated and its successor NASA continues to demonstrate, there is little the US can do, because the national budget is already overextended in pork barrelling and the money will not be found.
 