The new method is called cycle benchmarking, and researchers use it to assess a quantum computer’s potential for scalability and to compare different quantum platforms against each other.
Joel Wallman is an assistant professor at Waterloo’s Faculty of Mathematics and Institute for Quantum Computing.
“This finding could go a long way toward establishing standards for performance and strengthen the effort to build a large-scale, practical quantum computer,” said Wallman. “A consistent method for characterizing and correcting the errors in quantum systems provides standardization for the way a quantum processor is assessed, allowing progress in different architectures to be fairly compared.”
Cycle benchmarking helps quantum computing users compare competing hardware platforms and increase each platform’s capability to solve the problems they are working on.
The quantum computing race is now playing out around the world. The number of cloud quantum computing platforms and offerings is rising, and major companies like Microsoft, IBM, and Google are constantly developing new technology.
The cycle benchmarking method works by determining the total probability of error for any given quantum computing application when that application is implemented through randomized compiling. Cycle benchmarking provides the first cross-platform means of measuring and comparing the capabilities of quantum processors, and it can be customized to the applications users are working on.
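The intuition behind this style of benchmarking can be illustrated with a toy calculation: under randomized compiling, the noise in each repeated cycle behaves roughly like a depolarizing channel, so the probability of surviving m noisy cycles decays exponentially, and fitting that decay yields an error rate per cycle. The decay model and all numbers below are illustrative assumptions, not the published protocol or real device data:

```python
import math

# Toy model (assumed, not from the paper): survival probability after
# m noisy cycles decays as P(m) = A * f**m, where f is the process
# fidelity per cycle and A absorbs state-preparation/measurement error.
def survival(m, f=0.99, a=0.98):
    """Idealized survival probability after m noisy cycles."""
    return a * f**m

# "Measure" survival at several sequence lengths, then recover f from
# the slope of log P(m) versus m via ordinary least squares.
lengths = [2, 4, 8, 16, 32]
logs = [math.log(survival(m)) for m in lengths]
n = len(lengths)
mean_m = sum(lengths) / n
mean_y = sum(logs) / n
slope = sum((m - mean_m) * (y - mean_y) for m, y in zip(lengths, logs)) \
        / sum((m - mean_m) ** 2 for m in lengths)

f_est = math.exp(slope)       # estimated process fidelity per cycle
error_per_cycle = 1 - f_est   # estimated probability of error per cycle

print(f"estimated fidelity per cycle: {f_est:.4f}")
print(f"estimated error per cycle:    {error_per_cycle:.4f}")
```

On a real device the survival probabilities come from repeated experiments at each sequence length rather than a closed-form model, but the fit-the-exponential-decay step is the same.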
Joseph Emerson is a faculty member at IQC.
“Thanks to Google’s recent achievement of quantum supremacy, we are now at the dawn of what I call the ‘quantum discovery era’,” Emerson said. “This means that error-prone quantum computers will deliver solutions to interesting computational problems, but the quality of their solutions can no longer be verified by high-performance computers.
“We are excited because cycle benchmarking provides a much-needed solution for improving and validating quantum computing solutions in this new era of quantum discovery.”
Emerson and Wallman founded Quantum Benchmark Inc., an IQC spin-off that has licensed the technology to world-leading companies in the quantum computing field, including Google’s Quantum AI effort.
By harnessing quantum mechanics, quantum computers are extremely powerful computing machines, capable of solving certain complex problems more efficiently than traditional digital computers.
Qubits are the basic processing units of a quantum computer, but they are extremely fragile. Any imperfection or source of noise in the system can introduce errors that cause incorrect solutions during a quantum computation.
The first step toward scaling up quantum computing is gaining control over a small-scale quantum computer with one or two qubits. A larger quantum computer could perform more complex tasks such as machine learning or the simulation of complex systems, which could lead to advances like the discovery of new pharmaceutical drugs. The problem is that engineering a larger quantum computer is more challenging, and the possibility of error grows as qubits are added and the quantum system scales.
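Why errors compound as systems scale can be seen with simple arithmetic: if each gate fails independently with some small probability p, the chance that an entire circuit runs error-free shrinks geometrically with the gate count, which grows as qubits are added. The per-gate error rate below is an assumed illustrative figure, not a measured one:

```python
# Illustrative only: assumed per-gate error rate of 0.1%.
# Success probability of an n-gate circuit ~ (1 - p) ** n.
p = 0.001
for gates in (10, 100, 1000, 10000):
    success = (1 - p) ** gates
    print(f"{gates:>6} gates -> {success:.1%} chance of no error")
```

Even at a 0.1% per-gate error rate, a thousand-gate circuit succeeds only about a third of the time, which is why characterizing and suppressing errors is central to scaling.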
Characterizing a quantum system produces a profile of its noise and errors, indicating whether the processor is performing the calculations it is asked to do. All significant errors need to be characterized in order to understand the performance of a quantum computer or to scale it up.
Wallman, Emerson, and a group of researchers at the University of Innsbruck developed a method to assess all error rates affecting a quantum computer. They implemented the new technique on the ion-trap quantum computer at the University of Innsbruck and found that error rates do not rise as the quantum computer scales up.
“Cycle benchmarking is the first method for reliably checking if you are on the right track for scaling up the overall design of your quantum computer,” said Wallman. “These results are significant because they provide a comprehensive way of characterizing errors across all quantum computing platforms.”