Quantum computing researchers at Microsoft and Quantinuum have announced significant progress in reducing error rates using a technique called qubit virtualization. The approach pairs Quantinuum’s high-precision H-2 ion-trap quantum computer with Microsoft’s syndrome extraction method. This breakthrough sets the stage for the development of larger, more reliable quantum computers that can solve problems far beyond the reach of classical machines.
This is the great promise of quantum computing: the possibility of tackling problems that even classical supercomputers cannot solve. Achieving that goal, however, will require significant improvements in the error rates of existing quantum hardware, which is why error correction is such an important area of quantum computing research.
Let’s look at how Quantinuum’s hardware and Microsoft’s qubit virtualization system worked together to achieve this important progress.
From physical qubit to logical qubit
Scientists from both companies created a symbiotic relationship between Quantinuum’s high-fidelity hardware and Microsoft’s qubit virtualization system to create four stable logical qubits from 30 physical qubits. The result was a record-breaking logical error rate 800 times lower than the underlying physical error rate.
A few years ago, scientists at Google estimated that it takes 1,000 physical qubits to create a single logical qubit. As the error correction performance achieved in this new study shows, that number can be much lower.
The fact that logical error rates can be driven so far below the underlying physical error rates suggests that fault-tolerant quantum computers may be closer than previously thought. Microsoft estimates that a quantum machine with 100 reliable logical qubits could solve many scientific problems that cannot currently be solved by classical computers.
Keep in mind that Microsoft and Quantinuum still have work to do to achieve that. Continued efforts should correct some of the limiting factors discovered in this pioneering study and further improve future results.
A shared vision
During the briefing, Matt Zanner, principal program manager at Microsoft, and Dr. Jenni Strabley, senior director of offering management at Quantinuum, reflected on the four-year history of quantum collaboration between their organizations. Both companies are focused on enabling large-scale quantum computing with a shared vision of creating classical-quantum hybrid supercomputers with fault-tolerant computation that can solve world-class problems.
“Microsoft is fully aligned on the path to quantum computing at scale,” Zanner said. “We have a lot of different pillars of work that align with that overall mission, but quantum computing is foremost among them.”
Microsoft plans to integrate quantum computing into its existing Azure Quantum Elements product, which already incorporates HPC and AI, and the quantum workloads will likely run on a Quantinuum machine. For more background on Azure Quantum Elements, see a previous Forbes article describing how Microsoft researchers used HPC and AI to generate 32 million novel candidate materials in the search for more efficient lithium-ion battery materials.
Microsoft and Quantinuum also share interests in chemistry and materials science. Quantinuum offers a cutting-edge quantum chemistry platform known as InQuanto that can perform complex simulations of molecules and materials. The platform complements Microsoft’s Azure Quantum Elements.
According to Dr. Strabley, a key enabler of these advances is the close collaboration between Quantinuum, a full-stack company with expertise across hardware and software, and Microsoft. The two companies’ error correction and logical qubit mapping teams have been working together to exchange ideas and co-create new solutions that advance quantum computing.
Eliminate quantum noise
Building quantum machines that can solve complex problems in areas such as climate modeling, large-scale financial optimization, and advanced physics simulations requires viable solutions for quantum error correction. But error correction is elusive and complex because of a natural limitation called the no-cloning theorem, which makes it impossible to copy quantum information the way classical computers copy bits. The collaboration between Microsoft and Quantinuum could lead to a solution that works around that barrier.
Because quantum information cannot be directly copied, correcting errors in qubits requires an alternative approach based on logical qubits. In this work, quantum information was encoded across 30 entangled physical qubits on Quantinuum’s hardware, which together formed four trusted logical qubits. For a logical qubit to be useful, it must have a lower error rate than the physical qubits used to create it, and Microsoft’s qubit virtualization system combines error correction techniques that make that possible.
Microsoft used a method called active syndrome extraction to diagnose and repair qubit errors without collapsing the quantum state. Depending on which QEC code is used, syndrome measurements can determine whether an error has occurred, as well as the location and type of error. Microsoft’s approach addresses noise at the logical qubit level, significantly increasing overall reliability. The result is similar to the signal improvement provided by noise-cancelling headphones: noisy physical qubits are converted into reliable logical qubits.
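To make the idea concrete, here is a minimal sketch in plain Python/NumPy of the simplest textbook case: a three-qubit bit-flip code. It is an illustration of the general principle only, not Microsoft’s actual system or the codes used in the study. A single qubit’s state is spread across three entangled qubits (entanglement, not copying, so no-cloning is respected), a deliberate bit-flip error is injected, and two parity checks, the syndrome, reveal which qubit flipped without ever measuring the encoded data itself.

```python
import numpy as np

# Single-qubit state alpha|0> + beta|1> (an arbitrary, unknown state).
alpha, beta = 0.6, 0.8

# Encode into the 3-qubit bit-flip code: alpha|000> + beta|111>.
encoded = np.zeros(8, dtype=complex)
encoded[0b000] = alpha
encoded[0b111] = beta

def apply_x(state, qubit):
    """Flip `qubit` (0 = leftmost) in an n-qubit statevector."""
    n = int(np.log2(len(state)))
    out = np.zeros_like(state)
    for idx, amp in enumerate(state):
        out[idx ^ (1 << (n - 1 - qubit))] = amp
    return out

def z_parity(state, q1, q2):
    """Expectation value of Z_q1 Z_q2 -- a stabilizer ('syndrome') check.
    It reveals whether the two qubits agree, but nothing about alpha or beta."""
    n = int(np.log2(len(state)))
    val = 0.0
    for idx, amp in enumerate(state):
        b1 = (idx >> (n - 1 - q1)) & 1
        b2 = (idx >> (n - 1 - q2)) & 1
        val += ((-1) ** (b1 ^ b2)) * abs(amp) ** 2
    return round(val)

# Inject a bit-flip error on qubit 1.
noisy = apply_x(encoded, 1)

# Extract the syndrome: parities of (qubit 0, qubit 1) and (qubit 1, qubit 2).
syndrome = (z_parity(noisy, 0, 1), z_parity(noisy, 1, 2))
correction = {(+1, +1): None, (-1, +1): 0, (-1, -1): 1, (+1, -1): 2}[syndrome]
print("syndrome:", syndrome, "-> flip detected on qubit", correction)

# Apply the correction and verify the encoded state is fully restored.
recovered = apply_x(noisy, correction) if correction is not None else noisy
print("state recovered:", np.allclose(recovered, encoded))
```

The Steane and Carbon codes used in the experiment are larger and also protect against phase errors, but the workflow is the same: measure syndromes, infer the error, correct it, and never touch the logical information directly.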
The success of this experiment also depended on the availability of a high-performance quantum computer. Quantinuum’s H-2 employs a state-of-the-art trapped-ion, shuttling-based processor with best-in-class 99.8% two-qubit gate fidelity, 32 fully connected qubits, and a unique quantum charge-coupled device (QCCD) architecture.
Note that Quantinuum also has extensive experience with logical qubits. The company published the first research paper demonstrating a fault-tolerant, end-to-end circuit with entangled logical qubits using real-time error correction. That was the first time two error-corrected logical qubits performed a circuit with higher fidelity than the physical qubits they comprised. You can read my article about it here.
Prior to the release of the H-2, I was invited to attend a Quantinuum information session in Broomfield, Colorado. We have also written a detailed white paper on its features and capabilities; you can read it here. In short, the H-2’s benchmark results are quite impressive.
Real-time error correction and post-processing error correction
This study not only provided valuable quantum error correction data, it also yielded interesting results because it applied two error correction codes in two different ways, allowing a comparison between the methods. Specifically, the Steane code was used for real-time error correction, while the Carbon code was used with post-selection.
The Steane code uses seven physical qubits to encode one logical qubit. The researchers used this code to implement active, real-time error correction, which required two additional qubits to detect and correct errors that occurred during the computation.
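For readers who want to see the arithmetic behind this, the Steane code builds its stabilizers from the classical [7,4] Hamming code. The hedged sketch below (plain NumPy, purely illustrative and not the study’s actual decoder) shows how a three-bit syndrome uniquely pinpoints which of the seven qubits suffered a single error.

```python
import numpy as np

# Parity-check matrix of the classical [7,4] Hamming code; the Steane code
# reuses this structure for both its X-type and Z-type stabilizers.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def syndrome(error):
    """Return the 3-bit syndrome of a length-7 error pattern (mod 2)."""
    return H @ error % 2

# A single flip on any of the 7 (qu)bits produces a unique, nonzero syndrome
# whose binary value equals the position of the flipped bit (1-indexed).
for position in range(7):
    e = np.zeros(7, dtype=int)
    e[position] = 1
    s = syndrome(e)
    decoded = int(''.join(map(str, s)), 2)
    print(f"flip on qubit {position + 1}: syndrome = {s} -> decodes to {decoded}")
```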
Carbon code circuits, on the other hand, are more efficient and have a longer code distance, allowing for post-selection when needed. The longer a code’s distance, the more resistant it is to errors. The circuit’s efficiency and error correction capabilities also minimize the number of runs that have to be discarded during post-selection.
Carbon codes have much higher thresholds and can tolerate higher error rates than Steane codes. To maintain the integrity of the quantum information, the Carbon code is structured so that when an error occurs, it produces a specific state, or syndrome, that can be identified and handled through post-selection.
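As a rule of thumb from standard coding theory (not figures taken from the study), a distance-d code can detect up to d − 1 errors but reliably correct only ⌊(d − 1)/2⌋ of them, which is why a longer-distance code gives post-selection more room to work with:

```python
# Standard relation between code distance and error tolerance:
# a distance-d code detects up to d-1 errors and corrects up to floor((d-1)/2).
for d in range(2, 6):
    detect = d - 1
    correct = (d - 1) // 2
    print(f"distance {d}: detects up to {detect} error(s), corrects up to {correct}")
```

The Steane code has distance 3, so it can correct any single error; a longer-distance code can additionally flag multi-error events that a distance-3 code would misinterpret.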
Insights gained from running two error correction methods
Although both codes kept logical error rates significantly lower than physical error rates, the Carbon code showed the larger gain, achieving up to an 800x reduction compared to the Steane code’s (still impressive) 500x reduction. The performance difference between the two codes is likely due to the Carbon code’s better error correction ability: its syndrome extraction is much more efficient, so it introduces fewer errors, and its longer code distance allows more errors to be tolerated.
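As a rough back-of-the-envelope illustration only, using the H-2’s published 99.8% two-qubit gate fidelity as a stand-in for the physical error rate rather than the paper’s exact baseline, the reported reduction factors translate into logical error rates roughly like this:

```python
# Illustrative only: take ~0.2% (the error implied by 99.8% two-qubit gate
# fidelity) as a rough physical error rate, then apply the reduction factors.
physical_error = 1 - 0.998

for label, reduction in [("Steane (real-time correction)", 500),
                         ("Carbon (with post-selection)", 800)]:
    logical_error = physical_error / reduction
    print(f"{label}: ~{logical_error:.1e} logical error rate "
          f"(about one error in {round(1 / logical_error):,} operations)")
```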
One reason for using post-selection was to demonstrate that some errors can be detected but cannot be reliably corrected. If such an error is detected during a run, the run can be discarded with confidence that it contains an error.
In some situations, post-selection can be more robust to noise. For example, if a false positive occurs in error correction mode, an unnecessary corrective operation may be applied, introducing noise. In error detection mode, that data is simply discarded without further action.
In the experiments, error correction was successfully applied in the majority of runs. For a small fraction of runs, the researchers diagnosed errors that the code could not fix, so those runs were discarded. The majority of errors in this study were corrected before data became corrupted, and only a small number were uncorrectable.
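A hedged sketch of the bookkeeping involved, using made-up fractions rather than the study’s data: each run carries a flag saying whether its syndrome history indicated no error, a correctable error, or an error that could only be detected, and post-selection simply drops the runs in the last category.

```python
import random

random.seed(7)

# Illustrative run outcomes, not real experimental data. Most runs are clean
# or carry a correctable error; a small fraction flag a detected-but-
# uncorrectable error and are discarded by post-selection.
OUTCOMES = ["no_error", "corrected", "detected_uncorrectable"]
WEIGHTS = [0.90, 0.08, 0.02]

runs = random.choices(OUTCOMES, weights=WEIGHTS, k=14_000)

kept = [r for r in runs if r != "detected_uncorrectable"]
discarded = len(runs) - len(kept)

print(f"total runs:     {len(runs):,}")
print(f"kept runs:      {len(kept):,}")
print(f"discarded runs: {discarded:,} ({discarded / len(runs):.1%})")
```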
According to the research team, there is no technical reason why real-time decoding could not have been used in all of the experiments. Running both methods simply gave the scientists a way to compare the impact of each.
Next steps for Quantinuum and Microsoft
In addition to their joint efforts, Microsoft and Quantinuum each have internal roadmaps that will drive future development. Further in the future, Quantinuum is considering the possibility of creating a quantum machine with 1,000 logical qubits. At the current ratio of 30 physical qubits to four logical qubits, roughly 7.5 physical qubits per logical qubit, that would require about 7,500 physical qubits.
In 2025, Quantinuum plans to introduce a new H-series quantum computer called Helios. Dr. Strabley explained that Helios will be a cloud-based system offered both as a service and on-premises. Based on the recent announcements with Microsoft, she expects Helios to have more than 10 logical qubits, a rapid scaling-up of the system’s capabilities compared to previous generations.
Meanwhile, once Microsoft integrates reliable logical qubits into Azure Quantum Elements, the product will combine high-performance cloud computing, advanced AI models, and improved quantum computing capabilities. Microsoft plans to use logical qubits to scale its hybrid supercomputer to a level where it makes no more than one error per 100 million operations.
The two companies also share an interest in topological research. Quantinuum’s work focuses on the use of non-Abelian states in quantum information processing and how non-Abelian braiding can be used to create universal gates. Meanwhile, Microsoft’s research is focused on developing topological qubits that offer built-in error protection and digital control. So far, Microsoft’s research team has made significant progress toward topological qubits.
Summary
The combination of Microsoft’s qubit virtualization system and Quantinuum’s trapped-ion quantum computer with its QCCD architecture has achieved what was impossible just a year ago: 14,000 experiments ran flawlessly without a single error. This is not just an incremental advance but a major step forward in quantum error correction.
The success of this research will not benefit just these two companies. It has implications for the entire quantum ecosystem and provides evidence that reliable logical qubits are likely to play a key role in solving future problems. The research points to a future in which thousands or hundreds of thousands of trusted logical qubits help solve complex scientific puzzles, from chemistry and materials science to drug discovery, clean energy research, financial modeling, logistics optimization, and climate prediction.