

Quantum-computing companies have been competing for years to squeeze the most qubits onto a chip. But fabrication and connectivity challenges mean there are limits to this strategy. The focus is now shifting to linking multiple quantum processors together to build computers large enough to tackle real-world problems.
In January, the Canadian quantum-computing company Xanadu unveiled what it says is the first modular quantum computer. Xanadu’s approach uses photons as qubits—just one of many ways to create the quantum-computing equivalent of a classical bit. In a paper published that same month in Nature, researchers at the company outlined how they connected 35 photonic chips and 13 kilometers of optical fiber across four server racks to create a 12-qubit quantum computer called Aurora. Although there are quantum computers with many more qubits today, Xanadu says the design demonstrates all the key components for a modular architecture that could be scaled up to millions of qubits.
Xanadu isn’t the only company focused on modularity these days. Both IBM and IonQ have started work on linking their quantum processors, with IBM hoping to demonstrate a modular setup later this year. And several startups are carving out a niche building the supporting technologies required for this transition.
Most companies have long acknowledged that modularity is key to scaling, says Xanadu CEO Christian Weedbrook, but so far they have prioritized developing the core qubit technology, which was widely seen as the bigger technical challenge. Now that chips with practical use are in sight and the largest processors feature more than 1,000 qubits, he believes the focus is shifting.
“To get to a million qubits, which is when you can start truly solving customer problems, you’re not going to be able to have them all on a single chip,” Weedbrook says. “The only way to really scale up is through this modular networking approach.”
Xanadu has taken an unorthodox approach by focusing on the scalability problem first. One of the biggest advantages of relying on photonics for quantum computing—as opposed to the superconducting qubits used by IBM and Google—is that the machines are compatible with conventional networking technology, which simplifies connectivity.
However, Aurora isn’t reliable enough for useful computations due to high optical loss; photons are absorbed or scattered as they pass through optical components, introducing errors. Xanadu aims to minimize these losses over the next two years by developing better components and optimizing its architecture. The company plans to start building a quantum data center in 2029.
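To get a feel for why loss dominates the engineering agenda, consider a rough model. The numbers below are illustrative assumptions, not Xanadu’s specifications: if each optical component transmits a photon with probability 1 − loss, survival falls off exponentially with the number of components in the photon’s path.

```python
# Toy model of compounding optical loss (illustrative numbers only,
# not Xanadu's specs). Each component transmits a photon with
# probability (1 - loss), so losses multiply along the path.

def survival_probability(per_component_loss: float, n_components: int) -> float:
    """Probability that a photon survives a chain of lossy components."""
    return (1.0 - per_component_loss) ** n_components

for loss in (0.01, 0.05, 0.10):  # 1%, 5%, and 10% loss per component
    p = survival_probability(loss, n_components=50)
    print(f"{loss:.0%} loss per component, 50 components -> {p:.1%} survival")
```

Even small per-component improvements compound across dozens of components, which is why shaving loss at each element matters so much.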
IBM also expects to hit a major modular quantum-computing milestone this year. The company has designed a 462-qubit processor called Flamingo with a built-in quantum communication link. Later this year, IBM plans to connect three of them to create the largest quantum computer—modular or not—to date.
The Road Map to Modular Quantum Computing
Modularity has always been central to IBM’s quantum road map, says Oliver Dial, the chief technology officer of IBM Quantum. While the company has often led the field in packing more qubits into processors, there are limits to chip size. As chips grow larger, wiring up the control electronics becomes increasingly challenging, says Dial. Building computers with smaller, testable, and replaceable components simplifies manufacturing and maintenance.
However, IBM uses superconducting qubits, which operate at high speeds and are relatively easy to fabricate but are less network-friendly than other quantum technologies. Because these qubits operate at microwave frequencies, they can’t easily interface with optical communications, so IBM has had to develop specialized couplers to connect both adjacent chips and more distant ones.
IBM is also researching quantum transduction, which converts microwave photons into optical frequencies that can be transmitted over fiber optics. But the fidelity of current demonstrations is far from what is required, says Dial, so transduction isn’t on IBM’s official road map yet.
IBM plans to connect three of its 462-qubit Quantum Flamingo processors this year to make what the company claims will be the largest quantum computer yet. Credit: IBM
Trapped-ion and neutral-atom-based qubits interact directly with photons, making optical networking more feasible. Last October, IonQ demonstrated the ability to entangle trapped ions on different processors. Photons entangled with ions on each chip travel through fiber-optic cables and meet at a device called a Bell-state analyzer, where they interfere and their joint state is measured. That measurement projects the ions the photons were originally entangled with into a shared entangled state, via a process called entanglement swapping.
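For intuition, here is a minimal numpy sketch of textbook entanglement swapping. It models the mathematics of the protocol, not IonQ’s hardware: two ion-photon Bell pairs are prepared, and a Bell-state measurement on the two photons leaves the two ions entangled even though they never interacted.

```python
import numpy as np

# Two ion-photon Bell pairs; a Bell-state measurement on the photons
# swaps the entanglement onto the two ions.

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # |Phi+> = (|00> + |11>)/sqrt(2)

# Full 4-qubit state, tensor axes ordered (ion1, photon1, ion2, photon2).
psi = np.kron(bell, bell).reshape(2, 2, 2, 2)

# Bell-state analyzer: project the photons (axes 1 and 3) onto |Phi+>.
bell_matrix = bell.reshape(2, 2)
ion_state = np.einsum("ab,iajb->ij", bell_matrix.conj(), psi)

prob = np.sum(np.abs(ion_state) ** 2)            # probability of this outcome: 1/4
ion_state /= np.sqrt(prob)                       # renormalize the post-measurement state

print("outcome probability:", prob)              # 0.25
print("ion-ion state:", ion_state.reshape(4))    # (|00> + |11>)/sqrt(2): the ions are now entangled
```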
Scaling this up to link large numbers of quantum processors will require a lot of work, says John Gamble, senior director of system architecture and performance at IonQ. Bell-state analyzers, currently implemented using free-space optical components, will need to be miniaturized and fabricated using integrated photonics. Additionally, optical fiber is noisy, meaning the quality of the entanglement created through those channels is relatively low. To address this, IonQ plans to generate many weakly entangled pairs of qubits and carry out operations to distill those into a smaller number of higher-quality entanglements. But achieving a high enough rate of quality entanglements will remain a challenge.
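IonQ hasn’t specified which distillation protocol it will use. As one concrete illustration, the textbook BBPSSW recurrence for so-called Werner states captures the trade-off: each round consumes two noisy pairs and succeeds only with some probability, but the surviving pair has higher fidelity.

```python
# Hedged sketch: the BBPSSW recurrence for Werner states, an assumed
# stand-in for whatever protocol IonQ ultimately adopts.

def bbpssw_round(f: float) -> tuple[float, float]:
    """One distillation round on two Werner-state pairs of fidelity f.
    Returns (output fidelity, success probability)."""
    g = (1 - f) / 3
    p_success = f**2 + 2 * f * g + 5 * g**2
    f_out = (f**2 + g**2) / p_success
    return f_out, p_success

f = 0.75  # illustrative starting fidelity of raw photonic entanglement
for round_num in range(1, 4):
    f, p = bbpssw_round(f)
    print(f"round {round_num}: fidelity -> {f:.4f} (success prob {p:.2f})")
```

Because each round discards pairs on failure, the raw entanglement rate has to comfortably exceed the rate of distilled pairs an algorithm consumes, which is exactly the challenge Gamble describes.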
The French startup Welinq is addressing this issue by incorporating a quantum memory into its interconnect. CEO Tom Darras says one reason why entanglement over photonic interconnects is so inefficient is that the two photons required are often emitted at different times, so they “miss” one another and fail to entangle. Adding a memory creates a buffer that helps synchronize the photons.
“When you need them to meet, they actually meet,” says Darras. “These technologies enable us to create entanglement fast enough so that it will be useful for distributed computation.”
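A back-of-the-envelope model illustrates the gain; the probability and memory lifetime below are invented for illustration, not Welinq’s figures. If each source delivers a photon in a given clock cycle with probability p, then without a memory both photons must arrive in the same cycle, while a memory gives the first arrival m cycles to wait for its partner.

```python
# Toy synchronization model; p and m are assumed values for illustration only.
p = 0.01   # chance each source delivers a photon in a given clock cycle
m = 200    # number of cycles a quantum memory can hold a photon

# Without memory: both photons must land in the same cycle (probability p**2).
cycles_no_memory = 1 / p**2                   # expected cycles per coincidence
# With memory: once one photon is stored, the partner has m cycles to arrive.
match_prob = 1 - (1 - p) ** m                 # chance of a match per stored photon

print(f"no memory: ~{cycles_no_memory:,.0f} cycles per coincidence")
print(f"with memory: {match_prob:.0%} chance of a match within {m} cycles")
```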
Functional Modular Quantum Computers Need More Steps
Once multiple processors are linked, the challenge shifts to running quantum algorithms across them. That’s why Welinq has also developed a quantum compiler, called araQne, that determines how to partition an algorithm across multiple processors while minimizing communication overhead.
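Welinq hasn’t published araQne’s internals, but the underlying problem can be framed as graph partitioning: qubits are nodes, two-qubit gates are edges, and any gate that spans two processors requires a costly inter-processor entangled pair. Here is a brute-force sketch for a toy circuit, with an invented gate list:

```python
from itertools import combinations

# Illustrative only: split qubits across two processors so that as few
# two-qubit gates as possible cross the boundary.

gates = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)]  # toy circuit
qubits = sorted({q for gate in gates for q in gate})

def cut_cost(group_a: set) -> int:
    """Number of two-qubit gates spanning the two processors."""
    return sum((g[0] in group_a) != (g[1] in group_a) for g in gates)

# Try every balanced split of the six qubits across two 3-qubit processors.
best = min((set(c) for c in combinations(qubits, len(qubits) // 2)), key=cut_cost)
print(f"processor A: {sorted(best)}, remote gates: {cut_cost(best)}")
```

Real compilers use heuristics rather than brute force, since the number of possible splits explodes with qubit count, but the objective, minimizing communication overhead, is the same.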
Researchers from Oxford University made a recent breakthrough on this front, with the first convincing demonstration of a quantum algorithm running across two interconnected processors. The researchers performed logical operations between two trapped-ion qubits on different devices. The qubits had been entangled using a photonic connection, and the processors executed a very basic version of Grover’s search algorithm.
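The distribution across machines is the hard part; the algorithm itself is tiny. This single-machine numpy sketch of two-qubit Grover search, ignoring the distribution entirely, shows why one oracle call suffices at this size:

```python
import numpy as np

# Two-qubit Grover search on one machine: with four possible states,
# a single iteration lands on the marked item with certainty.

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)

marked = 0b11                                    # item the oracle "recognizes"
oracle = np.eye(4)
oracle[marked, marked] = -1                      # phase-flip the marked state

reflect0 = 2 * np.outer(np.eye(4)[0], np.eye(4)[0]) - np.eye(4)
diffusion = HH @ reflect0 @ HH                   # inversion about the mean

state = HH @ np.eye(4)[0]                        # uniform superposition from |00>
state = diffusion @ (oracle @ state)             # one Grover iteration

print(np.round(np.abs(state) ** 2, 3))           # [0. 0. 0. 1.] -> finds |11>
```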
The final piece of the puzzle will be figuring out how to adapt error-correction schemes for this new modular future. The startup Nu Quantum recently demonstrated that distributed quantum error correction is not only feasible but efficient.
“This is a really big result because, for the first time, distributed quantum computing and modularity is a real option,” says Nu Quantum’s CEO, Carmen Palacios-Berraquero. “Before, we didn’t know how we would do it in a fault-tolerant way, if it was efficient, or if it was viable.”
This article appears in the March 2025 print issue.