John Martinis, the physicist who helped push quantum computing into mainstream science headlines, is pursuing a fresh approach that could reshape how these machines are built and scaled. After leading key breakthroughs in superconducting qubits and steering the 2019 quantum performance milestone at Google, he is turning to a new design push aimed at higher reliability and scale.
The effort arrives as industry and academic groups race to cut error rates, lengthen coherence times, and link growing numbers of qubits. The promise is clear: more capable quantum processors for chemistry, materials, logistics, and security. The challenge is also clear: noise and errors still limit real-world use.
“John Martinis has already revolutionised quantum computing twice. Now, he is working on another radical rethink of the technology that could deliver machines with unrivalled capabilities.”
How He Changed the Field Before
Martinis spent years at the University of California, Santa Barbara, advancing superconducting circuits from lab curiosities into devices with improving coherence and gate fidelity. That work helped turn superconducting qubits into a favored platform among hardware teams.
He later joined Google, where he led the team behind the 2019 “quantum supremacy” experiment using the Sycamore processor. The processor performed a sampling task faster than a classical supercomputer could manage, at least under the test conditions. While the task had limited practical use, it showed the speed and control possible in carefully tuned quantum devices.
Those milestones set a template: better materials, better control electronics, and smarter calibration could deliver steady year-over-year progress. Still, scaling from dozens of qubits to the millions likely needed for fault-tolerant computing remains out of reach without a shift in design.
The New Rethink: What Could Change
Martinis’s new push focuses on how to reach useful, error-corrected systems with lower overhead. Today’s error correction codes often require thousands of physical qubits for one logical qubit. Reducing that overhead is the industry’s central bottleneck.
Paths under study across labs include higher-coherence qubit types, noise-biased encodings, better couplers, and improved cryogenic control. Any approach that achieves lower native error rates or simplifies wiring can cut the resources needed for logical qubits.
Even incremental gains matter. Two-qubit gate errors hovering around one in a hundred to one in a thousand can make or break scaling plans. Pushing those errors down, while keeping fabrication yields high, is the core engineering fight.
Why It Matters for Industry and Research
Companies building quantum services want predictable roadmaps, not just record-setting demos. A design that trims error-correction overhead could bring down hardware cost, lab space, and power needs. It could also shorten the time to pilot applications in finance, pharma, and energy.
Academic groups would gain a clearer testbed for algorithms that need deeper circuits. If fewer qubits can be marshaled into stable logical units, research can move from toy problems to cases with real parameters and constraints.
Investors watching the sector look for signs that engineering risk is falling. A credible plan to link better qubits, cleaner control, and scalable packaging could unlock steadier funding timelines.
Competing Ideas and Open Questions
Superconducting teams are not alone. Trapped-ion, neutral-atom, and photonic groups each claim paths to scale. In practice, the winner may be a mix of ideas across platforms. The market will judge on reliability, cost, and speed of iteration.
- Can native two-qubit errors fall another tenfold while keeping yields high?
- Will packaging and cryogenic wiring support thousands of connections per chip?
- How quickly can logical qubits beat their physical counterparts in real tasks?
Independent benchmarks will be key. Cross-checks between labs, clearer metrics for logical error rates, and open datasets can separate hype from progress.
What to Watch Next
Expect signs of progress in three places. First, device physics: longer coherence and stable gates across larger chips. Second, control systems: cleaner microwave pulses and calibration that holds over time. Third, error correction: demonstrations where logical error rates drop below physical ones in sustained runs.
If Martinis’s rethink reduces qubit overhead or simplifies chip integration, it could reset expectations for timelines to useful machines. If not, the work may still map dead ends and steer resources to better options.
For now, the message is simple. The physics works, but scale needs a smarter path. A third act from one of the field’s most influential engineers could help supply it.
Kirstie is a technology news reporter at DevX. She covers emerging technologies and startups poised to take off.