AI Predicts, Quantum Computes

Arvind Krishna said something recently in a Semafor interview that elegantly summarizes the differences between AI-based and quantum computing-based approaches to some of the hardest problems humanity needs to solve:

“AI is great at harnessing the past and then trying to predict a bit of the future. Quantum computes the future. So that unlocks problems you can’t even imagine doing with AI.”

It sounds like executive keynote material until you start mulling over the details, and the more precisely you dig into what each technology is doing computationally, the better the quote holds up. That is rarer than it sounds for statements that compress this much.

As discussed in my intro post, this newsletter is organized around what it takes to build reliable infrastructure for non-deterministic systems, and AI and quantum are the two waves accelerating our computing towards probabilistic engines. They get bundled together, but they are fundamentally different and achieve clearly different outcomes. Let's dive into the why.


There is a boundary condition in the AI conversation that very rarely gets discussed, partly because pointing it out in the current (rightfully so) euphoric moment feels somewhat heretical. AI is operating inside the limits of recorded human knowledge — every protein structure, every legal precedent, every line of code, every diagnosed disease in the training corpus came from something a human observed, measured, and wrote down. That corpus, however vast, is a compression of our civilization’s observational record, and the model generalizes from what this record contains.

This is an architectural boundary condition. Within the space defined above, a well-trained model generalizes brilliantly: finding structure humans missed, connecting patterns across domains and modalities, compressing years of work and research into hours. AlphaFold is the best example we can point to today — predicting protein structures that had never been experimentally resolved, and doing in an afternoon what crystallography used to take months to deliver. It is a massive landmark in computational biology, moving us from slow wet labs to faster in-silico dry labs.

But let's dive into what AlphaFold actually does. It trained on roughly 170,000 proteins in the Protein Data Bank at the time, each one painstakingly determined by X-ray crystallography, cryo-EM, or NMR across decades of human-recorded experimental work. The grammar of folding is inherent in that data because evolution has been running the sampling experiment for ~4 billion years, but evolution has also sampled very narrowly. What AlphaFold produces is amazing generalization inside a densely sampled space governed by physical constraints we have already observed and recorded.

Ask it about a protein built from non-standard amino acids with no evolutionary analog, or a folding pathway in a solvent that has never been characterized, or biochemistry that has no entry in any database anywhere — and the architecture itself has no answer, not because the model is insufficiently large but because the observation never happened and was never recorded. You cannot interpolate across what was never sampled. Researchers are actively building proteins using non-standard amino acids, beyond the canonical twenty, to create drugs that evade immune detection, resist enzymatic degradation, or bind targets no natural protein can reach. This is an entire chemistry that evolution never explored and AlphaFold has no training signal for.
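To make the sampling argument concrete, here is a deliberately tiny curve-fitting sketch in Python. It is my own toy illustration, nothing to do with AlphaFold's actual architecture: a flexible model fit only on an observed window looks excellent inside that window and has no information about what happens beyond it.

```python
# Toy illustration of the sampling-coverage point above: a model fit only on the
# range that was observed looks excellent inside that range and is badly wrong
# outside it. Generic numpy curve fitting, not anything AlphaFold-specific.
import numpy as np

# "Recorded observations": samples of an underlying law, but only over [0, 2*pi].
x_train = np.linspace(0.0, 2.0 * np.pi, 50)
y_train = np.sin(x_train)

# Fit a flexible model (a degree-5 polynomial) to the observed window only.
coeffs = np.polyfit(x_train, y_train, deg=5)

for x in (2.0, 5.0, 9.0, 12.0):  # first two inside the window, last two outside
    pred, truth = np.polyval(coeffs, x), np.sin(x)
    print(f"x={x:5.1f}  predicted={pred:10.3f}  actual={truth:7.3f}")
```

Inside the sampled window the fit tracks the truth closely; outside it, the predictions diverge, because the training data never contained the observation that the function stays bounded and repeats.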

AlphaFold is the best map ever drawn of known territory in the protein space, and that is extraordinary. But it is also critically different from the ability to explore territory that has never been visited.


Quantum simulations do not consult a corpus of data. When a molecular problem is encoded into a quantum system, the computation does not search a solution space statistically. It evolves the system according to quantum mechanical law, representing electron orbitals, van der Waals interactions, and hydrogen bonding as the actual operators governing reality at that scale, rather than interpolating or extrapolating from data. The answer that emerges is the result of the computation that nature itself would have run. As Richard Feynman put it when he first proposed quantum computing in 1981: if you want to simulate nature, you had better make the simulation quantum mechanical.
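To ground what "evolving the system according to quantum mechanical law" means, here is a minimal classical emulation in Python of unitary time evolution under a toy two-qubit Hamiltonian. It is a sketch of the mathematical idea only: the Hamiltonian below is arbitrary, and real quantum hardware is of course not programmed with numpy. The point is that the output is dictated by the operator H and the rules of evolution, not by any training corpus.

```python
# Minimal sketch: evolving a quantum state under a Hamiltonian with numpy/scipy.
# A classical toy illustration of unitary evolution exp(-iHt), not a real
# quantum-hardware program; the Hamiltonian is an arbitrary two-qubit example.
import numpy as np
from scipy.linalg import expm

# Pauli matrices, the basic operators quantum simulations are built from.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A toy two-qubit Hamiltonian: two local fields plus an interaction term.
H = 0.5 * np.kron(Z, I) + 0.3 * np.kron(I, Z) + 0.2 * np.kron(X, X)

# Start in |00> and evolve under U = exp(-iHt): the result is dictated by H
# and the Schrodinger equation, not by any recorded data.
psi0 = np.zeros(4, dtype=complex)
psi0[0] = 1.0
t = 1.0
psi_t = expm(-1j * H * t) @ psi0

# Measurement probabilities in the computational basis after evolution.
print(np.round(np.abs(psi_t) ** 2, 4))
```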

This is why quantum reaches where AI structurally cannot. 

A quantum simulation can characterize a molecule that has never existed, find a reaction pathway that has never been run, and evaluate a material whose properties nobody has measured, because it operates directly on physical law rather than generalizing from the human record of experiments performed.

Arvind’s quote becomes accurate at this level of detail: AI harnesses the past because the past is what built it, while quantum computes forward because it is running physics rather than statistics.


The questions I come across usually run along the lines of: can we use quantum to train models, or aren't these solving the same problems for humanity? Both are directionally reasonable, and they are the very reason for this article, because the distinction matters even more than it seems. The sensible architecture is hybrid, and the supercomputer of the future will have CPUs, GPUs (or TPUs), and QPUs: AI handles heuristics, prunes search spaces, and navigates known territory, while quantum handles the simulation kernels where the problem is irreducibly quantum mechanical. They are not substitutes for each other. And pretending otherwise because the politics of the current AI moment makes the distinction uncomfortable serves nobody.
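For flavor, here is a hand-wavy sketch of that hybrid division of labor. Every function name and scoring rule below is an illustrative placeholder, not a real pipeline or library API: a cheap AI/heuristic pass prunes a large candidate space on classical hardware, and the expensive, physics-faithful kernel (emulated here by a toy diagonalization) is reserved for the shortlist that would actually go to a QPU.

```python
# Sketch of the hybrid CPU/GPU + QPU pattern described above. All names and the
# scoring/energy functions are placeholders, not a real workflow or API.
import numpy as np

rng = np.random.default_rng(0)

def ai_heuristic_score(candidate: np.ndarray) -> float:
    """Stand-in for a trained model ranking candidates from known data."""
    return float(candidate.sum())  # placeholder heuristic

def quantum_kernel_energy(candidate: np.ndarray) -> float:
    """Stand-in for a QPU job: lowest eigenvalue of a toy Hamiltonian
    parameterized by the candidate (emulated here by diagonalization)."""
    H = np.diag(candidate) + 0.1 * np.ones((len(candidate), len(candidate)))
    return float(np.linalg.eigvalsh(H)[0])

# 1) Cheap AI/heuristic pass over a large space (runs on CPUs/GPUs).
candidates = [rng.normal(size=4) for _ in range(10_000)]
shortlist = sorted(candidates, key=ai_heuristic_score)[:10]

# 2) Expensive, physics-faithful evaluation of the shortlist (would run on QPUs).
best = min(shortlist, key=quantum_kernel_energy)
print("selected candidate:", np.round(best, 3))
```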


The other pushback I have heard is around quantum timelines, which tend to get stuck on the limitations of Noisy Intermediate-Scale Quantum (NISQ) hardware — coherence times, error rates, qubit counts — and which implicitly assume the path forward is monolithic machines getting larger through vertical scaling. That is the mainframe model being applied to what is fundamentally a distributed computing problem. What changes the roadmap dramatically is quantum networking: quantum nodes connected at the hardware level, scaling horizontally the way cloud computing scaled classical infrastructure. The design-pattern leap that made classical computing ubiquitous was the network that let many machines behave as one. The same leap is coming fast for quantum, and it will redefine what "ready" means on a timeline most of the current discourse has not focused on yet.

That is a longer essay, for another day.


AI is a civilization-scale achievement in making the recorded past legible and useful, which is not a small thing and should not be diminished. The problems that matter most at the frontier, however, such as novel therapeutics, room-temperature superconductors, reaction pathways for carbon capture, and fault-tolerant materials at the atomic scale, have solutions that lie beyond the human observational record, and they require computing the future rather than predicting it.

That is the foundational difference that Arvind’s quote summarizes so very elegantly.
