Posts tagged: quantum computation
Quantum computing/complexity theory researcher Scott Aaronson is offering $100,000 to whoever can disprove the possibility of scalable quantum computation.
Although primitive, small-scale quantum computers have been built and demonstrated, whether a large-scale quantum computer can exist to handle arbitrarily large computations remains the focus of ongoing research and, at least among some researchers, a matter of debate.
This is ultimately a question of whether large-scale quantum computers can exist in principle according to the laws of physics, not merely one of theoretical and technological progress in a given period of time.
The way I see it, any legitimate attempt to disprove the possibility of large-scale quantum computation will only end up revealing more details and reasons to believe that large-scale quantum computers can and will exist. I would even go as far as to say that the existence of large-scale quantum computation ought to be demanded by the idea of computational universality.
Here is something to think about that was only casually mentioned in passing in the recent video that was posted.
The sunlight you may or may not have experienced today finally managed to reach you after a ~100,000 year long journey since it was originally created at the Sun’s core!
Since the speed of light is finite, about 300,000,000 meters/second (or about 671,000,000 miles/hour), it takes time for it to travel from one point in space to another.
Given that the distance from Earth to the Sun is about 150,000,000,000 meters (about 93,000,000 miles) it takes about 8 minutes for light to reach us!
But this is just the time it takes light to reach us from the surface of the sun.
The light coming from the surface of the Sun is itself created as a by-product of nuclear fusion occurring deep in the Sun’s core.
Once light is created at the Sun’s core, it begins its journey to the surface of the Sun, some 700,000,000 meters (430,000 miles) away from the core.
One might assume that this light takes the shortest path and heads straight to the surface, which would only take a couple seconds of travel time.
However, this is not the case because there is all kinds of star stuff that gets in the way.
An actual photon may travel only a mere fraction of a centimeter (anywhere between 0.01 and 0.3 centimeters, depending on how close it is to the surface) before it collides with other matter, diverting its path in some other random direction.
Photons continue moving in these seemingly random trajectories, bumping into other particles along the way, and don’t actually reach the surface until about 100,000 years later (give or take an order of magnitude)!
This kind of behavior characterizing the photons’ motion is modeled by something called a random walk, and is illustrated in a few different instances in the animations above.
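The ~100,000-year figure can be recovered from random-walk scaling: a walker taking steps of length ℓ needs roughly (R/ℓ)² steps to drift a net distance R. Here is a back-of-the-envelope sketch in Python, with an assumed mean free path of about 0.1 cm (the real value varies with depth, as noted above):

```python
# Order-of-magnitude estimate of a photon's escape time from the Sun's
# core, using random-walk scaling: covering a net distance R with steps
# of length ell takes roughly (R / ell)**2 steps.
c = 3.0e8      # speed of light, m/s
R = 7.0e8      # distance from core to surface, m
ell = 1.0e-3   # assumed mean free path, ~0.1 cm

steps = (R / ell) ** 2                    # ~5e23 collisions
escape_seconds = steps * ell / c          # each step takes ell / c seconds
escape_years = escape_seconds / 3.15e7    # ~50,000 years

# Compare with the straight-line trip from the surface to Earth:
surface_to_earth_seconds = 1.5e11 / c     # ~500 s, the famous "8 minutes"
```

With these numbers the estimate comes out around 50,000 years, comfortably within the give-or-take-an-order-of-magnitude range quoted above.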
Random walks have widespread applications throughout the sciences and mathematics. Random walks are even used in some computer algorithms to solve certain problems more efficiently.
One application of personal interest, and a rather abstract generalization of the idea, is the quantum random walk, in which the superposition principle of quantum mechanics puts the walker’s trajectory into a combination of many possible trajectories, something quantum computers can exploit when solving problems. The workings of Grover’s search algorithm can be thought of in this way. This isn’t the only instance relating quantum mechanics to the workings of the Sun (see here).
Anyway, next time you are out in the relentless light of the Sun you may wonder what was going on some 100,000 years ago when that light first originated in the Sun, or maybe even where you’ll be 100,000 years from now when the light being created in the Sun at this moment finally reaches Earth.
(GIFs created from this Java app)
The notion of nonlocality in quantum mechanics has been a topic of much interest ever since the conception of the theory. Nonlocality is the idea that systems which are spatially separated can exhibit instantaneous correlations that seemingly violate principles of classical physics, such as the prohibition on faster-than-light influences. This phenomenon deeply bothered Albert Einstein, and has even led to some metaphysical claims of quantum mysticism.
Experimental evidence of such “nonlocal effects” is by no means doubted in the field. However, the idea of nonlocality may conflict with some of our models and interpretations of the world.
David Deutsch has recently posted a pre-print in which he argues a claim that he made over a decade ago: that reality is not nonlocal as believed by many.
Einstein’s (1949) criterion for locality is that for any two spatially separated physical systems S1 and S2 , ‘the real factual situation of the system S2 is independent of what is done with the system S1 ’. A previous paper (Deutsch & Hayden (2000)) included a proof that quantum physics satisfies this criterion. The method was first to prove that every quantum computational network satisfies it, and then to infer the same for general quantum systems by appealing to the universality of such networks.
Note that Deutsch’s proof makes use of the theory of quantum computation, which gives reason to believe that this theory may deserve a more fundamental role in our understanding of the world. It is also interesting to note that Deutsch is a strong advocate for the many-worlds interpretation of quantum mechanics, and thinks that this interpretation is the natural setting to understand quantum computation.
This makes me wonder what Einstein would have thought about later ideas like quantum computers and the multiverse. My guess is that he would have liked them.
This picture illustrates that there are 512 different ways to color a 3x3 grid using only two colors.
In general, to do this for an M×M grid with M^2 = N blocks, there are 2^N different ways.
This is also equivalent to the following:
Can you think of any more?
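The 2^N count is easy to verify by brute force; a minimal Python check for the 3×3 case:

```python
from itertools import product

# Enumerate every way to assign one of two colors to each of the 9 blocks.
colorings = list(product(["black", "white"], repeat=9))

count = len(colorings)   # 2**9 = 512
```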
There is a useful quantum algorithm for quantum computers discovered by Lov Grover which pertains to the problem of searching large unstructured databases. Think of this problem as having to search through a large list or table of entries (like names and phone numbers) trying to find one particular entry. Like the factoring problem, the best a classical computer can do in searching a large unstructured database for a particular item is to proceed by brute force and exhaust all the possibilities. Grover’s search algorithm, though it does not provide an exponential speedup as in the case of factoring large integers, offers a quadratic speedup in searching such databases. However, this still has practical benefits.
Consider a database consisting of N entries that needs to be searched through. It can be shown that the best known classical algorithms, on average, can find a given entry in N/2 computational steps. On the other hand, Grover’s algorithm allows this to be accomplished in √N steps on a quantum computer.
As an illustration, imagine some future quantum computer performing Grover’s search algorithm at a processing speed of 10^8 computational steps per second (a realistic processing speed for classical computers). Suppose it is engaged in a particularly arduous search through a space of N = 10^30 entries, looking for a single unique entry. How long will it take? It would require √N = 10^15 computational steps, which at a rate of 10^8 steps per second would take 10^7 seconds, or about 4 months. Though this may seem like a long time, consider how long a classical computer with the same processing speed of 10^8 steps per second would take to search the same database. It would take N/2 = 5×10^29 steps and thus require 5×10^21 seconds, over ten thousand times the age of the universe (13.7 billion years)! That is the size of the practical benefit of the quantum search algorithm over the classical one. In general, the larger the search, the larger the quantum computer’s gain over the classical.
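The arithmetic above is easy to reproduce; a quick sketch under the stated assumptions (10^8 steps per second, N = 10^30 entries):

```python
import math

rate = 1e8   # assumed computational steps per second
N = 1e30     # database entries

quantum_steps = math.sqrt(N)      # Grover: ~1e15 steps
classical_steps = N / 2           # brute-force average: 5e29 steps

quantum_months = quantum_steps / rate / (3600 * 24 * 30)  # ~4 months
classical_years = classical_steps / rate / 3.15e7         # ~1.6e14 years
ratio_to_universe_age = classical_years / 13.7e9
```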
The theoretical implications of these algorithms are even more mind-boggling because of the way quantum computers work. In the example of Grover’s search algorithm, the quantum computer is not just doing 10^15 computational steps in parallel, which could be mimicked classically by making all the computers on Earth work on nothing but this problem. The quantum computer is doing 10^30 possible computations in parallel every time a computational step is invoked. If all the silicon in the whole of our planet were made into microchips, and they were all set performing different computations, there would still be fewer distinct computations going on in that giant parallel computer than in a single quantum computer performing Grover’s algorithm. That is a measure of the complexity, structure, and process that exists in ordinary matter just beyond our perception, since quantum processes are, of course, going on all the time everywhere. In a quantum computer, some small part of nature’s complexity is put to good use!
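For the curious, Grover’s algorithm can be simulated classically at small scale. The sketch below tracks the N amplitudes directly (not a qubit-level circuit): the oracle flips the sign of the marked entry’s amplitude, and the diffusion step reflects every amplitude about the mean. After about (π/4)√N iterations, nearly all the probability sits on the marked entry. N and the marked index below are arbitrary choices:

```python
import math

def grover(N, marked, iterations):
    """Simulate Grover's algorithm over N items, returning the
    probability of measuring the marked item at the end."""
    amp = [1 / math.sqrt(N)] * N            # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]          # oracle: flip sign of marked amplitude
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]   # diffusion: invert about the mean
    return amp[marked] ** 2

N = 256
k = int(math.pi / 4 * math.sqrt(N))     # ~12 iterations vs ~128 classical steps
p = grover(N, marked=42, iterations=k)  # probability very close to 1
```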
What is the Hamiltonian of the whole multiverse? Traditionally, fundamental physics has been about what types of systems exist in nature, what their observables are, and what their Hamiltonians are. That’s what elementary particle physicists call the theory of everything, but there is another way of looking at what is elementary or fundamental. Instead of asking what types of Hamiltonians are found in natural systems, which means what sorts of changes can occur in elementary systems over an infinitesimal time, one could go all the way to the bottom line and ask which transformations can be realized in nature by some quantum system evolving for some time and which cannot. The short answer is that all changes in which the observables of the system undergo unitary evolution, with every observable undergoing the same transformation so as to preserve their algebra, can occur in nature, and nothing else can.

Every unitary matrix is the evolution matrix for some quantum system evolving over some time. Or at least we think it is, because it turns out that the vast majority of these possible evolutions can only be realized in a very special type of physical system: a universal quantum computer. They don’t occur naturally because they require a complex computer program to bring them about.

So here is a fascinating situation. In terms of Hamiltonians, the laws of physics are very finely tuned. The multiverse has its Hamiltonian, and subsystems of the multiverse have only very special Hamiltonians. Most Hermitian matrices cannot be realized in nature as Hamiltonians. But when we ask a slightly different question, which unitary evolution operators can be realized in nature, the answer is: all of them, if a universal quantum computer can exist. This special type of object, the universal quantum computer, in a sense contains within itself all the diversity of nature. No other system does, except perhaps systems that are capable of constructing a universal quantum computer.
Suddenly we find ourselves unavoidably playing a role at the deepest level of the structure of physical reality.
The set of all possible motions of a universal quantum computer includes accurate images of all possible motions of all physical objects. That the laws of physics permit this ‘universal object’ is clearly a profound feature of them, but still an elusive and controversial one.
This is a link to a video where David Deutsch talks about the universality of quantum computation, the property that a single (universal) quantum computer can simulate any other quantum computer.
It is argued that underlying the Church-Turing hypothesis there is an implicit physical assertion. Here, this assertion is presented explicitly as a physical principle: ‘every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means’. Classical physics and the universal Turing machine, because the former is continuous and the latter discrete, do not obey the principle, at least in the strong form above. A class of model computing machines that is the quantum generalization of the class of Turing machines is described, and it is shown that quantum theory and the ‘universal quantum computer’ are compatible with the principle. Computing machines resembling the universal quantum computer could, in principle, be built and would have many remarkable properties not reproducible by any Turing machine. These do not include the computation of non-recursive functions, but they do include ‘quantum parallelism’, a method by which certain probabilistic tasks can be performed faster by a universal quantum computer than by any classical restriction of it. The intuitive explanation of these properties places an intolerable strain on all interpretations of quantum theory other than Everett’s. Some of the numerous connections between the quantum theory of computation and the rest of physics are explored. Quantum complexity theory allows a physically more reasonable definition of the ‘complexity’ or ‘knowledge’ in a physical system than does classical complexity theory.
This paper concludes with the following remark:
To view the Church-Turing hypothesis as a physical principle does not merely make computer science a branch of physics. It also makes part of experimental physics into a branch of computer science.
The existence of a universal quantum computer implies that there exists a program for each physical process. In particular, quantum computers can perform any physical experiment. In some cases this is not useful because the result must be known to write the program. But, for example, when testing quantum theory itself, every experiment is genuinely just the running of a quantum computer program.
Quantum computers raise interesting problems for the design of programming languages, which I shall not go into here. From what I have said, programs exist that would (in order of increasing difficulty) test the Bell inequality, test the linearity of quantum dynamics, and test the Everett interpretation. I leave it to the reader to write them.
The quantum computing model is inherently probabilistic. When an algorithm is successfully run on a quantum computer, the output corresponding to the answer to some problem may not always be correct. Instead, we demand that the algorithm give the correct solution with probability greater than 1/2. If this is the case, then simply repeating the algorithm multiple times and taking the majority answer will yield a result that is correct with high probability. This is just a consequence of the Chernoff bound in probability theory. Chris Lomont gives an example of this in The Hidden Subgroup Problem: Review and Open Problems:
Thus the majority is wrong very rarely. For example, we will make most algorithms succeed with probability 3/4 … Although it sounds like a lot, taking 400 repetitions of the algorithm causes our error to drop below 10^-20, at which point it is more likely our computer fails than the algorithm fails. And since the algorithms we are considering are usually exponentially faster than classical ones, there is still a net gain in performance. If we do 1000 runs, our error drops below 10^-55, at which point it is probably more likely you’ll get hit by lightning while reading this sentence than the algorithm itself will fail.
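The exact failure probability of the majority vote is a binomial tail, which can be computed directly; a small check of the quoted numbers (per-run success probability 3/4, with ties counted as failure):

```python
from math import comb

def majority_error(runs, p_correct):
    """Probability that at most half of `runs` independent repetitions
    give the correct answer, so the majority vote is wrong."""
    q = 1 - p_correct
    return sum(comb(runs, k) * p_correct**k * q**(runs - k)
               for k in range(runs // 2 + 1))

err_400 = majority_error(400, 0.75)    # below 1e-20, as quoted
err_1000 = majority_error(1000, 0.75)  # below 1e-55, as quoted
```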
Simulating quantum systems on classical computers is hard because of the size and complexity of quantum systems. To describe a single particle in a classical system, you need 6 parameters: 3 for its position and another 3 for its velocity in each of those directions. So for a system consisting of N particles, one needs 6N parameters to completely specify the state of the system.
On the other hand, due to the superposition principle and the linearity of quantum mechanics, describing the simplest quantum system consisting of N quantum particles (or qubits) requires 2^N parameters. The thing is, for large enough values of N (greater than 4), 2^N > 6N. Moreover, 2^N grows exponentially faster than 6N as N increases. In essence, this is one of the main reasons why quantum computers would be able to outperform their classical counterparts.
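The crossover can be checked directly; 2^N first exceeds 6N at N = 5:

```python
# Parameters needed to specify a state: 6N classically vs 2**N quantumly.
crossover = next(N for N in range(1, 100) if 2**N > 6 * N)   # N = 5

classical_params = 6 * 50   # 300 numbers for 50 classical particles
quantum_params = 2 ** 50    # ~1e15 numbers for 50 qubits
```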
Here is an example illustrating how large and how quickly exponential behavior like this can grow.
Suppose you have a chess board and a lot of coins. There are 64 squares in total on the board. Start in one corner and place 2 coins on that square. Move to the next square and stack 2^2 = 4 coins on it. Then stack 2^3 = 8 coins on the third square and continue in this manner, stacking 2^N coins on the Nth square. The last square will have 2^64 = 18,446,744,073,709,551,616 coins stacked on top of each other.
How high do you think this stack would be?
It would stretch farther than the distance from the Earth to the Moon (400,000 kilometers), farther than the distance from the Earth to the Sun (150 million kilometers), and would reach nearly all the way to the next closest star, Proxima Centauri (about 4.2 light-years, or 3.97×10^13 kilometers)!
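The stack height is a one-line calculation; the sketch below assumes a coin roughly 2 mm thick (the thickness is my assumption, and exactly how close the stack gets to Proxima Centauri depends on it):

```python
coins = 2 ** 64        # coins on the last square
thickness_m = 0.002    # assumed: a coin ~2 mm thick

height_km = coins * thickness_m / 1000   # ~3.7e13 km

earth_moon_km = 4.0e5
earth_sun_km = 1.5e8
proxima_km = 3.97e13   # ~4.2 light-years
```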
There are many problems out there that require computational resources growing exponentially, as in this example. Classical computers may never be fast enough to make such problems practical to solve, but we do have reasons to believe that quantum computers may be able to help out in this regard.
Some cute examples of computability theory applied to specific problems.
- We now know that, if an alien with enormous computational powers came to Earth, it could prove to us whether White or Black has the winning strategy in chess. To be convinced of the proof, we would not have to trust the alien or its exotic technology, and we would not have to spend billions of years analyzing one move sequence after another. We’d simply have to engage in a short conversation with the alien about the sums of certain polynomials over finite fields.
- There’s a finite (and not unimaginably-large) set of boxes, such that if we knew how to pack those boxes into the trunk of your car, then we’d also know a proof of the Riemann Hypothesis. Indeed, every formal proof of the Riemann Hypothesis with at most (say) a million symbols corresponds to some way of packing the boxes into your trunk, and vice versa. Furthermore, a list of the boxes and their dimensions can be feasibly written down.
- Supposing you do prove the Riemann Hypothesis, it’s possible to convince someone of that fact, without revealing anything other than the fact that you proved it. It’s also possible to write the proof down in such a way that someone else could verify it, with very high confidence, having only seen 10 or 20 bits of the proof.
It would be great to prove that RSA is unbreakable by classical computers. But every known technique for proving that would, if it worked, simultaneously give an algorithm for breaking RSA! For example, if you proved that RSA with an n-bit key took n^5 steps to break, you would’ve discovered an algorithm for breaking it in 2^(n^(1/5)) steps. If you proved that RSA took 2^(n^(1/3)) steps to break, you would’ve discovered an algorithm for breaking it in n^((log n)^2) steps. As you show the problem to be harder, you simultaneously show it to be easier.
And an excerpt on the relevance of these examples:
So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.
In my opinion, one of the biggest challenges for our time is to integrate the enormous body of knowledge in theoretical computer science (or quantitative epistemology, or whatever you want to call it) with the rest of what we know about the universe. In the past, the logical safari mostly stayed comfortably within 19th-century physics; now it’s time to venture out into the early 20th century. Indeed, that’s exactly why I chose to work on quantum computing: not because I want to build quantum computers (though I wouldn’t mind that), but because I want to know what a universe that allows quantum computers is like.
Among the many ramifications of quantum computation for apparently distant fields of study are its implications for the notion of mathematical proof. Performing any computation that provides a definite output is tantamount to proving that the observed output is one of the possible results of the given computation. Since we can describe the computer’s operations mathematically, we can always translate such a computation into the proof of some mathematical theorem. This was the case classically too, but in the absence of interference effects it is always possible to keep a record of the steps of the computation, and thereby produce (and check the correctness of) a proof that satisfies the classical definition - as “a sequence of propositions each of which is either an axiom or follows from earlier propositions in the sequence by the given rules of inference”. Now we are forced to leave that definition behind. Henceforward, a proof must be regarded as a process — the computation itself — for we must accept that in future, quantum computers will prove theorems by methods that neither a human brain nor any other arbiter will ever be able to check step-by-step, since if the ‘sequence of propositions’ corresponding to such a proof were printed out, the paper would fill the observable universe many times over.
- Deutsch, Ekert, and Lupacchini
“Machines, Logic and Quantum Physics”
This decomposition allows for the quantum Fourier transform to be implemented on a quantum computer efficiently (fast enough for practical purposes). It is used in various algorithms, including Shor’s algorithm, which is an algorithm for factoring large (200+ digit) integers.
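For concreteness, the quantum Fourier transform on N basis states is the unitary matrix with entries ω^(jk)/√N, where ω = e^(2πi/N). Here is a tiny pure-Python sketch that builds the matrix and verifies it is unitary (this is the transform itself; the efficient decomposition mentioned above breaks it into a small circuit of quantum gates):

```python
import cmath

def qft_matrix(N):
    """The N x N quantum Fourier transform: entry (j, k) is
    w**(j*k) / sqrt(N), with w = exp(2*pi*i / N)."""
    w = cmath.exp(2j * cmath.pi / N)
    s = N ** -0.5
    return [[s * w ** (j * k) for k in range(N)] for j in range(N)]

def is_unitary(U, tol=1e-9):
    """Check that U times its conjugate transpose is the identity."""
    N = len(U)
    return all(abs(sum(U[j][k] * U[l][k].conjugate() for k in range(N))
                   - (1.0 if j == l else 0.0)) < tol
               for j in range(N) for l in range(N))

ok = is_unitary(qft_matrix(8))   # unitarity makes it a valid quantum evolution
```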
The computational task of factoring large integers on a classical computer takes so long that governments, banks, and the internet rely on this difficulty to keep their information secure from the public (RSA cryptography).
So if you had a large-scale quantum computer, you could render the RSA cryptographic protocol obsolete!
The thing is, quantum computers are really hard to make. However, in 2001 IBM researchers in the foothills behind my house managed to use a quantum computer to factor the number 15 into 3 × 5!
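What a quantum computer contributes to Shor’s algorithm is the period-finding step (via the quantum Fourier transform); the rest is classical number theory. A sketch for 15, with the period found by brute force standing in for the quantum part:

```python
from math import gcd

def period(a, M):
    """Smallest r > 0 with a**r = 1 (mod M). This is the step a quantum
    computer would do efficiently; here we just brute-force it."""
    r, x = 1, a % M
    while x != 1:
        x = (x * a) % M
        r += 1
    return r

def factor_from_period(a, M):
    """Shor's classical post-processing: an even period r of a mod M
    yields candidate factors gcd(a**(r//2) -/+ 1, M)."""
    r = period(a, M)
    if r % 2:
        return None   # odd period: pick a different a and retry
    return gcd(a ** (r // 2) - 1, M), gcd(a ** (r // 2) + 1, M)

factors = factor_from_period(7, 15)   # period of 7 mod 15 is 4 -> (3, 5)
```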