

Research

Quantum computation and analyses of computation

It is well known that certain quantum computers can perform some computational tasks faster than any known classical computer (see Nielsen and Chuang (2000) for several examples). Some, Deutsch (1997) for instance, have suggested that the way to explain this quantum speed-up is to adopt what Duwell (2004) and (2007) calls the Quantum Parallelism Thesis (QPT): quantum computers compute multiple values of a function in a single computational step, whereas classical computers are not capable of such behavior. Though the QPT is often cited as an explanation of quantum speed-up, especially in popular media, its adequacy as an explanation has been challenged by Steane (2003) as well as by others such as Bub (2010).
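
The kind of evidence standardly cited for the QPT can be sketched in a single line (a minimal sketch, in the notation of Nielsen and Chuang (2000)): applying the unitary $U_f$ that computes a function $f$, defined by $U_f|x\rangle|y\rangle = |x\rangle|y \oplus f(x)\rangle$, just once to an equal superposition of all $2^n$ inputs yields

\[ U_f\left(\frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle|0\rangle\right) = \frac{1}{\sqrt{2^n}}\sum_{x=0}^{2^n-1}|x\rangle|f(x)\rangle , \]

a state in which every value of $f$ appears paired with its input. Whether this single application of $U_f$ thereby counts as computing all $2^n$ values of $f$ is precisely what is at issue in what follows.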

In his (2003), Steane argues that the evidence cited for establishing the truth of the QPT is insufficient. He argues for this claim by providing an example of a process which is, prima facie, an instance of parallel computation of multiple values of a function, but which ultimately cannot be interpreted as such. The similarity between his example process and the process cited as evidence of quantum parallelism is striking. Finally, he suggests that a particular kind of quantum computer, the one-way computer (see Raussendorf and Briegel (2001) and (2002)), cannot be interpreted as computing many values of a function in a single computational step.

Steane's work raises two interesting questions: First, when is a machine performing a genuine computation? Second, given that a genuine computational process occurs and the machine outputs all the values of a function, must this be the result of doing all of the individual computations?

Interestingly, the discussion of quantum speed-up has taken place without consulting philosophical analyses of computation. Analyses of computation aim to identify which physical processes are computational and, in most cases, to specify which functions, if any, have been computed. Moreover, analyses of computation aim to account for the computational abilities of a system. These analyses are just what is needed to address the questions above in a principled way, as well as to decide the truth of the QPT. So discussions of quantum speed-up can certainly be enriched, and some questions might be settled, by applying analyses of computation in the quantum context. I aim to do just that.

As much as discussions of quantum speed-up can benefit from consideration of analyses of computation, so too can analyses of computation benefit from consideration of quantum computers. Analyses of computation have been developed with a keen eye toward ensuring that paradigm examples of computation qualify as computations, but they have not kept up with recent advances in quantum computing. To be fair, given the focus on quantum network computers, which are strikingly similar in many, though of course not all, respects to classical network computers, one might reasonably suspect that there is nothing importantly new going on with respect to analyses of computation. That said, alternative kinds of quantum computers exist, in particular the one-way computers mentioned above, which operate very differently from quantum network computers: rather than applying a sequence of logic gates, a one-way computer proceeds by a sequence of adaptive single-qubit measurements on a highly entangled cluster state. Analyses of computation might be improved by considering such exotic kinds of computers.

Preliminary research indicates that on the mechanistic account of computation, as developed by Piccinini (2007) and (2008a,b), one-way computers fail to qualify as computers. Here is why: the mechanistic account requires that computers be made of components that manipulate strings of digits according to well-defined rules, and it is difficult to interpret one-way computers as doing that. This makes one-way computers an excellent resource for refining our understanding of computation generally.

I anticipate that the research project will result in at least three articles. For the first part of the project, which examines analyses of computation in light of one-way quantum computers, I expect to produce at least two articles. One article, ``Challenging your computational intuitions: quantum one-way computers'', will highlight those features of one-way computers that deviate from more familiar kinds of computers and are salient for revising analyses of computation. In that article I want to introduce one-way computers to a wider audience by suppressing as much technical detail as possible, but without sacrificing any details of philosophical significance. This will be followed by ``One-way computers and analyses of computation'', which details how one-way computers challenge extant analyses of computation and how those analyses might be revised to resolve the difficulties. If the research project demands the development of a new analysis of computation, I will produce a paper to that effect.

The second part of the research project, which applies analyses of computation to the debate over quantum speed-up, will produce at least one article. In ``One-way computers and quantum speed-up'', I will apply analyses of computation to help determine the truth of the quantum parallelism thesis, and then, using at least one concrete example of a one-way computer algorithm that exhibits speed-up, examine how well the truth of the QPT can explain speed-up. I will also examine Bub's (2010) explanation of quantum speed-up in this context. This paper should lay the groundwork for further investigations into the nature of quantum speed-up, especially if extant explanations fail.

The structure of the world

It is a common intuition that the success of science requires an explanation, and it is difficult to see how one might provide such an explanation without resorting to realism. That said, this intuition has proven quite difficult to satisfy. No matter how one specifies what the success of science consists in, thereby delimiting the relevant historical cases, or what one ought to adopt a realist attitude towards, the history of science demonstrates that we can have the success without the realism, as Carrier (1991), Chang (2003), Laudan (1981), and Stanford (2003a), (2003b), and (2006) have argued. But even if realist explanations of the success of science are problematic, the anti-realist does not seem to be much better off.

The most austere ``explanation'' of the empirical success of our theories, which I take to consist in generating true predictions about the observable world, is simply that theories latch onto regularities of the observable world. This sentiment is surely present in van Fraassen's (1980) constructive empiricism. Stanford (2000) has identified why such an explanation is so unsatisfying: it seems to invite an additional explanatory demand. Why is it that the theory has latched onto regularities of the observable world? The constructive empiricist refuses this explanatory demand; the realist embraces it.

Whether one is a constructive empiricist, a realist, or somewhere in between, all parties to the debate seem to agree that science is successful by hitting on regularities. If one is a constructive empiricist, one will restrict one's attention to regularities of the observable world. If one is a realist, one might entertain the notion that some scientific theories are successful because they hit on regularities of the unobservable world too. In both cases, one should explain success by showing how a theory manages to latch onto regularities, and then by showing that doing so represents a legitimate terminus for our explanatory demands. In order to make these claims precise, one needs to provide an analysis of a regularity.

I argue that a sequence of concrete particulars constitutes a regularity if and only if it admits of a certain type of structural description. By structure, I mean the straightforward mathematical definition: a structure is just a tuple consisting of a set of objects and relations on this set. By defining regularities in this way, one can talk coherently about the structure of the world. I define the structure of the world as that structure which describes the smallest set of regularities from which all other regularities can be derived. In fact, it is possible to define the structure of the world at many different levels of epistemic accessibility. Chakravartty (2007) makes the useful distinction between observables (those entities and processes that are detectable by the unaided senses), detectables (those unobservable entities and processes that can be detected with the use of instruments), and undetectables (those unobservable entities and processes that cannot be detected with the use of instruments). It is possible to define the structure of the world at a certain level of epistemic accessibility by restricting attention to regularities at that level. For example, I define the structure of the observable world as the structure that describes the smallest set of regularities of the observable world from which all other regularities of the observable world can be derived. I do the same for detectables and undetectables.
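
To fix notation, the mathematical definition just invoked can be written out as follows (a standard model-theoretic rendering; nothing here goes beyond the gloss above):

\[ \mathcal{S} = \langle D, R_1, \ldots, R_n \rangle, \qquad R_i \subseteq D^{k_i}, \]

where $D$ is a set of objects and each $R_i$ is a $k_i$-ary relation on $D$. A sequence of concrete particulars is then a regularity just in case it admits a description of this form of the appropriate type.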

In order to explain the success of a scientific theory, one has to explain how the theory manages to latch onto regularities. A theory manages to latch onto regularities of the world in a very straightforward way. I argue that every theory can be viewed as placing a structure on the world at a certain level of epistemic access, so I can define the structure a theory places on the observable world, the detectable world, and the undetectable world. A theory latches onto regularities at a particular level of epistemic access because the structure the theory places on the world at that level matches some substructure of the structure of the world associated with that level. For example, if one wants to explain how a theory manages to latch onto the regularities of the observable world, one simply compares the structure that the theory places on the observable world to the structure of the observable world itself. One can effect that kind of explanation at every level of epistemic access. If one assumes that the observable regularities supervene on the detectable regularities, it would be possible, for some theories, to explain their empirical success in virtue of those theories latching onto the detectable regularities.
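
One natural way to make the matching relation precise (reading ``matches'' as isomorphism to a substructure, which is only one precisification among others): writing $S_T^{\ell}$ for the structure a theory $T$ places on the world at level of epistemic access $\ell$, and $S_W^{\ell}$ for the structure of the world at that level, $T$ latches onto the regularities at level $\ell$ just in case

\[ S_T^{\ell} \cong S' \ \text{ for some substructure } S' \text{ of } S_W^{\ell}, \]

that is, just in case $S_T^{\ell}$ embeds into $S_W^{\ell}$. Explaining a theory's empirical success then consists in exhibiting such an embedding at the observable level (or, given the supervenience assumption above, at the detectable level).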

The question can reasonably be posed: does latching onto regularities represent an appropriate terminus for explaining success? Yes! If in order to explain the success of a scientific theory one needs to show how the theory manages to hit on regularities, then I have done so. The regularities just need to be there. What this means is that the framework I have provided for explaining the success of scientific theories effectively neutralizes the support that the no-miracles argument affords realism.

The framework for explaining the success of science developed above, which includes showing how theories can place a structure on the world at various levels of epistemic access, can also be used as a platform for investigating, in certain cases, whether there are good reasons to believe that a theory is getting something importantly right about the world that goes beyond true predictions about observables. What theories might get right in this framework is the structure of the observable world, the structure of the detectable world, the structure of the undetectable world, or simply the structure of the world writ large.

A sequence of progressively more successful theories can provide evidence that one is getting something right about the world. It can do so via an examination of the sequence of structures associated with that sequence of theories at a particular level of epistemic access. If the sequence of structures at a particular level of epistemic access shows that a particular substructure is, more or less, always present across theory change, then this provides evidence that the theories provide an increasingly better approximation to the structure of the world at that level. If it turns out that this behavior is present at the level of detectables, then one has good reasons for a realist commitment that a theory is getting the structure of the detectable world right. Importantly, I am not suggesting that the only thing a theory gets right about the world is its structure at particular levels of epistemic access. I am suggesting that examining sequences of structures associated with a sequence of theories can provide evidence that the sequence of theories provides a better and better approximation to the structure of the world. Additional arguments might provide reasons for thinking that a theory gets more right about the world than its structure at a particular level of epistemic access. For example, one might think that a theory provides a good representation of the mechanisms of the unobservable world.
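
Put a bit more formally (a sketch of the condition I have in mind, in the notation introduced above): given a historical sequence of theories $T_1, T_2, \ldots, T_m$ and a level of epistemic access $\ell$, the relevant evidence is that there is some non-trivial structure $S^*$ such that

\[ S^* \text{ embeds into } S_{T_i}^{\ell} \quad \text{for every } i = 1, \ldots, m. \]

The more successful the successive theories and the richer the preserved $S^*$, the better the evidence that $S^*$ approximates a substructure of $S_W^{\ell}$, and hence that the later theories are getting the structure of the world at level $\ell$ increasingly right.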

The project I have described is probably a book-length project if pursued to its natural limits. If broken into some core papers, one might be entitled ``Regularities and the success of scientific theories''; it would provide the analysis of regularities and their structure, and show how theories place a structure on the world at different levels of epistemic access. A second paper, ``Convergent realisms'', would detail the conditions under which a particular substructure of the structure that a theory places on the world at a certain level is best confirmed, and hence a good candidate for preservation across theory change, and would provide a case study. Further papers might provide additional case studies.