Actually, there's still a snag. We can't read out these states directly. When you observe a mixed state, it collapses into one of the component states, with the appropriate probability - the third weirdness. So suppose you prepare one million electrons, each 40% spin-up + 60% spin-down, and then measure the spin of each. You will find that 40% of the electrons (on average) come out as spin-up, and 60% as spin-down. But it's entirely unpredictable which come out as which.
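The statistics of that experiment can be mimicked classically. This is only an illustrative sketch - `measure_spins` is a made-up helper, and a classical random-number generator of course captures only the outcome frequencies, not the underlying quantum state:

```python
import random

def measure_spins(n_electrons, p_up, seed=None):
    """Measure n_electrons; each collapses to spin-up with probability p_up."""
    rng = random.Random(seed)
    ups = sum(1 for _ in range(n_electrons) if rng.random() < p_up)
    return ups, n_electrons - ups

# One million electrons, each 40% spin-up + 60% spin-down.
ups, downs = measure_spins(1_000_000, 0.40)
print(ups / 1_000_000)  # close to 0.40 - but which electrons came out up is unpredictable
```

The aggregate fraction is reliably near 40%, yet no individual outcome can be predicted in advance - which is exactly the point made above.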

So when we read out the result of our computation, we only get the result of processing one of these strips. Although our computer has done the parallel computations for free, we've lost all but one of the results, so it might appear that we're still no better off than with a normal computer. However, Deutsch has been able to show that's not true: his kind of quantum computer can still get some results faster - see pages 248-249 of Lockwood.

Michael Stannett has argued that one can go beyond Turing-computability in a different way. See his description of the X-machine in "X-Machines and the Halting Problem: Building a Super-Turing Machine", available as AI photocopy S171. The conclusion discusses the relevance of this to Deutsch and Penrose.

French has argued that the Turing test is not ideal: it probes subcognitive phenomena that are irrelevant to its aim. See "Subcognitive Probing: Hard Questions for the Turing Test" by Robert French, AI box copy F46. From Proc. 10th Ann. Conf. Cog. Sci. Soc., Montreal, 1988.

Jocelyn Ireson-Paine
Wed Feb 14 23:51:11 GMT 1996