Bell’s theorem still reverberates: How entanglement makes the impossible possible


Fifty years ago, John Bell made metaphysics testable, but quantum scientists still dispute the implications.

In 1964, Northern Irish physicist John Bell proved mathematically that certain quantum correlations, unlike all other correlations in the Universe, cannot arise from any local cause (ref. 1). This theorem has become central to both metaphysics and quantum information science. But 50 years on, the experimental verifications of these quantum correlations still have ‘loopholes’, and scientists and philosophers still dispute exactly what the theorem states.


Quantum theory does not predict the outcomes of a single experiment, but rather the statistics of possible outcomes. For experiments on pairs of ‘entangled’ quantum particles, Bell realized that the predicted correlations between outcomes in two well-separated laboratories can be profoundly mysterious (see ‘How entanglement makes the impossible possible’). Correlations of this sort, called Bell correlations, were verified experimentally more than 30 years ago (see, for example, ref. 2). As Bell proved in 1964, this leaves two options for the nature of reality. The first is that reality is irreducibly random, meaning that there are no hidden variables that “determine the results of individual measurements” (ref. 1). The second option is that reality is ‘non-local’, meaning that “the setting of one measuring device can influence the reading of another instrument, however remote” (ref. 1).
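To make the mystery concrete, here is a minimal sketch (illustrative, not from the article) of the quantum predictions behind Bell correlations, in the standard CHSH form of Bell's argument: any local hidden-variable account of the two labs' outcomes obeys |S| ≤ 2, whereas quantum theory predicts S = 2√2 ≈ 2.83 for entangled photon pairs.

```python
# Illustrative sketch (not from the article) of Bell/CHSH correlations.
# For polarization-entangled photon pairs, quantum theory predicts the
# correlator E(a, b) = cos(2(a - b)) for polarizer angles a and b.
# Any local hidden-variable model obeys |S| <= 2 for the CHSH combination
# below; the quantum prediction reaches 2*sqrt(2).
import numpy as np

def E(a: float, b: float) -> float:
    """Quantum correlator for polarizers at angles a, b (radians)."""
    return np.cos(2 * (a - b))

# Standard CHSH settings that maximize the quantum violation.
a0, a1 = np.deg2rad(0.0), np.deg2rad(45.0)
b0, b1 = np.deg2rad(22.5), np.deg2rad(67.5)

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(f"S = {S:.3f}")  # 2.828 > 2: beyond any local hidden-variable model
```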


Most physicists are localists: they recognize the two options but choose the first, because hidden variables are, by definition, empirically inaccessible. Quantum information scientists embrace irreducible randomness as a resource for secure cryptography (ref. 3). Other physicists and philosophers (the ‘non-localist camp’) dispute that there are two options, and insist that Bell’s theorem mandates non-locality (ref. 4).


Bell himself was a non-localist, an opinion he first published in 1976 (ref. 6), after introducing a concept, “local causality”, that is subtly different from the locality of his 1964 theorem. Deriving local causality from Einstein’s principle, that no influence can travel faster than light, requires an even stronger notion of causation, the principle of common cause: if two events are statistically correlated, then either one causes the other, or they have a common cause which, when taken into account, eliminates the correlation.


In 1976, Bell proved that his new concept of local causality (based implicitly on the principle of common cause) was ruled out by Bell correlations (ref. 6). In this 1976 theorem there was no second option, as there had been in the 1964 theorem, of giving up hidden variables. Nature violates local causality.
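In symbols (a standard textbook formulation; the article itself gives no equations), local causality demands that, once the common causes λ are accounted for, the outcome probabilities in the two laboratories factorize, and from that factorization the CHSH bound follows:

```latex
% Local causality (Bell 1976): conditioned on the common causes \lambda,
% outcomes A and B for settings a and b in the two labs factorize:
P(A, B \mid a, b, \lambda) = P(A \mid a, \lambda)\, P(B \mid b, \lambda).
% Averaging over any distribution of \lambda then bounds the correlators
% E(a, b) = \sum_{A, B = \pm 1} A B \, P(A, B \mid a, b) via CHSH:
\lvert E(a_0, b_0) - E(a_0, b_1) + E(a_1, b_0) + E(a_1, b_1) \rvert \le 2,
% which the Bell correlations of entangled pairs (S = 2\sqrt{2}) violate.
```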


Experiments in 1982 by a team led by French physicist Alain Aspect (ref. 2), using well-separated detectors with settings changed just before the photons were detected, suffered from an ‘efficiency loophole’ in that most of the photons were not detected. This allows the experimental correlations to be reproduced by (admittedly, very contrived) local hidden variable theories.
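For scale (standard results in the field, quoted here for context rather than taken from the article), the detection efficiency η must be quite high before the ‘fair sampling’ assumption can be dropped:

```latex
% Efficiency needed to close the loophole in a CHSH test with a
% maximally entangled state (Garg & Mermin, 1987):
\eta > \frac{2}{1 + \sqrt{2}} \approx 0.83.
% Eberhard (1993): weakly entangled states lower the threshold to
\eta > \tfrac{2}{3}.
```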


In 2013, this loophole was closed in photon-pair experiments using high-efficiency detectors (refs 7, 8). But they lacked large separations and fast switching of the settings, opening the ‘separation loophole’: information about the detector setting for one photon could have propagated, at light speed, to the other detector, and affected its outcome.
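A back-of-the-envelope calculation (with hypothetical numbers, not from the article) shows why fast switching matters: each side must choose its setting and register its outcome before light could carry news of the far detector’s setting across the gap.

```python
# Illustrative arithmetic (hypothetical numbers, not from the article):
# to close the separation loophole, the setting choice and the detection
# must both complete within the light-travel time between the two labs.
C = 299_792_458.0          # speed of light, m/s

separation_m = 1_000.0     # assumed distance between the two labs
light_travel_s = separation_m / C
print(f"light needs {light_travel_s * 1e6:.2f} us to cross {separation_m:.0f} m")
# ~3.34 microseconds: hence the need for fast random switching of settings
# and fast, efficient detectors in the same experiment.
```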

There are several groups worldwide racing to do the first Bell experiment with large separation, efficient detection and fast switching. It will be a landmark achievement in physics. But would such an experiment really close all the loopholes? The answer depends on one’s attitude to causation.


Source: www.nature.com
