Quantum Physics in Consciousness Studies
Dirk K. F. Meijer and Simon Raggett
Review/Literature compilation: The Quantum Mind Extended*
Introduction to quantum aspects of brain function, p 1-19
Quantum approaches to neurobiology: state of the art, p 19-31
David Bohm: Wholeness and the implicate order, p 30-39
Henry Stapp: Attention, intention and quantum coherence, p 39-44
Roger Penrose: Consciousness and the geometry of the universe, p 45-54
Stuart Hameroff: Objective reduction in brain tubules, p 55-68
Hiroomi Umezawa and Herbert Fröhlich: Quantum brain dynamics, p 69-72
Mari Jibu & Kunio Yasue: Quantum field concepts, p 72-76
Johnjoe McFadden: Electromagnetic fields, p 77-80
Gustav Bernroider: Ion channel coherence, p 80-85
Chris King: Cosmology, consciousness and chaos theory, p 85-92
Piero Scaruffi: Consciousness as a feature of matter, p 92-94
Danko Georgiev: The quantum neuron, p 94-98
Andrei Khrennikov: Quantum-like brain and other metaphoric QM models, p 98-102
Hu and Wu / Persinger: Spin-mediated consciousness, p 103-106
Chris Clarke: Qualia and free will, p 106-109
Herms Romijn: Photon-mediated consciousness and recent models, p 110-114
Stuart Kauffman: Consciousness & the poised state, p 114-116
Post-Bohmian concepts of a universal quantum field, p 117-121
Dirk Meijer: Cyclic operating mental workspace, p 121-131
Amit Goswami: The vacuum as a universal information field, p 132-146
Simon Raggett: A final attempt at a theory of consciousness, p 146-157
Note on cited sources, p 158
References, p 158-175
Internet sites, p 175
Introduction to quantum aspects of brain function
Since the development of QM and relativistic theories in the first part of the 20th century, attempts have been made to understand and describe the mind or mental states on the basis of QM concepts (see Meijer, 2014; Meijer and Korf, 2013). Quantum physics, currently seen as a further refinement in the description of nature, not only describes elementary microphysics but applies to classical or macro-physical (Newtonian) phenomena as well. Hence the human brain and its mental aspects are associated with classical brain physiology and are also part of a quantum physical universe. Most neurobiologists have considered QM mind theories irrelevant to understanding brain/mind processes (e.g. Edelman and Tononi, 2000; Koch and Hepp, 2006).
However, there is no single QM brain/mind theory. In fact, a spectrum of more or less independent models has been proposed, all of which have their intrinsic potentials and problems. The elements of quantum physics discussed here are summarized in Tables 1 and 2; details of the various QM theories have been described elsewhere (Meijer, 2012; Meijer and Korf, 2013).
Some QM mind options assume some sort of space-time multidimensionality, i.e. that there are more than the four conventional space-time dimensions. Others assume that one or more extra dimensions are associated with a mental attribute, or that the individual mind is (partly) an expression of a universal mind through holonomic communication with quantum fields (Fig. 1). The latter idea has led to holographic (holonomic) theories (Pribram, 1986, 2011). The human brain is then conceived as an interfacing organ that not only produces mind and consciousness but also receives information. The brain, or parts of the brain, is conceived as an interference hologram of incoming data and already existing data (a “personal universe”). If properly exposed (“analyzed”), information about the outer world can be distilled from it.
In neurobiological terms, the existing data is equivalent to the subject’s memory, whereas the “analyzer” is cerebral electrophysiology. Bohm hypothesized that additional dimensions are necessary to describe QM interference processes, thereby circumventing probabilistic theories and consciousness-induced collapse of the wave function. In this theory, the universe is a giant superposition of waves, representing an unbroken wholeness, of which the human brain is a part (Bohm, 1990). Accordingly, the individual mind or consciousness is an inherent property of all matter (and energy), and as such being part, or rather an expression, of this universal quantum field. The apparently diffuse time/space localization of mental functions argues in favor of an underlying multidimensional space/time reality. Bohm and Hiley (1987) also proposed a two-arrow (bidirectional) time dimension. In this concept the stochastic (or double stochastic) character of quanta is explained by an underlying quantum field: the implicate order. This concept implies entanglement (non-locality) as well.
Another hypothesis, having the potential to couple wave information to mental processes, proposes that wave information is transmitted from and into the brain by wave resonance; through conscious observation these waves collapse locally to material entities (Stapp, 2009; Pessa and Vitiello, 2003; Schwartz et al., 2004). Stapp (2012) argued that this does not represent an interference effect between superposed states (as assumed by Hameroff and Penrose, 1996), but that through environmental decoherence, superpositions become informative to the brain/organism. A complementary implication of these theories is that mental processes are not necessarily embedded in entropic physical time. In line with this QM idea is the notion that memories are not stored as a temporal sequence, but rather a-temporally.
Fig. 1: The hypothesis that the universe and our minds are integral parts of a universal consciousness
Some QM mind theories suppose the possible involvement of specific molecules. A spectrum of ions and molecules has been suggested to operate in a quantum manner (Tuszynski and Woolf, 2010). For instance, QM theories have been based on micro-tubular proteins (Penrose, 1989; Hameroff, 2007), proteins involved in synaptic transmission (Beck and Eccles, 1992; Beck, 2001), including Ca ion-channels (Stapp, 2009), and channel proteins instrumental in the initiation and propagation of action potentials (potassium-ion channels; Bernroider and Roy, 2004). There is also the hypothesis that synaptic transmission represents a typical (quantum) probability state that becomes critical for an all-or-none neuronal response (Beck and Eccles, 1992; Beck, 2001). Attributing non-linear and non-computable characteristics to consciousness, Hameroff and Penrose (2011, 2013) argue against mechanisms of all-or-none firing of axonal potentials (Beck and Eccles, 2003). They rather prefer the model of Davia (2010), which proposes that consciousness is related to waves traveling in the brain as a uniting life principle on multiple scales. In some QM mind theories (Woolf and Hameroff, 2001), quantum tunneling was proposed to facilitate membrane/vesicle fusion in neural information processing at the synapse.
Kauffman relates quantum processes in the biological matrix of the brain to the emergence of mental processing (Kauffman, 2010; Vattay et al., 2012). This theory, mainly based on chromophores detecting photons, assumes that the coherence of some quantum configurations adhered to proteins is stabilized or is maintained by re-coherence. This principle may have guided the evolutionary selection of proteins. Accordingly, mind and consciousness are both quantum mechanical and an expression of classical neural mechanisms. The underlying coherent quantum states provide the potentiality for the collapse to the decoherent material state, resulting in classical events, such as firing neurons, that are at least to some extent a-causal, i.e. beyond classical determinacy. When the quantum system (of the brain) interacts with a quantum environment, the phase information is lost and cannot be reassembled. By entanglement, the quantum coherence in a small region, e.g. the cell or the brain, might have spatial long-range effects (Vattay et al., 2012; Hagan et al., 2002). Kauffman accepts that long-lived coherence states in biological molecules at body temperature (currently measured at 750 femtoseconds in chlorophyll at 77 K) could potentially enable parallel problem solving, and regards this as a major challenge for further investigation. A further question is which neurons or neuronal structures are in particular associated with the coherence/decoherence brain model of consciousness.
The question is often put as to why quantum theory should be involved in discussions of consciousness at all, and also as to why it should be treated as something special. In thinking about quantum theory, it is important not to be bullied into viewing it as something weird and peripheral that can be ignored (Atmanspacher, 2011). Unfortunately, this attitude allows the more superficial thinkers to dismiss all theories of quantum consciousness. This sort of practice has recently been criticized as ‘pseudoscepticism’, a parallel form to pseudoscience. Pseudo-skepticism (see Wikipedia) similarly uses denunciation in the name of science or scientific affiliation without citing any evidence or possible experimentation to establish the criticism (see Utts and Josephson, 1996). The features of quantum theory that make it special and also possibly relevant to consciousness can be summarized as follows:
1.) Quantum theory describes the fundamental level of energy and matter. In contrast to higher levels, the quantum level has aspects, such as mass, charge and spin, that are given properties of the universe, not capable of further reduction or explanation. In quantum theories of consciousness, it is suggested that consciousness is such a fundamental property existing at this level. Some theories are additionally linked to the structure of spacetime, which is nowadays seen as being interconnected with the nature of the quanta (see Chalmers, 2000; Nagel, 2012).
2.) The other fundamental aspect of the universe is spacetime, as described by the special and general theories of relativity. Although relativity and quantum theory have both been tested to very high degrees of accuracy, they are nevertheless incompatible with one another. The gravitational force is the main problem, since the smooth continuous curvature of space that describes gravity in general relativity is incompatible with the discreteness of particles/waves that is fundamental to quantum theory. String theory and loop quantum gravity have attempted to bridge this gap, but neither is yet regarded as giving a complete picture (see Smolin, 2004; Penrose, 2004).
3.) In traditional versions of quantum theory, the wave form of the quanta is conceived as a superposition of the many possible positions of a quantum particle. When the wave function collapses, the choice of a particular position for the particle is random. This choice of position is an effect without a cause. The property of randomness is not in itself particularly useful in theories of consciousness, but it does open a chink in the deterministic structure of the universe, which is exploited in particular by the Penrose/Hameroff model (2013; see also Stapp, 2009, 2012).
4.) Non-locality is the remaining special feature of quantum theory. Classical physics comprises only so-called billiard-ball relationships, with bits of matter and energy bumping into one another. These relationships are local, in that they involve immediate contact. Such relationships are also normal in quantum physics. However, quantum physics also possesses non-local relationships. These apply where two particles have been in some close relationship, such as two electrons in the same orbital. In this case they can become correlated. For instance, the spin of two particles may always be opposite: if one spins up, the other spins down. This is not a problem while the particles are in a wave form, as both will be in a superposition of up and down. However, if the wave function of one particle collapses, that particle settles into one state or the other. When that happens, the other particle will take the opposite state. In experiments, this has been shown to happen when the two particles are out of range of a signal travelling at the speed of light. No matter, energy or conventional information is transferred, and the experiment is not regarded as a violation of relativity, but it demonstrates that quantum properties can correlate instantaneously over any distance (for a basic introduction to QM see Thomas A, internet link).
The Failure of Modern Consciousness Studies
The study of consciousness was a taboo in academic circles through much of the 20th century, at least in part due to the long reign of behaviorism. Even the study of emotion was largely proscribed, with brains conceived as being reasoning machines and nothing else. This started to lift in the late 1980s, and at first this seemed a marvelous opportunity for the advances made in other areas of science to be applied to the neglected area of consciousness. What followed, however, can be seen as an overall negative: the establishment of orthodoxies which appear to have negligible chance of success in explaining consciousness, while discouraging explanations that relate to new areas of physics or neuroscience.
The traditional explanation for consciousness, or the soul in more traditional language, is known as dualism. This posits a separate spirit stuff and physical stuff, with the spirit stuff capable of acting on the physical stuff, as when the soul commands the body. The core argument against dualism was that for the spirit stuff to act on the physical stuff it would need to have some physically relevant quality and would therefore not be pure spirit stuff; vice versa, the same looks to apply for the physical stuff. The failure of dualism is one of the few points of agreement between mainstream consciousness studies and those that identify consciousness with a fundamental of the universe (Thompson, 2000).
Functionalism was, at least in the 1990s, the dominant explanation for consciousness, driven by the success of computers as problem-solving and memory-storage machines. The main proposal is that any system or machine that processes information in the same way as the brain will be conscious, regardless of what it is made of. The biological matter and structure of the human brain was deemed irrelevant. In reality, and despite its popularity, this appears to be a pseudo-theory, kicking the problem of consciousness further down the road. It does not explain how consciousness arises in the brain, nor does it explain how consciousness might at some point arise in silicon or other matter. It seems, however, that functionalism has had a malign effect in making mainstream consciousness studies practitioners think it unnecessary to take any notice of modern developments in neuroscience or biology.
Identity theory may have been the next most popular theory after functionalism in the 1990s. This declared that consciousness was identical to the brain or identical to its processing. However, it made little attempt to explain why it was identical to the brain, but not to any of the other physical structures in the universe. Nor did it attempt to define what it meant by the brain, despite the fact that our understanding of the physical processing of the brain was changing dramatically. It was further undermined by the discovery that much neural processing such as the dorsal stream governing spontaneous movement could be brought to completion outside of consciousness, which was seen to be more closely related to longer-term evaluations and planning.
Epiphenomenalism was and remains another popular idea. The theory proposes that consciousness is a by-product of neural processing that has, however, no function. Despite its popularity this concept is beset by at least three major problems. It conflicts with evolutionary theory in that it is hard to see why evolution should select for something that had no function, particularly as neural processing is exceptionally energy-hungry. The theory also conflicts head on with physics in which there is no acausality, with every object or process having influences elsewhere. Finally, there is the problem that even granting the idea of a functionless by-product, there is still no physical evidence for what produces such a thing in the brain. Like functionalism it appears to be a pseudo-theory.
In the present century, there seems to have been a tacit recognition that functionalism and identity theory would have difficulty in becoming the consensus of a wider public. This appears to have given rise to two more theories that avoid treating consciousness as a fundamental. Consciousness resulting from embodiment has been possibly the most fashionable of these ideas. Initially, embodiment ideas did represent a genuine step forward in both consciousness studies and psychology, as a move away from the brain as a computer in a vat. It is now accepted that mental events can influence the body and that visceral events can feed back on the brain. It is also accepted that emotion is a relevant aspect of mental life. However, there was an over-reach in suggesting that the body somehow drove a consciousness that the brain could not produce. This seemed to assume some kind of undefined special property in the body that was not present in the brain. More specifically, it ignored the fact that, apart from the sense of touch, signals enter the brain directly from the environment and are consciously processed in the higher sensory and frontal cortex before being signaled to the viscera.
The attempt to classify consciousness as a form of information or information processing has also become fashionable in this century. Interestingly, there are innumerable examples of non-conscious information, especially when we look at modern technology, with no apparent specification as to how conscious information would differ from non-conscious information. (Meijer, 2012, 2013a, 2014)
At a more philosophical level, there is a core difference between information and reality: information embraces only what we happen to know, or what we attempt to describe of nature’s behavior and microscopic make-up, whereas that make-up itself comprises reality. Thus the hunter-gatherer in ancient Africa, glancing up at the sun, is aware only of its glare, heat and position in the sky. A fuller understanding of its reality had to wait for modern science.
A popular but poorly based concept is to call consciousness an emergent property. The idea of consciousness as an emergent property of classically described matter is superficially plausible, and as such can sometimes look like the best shot of modern consciousness studies. Emergence is a familiar process in physics: thus liquidity is an emergent property of water. The individual component hydrogen and oxygen atoms do not have the property of liquidity. However, when they are bound together in a sufficiently large number of water molecules, the property of liquidity emerges.
The problem with this as an explanation of consciousness is that when emergent properties such as liquidity arise in nature, the emergence can be traced to the component particles and forces, such as the electromagnetic interactions between the water molecules. The macroscopic liquidity is an effect of the microscopic electrical charges and the resulting charge relationships. The problem for consciousness as an emergent property is that no arrangement of such particles and forces has been identified that could produce consciousness. Many continue to furiously assert that this is possible, but the claim being made here is in fact the same as in dualism, where two things that have no common property are required to act on one another. Anybody who thinks this is possible in physics could simplify their search for consciousness by accepting the idea of dualistic spirits (Murphy, 2007, 2011; Auletta et al.; Clayton and Davies, 2006).
In the last two decades, consciousness studies has gone off in a different direction from physics or neuroscience. Much of consciousness studies is dominated by philosophers and psychologists who have only a scant interest in what has been happening in brain science, let alone physics. In many cases, they see it as their duty to prop up a nineteenth-century Newtonian world view, while dealing in abstractions that take limited account of neuroscience or physics. Neuroscientists have meanwhile been pressured into treating consciousness as not part of their remit, deferring to philosophers when it was necessary to discuss consciousness, even when the philosophy was contradicted by the neuroscientists’ own discoveries. More fundamental approaches have fallen victim to black propaganda against them. It seems likely that mainstream consciousness studies, if it survives at all, will reach the end of the 21st century without having achieved consensus on a theory that has explanatory value.
The Descent into the Quantum World
Suppose one were to ask for a scientific description of your hand. Biology could describe it in terms of skin, bone, muscles, nerves, blood etc., and this might seem a completely satisfactory description. However, if you were just a bit more curious, you might ask what the muscle and blood etc. were made of. Here you would descend to a chemical explanation in terms of molecules of protein, water etc. and the reactions and relations between these. If you were still not satisfied with this, you would have to descend into the quantum world. At this level, the solidity and continuity of matter dissolves. The molecules of protein etc. are made up of atoms, but the atoms themselves are mainly vacuum. Most of the mass of the atom lies in a small nucleus, comprised of protons and neutrons, which are themselves made up of smaller particles known as quarks. The rest of the mass of the atom resides in a cloud of electrons orbiting around the nucleus (see Fig.2).
Fig. 2: Some central elements of quantum physics: uncertainty of position of particles, wave/particle duality as demonstrated in the double-slit experiment (upper part), as well as entanglement (non-locality) of particles at great distances, the phenomenon of coherence/decoherence and superposition of waves (lower part).
The fundamental particles are bound together by the four forces of nature, which are gravity, electromagnetism and the strong and the weak nuclear forces. The strong nuclear force binds together the particles in the nucleus of the atom, and acts only over the very short range of the nucleus itself. Gravity is a long-range force that mediates the mutual attraction of all objects possessing mass. The electromagnetic force is perhaps the force most apparent in everyday life. We are familiar with it in the form of light, microwaves and X-rays. It holds together the atom through the attraction of the opposite electrical charges of the electron and the proton. It also governs the interactions between molecules. Van der Waals forces, a weak form of the electromagnetic force, are vital to the conformation of proteins and thus to the process of life itself. In contrast to the nuclear forces, gravity and electromagnetism are conceived of as extending over infinite distance, but with their strength diminishing according to the inverse square law. That is, if you double your distance from an object, its gravitational attraction will be four times as weak. The quanta can be divided into two main classes: the fermions, which possess mass, and the bosons, which convey energy or the forces of nature. The most fundamental fermions are the quarks making up the nucleus and the circling electrons, while gluons and photons are the most prominent bosons. The gravitons, which may mediate the gravitational force, remain hypothetical.
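The inverse-square scaling described above can be sketched numerically. The masses and distances below are arbitrary illustrative values, not figures from the text:

```python
# Minimal sketch of the inverse-square law for gravitation.
# G is the gravitational constant; the masses and distances are arbitrary.

def gravitational_force(m1: float, m2: float, r: float, G: float = 6.674e-11) -> float:
    """Newtonian attraction between two point masses separated by distance r."""
    return G * m1 * m2 / r ** 2

f_near = gravitational_force(1.0, 1.0, 1.0)  # unit masses, unit distance
f_far = gravitational_force(1.0, 1.0, 2.0)   # same masses, doubled distance

# Doubling the distance makes the attraction four times as weak:
assert abs(f_near / f_far - 4.0) < 1e-9
```

The same 1/r² dependence governs the electrostatic force between charges, with Coulomb's constant in place of G.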
The Quantum Wave
The quantum particles, or quanta, are unlike any particles or objects that are encountered in the large-scale world. When isolated from their environment they are conceived as having the property of waves, but when they are brought into contact with the environment, there is a process of decoherence, in which the wave function is described as collapsing into a particle. The wave form of the quanta is different from waves in matter in the large-scale world, such as the familiar waves in the sea, which involve energy passing through matter. By contrast, the quantum wave can be viewed as a wave of the probability of finding a particle in a specific position. This probability wave also applies to other states of the quanta, such as momentum. While the quantum remains in its wave form, it is viewed as a superposition of all the possible positions that the particle could occupy. At the peak of the wave, where the amplitude is greatest, there is the highest probability of finding the particle when the wave eventually collapses. However, the choice of position for each individual particle is completely random, representing an effect without a cause. This comprises the first serious conceptual problem in quantum theory.
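The probability-wave picture can be illustrated with a small numerical sketch. The Gaussian wave packet below is an assumed example, not a system discussed in the text; each random draw stands in for one position measurement, with outcomes distributed according to |psi|²:

```python
# Illustrative sketch: repeated position measurements on identically prepared
# quanta, with outcomes drawn at random from the probability density |psi|^2.
# The Gaussian wave packet (centre 0, width sigma = 1) is an assumed example.
import math
import random

def psi_squared(x: float, sigma: float = 1.0) -> float:
    """Probability density |psi(x)|^2 of a normalized Gaussian wave packet."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

rng = random.Random(42)
# Each draw models one collapse: the individual outcome is random, but the
# statistics over many runs follow the wave's amplitude.
hits = [rng.gauss(0.0, 1.0) for _ in range(10_000)]

near_peak = sum(1 for x in hits if abs(x) < 1.0) / len(hits)
# Roughly 68% of detections land within one sigma of the peak, where
# psi_squared is largest; any single detection remains unpredictable.
```

No single draw can be predicted, yet the histogram of many draws reproduces the shape of the wave, which is exactly the relation between the wave form and particle detections described above.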
The Two-Slit Experiment in Quantum Mechanics
The physicist Richard Feynman said that this classic experiment contained all the problems of quantum theory. In the early nineteenth century, an experiment by Thomas Young showed that when a light source shone through two slits in a screen, a pattern of light and dark bands appeared on a further screen, indicating that the light was in some places intensified and in others reduced or eliminated. Where two waves of ordinary matter, for instance waves in water, come into contact, an interference pattern forms, by which the waves are either doubled in size or cancelled out. The appearance of this phenomenon in Young’s experiment demonstrated that light was a wave, contrary to most scientific opinion prior to the experiment.
Later, the experiment was refined. It could now be performed with one or two slits open. If there was only one slit open, the photons or light quanta, or any other quanta used in the experiment behaved like particles. They passed through the one open slit, interacted with the screen beyond and left an accumulation of marks on that screen, signifying a shower of particles rather than a wave. But once the second slit was opened, the traditional interference pattern, indicating interaction between two waves, reappeared on the screen. The ability to generate the behavior of either particles or waves, simply according to how the experiment was set up, showed that the quanta had a perplexing wave/particle duality.
Fig. 3: The famous double-slit experiment: single particles behave like a wave front that shows an interference pattern on the screen (a), and decisions to open or close a slit, even after the particles have passed the slits, influence the final pattern (b).
The wave/particle duality was shocking enough, but there was worse to come. Technology advanced to the point where photons could be emitted one at a time, and therefore impacted the screen one at a time (Fig. 3). What is remarkable is that with two slits open, but the photons impacting one at a time, the pattern on the screen still formed itself into the light and dark bands of an interference pattern. The question arose as to how the photons emitted later in time ‘knew’ how to arrange themselves relative to the earlier photons in such a way that there was a pattern of light and dark bands, indicative of interacting waves.
The obvious solution was to place photon counters at the two slits in order to monitor what the photons were up to. However, as soon as a photon is registered by a counter, it collapses from being a wave into being a particle, and the wave-related interference pattern is lost from the further screen. The most plausible way to look at it may be to say that the wave of the photon passes through both slits, or possibly that it tries out both routes, and after doing this the divided wave interferes with itself (see Fig. 3).
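The self-interference picture can be made concrete with a small calculation. All numbers below (wavelength, slit separation, screen distance) are assumed illustrative values: summing the complex amplitudes of the two partial waves reproduces the bright and dark bands, while a single open slit gives a smooth distribution.

```python
# Illustrative two-slit calculation: the photon's wave is split between the
# slits and the partial waves are added as complex amplitudes. All numbers
# (wavelength, slit separation, screen distance) are assumed example values.
import cmath
import math

WAVELENGTH = 500e-9            # green light
K = 2 * math.pi / WAVELENGTH   # wavenumber
D = 200e-6                     # slit separation
L_SCREEN = 1.0                 # slit-to-screen distance

def intensity(x: float, slits_open: int) -> float:
    """Intensity at screen position x with one or two slits open."""
    sources = [-D / 2, D / 2][:slits_open]
    amplitude = sum(cmath.exp(1j * K * math.hypot(L_SCREEN, x - s)) for s in sources)
    return abs(amplitude) ** 2

xs = [i * 1e-5 for i in range(-300, 301)]  # +/- 3 mm across the screen
two_slit = [intensity(x, 2) for x in xs]   # bright and dark bands appear
one_slit = [intensity(x, 1) for x in xs]   # uniform, no interference structure
# With both slits open the amplitudes add at some points (intensity near 4)
# and cancel at others (intensity near 0); with one slit the intensity is flat.
```

Closing one slit in the code removes one term from the amplitude sum, and the banded pattern disappears, mirroring what happens physically when a counter collapses the wave at a slit.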
The EPR Experiment and the Copenhagen Interpretation
Einstein disliked the inherent randomness involved in the collapse of the wave function. This was despite the fact that his revival of the idea of light in the form of discrete particles or quanta had contributed to the foundation of quantum theory. He sought repeatedly to show that quantum theory was flawed, and in 1935 he seemed to have produced a masterstroke in the form of the EPR (Einstein, Podolsky, Rosen) experiment. At the time this was only a ‘thought experiment’, a mental simulation of how a real experiment might proceed, but since 1982 it has been possible to perform it as a real experiment (Fig. 4).
Fig.4: Quantum entanglement in a pair of distant elementary particles with regard to spin
The challenge to quantum theory presented by the EPR experiment hinges on the concepts of locality and non-locality. Locality comprises the idea of normal cause and effect under which objects or particles move or change as a result of being impacted by other objects or particles, or of being directly acted on by energetic forces such as the electromagnetic force. It is local because the object or force producing the action or change has to be in direct contact with the object or particle acted on. Moreover, where a force emitted by one object acts on another distant object such as light emitted from the Sun acting on the Earth, the force passes between the two objects at a speed not greater than that of light. By contrast, non-locality involves the ability of one particle to determine the behaviour of another distant particle instantaneously, and without any matter or energy passing between the two. Einstein termed this ‘spooky action at a distance’.
With the EPR experiment it was shown that, as it stood, quantum theory violated the principle of locality, which is normally regarded as basic to scientific thinking and even to common sense. Quantum theory indicated that when two quanta had been closely related to one another, for instance in the same electron orbital, they could be regarded as quantum entangled. In this state, certain aspects of their behavior in relation to one another became fixed. For instance, quantum particles have a property of spin, which is partly analogous to the spinning of large-scale objects. Quanta can have the property of spin-up or spin-down. In an entangled state, particles could have the relationship that when one had spin-up, the other would always have spin-down. However, as quanta, while they remained in a wave form, they both represented a superposition of spin-up and spin-down, and therefore neither of them had a defined spin (Fig. 4).
The EPR experiment proposed that two such wave-form particles be moved apart. This could be a few meters along a laboratory bench or to the other side of the universe. The relevant consideration is that the two locations should be out of range of a signal travelling at the speed of light. Now, if an observation is made on one of the particles, its wave function collapses, and it acquires a defined spin, let’s say spin-up in this case. When an observation is then made on the other particle, it will always be found to have the opposite spin. This defies the normal expectation of classical physics that a random choice of spin would produce approximately 50% the same spin and 50% different. Therefore, there is seen to be some non-local connection between the two particles, although it is not possible to describe or detect this in terms of a normal physical transfer of energy or matter. This non-locality and the randomness of the outcome of the wave function collapse constitute the two main puzzles in quantum theory.
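A toy simulation contrasts the entangled outcome with the 50/50 expectation mentioned above. This is purely classical bookkeeping offered to illustrate the correlation statistics, not the underlying quantum mechanism:

```python
# Toy illustration of EPR-style anti-correlation when both particles of a
# singlet pair are measured along the same axis. The "collapse" of the first
# particle is random; its partner then always shows the opposite spin.
import random

def measure_singlet_pair(rng: random.Random) -> tuple:
    first = rng.choice(["up", "down"])           # random outcome of the collapse
    second = "down" if first == "up" else "up"   # partner is anti-correlated
    return first, second

rng = random.Random(7)
runs = [measure_singlet_pair(rng) for _ in range(1000)]

# Each individual outcome is random (about half the runs give "up" first)...
ups = sum(1 for first, _ in runs if first == "up")
# ...yet the two spins disagree on every single run, not merely 50% of the time.
assert all(first != second for first, second in runs)
```

Note that for measurements along a single shared axis this perfect anti-correlation could still be mimicked by pre-agreed values; it is the correlations across different measurement angles, tested via Bell's theorem, that rule out such local explanations.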
The Copenhagen interpretation and the EPR Experiment
The idea of non-locality, which appeared to deny much of what the science of the previous three hundred years had been trying to establish, was as repugnant to the leaders of the quantum movement, such as Niels Bohr, as it was to Einstein as an opponent of quantum indeterminism. Some modern analysis suggests that Bohr changed his own view of the quantum world in a crucial manner after encountering the EPR challenge. Bohr’s interpretation is known as the Copenhagen Interpretation, and the form of this that emerged after 1935 essentially denied the objective existence or reality of the quantum wave. Bohr said that there was no quantum world, there was no deep reality. The quanta only achieved objective reality when they were the subject of an experiment or observation (Interpretations of quantum mechanics, Wikipedia; Genovese, 2010).
The concept of reality or objective existence is here taken to mean that something exists even when it is not being observed by anyone. The Copenhagen Interpretation denies that sort of reality to the wave form of the quanta. The wave was to be seen only as an abstract mathematical expression allowing one to predict the probable position of a particle. If the wave form had no real existence, EPR type situations did not involve any physical action at a distance, and the problem could be deemed to have gone away (see Kumar, 2009).
The Aspect Experiment and Non-locality
The question returned to the fore in the 1980s as technology overtook the original EPR thought experiment. In 1964 John Bell's theorem had shown mathematically how EPR could be tested, and in 1982 Alain Aspect's experiment demonstrated the physical reality of EPR correlations (Aspect, 1982). The Aspect experiment did not invalidate Copenhagen, but it transferred the whole debate from the hypothetical to the scientifically tested level. It presented physics with a stark choice. Either one could accept the Copenhagen Interpretation, in which the locality of interactions was preserved but the components of matter and energy were unreal, or one could have a world that was real, but in part governed by non-local influences, Einstein's dreaded 'spooky action at a distance'.
In fact, recent decades have seen a growing challenge to the orthodoxy of Copenhagen. This leaves us without a generally agreed interpretation of quantum theory. The Copenhagen Interpretation preserved us from non-locality, but the concept of the quanta as mathematical abstractions that suddenly produce physical particles may be viewed as troubling. It seems to propose a sort of dualism, comparable to the relationship between spirit stuff and physical stuff. How could mental constructs, such as mathematical formulas, become physical without having had some physical reality in the first place?
Other interpretations have come more to the fore in recent decades. Decoherence has become particularly popular as a substitute for the traditional 'measurement' always referred to in the Copenhagen version. In decoherence, the collapse of the wave function happens of its own accord, as a result of the wave becoming entangled with the rest of the environment. In some recent versions, it is suggested that there is no collapse; the information in the wave simply gets lost in the larger scale environment. In some quarters, this is argued to provide a connection to the 'Many Worlds' interpretation, in which there is also no collapse, but a branching of reality into separate universes. So in the Schrödinger cat paradox, for instance, the universe splits into one universe with a live cat and one with a dead cat.
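The decoherence picture sketched above can be illustrated numerically. The following is a minimal toy model of ours (not taken from the text), with an arbitrary decoherence time: environmental entanglement damps the off-diagonal terms of a qubit's density matrix, so a superposition comes to look like a classical statistical mixture while the populations stay untouched.

```python
import math

# Minimal sketch: exponential suppression of the coherence (off-diagonal)
# terms of a 2x2 density matrix. t_dec is a free parameter of the demo.
def decohere(rho, t, t_dec):
    damp = math.exp(-t / t_dec)
    return [
        [rho[0][0],        rho[0][1] * damp],
        [rho[1][0] * damp, rho[1][1]],
    ]

rho_superposition = [[0.5, 0.5], [0.5, 0.5]]  # the state (|0>+|1>)/sqrt(2)

for t in (0.0, 1.0, 10.0):
    rho = decohere(rho_superposition, t, t_dec=1.0)
    print(t, round(rho[0][1], 4))  # coherence term decays toward 0
```

At long times the matrix is effectively diag(0.5, 0.5): the interference information has leaked into the environment, which is the sense in which "measurement" can happen without an observer.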
Quantum Gravity and the Search for Reality
The successes of quantum theory (see examples in Fig. 5), which describes matter and energy, and of relativity, which describes space and time, have both been marred by the incompatibility of these two key theories. Relativity describes gravity as the smooth, continuous curvature of space under the influence of massive objects, while quantum theory is based on the idea of energy and matter coming in discrete, discontinuous units. Mathematically these contrasting features lead to infinities, indicating that something is wrong. The attempt to overcome these problems has led to new theories, such as string theory and loop quantum gravity (Smolin, 2005).
Fig. 5: Wave/particle duality in quantum physics should rather be seen as a state in which particle and wave forms are complementary features of a hidden reality (upper left); Pusey et al. showed that the wave form is a physical reality (middle left). Principles of quantum physics are presently used in a large variety of technologies (upper right). A conscious observer, or a detector that observes the double-slit system and provides interpretable data, collapses the wave function to a single-slit pattern.
String theory proposes that the fundamental particles are not point particles, as had been assumed, but one-dimensional strings extending into higher dimensions, beyond the normal four dimensions. The extra dimensions are usually deemed to have been rolled up very small in the Big Bang, which accounts for them never having been detected. The manner in which the strings vibrate determines the nature of the particle involved. The analogy is that of the strings of a violin, where the vibration of the string determines the nature of the note. While this may appear both speculative and improbable, it has the advantage of being described by mathematics that would allow quantum theory and relativity to be compatible (see String theory; M-theory in Wikipedia).
The two main criticisms of string theory are that it produces 10^500 possible universes, and that it operates against the background of a fixed spacetime, a concept that relativity showed to be invalid. An alternative approach is provided by loop quantum gravity (LQG). This approaches the problem from the direction of relativity and concepts of spacetime, in contrast to string theory, which approaches it from the point of view of particles and quantum theory (Smolin, 2004).
LQG proposes that spacetime is quantized, i.e. comes in discrete units. Spacetime is suggested to be created out of a network, or a lattice, or a series of loops. This theory has drawn on the earlier spin network theory developed by Roger Penrose (1994, 2004), and moves towards viewing particles and spacetime as dual aspects of the same thing.
Problems and Opportunities in Quantum Theory
We have emphasized three problematic aspects of the theory: acausality in the randomness of the wave function collapse, acausality in the non-local influences demonstrated by EPR type experiments, and the resulting lack of agreement as to the underlying reality of the physical universe. At the quantum level, we find properties of mass, charge and spin that are given properties of the universe, lacking cause or explanation. If we ask what the charge on the electron actually is, rather than what it does, the answer will be a resounding silence. The quanta and related spacetime appear to be the only level of the physical universe where it might be possible for science to insert consciousness as an additional fundamental property (for reviews see Vannini and Di Corpo, 2008; Hu and Wu, 2010; Tarlaci, 2010; Meijer and Korf, 2013; Pereira, 2003; Atmanspacher, 2011).
Timescales for Neural Processes and Consciousness
In looking at the possible physical underpinnings of neuroscience, Georgiev contrasts what is for consciousness studies the still dominant Newtonian orthodoxy of deterministic causes and effects with quantum physics, in which there is a multitude of potential outcomes rather than a single determined outcome. Georgiev (2003) discusses epiphenomenalism, the theory that consciousness is a by-product of brain processing having only an illusion of causal influence. He points out the evolutionary argument against this view, to the effect that evolution would not select for something that conveyed no selective advantage. In general, he sees the idea that we have no freedom or moral responsibility as counterintuitive. Such a counterintuitive result is seen as the inevitable consequence of explanations based on deterministic classical physics. Quantum mechanics does, however, provide a non-deterministic alternative, in which consciousness underlies the neural processes of making choices and thus effecting future possibilities. The author goes on to discuss the vexed question of the possibility of quantum coherence in the brain. Mainstream consciousness studies has managed to fabricate an orthodoxy to the effect that quantum coherence cannot occur in organic matter. A paper by the physicist Max Tegmark is often quoted in this respect. Tegmark asserted what was already an established position, to the effect that quantum coherence in the brain would be too short-lived to have a functional role in neural processing (Tegmark, 2000).
Tegmark's paper was aimed at refuting Hameroff's Orch OR theory (Hameroff and Penrose, 2011), which required quantum coherence to be sustained for 25 ms. Thus Tegmark did not show that coherence over shorter timescales could not support consciousness, because he was directing his argument at the longer timescales of Hameroff's theory. Georgiev here queries whether there is any evidence that consciousness has to arise over a milliseconds timescale. If consciousness could operate over a picosecond or shorter timescale, then Tegmark's calculations do not present any problem for quantum consciousness. It is pointed out that all neuroscience has been able to show to date is that consciousness does not operate on a scale slower than milliseconds.
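The gap Georgiev points to can be put in back-of-envelope numbers. This is our illustration: the 10^-13 s figure is Tegmark's published order-of-magnitude estimate for microtubule decoherence, 25 ms is the Orch OR requirement quoted above, and the picosecond figure is Georgiev's proposed scale.

```python
import math

t_decoherence = 1e-13  # s, Tegmark's microtubule decoherence estimate
t_orch_or     = 25e-3  # s, coherence time Orch OR requires
t_picosecond  = 1e-12  # s, Georgiev's proposed consciousness timescale

# How many orders of magnitude each requirement exceeds the estimate:
gap_orch = math.log10(t_orch_or / t_decoherence)
gap_pico = math.log10(t_picosecond / t_decoherence)

print(round(gap_orch, 1))  # 11.4 -> Orch OR falls far short
print(round(gap_pico, 1))  # 1.0  -> a picosecond scale is far less problematic
```

The point of the comparison is only that Tegmark's calculation rules out millisecond-scale coherence, not coherence on the much shorter scales Georgiev considers.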
Tests show that there is a minimum timescale of about 30 ms needed for a subject to distinguish two sensory inputs as being separate. This means that consciousness cannot be slower than 30 ms. However, patients with time agnosia, who lack a subjective experience of the passage of time, confirm that it is physically possible to have consecutive conscious steps that are experienced as simultaneous. From this it is argued that the real units of consciousness could be at the picosecond level, although such units cannot be discerned by the conscious subject.
It is argued here that the upper possible bound of the timescale of consciousness need not be its actual scale. As an analogy, Georgiev takes the example of the operation of a personal computer. The computer screen is on a millisecond timescale, with the screen refreshing perhaps every 10 ms. But this is not indicative of the performance of the processor, which may operate on a picosecond timescale. All that the refresh rate of the monitor can tell us is that the processor does not operate on a timescale slower than 10 ms. In the brain, the millisecond timescale applies to the brain's communications with sensory organs and muscles, but this may not say much about its internal processing.
The author goes on to argue that if consciousness arises at the quantum level, some of the conventional arguments of mainstream consciousness theory fail. He contrasts classical and quantum information. Classical information can be copied and stored. A DVD encoded as a string of 0's and 1's, which can be read by an external observer, is an example of classical information. With the qubits of quantum information, it is impossible to read them, because any interaction with them would alter them. All that can be done is to swap or move the information without deleting it. This inability of third parties to observe quantum information is similar to our inability to observe first-person consciousness, while such inability is alien to classical information systems.
The author argues that it is impossible to copy minds that are based on quantum states because of the no-cloning theorem, which demonstrates that attempts to copy quanta result in the quanta being corrupted. In mainstream consciousness studies, philosophers and others have sought to create wonderment by arguing that it is possible to copy minds, and this appears to be true in principle if consciousness is based on classical physics. However, if consciousness arises from quantum states this becomes impossible. The possibility of copying a mind has also been used as a somewhat convoluted argument against the existence of the self.
Table 1: The History of Quantum Physics and Quantum Brain Theory
1805: Young: Double-Slit experiment
1860: Maxwell: Electromagnetism Laws
1870: Boltzmann: Gas laws/Movement of particles
1900: Planck: Quantum aspects of Energy
1905: Einstein: Special Relativity Theory
1908: Minkowski: 4-Dimensional Space Time
1915: Einstein: General Relativity Theory
1913: Bohr: Structure of the Atom
1919: Kaluza: Fifth dimension Gen. relativity/Electromagn.
1923: De Broglie: Wave/Particle duality, hidden variables
1924: Alfred Lotka: Quantum brain in mind/brain relations
1925: Pauli: Bosons and Fermions and Elementary particles
1925: Schrödinger: Wave equation for matter waves
1925: Heisenberg: Uncertainty principle in Quantum physics
1925: Uhlenbeck/Goudsmit: Electron spin phenomenon
1926: Born: Statistical description of wave/particle duality
1927: Bohr: Measurement in QM/Copenhagen interpretation
1927: Planck/Heisenberg: Zero Point Energy Field
1928: Dirac: Quantum-Electrodynamics/Quantum field theory
1928: Arthur Eddington: QM determinism of brain function
1930: Fritz London/Edmond Bauer: Consciousness creates reality
1932: John von Neumann: Relation between QM and consciousness
1934: J B S Haldane: Quantum wave character and life
1934: Niels Bohr: The Mind and QM are connected
1940: Wheeler/Feynman: Absorber theory
1942: Casimir: Experimental proof of Zero Point Energy
1948: Gabor: Holography
1951: Bohm: Hidden variables, Pilot waves and Implicate order
1955: Pauli/Jung: Synchronicity
1957: Everett: Many-worlds hypothesis
1961: Wigner: Consciousness collapses quantum state
1964: Bell: Quantum entanglement is non-local
1967: Wheeler: Quantum flavour dynamics of elem. particles
1966: John Eccles/F. Beck: Quantum effects in synaptic transmission
1967: L M Riccardi/H Umezawa: Quantum Neurophysics
1970: Prigogine: Nonequilibrium dynamics, unilateral time
1971: Pribram: The Holographic brain
1972: Clauser: Experimental proof of quantum entanglement
1974: Schwartz: Superstring theory
1974: Evan Walker: Quantum tunnelling in brain processes
1976: Sperry: the Self in Mind/Brain concepts
1978: Stuart/Takahashi/Umezawa: Quantum brain dynamics
1982: Aspect: Experimental proof quantum correlated particles
1980: John Cramer: Transactional interpretation of quantum physics
1986: Barrow/Tipler: Anthropic cosmological principle
1986: Herbert Frohlich: Bose-Einstein condensates in biology
1986: Penrose; Quantum gravity induced reduction of wave function
1988: Stephen Hawking: Multiverse concepts
1989: Ian Marshall/Zohar: Consciousness and Bose-Einstein condensates
1989: Puthoff: Particle inertia and Zero Point Energy
1989: Michael Lockwood: Mind, Brain and Quantum
1991: Zurek: Decoherence of quantum function by environment
1992: Schlempp: The quantum principle of MRI in brain scanning
1992: Smolin: Loop quantum gravity and Black holes/ multiverse
1992: Hameroff/Penrose: Microtubuli/Consciousness theory
1992: Pylkkänen: Mind /matter interaction and active information
1993: Goswami: the Self-aware Universe
1993: Herbert: Elemental mind
1994: Henry Stapp: Ca-ions, neuron coherence and free will
1995: Edward Witten: M (string) theory
1995: Mari Jibu/ K.Yasue: Ordered water and superradiance
1995: Gordon Globus: Quantum Cognition
1996: Price: Backward causation
1996: Chalmers: The hard problem/Panpsychism
1998: Scott Hagan: Microtubuli biophoton emission
2000: Wheeler: The Participatory Universe
2001: Wolf: Mind into Matter and the Soul
2000: Vitiello: Dissipative Quantum model of the Brain
2002: Huping Hu/Maoxin Wu: Spin mediated consciousness
2003: Zeilinger: Information and quantum teleportation
2004: Laszlo: The informed universe, non-local Akashi field
2003: Primas: Tensed time in Matter and Mind
2005: Yasue/Umezawa: Bioplasm in Quantum brain dynamics
2006: Scaruffi: Consciousness in elementary particles
2006: Deutsch: Fabric of reality, quantum computing and multiverse
2012: King: Cosmology of consciousness
2013: Hameroff and Penrose: Modified the Orch Or brain model
The argument runs that copying the mind would create the paradox of two identical selves; however, if copying of minds is impossible, this paradox does not arise.
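The no-cloning theorem invoked in this argument can be sketched in a few lines. This is our illustration of the standard linearity argument from quantum mechanics, not code from the text: any linear "cloner" that correctly copies the basis states fails on superpositions.

```python
import math

s = 1 / math.sqrt(2)

# Suppose a linear operation clones the basis states: |0> -> |00>, |1> -> |11>.
# Applied (by linearity) to |+> = s|0> + s|1>, it yields s|00> + s|11>,
# an entangled state:
cloned_by_linearity = {"00": s, "01": 0.0, "10": 0.0, "11": s}

# A genuine clone would be the product state |+> (x) |+>, with amplitude
# 1/2 on every two-qubit basis state:
true_clone = {"00": 0.5, "01": 0.5, "10": 0.5, "11": 0.5}

# The two states differ, so no linear (hence no quantum) operation can
# clone an unknown superposition.
print(cloned_by_linearity == true_clone)  # False
```

Since quantum evolution is necessarily linear, this mismatch is the whole proof: a device that copies unknown quantum states cannot exist, which is why a quantum-state-based mind could not be duplicated.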
QM approaches in neurobiology: the state of art
The following section, taken from Meijer and Korf, 2013, discusses the idea that physical quantum concepts do apply to the mind: the mental domain is considered as an aspect of wave information. The feature of superposition takes a special position here: quantum particles can be present in multiple spatial locations or states and be described by one or more pure state wave functions simultaneously, of which a single state can finally be selected. Penrose (1989) suggested that the underlying space/time geometry in fact bifurcates during the superposition process and that wave collapse occurs in a non-computable manner. It was suggested that the conditions found in the microtubule could allow coherent quantum particles to form a unity that can be described by a single wave function. These concepts are considered the "hard" quantum theories, as opposed to the "soft" or formal theories of the previous section. QM adherents often refer to Wolfgang Pauli (Pauli, 1994), the eminent quantum scientist who suggested that the mental and the material domain are governed by common ordering principles and should be understood as "complementary aspects of the same reality" (see Atmanspacher and Primas, 1977; Primas, 2003). The "hard" mental QM theories apply either to specific brain structures/molecules (this section), or to quantum fields and dimensions, or both.
Vannini and Di Corpo, 2008, Hu and Wu, 2010, and Tarlaci, 2010 listed and attempted to categorize the various published quantum brain models, without a detailed treatment of the individual models. Vannini and Di Corpo distinguish models based on consciousness creating reality, models based on probability aspects of QM, and models based on already established QM ordering principles. Hu and Wu differentiate between models based on the QM elements of entanglement and coherence, and models of the relation of QM with consciousness, which can include materialistic models (consciousness emerges from the material brain), dualistic mind/matter models and panpsychistic models. The first two papers emphasize the potential testability of the various models. More detailed reviews can be found in Pereira, 2003 and Tuszynski and Woolf, 2010, the latter as an introductory chapter of the instructive book "Emerging Physics of Consciousness", while an excellent and critical overview of the field is provided by Atmanspacher, 2011.
A number of more or less specialized scientific journals are currently, or were, devoted to this subject: NeuroQuantology, Quantum Biosystems, Mind and Matter, and AntiMatters. The Stanford Encyclopedia of Philosophy (see quantum mechanics and quantum theory) also provides an excellent reference. Additional publications on the topic can be found in: J Consciousness Exploration & Research, J New Dualism, Dualism Review, Journal of Cosmology, J Scientific Exploration, Biosystems, Cognitive Neurodynamics, Science and Consciousness Review, Journal of Mathematical Psychology, Chaos, Solitons and Fractals, Open Systems and Information Dynamics, International Journal of Quantum Chemistry, The Noetic Journal, Neuroscience and Biobehavioral Reviews, Experimental Neurology, J. of Mind and Behavior, Physics of Life Reviews, Syntropy Journal, Biological Cybernetics and Kybernetik. The Journal of Consciousness Studies is an excellent source for various models of consciousness (see for example Fig. 6).
Fig. 6: Some models that have been proposed for human consciousness
Before we delve into the physical aspects of the quantum brain, a number of common misunderstandings on QM modeling should be dealt with:
- There is no single theory on quantum mechanical aspects of brain function. In fact, a spectrum of more or less independent models has been proposed, all of which have their intrinsic potential and problems (see Table 2; for references see the above-mentioned reviews and Meijer, 2012, Meijer and Korf, 2013).
- In spite of its introduction already in the first part of the 20th century, and the spectacular successes of the theory ever since, some still see quantum physics as a sort of esoteric part of science. However, it rather represents a revolutionary refinement of classical physics: the theory was required in order to build an adequate atomic model and, more recently, to explain experimentally demonstrated teleportation of particles (see Zeilinger, 2000) as well as principles of downward causation (Wheeler, 2002) and time symmetry (see Aharonov et al., 2010). It is also the basis for laser, semi- and super-conductor, and microchip technology, as well as MRI brain scanning (Marcer and Schempp, 1997). It should also be kept in mind that classical physics can be fully derived from quantum physics, not the other way around.
- Quantum physics is rejected by some because so many interpretations of the theory are at stake (Copenhagen, Many worlds, Implicate order, Interactional theory, Micro-/macro-scale definition, Environmental decoherence, Relational quantum mechanics, etc.). Yet a number of common elements, such as the true particle/wave aspect instead of only a probability function (Pusey et al., 2012), superposition, entanglement/non-locality and coherence/decoherence phenomena, are experimentally established and remain very usable in practice, although the related semantics should be carefully defined.
- It is often stated that quantum wave information coherence cannot be maintained long enough in the brain due to interaction with the macro-environment of the brain components. Yet, on this point, major differences in decoherence-time calculations exist, based on various models and their intrinsic assumptions (see Hagan et al., 2002; Tegmark, 2000; Lloyd, 2011). A central point here is that sub-compartments could be present at the molecular or sub-molecular level that by their special arrangements are quantum noise protected or coherence stabilized. Examples are internal parts of channel proteins (Bernroider, 2004), and stabilization by clustered (gel/sol) arrangements of cytoplasmic water clusters (see Hameroff and Penrose, 1996; Penrose and Hameroff, 2011). The latter authors proposed a hierarchic model encompassing nerve cell depolarization, gel/sol transitions resulting in disconnection of microtubuli, shape/volume pulsation of dendrites including reorganization of synaptic contacts, and finally a sol/gel transition stabilizing a new state. Through coherence and macroscopic entanglement, the lifetime of wave information can be much longer than in the classical phase, as a consequence of coherence/decoherence dynamic equilibria, allowing non-local remote interaction in large numbers of entangled neurons. Such gel/sol oscillations could even be primary to the excitation/depolarization triggered by normal sensory stimuli, and are supposed to interact with zero-point vacuum dipole vibrations (the bi-vacuum matrix model of Kaivairanen, 2006).
- It should be realized that decoherence does not, by definition, imply destruction of information since, firstly, that would not be compatible with the quantum principles of non-cloning and non-deletion; secondly, a cyclic process of decoherence and re-coherence cannot be excluded (see Hartmann et al., 2006; Li and Paraoano, 2009; Atmanspacher, 2011); while thirdly, even if such decoherence does occur, it may result in a mixture of possibilities that may be accommodated by the collection of perceivable worlds in the brain (Stapp, 2012). It has been proposed by Vattay and Kauffman, 2012, that a decoherent state can be converted back to a coherent state by the input of adequate phase and amplitude information. The resulting coherent states can last long enough in warm biological systems to, for example, enable coherent search processes for antenna-mediated transport of photon energy in photosynthesis. The authors postulate that similar "poised realm" micro-domains, on the edge of chaos, could also be instrumental in the human brain as sites where a dynamic interplay of decoherence and re-coherence takes place.
- It is often assumed that QM is only valid for a description of nature on the micro-scale (elementary particles etc.). Yet convincing evidence has more recently been presented that quantum physics can be applied to macromolecules (Zeilinger, 2000) and, to the surprise of many, can even occur in warm and wet biological systems (photosynthesis: Engel et al., 2010) and in the brains of birds in relation to magnetic sensing and navigation (for references see Arndt et al., 2009; Lloyd, 2011), see Fig. 7.
Fig 7: Quantum phenomena that have been detected at the life macro-scale
- Lloyd concluded: “Quantum coherence plays a strong role in photosynthetic energy transport, and may also play a role in the avian compass and sense of smell. In retrospect, it should come as no surprise that quantum coherence enters into biology. Biological processes are based on chemistry, and chemistry is based on quantum mechanics. If an organism can attain an advantage in reproduction, however slight, by putting quantum coherence to use, then over time natural selection has a chance to engineer the necessary biochemical mechanisms to exploit that coherence. Different types of quantum processes that operate at the same time scale can interact strongly either to assist or to impede one another. In photosynthetic energy transfer, the convergence of quantum time scales gives rise to more efficient and robust transport. Evolved biological systems exhibit the quantum Goldilocks effect: natural selection pushes together time scales to allow quantum processes to help each other out”.
- A spectrum of atoms/molecules has been suggested to operate in a quantum manner: Ca2+- and K+-ions, H2O, enzymes, membrane receptor and channel proteins, membrane lipids and neurotransmitter molecules, in addition to macromolecular structures such as DNA/RNA, gap junctions, pre-synaptic vesicles, microtubules and micro-filaments (Tuszynski and Woolf, 2010; Meijer, 2014).
- Since our integral universe can be described by the current laws of QM and relativity, it does not seem warranted to place the human brain outside nature: some even see cosmic architecture mirrored in our complex brain (Kak, 2009; Amoroso, 2003).
- The discussion around higher brain functions is frequently obscured by modalities of promissory materialism: "at present we do not understand consciousness, but within 20 years the problem will be resolved!" Not only is such an extrapolation scientifically unwarranted, it certainly cannot be falsified. Even more damaging is the assumption that the solution will be found by further use of current technology, instead of by postulating new (for example quantum) models and innovative experimental approaches.
- Some QM models are based on the interaction of brain components with experimentally detected quantum fields (Yasue and Jibu, 1995; Vitiello, 1995; Pessa and Vitiello, 2003). The central aspects of realistic quantum field theory hold that the essence of material reality is a set of fields. These fields obey the principles of special relativity and quantum theory, and the intensity of a field at a point gives the probability of finding its associated quanta, the fundamental particles that are observed by experimentalists. These fields may holographically project into each other, implying interactions/interpenetrations of their associated quantum waves. Vitiello proposes a virtual shadow brain working in a time-reversed mode that stabilizes coherence and neural memory structures.
- It could be worthwhile to project neo-Darwinism and its biological evolution theories against the canvas of potential QM mechanisms, in the sense that parallel quantum superpositions and backward causation mechanisms can provide explanations and/or alternatives for evolutionary jumps and so-called emergent phenomena (see Davies, 2004; Murphy, 2011; Auletta et al., 2008; Davies and Gregersen, 2010; Ellis, 2005; Vattay et al., 2012). Recently, models were proposed for the transfer of information in biological evolution on the basis of quantum formalisms (Bianconi and Rahmede, 2012; Djordjevic, 2012).
- On the basis of QM concepts one should be prepared to envision uncommon and even utterly strange manifestations of quantum entanglement: certain transpersonal human experiences (Kak, 2009; Radin and Nelson, 2006; Di Biase, 2009 a, b; Jahn and Dunne, 2007) should not only be seen as potentially explained by QM, but rather as required (Radin and Nelson, 2006) by the concept that our world is part of a quantum universe (Vedral, 2010; Lloyd, 2006; Barrow and Tipler, 1986).
QM and Higher Brain Functions
Here we discuss current QM theories as possible theories bridging the classical neuronal and mental concepts. QM theories do indeed apply to the same brain physiological phenomena, but also introduce typical features such as particle/wave duality, entanglement and non-locality, as well as wave interference and superposition. In addition, processes such as quantum coherence and resonance of wave interactions are at stake.
Quantum Brain models proposed
Umezawa and Ricciardi (1967); Marshall (1989); Beck and Eccles (2000); Bernroider (2000); Järvilehto (2004); Baaquie and Martine (2005); Hu and Wu (2005); Mender (2007); Vannini and Di Corpo (2009); Hameroff and Penrose (2012)
Table 2: Quantum Brain Models proposed from 1960 and further (see for references Meijer, 2012; Meijer and Korf, 2013; Meijer, 2014; Vannini and Di Corpo, 2008; Hu and Wu, 2010, and Tarlaci, 2010).
It is not our purpose to assess the various QM theories in detail; rather, we intend to discuss some of their major implications regarding the concept of a "quantum brain". The key position of proteins in the quantum-mediated initiation and execution of mental activities was already emphasized. Several QM theories are based on specific properties of proteins, for instance micro-tubular proteins (Penrose, 1989; Hameroff, 2007), proteins involved in facilitating synaptic transmission (e.g. Beck and Eccles, 1992; Beck, 2001), including Ca2+-channels (see Stapp, 2009), as well as specific channel proteins instrumental in the initiation and propagation of action potentials (K+-channels; Bernroider and Roy, 2004), see Fig. 8.
QM theories also extend the mind to different space and time dimensions, and some consider the individual mind (partly) as an expression of a universal mind through holonomic communication with quantum fields. In the latter approach, the human brain is conceived as an interfacing organ that not only produces mind and consciousness but also receives information necessary for the full deployment of these mental phenomena (see next section). The central question here is whether neuronal cells are the sole units of information processing in the brain, rather than sub-cellular organelles or molecules (Schwarz et al., 2004).
Fig 8. Some aspects of quantum brain models: synaptic transmission by vesicular exocytosis of neurotransmitter molecules. Ca2+-influx via a Ca2+-channel protein in the neuronal membrane facilitates fusion of synaptic vesicles in the presynaptic terminal. The fusion of sufficient vesicles leads to transmitter release and depolarization of the postsynaptic membrane; this fusion process bears a quantum probability character.
A major debate about these theories concerns the possibility of coherent quantum states in the "warm" and wet internal milieu of the brain (see e.g. Atmanspacher, 2011). The defenders of the quantum brain models have argued that in vivo molecular configurations exist that enable the modulation of quantum states through efficient protection and shielding of the wave interaction compartments in the cells (Hagan et al., 2002).
The particular local collapse of the wave function, in this manner, produces new information. As originally proposed by Eccles, this is realized by membrane protein induced fluxes of Ca2+ or K+ ions, which then increase the probability of fusion of neurotransmitter-filled vesicles in the synapses, leading to the firing of the particular neuron or even groups of neurons. The central hypothesis here is that synaptic transmission represents a typical (quantum) probability state in which the total number of vesicles available for exocytosis is critical for an all-or-none response of neuronal firing (Beck and Eccles, 1992; Beck, 2001). Coherent neuronal perturbations, and especially their entangled state, are supposed to provide non-local "binding" of sensory and cognitive brain centers, and may also enable perception of qualia and the unitary sense of the "conscious self" (Hameroff, 2007). As the "mesoscopic" scale of brain activity where the "binding" process is expected to occur is in the vicinity of the quantum domain, the binding principle is likely to be a quantum non-local effect, probably the only known physical mechanism capable of performing such a task. One possibility is the formation of a quantum photonic field (Flanagan, 2006); another is the formation of coherent states at the level of trans-membrane ion fluxes such as that of Ca2+, as suggested by Pereira, 2003, 2007 (see section 8, Fig. 8).
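The all-or-none vesicle statistics described above can be caricatured in a toy Monte Carlo. This is purely our illustration of the probabilistic-release idea summarized from Beck and Eccles; all numbers (vesicle count, fusion probability, firing threshold) are invented for the demo and carry no physiological meaning.

```python
import random

# Toy model: each docked vesicle fuses with some probability per impulse,
# and the neuron fires (all-or-none) only if enough vesicles fuse.
def trial(n_vesicles, p_fusion, threshold, rng):
    fused = sum(1 for _ in range(n_vesicles) if rng.random() < p_fusion)
    return fused >= threshold  # all-or-none firing decision

rng = random.Random(42)  # fixed seed for a reproducible demo
n_trials = 10_000
firing_rate = sum(trial(n_vesicles=20, p_fusion=0.3, threshold=8, rng=rng)
                  for _ in range(n_trials)) / n_trials
print(firing_rate)  # fraction of impulses that trigger firing
```

The point of the sketch is only structural: a graded, probabilistic event at the vesicle level is converted into a binary firing outcome at the neuron level, which is where the hypothesis locates the quantum probability character of synaptic transmission.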
Hameroff and Penrose (2011, see later) argue against mechanisms of all-or-none firing of axonal potentials as suggested by Beck and Eccles, since such binary states do not include the non-linear and non-computable characteristics of consciousness. They rather prefer the model of Davia, 2010 (see the chapter in the book edited by Tuszynski and Woolf, 2010), proposing that consciousness is related to traveling waves in the brain as a uniting life principle on multiple scales. The latter is based on energy dissipation, enzyme catalysis and protein folding, which maintain the energy balance in an excitable system such as the brain, conditions that are also compatible with the isoenergetic brain model treated in Meijer and Korf, 2013. Non-linearity in brain processes is modeled using the well-known Schrödinger equation, adjusted with a non-linear term, as proposed earlier by Walker (see Behera, 2010 in the same book), by which the robustness of a classical approach is combined with the more flexible elements of quantum theory.
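The text does not spell out Walker's particular non-linear term. Purely as an illustration of what "adjusting the Schrödinger equation with a non-linear term" can mean, the most commonly used form is the cubic term familiar from the Gross-Pitaevskii equation:

```latex
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi + g\,|\psi|^2\psi
```

The extra term \(g\,|\psi|^2\psi\) makes the evolution depend on the wave's own intensity, breaking the strict linearity (and hence the superposition principle) of standard quantum mechanics.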
The originators of this hypothesis (Penrose, 1989; Hameroff, 2007) have argued that microtubules, in principle, can maintain quantum states (i.e. superposition) lasting at least 10⁻⁶ seconds, long enough to be instrumental in the transfer of quantum wave information. Such lasting quantum states are possible because of the shielding of hydrophobic pockets in the particular proteins, as well as the formation of coherent clusters of these molecules that thereby share a common quantum wave function (so-called Bose-Einstein condensates). There is indirect evidence that microtubules may be relevant for neurocognition: increased synthesis in relation to postnatal development, with regard to synaptogenesis and visual learning, and conversely aging deficits in memory, as well as interactions with general anesthetics (Penrose and Hameroff, 2011, 2012; Tuszynski and Woolf, 2010; Kaivarainen, 2005). Yet such correlative studies should not only be further substantiated with experiments that show quantum states in isolated tubules, as reported by Bandyopadhyay (2011), but rather, and most importantly, directly demonstrate tubular involvement in higher brain functions in situ. More recently, in 2013, Bandyopadhyay's group demonstrated that in microtubules the energy levels of up to 40,000 individual tubulin proteins and the energy level of the microtubule as a whole are the same. The water core and the individual tubulin proteins are suggested to control the properties of the microtubule by means of delocalised electromagnetic oscillations. These properties might be taken to suggest that the system can support a macroscopic quantum state. The authors note that prior to this 2013 paper the properties of tubulin and microtubules had not been extensively studied with the up-to-date technologies mentioned here, and that theories that apply to metals, insulators and semi-conductors are not relevant to microtubules.
In conclusion: tubular and synaptic channel proteins exhibit conformational transitions within 10⁻⁹ seconds that may last for 10⁻⁶ seconds or even longer (Beck and Eccles, 1992; Beck, 2001; Bernroider and Roy, 2004; Kaivarainen, 2005). These perturbations may last long enough to be finally detected as miniature neuronal potentials (Hamill et al., 1981; Hagan et al., 2002). The particular mechanisms also imply a manifestation of non-local quantum effects due to distant coherence, a phenomenon that was even recorded in laser-stimulated neuronal cell cultures in which classical physical explanations were excluded (Pizzi et al., 2004).
The coherence of such quantum states among brain proteins has been suggested to lead to material changes in brain physiology through orchestrated collapse of quantum coherent clusters of tubulin proteins, triggered by quantum gravity expressed at the spin (Planck scale) level. On the basis of a recent theory on the nature of gravity (Verlinde, 2011), postulating that gravity is not a force but rather an entropic compensation for the movement of mass/information, it was speculated that consciousness may arise from a gravity-mediated reaction on the entropic displacement of information as it occurs in high density in the human brain (Meijer, 2012).
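Verlinde's entropic-gravity argument, referenced above, rests on two standard formulas (reproduced here from the 2011 paper; the link to consciousness is the speculation of the cited authors, not part of Verlinde's formalism):

```latex
% Entropy change when a mass m moves a distance \Delta x toward a holographic screen:
\Delta S = 2\pi k_B \,\frac{mc}{\hbar}\,\Delta x
% An entropic force then follows from thermodynamics:
F\,\Delta x = T\,\Delta S
% Inserting the Unruh temperature k_B T = \hbar a / (2\pi c) recovers Newton's F = ma.
```

In this picture gravity is bookkeeping for information displacement, which is what motivates the speculation that a high-density information-processing system such as the brain could couple to it.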
In any case, there should be a mechanism to integrate signal processing within a single neuron with other, even distant, neurons, and consequently non-local effects due to quantum entanglement should play a role in this case as well. These quantum processes may explain phenomena such as qualia, meaning, the sensation of unity, intentionality, conflict solving, reliability in the sense of correspondence with the outer world, and the sense of self. The latter is related to the feeling of causal power that could result from a quantum/classical interface in which classical synaptic processes create a quantum coherent state that enables quantum computation, which in turn exerts a back-influence on the original synaptic process (Pereira, 2003, 2007). The existence of nonlocality in brain function, being a basic property of the universe, strongly argues for an underlying deep reality outside space/time, as originally proposed by Bohm (1990) in the form of an implicate order. Bohm claimed that these mechanisms also play a role in different forms of transpersonal and extrasensory perception, by wave resonance with a universal quantum field (Kak, 2009; Jahn and Dunne, 2007; Kafatos and Draganescu, 2000; Kafatos, 2009).
The main issue of the present essay is that wave information provides a potential coupling to mental processes. For instance, wave information could be transmitted from and into the brain by wave resonance and may locally collapse to matter entities through conscious observation, including sufficient individual attention and intention (Stapp, 2009). Stapp (2012) argued recently that this does not represent an interference effect between superposed states, as assumed by Hameroff and Penrose (1996), but that through environmental de-coherence, superpositions will be converted to multiple mixtures of information. Since our brain contains a large collection of perceivable worlds, it is able, by supercausal free choice and subsequent common random choice, to make a fit with one or more of the abovementioned mixed information modalities. The particular waves then spread out, and rapid sequence repetitions (the so-called Zeno effect) may sufficiently maintain coherence in parts of the brain. Of note, Stapp does not see free will as based on quantum probability aspects. He states: “In the original Copenhagen formulation this extra process is initiated by what is called ‘a free choice on the part of the experimenter.’ The phrase ‘free choice’ emphasizes the fact that, while a definite particular choice is needed, this choice is not determined by any known law or rule: the purely physical aspects of the theory have, therefore, a significant causal gap, which opens the door to a possible causal input from the mental side of reality”.
Quantum information may exert physical effects via a bottom-up flow of information starting at spin networks (Penrose, 1994; Hu and Wu, 2010), which can be passed on as wave forms of elementary particles/atoms, to be ultimately expressed at the level of neuronal molecules. Meijer and Korf (2013) consider the latter flow of information more feasible than direct transfer through vibratory interference at the molecular level. According to this integral quantum model, perturbations at the various spatiotemporal domains allow both time-symmetric forward and backward causation, and therefore a top-down influence of quantum fields.
The basic question is: how are quantum waves or quantum fields finally perceived by the human brain, and how do they influence or even induce phenomena such as (self)consciousness? Organisms do indeed visually perceive photons that exhibit wave/particle duality; humans can sense fewer than ten photons, whereas insects may even detect a single photon (Baylor et al., 1979; Menini et al., 1995). Sensitive detection is possible with dedicated cellular structures, as for instance in the mammalian retina, that amplify the energy of a single photon by a cascade of processes, based on changes of protein conformations and cellular potential energy, leading to the electrochemical stimulation of neurons projecting to the brain. Recently, photosensitive proteins have been coupled to ion-channel proteins with biotechnical techniques, so that neural activity can be modified or inhibited in vivo by light introduced via optic microfibers (Lima and Miesenböck, 2005; Boyden et al., 2005; Tsai et al., 2009). These experimental approaches demonstrate that quantum effects may directly affect neural function, but it remains to be shown more definitively that this also occurs directly inside the human brain, as was demonstrated in the brain of birds (see the reviews of Arndt et al., 2009; Lloyd, 2011).
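Such few-photon sensitivity claims rest on the Poisson statistics of photon arrivals. A minimal sketch in the spirit of the classic Hecht-Shlaer-Pirenne threshold analysis (the mean and threshold values below are hypothetical, for illustration only):

```python
import math

def p_detect(mean_photons, threshold):
    """Probability that at least `threshold` photons are absorbed,
    assuming Poisson-distributed photon arrivals with the given mean.
    Detection is modeled as crossing a fixed absorption threshold."""
    return 1.0 - sum(
        math.exp(-mean_photons) * mean_photons**k / math.factorial(k)
        for k in range(threshold)
    )

# Demo with hypothetical numbers: a dim flash delivering ~10 photons on
# average, against an assumed threshold of 6 absorbed photons
print(p_detect(10.0, 6))   # seen most of the time
print(p_detect(5.0, 6))    # seen considerably less often
```

The point of the sketch is that detection probability is a smooth function of the mean photon count, which is why psychophysical "frequency of seeing" curves can be used to infer how few photons the retina needs.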
Quantum information mechanisms were recently used to model human consciousness as well as the unconscious in relation to conscious perception (Martin et al., 2013), in which various modalities of non-locality were discussed. Of note, entanglement and non-locality may apply not only to spatial separation but also to a temporal one (Megidish et al., 2012). It was proposed by Martin that archetypes can be stored as quantum systems and that consciousness may be controlled by quantum entanglement from outside space-time. Although this cannot be easily envisioned, Nicolescu (1992, 2011) made clear that the relations between different levels of reality have to be interpreted in the framework of Gödel's incompleteness theorems, and that it may be intrinsically impossible to construct a complete theory describing the unity of all levels of reality. Interestingly, a 5-dimensional space-time brane model was recently proposed in order to adequately position consciousness and universal consciousness in the cosmos (Carter, 2014a, 2014b), an item discussed earlier by Smythies (2003), who suggested that consciousness may reside in a brane rather than in the brain. Atmanspacher (2003) explained that mind/matter correlations may require new science, in the sense that the use of emergence and reductionistic schemes may not be adequate and should be replaced by possible symmetry breaking within a domain in which matter and mind are unseparated. He cites d'Espagnat, who postulates an independent “Ultimate Reality” that is neither mental nor material.
Another issue is whether more or less random quantum events can be orchestrated in such a way that the information becomes meaningful for the brain. Thus the major challenge is to directly demonstrate that proteins such as those in microtubules, K+ channels or synaptic vesicles and associated proteins become informative to the organism. It has been put forward that a combination of quantum mechanisms and non-linear (chaos) theory has to be considered in the amplification of subtle external information necessary for immediate action (King, 2003, 2011). Future information (the feeling of future events) may be realized by time-reversed sensing of such an event on the basis of an attractor state. According to the “supercausal” model of consciousness of Chris King, the constant interaction between information coming from the past and information coming from the future means that quantum entities are always confronted with bifurcations between past and future causes. This involves fractal structures and chaotic dynamics that enable free choices to be performed. Consequently, consciousness should be a property of all living structures, in which each biological process is forced to choose between information coming from the past and information coming from the future (King, 2003). Such models (including that of Vannini and Di Corpo, 2008) attribute consciousness to principles of relativity, quantum physics and fractal geometry, and, on the basis of established physical applications of these theories, would in principle allow experimental testing to falsify them. It is of interest that top-down recurrent connections in higher-order associative cortex were shown to be indispensable for conscious perception (Boly et al., 2011).
In more general terms: processing and amplification of quantum/wave information in the brain may underlie the presumed higher brain or mental functions. If one assumes that such detection mechanisms do indeed operate in the brain, then the next question is whether the information to be processed is exclusively associated with quantum waves or quantum states or, alternatively, with the specific proteins that carry them. Apart from discussing the inherent mechanisms such as forward and backward causation, superposition and entanglement in the mental space, we briefly treat the idea that the individual mind may, at least partly, be an expression of universal consciousness, as opposed to the concept that mind is merely an attribute of matter.
David Bohm: Wholeness and the Implicate Order
David Bohm and Louis de Broglie
David Bohm (1980, 1990) took the view that quantum theory and relativity contradicted one another, and that this contradiction implied that there exists a more fundamental level in the physical universe. He claimed that both quantum theory and relativity pointed towards this deeper theory. This more fundamental level is supposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we actually experience it. The explicate order is seen as a particular case of the implicate order (Fig. 9).
Fig. 9: The Implicate Order concept of David Bohm, in which particles and their more complex forms in our classical world are steered by so-called pilot waves that operate from a 4-dimensional hidden domain, in a mode of active information.
The implicate order applies both to matter and consciousness, and it can therefore explain the relationship between these two apparently different things. Mind and matter are here seen as related projections into our explicate order from the underlying reality of the implicate order. Bohm claims that when we look at the extension of matter and separation of its parts in space, we can see nothing in these concepts that helps us with understanding consciousness.
Bohm compares this problem to Descartes' discussion of the difference between mind and matter. Descartes to some extent relied on God to resolve the gap. Bohm says that since Descartes' time the idea of introducing God into the equation has been dropped, but he argues that as a result conventional modern thinking has no way left to bridge the gap between matter and consciousness. In Bohm's scheme there is an unbroken wholeness at the fundamental level of the universe, in which consciousness is not separated from matter.
Bohm's view of consciousness is closely connected to Karl Pribram's (1991) holographic conception of the brain. Pribram sees sight and the other senses as lenses, without which the universe would appear as a hologram. Pribram thinks that information is recorded all over the brain, and that this information is enfolded into a whole, also in the manner of a hologram, although it is suggested that the physical function involved is more complicated than a hologram. In Pribram's scheme, the different memories are connected by association and manipulated by logical thought. If the brain is also attending to sensory data, all of these facets are proposed to fuse together in an overall experience or unanalysable whole. This is suggested to be closer to the essence of consciousness than the mere excitation of neurons.
In trying to arrive at a description of consciousness, Bohm discusses the experience of listening to music. He thinks that the sense of movement and change that constitutes the experience of the music relies on notes from both the immediate past and the present being held in the brain at the same time. Bohm does not view the notes from the immediate past as memories but as active transformations of what came earlier. He proposes that a given moment can cover an extended duration, as opposed to the more conventional ‘now’ concept of something instantaneous. The moment is proposed to have extension in time and space, but the amount of this extension is not precisely defined. One moment gives rise to the next, with content that was implicate in the immediate past becoming explicate in the present. The sense of movement in music is the result of the intermingling of transformations.
Bohm likens these transformations to the emergence of consciousness from the implicate order. He thinks that in listening to music people are directly perceiving the implicate order. The order is thought to be active and to flow into emotional and physical responses. Bohm also discusses the problem of time, the concept of ‘now’ and the difficulty of distinguishing ‘now’ from the immediate past, which no longer exists. In classical physics this problem is overcome via the calculus, with its concept of ‘the limit’, which is effectively a zero change in time or space. This is successful for calculating the movement of material objects in classical physics, which comprises the explicate order. However, it is not applicable to quantum theory in which movement is not seen as continuous. In the implicate order intermingled elements are present together, and processes are the outcome of what is enfolded in the implicate order. In this structure, there is a flow between experience and logical thought that is considered by Bohm to hold out the possibility of a bridge between matter and consciousness.
Bohm also advances the idea of overall necessity driving short-term brain processes. Thus it is proposed that an ensemble of elements enfolded in the brain will constitute the next development of thought, and that these elements are bound by an overall necessity that brings them together, and also determines the next moment in consciousness. Bohm relates movement to the implicate order; for movement, we can also read change or flow, or the coherence of our perception of a piece of music over a short period of time. Evidence for this is claimed to derive from studies of infants (Piaget, 1956), who have to learn about space and time, which are seen as part of the explicate order, but appear to have a hard-wired understanding of movement, which is implicate. Bohm's view is that the movement and flow of the implicate order are hard-wired into human brains, in the same way that Chomsky asserts that grammar is hard-wired into the human brain, but that by way of contrast, the classical space and time of the explicate order are something that has to be learnt by experience.
Basil Hiley was the long-term associate of David Bohm, and is a continuing exponent of many of his ideas (Bohm and Hiley, 1987, 1993). Hiley argues that the Bohmian notion of active information introduced in relation to quantum phenomena can also be applied to classical signalling. This is suggested to have relevance to the concept of meaning as opposed to mere information. Hiley queries whether the word ‘information’, as widely used in science including neuroscience, always carries the same meaning. Bohm and Hiley were interested in so-called active information that drives physical processes and leaves no choice as to whether they are implemented or not. This is distinct from a mere list of data or instructions, or a way of viewing entropy. Active information has been used in a number of papers on the mind/matter relationship (Hiley, 2001; Hiley and Pylkkänen, 2005).
The colloquial understanding of information is that it is data from which meaning can be extracted by an intelligent entity. Hiley regards it as a fundamental question whether information has objective significance devoid of subjective involvement. Verbal communication is seen as a particular problem, where meaning is translated into sound waves and then back into meaning. Hiley relates this meaning to the agency of the speaker and the agency of the listener. He relates this inseparable link to Bohr's notion of the indivisibility of the quantum action, which cannot distinguish between the system under observation and the means of observation.
Bohm believed that a quantum potential could be extracted from Schrödinger's equation and that this quantum potential could act as an information potential. In transmitting a signal there is a trade-off between the duration of the pulse and the frequency. There is an ambiguity in the signal that is similar to the uncertainty in quantum mechanics. The two concepts are said to employ different aspects of the same mathematical structure. Hiley refers to the two-slit experiment, where the potential is claimed to cover the whole experimental arrangement. The quantum information changes in relation to any change in the experimental arrangement, and this is related to information entering the brain and changing the arrangement of its parts.
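The extraction Bohm refers to is standard: writing the wave function in polar form turns the Schrödinger equation into a classical-looking Hamilton-Jacobi equation plus one extra term, the quantum potential Q:

```latex
\psi = R\,e^{iS/\hbar},
\qquad
\frac{\partial S}{\partial t} + \frac{(\nabla S)^2}{2m} + V + Q = 0,
\qquad
Q = -\frac{\hbar^2}{2m}\,\frac{\nabla^2 R}{R}
```

Because Q depends on the form of R rather than its amplitude, it does not fall off with distance in the way a classical potential does, which is what motivates reading it as an "information potential" acting on the whole experimental arrangement.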
Within the brain Bohm thought that meaning was in the process itself. Bohm proposed that there were two sides or two poles to the brain, the manifest and relatively stable material side and the subtle mind-like side. The manifest side is classical physics, while the subtle side is the quantum level that produces the classical level. Thus the mind cannot be separated from matter. The ambiguity or uncertainty of the quantum comes through in the ambiguity attached to meaning. The quantum is seen as a pool of information shared by entangled particles. When the potential or pool vanishes, the classical world emerges. Hiley also agrees that this system could operate in terms of quantum fields. The main weakness of this description seems to be the lack of detail as to how the quantum mechanism would operate in the brain, and the lack of distinction between information, which does not by itself imply consciousness, and consciousness itself. The emergence of meaning could be thought to imply consciousness, but this important point is not developed at all.
Fig. 10: The reversible ink-drop/cylinder experiment, as an allegory for the unfolding of “implicate order” with hidden variables.
In the 1960s Bohm began to take a closer look at the notion of order. One day he saw a device on a television program that immediately fired his imagination. It consisted of two concentric glass cylinders, the space between them being filled with glycerin, a highly viscous fluid. If a droplet of ink is placed in the fluid and the outer cylinder is turned, the droplet is drawn out into a thread that eventually becomes so thin that it disappears from view; the ink particles are enfolded into the glycerin (see Fig. 10).
But if the cylinder is then turned in the opposite direction, the thread-form reappears and re-becomes a droplet; the droplet is unfolded again. Bohm realized that when the ink was diffused through the glycerin it was not in a state of ’disorder’ but possessed a hidden, or non-manifest, order. In Bohm’s view, all the separate objects, entities, structures, and events in the visible or explicate world around us are relatively autonomous, stable, and temporary ’subtotalities’ derived from a deeper, implicate order of unbroken wholeness.
Bohm gives the analogy of a flowing stream: On this stream, one may see an ever-changing pattern of vortices, ripples, waves, splashes, etc., which evidently have no independent existence as such. Rather, they are abstracted from the flowing movement, arising and vanishing in the total process of the flow. Such transitory subsistence as may be possessed by these abstracted forms implies only a relative independence or autonomy of behavior, rather than absolutely independent existence as ultimate substances.
Valentini (2002) consistently defends the pilot-wave mechanism of David Bohm. Bohm, he says, had an interesting trajectory. There are really three Bohms. There is the very early Bohm who was interested in Niels Bohr's ideas about complementarity. Then there is the Bohm of the 1950s who worked on the pilot-wave theory of hidden variables. Then in the 1960s he changed again: he met Krishnamurti, got very interested in Indian philosophy and started trying to attach some mystical ideas to the pilot-wave theory. If you look at the yoga sutras of Patanjali you can see the idea that material objects are somehow illusions and projections from something deeper, that things emerge from this deeper level and disappear into it again. So, indeed, Bohm tried to adopt an interpretation of the wave as a manifestation of a deeper level, perhaps associated with consciousness.
Why does Valentini like the pilot-wave theory?
- It preserves a realist ontology wherein particles possess determinate values of space-time location and momentum.
- They continue to possess such values between various acts of observation/measurement, rather than acquiring them only in consequence of being measured with respect to this or that parameter.
- This allows for greater continuity with certain components of classical (prequantum) physics such as the conservation laws respecting matter-energy and angular momentum.
- The pilot-wave hypothesis produces results in perfect accordance with those obtained in standard QM by means of the Schrödinger-derived wave probability function.
- It avoids any recourse to mysterious ideas of wave-packet collapse as somehow brought about by observer intervention, or occurring only at the instant – in Schrödinger’s parable – when the box is opened for inspection and the cat is thus released from its supposed ‘superposed’ (dead-and-alive) state.
- Pilot-wave theory also seeks to explain quantum effects such as photon deflection or multipath interference without proposing a massively expanded ontology of parallel worlds, shadow universes, multiple intersecting realities, etc.
Pilot-wave theory has three axioms. The first is de Broglie’s law of motion, which specifies exactly how particles are guided by the wave. The second is Schrödinger’s wave equation, telling us how the wave itself changes over time. The third is that particles have to start off with a certain probability distribution. “In any given experiment, each particle is accompanied by a wave”; the particle starts off somewhere inside the wave. In order to give results that can be verified by experiment, all three axioms have to be used. In classical physics there is an interplay between particle and field: each generates the dynamics of the other. In the original pilot-wave theory the steering wave acts on the positions of particles, but it is not acted upon by the particles. However, Holland (2001) has explored some deeper ideas related to this question in his work on a possible Hamiltonian formulation of pilot-wave theory and proposed a particle-to-wave back-reaction. This implies that, through the pilot-wave mechanism, particles, just like waves, carry information regarding their future states. It also means that particle trajectories may exert a back-reaction on the wave function, implying a symmetric interaction between implicate and explicate orders.
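The three axioms just listed have a compact standard form (written here for a single spinless particle):

```latex
% 1. de Broglie guidance equation: the phase S of the wave steers the particle
\frac{d\mathbf{x}}{dt} = \frac{\nabla S}{m},
\qquad \psi = R\,e^{iS/\hbar}
% 2. Schroedinger equation: the evolution of the wave itself
i\hbar\,\frac{\partial \psi}{\partial t}
  = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi
% 3. Initial (Born) distribution of particle positions
\rho(\mathbf{x}, t_0) = |\psi(\mathbf{x}, t_0)|^2
```

The third axiom is what makes the predictions agree with standard quantum mechanics: once the positions are distributed as \(|\psi|^2\), the guidance equation keeps them so at all later times.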
What is so unusual about Antony Valentini? He, in fact, resurrected a theory that undoes the central tenet of quantum mechanics, and challenges relativity theory as well. The theory follows quantum math, but at the same time allows for new possibilities beyond conventional quantum mechanics. It is a theory that says there is indeed an objective reality behind the things we observe, that quantum uncertainty is not fundamental, and that somewhere, somehow, time is universal, not relative. This implies goodbye to ghostly probabilities, with their strange propensity for collapsing into real things, and hello to hidden variables that are objective.
This seems related to the views of the American physicist John Archibald Wheeler (1990, 2002), who suspected that reality exists not because of physical particles, but rather because of the act of observing the universe: “Information may not be just what we learn about the world. It may be what makes the world.” In other words: when humans ask questions about nature, there is an active transfer of information in the domain of quantum waves where, in principle, backward causation from the future is possible. The second arrow (from future to past) remains hidden (unnoticed) for us, because life is trapped in the momentum of time. Entanglement means that particles separated at any distance can, under certain conditions, have mutually determined properties (are correlated). In this block universe multiple paths or life lines are laid out, of which the individual chooses a single one. Consequently this concept allows free choice and is therefore not deterministic. Such non-locality becomes manifest by observation (or collapse of the wave aspect), as has been shown with electron spin orientation or polarized light. This might also be viewed as backward causation.
According to Wheeler and Feynman's time-symmetric electrodynamics (Wheeler, 1990), emitters coincide with retarded fields, which propagate into the future, while absorbers coincide with advanced fields, which propagate backward in time. This time-symmetric model leads to predictions identical with those of conventional electrodynamics; for this reason it is impossible to distinguish between time-symmetric results and conventional results.
In his “Transactional Interpretation of Quantum Mechanics”, John Cramer (1988) stated that “Nature, in a very subtle way, may be engaging in backwards-in-time handshaking: the transaction between retarded waves, coming from the past, and advanced waves, coming from the future, gives birth to a quantum entity with the dual properties of the wave/particle. Thus the wave property is a consequence of the interference between retarded and advanced waves, and the particle property is a consequence of the point in space where the transaction takes place”. The transactional interpretation requires that waves can really travel backwards in time. This assertion seems counterintuitive, as we are accustomed to the fact that causes precede effects. It is important to underline, however, that, unlike other interpretations of QM, the transactional interpretation takes into account special relativity theory, which describes time as a dimension of space, as mentioned earlier. Of note, the completed transaction erases all advanced effects, so that no direct advanced-wave signaling is possible: “The future can affect the past only very indirectly, by offering possibilities for transactions” (Cramer, 1988; see Fig. 11).
King, 2003 (see later) stated: “the hand-shaking space-time relation implied by the transactional interpretation makes it possible that the apparent randomness of quantum events masks a vast interconnectivity at the sub-quantum level, reflecting Bohm’s implicate order, although in a different manner from Bohm’s pilot wave theory. Because transactions connect past and future in a time-symmetric way, they cannot be reduced to predictive determinism, because the initial conditions are insufficient to describe the transaction, which also includes quantum boundary conditions coming from the future absorbers. However this future is also unformed in real terms at the early point in time emission takes place”.
The principle of backward causation has recently received experimental support. Aharonov's team and various collaborating groups (see Aharonov, 2010) studied whether the future may influence the past using sophisticated quantum-physics technology. Aharonov concluded that a particle's past does not contain enough information to fully predict its fate; but if the information is not in its past, where could it be? Clearly, something else must also regulate the particle's behavior. Aharonov and coworkers therefore proposed a new framework called time-symmetric quantum mechanics. Recent series of quantum experiments in about 15 different laboratories around the world seem to confirm the notion that the future can influence results obtained before those measurements were even made (see Fig. 11).
Fig. 11: The transactional interpretation of QM of Cramer, with retarded and advanced waves from past and future producing the present (upper left), and the time-symmetric concept of the prize-winning Aharonov (upper right inset), arising from post-selection (soft) measurement of a quantum state, which prevents wave collapse and shows that the future may affect the past. Through its wave aspect, the “wavicle” (inset lower right) thus intrinsically contains an aspect of the future.
Generally the protocol included three steps: a “pre-selection” measurement carried out on a group of particles; an intermediate measurement; and a final “post-selection” step in which researchers picked out a subset of those particles on which to perform a third, related measurement. To find evidence of backward causality, meaning information flowing from the future to the past, the effects of so-called weak measurements were studied. Weak measurements involve the same equipment and techniques as traditional ones but do not disturb the quantum properties in play, whereas usual (strong) measurements would immediately collapse the wave functions in superposition to a definite state. The results in the various groups were striking: repeated post-selection measurements of the weak type changed the pre-selection state, revealing an aspect of non-locality. Thus it appears that the universe might have a destiny that reaches back and “collaborates” with the past to bring the present into view. On a cosmic scale, this idea could also help explain how life arose in the universe against tremendous odds, and supports the idea that knowledge was inherited from a common information pool (Meijer, 2012; Kak, 2009; Jahn and Dunne, 2007).
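The pre-/post-selection logic above has a compact textbook formalization: the Aharonov–Albert–Vaidman “weak value” of an observable A, given a pre-selected state |ψ⟩ and a post-selected state |φ⟩, is A_w = ⟨φ|A|ψ⟩ / ⟨φ|ψ⟩, and it can lie far outside the eigenvalue range when the two states are nearly orthogonal. A minimal numerical sketch (the particular qubit states and angle are illustrative choices, not taken from the experiments cited above):

```python
import numpy as np

# Pre-selected state |psi> and post-selected state |phi> for a single qubit.
# theta close to pi/4 makes the two states nearly orthogonal (illustrative).
theta = 0.7
psi = np.array([np.cos(theta),  np.sin(theta)])   # pre-selection
phi = np.array([np.cos(theta), -np.sin(theta)])   # post-selection
sigma_z = np.array([[1, 0], [0, -1]])             # observable, eigenvalues +/-1

# Aharonov-Albert-Vaidman weak value: A_w = <phi|A|psi> / <phi|psi>
A_w = (phi.conj() @ sigma_z @ psi) / (phi.conj() @ psi)
print(A_w)  # ~5.88: far outside the eigenvalue range [-1, +1]
```

A strong measurement of this observable could only ever return +1 or −1; the anomalous weak value is exactly the kind of quantity the weak-measurement protocols described above exploit, since it can be read out without collapsing the superposition.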
Henry Stapp: attention, intention and quantum coherence
Stapp starts by asking what sort of brain action corresponds to a conscious thought. He criticizes the mainstream for assuming that Newtonian physics can be applied directly to the brain, and claims that a quantum framework is needed to understand the brain. The Copenhagen interpretation of quantum theory was the first mainstream version, and was pragmatic in recommending the theory as a system of rules that allowed the calculation of empirically verifiable relationships between observations. Stapp, 2009, 2012, favors Heisenberg’s refinement of the original Copenhagen position. Heisenberg thought that the probability distribution of quantum theory really existed in nature, and that the evolution of this probability was punctuated by uncontrolled events, which are the events that actually occur in nature, and which at the same time eliminate the other probabilities.
The development of computing during the second half of the 20th century demonstrated that thought-like or cognitive processes required internal representations not allowed for in the then prevailing behaviourist concept. However, this still did not account for conscious experience, and in this period thinking or cognition came to be seen as something separate from consciousness.
Both Bohr and Heisenberg viewed quantum theory as a set of rules for making predictions about observations under experimental conditions. These predictions are incompatible with classical physics in respect of the prediction of non-locality. Heisenberg did not view the quanta as actual things, but as tendencies for certain types of events to occur. The orderly evolution of the system is deterministic, but this controls only the tendencies for things or propensity for events, and not the actual things or events themselves. The things are controlled by quantum jumps that do not individually conform to any natural law, but collectively conform to statistical rules.
Heisenberg and Schrödinger
Stapp, 2009, 2012, bases his proposal for quantum consciousness on three observations.
1.) The brain’s representation of the body or body schema must be represented by some form of physical structure in the brain.
2.) Some brain processes, such as the behavior of calcium ions involved in synaptic transmission, need to be treated quantum mechanically. Stapp also thinks that the sensitivity and non-linearity of the synaptic system, the involvement of calcium ions and the large number of meta-stable states into which the brain could evolve all point to a quantum mechanical system.
3.) Stapp suggests that the brain could evolve into a state analogous to the deterministic evolution of the quantum state, from which an actual state must be selected.
Although Stapp pays a lot of attention to the synapses, his is not actually a neuron-based theory. Rather, the event could be selected from the large-scale excitation of the brain. The selection of events from a wide range of probabilities is seen as being particularly adaptive where an organism needs to select from a range of future probabilities. Stapp wishes to establish the relationship between mind and matter, the relationship between reality and quantum theory, and also how relativity is reconciled with both experience and non-locality. The solution is suggested to be a series of creative events, each bringing into being one of a range of possibilities created by prior events. He suggests that consciousness exercises top-level control over neural excitations in the brain. The neural excitations are regarded as a code, and each human experience is regarded as a selection from this code. He sees the physical world as a structure of tendencies in the world of the mind. He finds it unacceptable that there is an irreducible element of chance in nature as described by quantum theory, which is the most usual conclusion to be drawn from the randomness of the wave function collapse. The element of conscious choice is seen as removing chance from nature. He distinguishes between systems where an external representation and knowledge of the laws of physics can accurately predict how the system develops, and his own idea of a system that is internally determined in a way that cannot be represented outside the system.
The brain is viewed as a self-programming computer with self-sustaining neural patterns as the codes. It is necessary to integrate the code from sensory input with the code from previous experience. This creates a number of probabilities, from which consciousness has to select. The conscious act is the selection of a piece of top-level code, which then exercises control over the flow of neural excitation. The unity of conscious thought comes from a unifying force in the conscious act itself. It selects a single code from amongst a multitude on offer in the brain. Raising an arm involves a conscious act selecting the top-level code that raises the arm. This is suggested to close the traditional explanatory gap between thought and classical physics, because here the conscious thought is the selection of the code that allows the physical act. Stapp goes on to discuss the conscious process of looking at pictures. According to him, top-level codes instruct lower-level codes to produce new top-level codes and to initiate their storage in memory. The experience of noticing something is deemed to be the process of initiation into memory. There are close connections between the top-level code and the memory structure. The lower-level codes have to be functioning correctly, i.e. not damaged, and to be focused on the incoming stimuli in order for these to be put into higher-level code and registered in memory.
Stapp discusses what neural research would need to reveal if it were to support his theory. It would need to reveal the neural connections needed to support self-sustaining patterns of neural excitation. It is necessary to find the neurons providing the top-level coding, then the mechanism for storing memory traces of this, and finally the mechanism by which these memories are involved in the production of new top-level codes. Each conscious experience is seen as a creative act represented in the physical world by the selection of a top-level code from among the many generated by the laws of quantum theory. The conscious experiences are the initiation of processes that produce changes in the body schema and the external and internal reality schema. The conscious act is functionally equivalent to changes in the physical world as represented in quantum theory. In the Heisenberg version of quantum theory physical things are events, and quantum theory gives the propensity for particular events to occur. This is seen as providing a link between conscious processes and brain processes. In the Heisenberg version it is the act of observation which leads to the selection of a particular propensity.
Stapp attaches great importance to the idea of the formation of a record. This is seen as analogous to the Geiger counter that registers a record of a quantum event. Every conscious experience is seen as recordable, because it is evidence of some form of brain process. The later retrievability of the experience is evidence of a record in the brain. A key process in brain dynamics is seen as persisting patterns of neural excitation producing physical changes in neurons that enable a particular pattern to be re-excited, and allow the re-excited pattern to connect with new stimuli. This is seen as the basis of the brain’s associative memory.
The top-level brain process is viewed as a process of actualizing symbols, composed of earlier symbols connected into a whole by neural links. The top-level process is seen as directing information gathering, planning, the choice of particular plans, and the monitoring of their execution. This can be understood in terms of top-level direction of multiple neural processes. Because of the top-level directive role, its connection to associative memory and the multiple structure of the symbols involved, it is suggested that each top-level event corresponds to a psychological event, and this in turn connects psychological events to the quantum level. Both the top-level brain event and the psychological event act as choosers of a possibility, or converters of potentialities into actualities. Each human conscious experience is seen as the feel of an event in the top level of processing in the human brain, a sequence of Heisenberg actual events actualizing a quasi-stable pattern of neural activity. Activation of particular symbols creates a tendency for the activation of other related symbols. The body schema is the product of actualized events accumulated over the life of the body. The top-level symbols have a compositional structure formed from other symbols. The Heisenberg events are seen as being capable of grasping a pattern of activity as a whole, and this is seen as accounting for the unity of consciousness. The continuity or flow of time is explained by an overlap of symbols with the preceding mental event.
Stapp, drawing on studies of infants, assumes that humans have a hard-wired body-world schema. Consciously directed action is seen as a projection of this body-world schema into the future, with a corresponding representation in the brain. This body-world schema is seen as directing the unconscious brain, issuing commands for motor action and instructions for mental processing. Ongoing questions to nature continue to be posed by the observer. This equates to the ‘Heisenberg choice’, where the human observer has to decide what question to put to nature. In this case it is the conscious processing in the brain that does this. Each experience leads to further updating of the system.
When an action is initiated by a thought, this usually includes some monitoring of the subsequent action, to check it against the intended action. So something experienced as an intention becomes an action, the attention to which is also experienced. Stapp views the deterministic unfolding of matter according to the Schrödinger equation as running parallel to the movement from intention to attention, as two poles of the same quantum event that prolong the coherent state and thereby protect against potential decoherence. He also sees a tripartite structure being the Schrödinger equation, the Heisenberg choice of question to ask and the (Dirac) choice of answer from nature.
Stapp’s point is that only a conscious observer within the brain can ask the question, and drive the quantum process. This also allows the experiential process to enter into the causal structure of the body/brain. Stapp feels that some additional process is needed and the conscious observer is a perfect candidate. He sees quantum theory as informational in nature and thus linked to increments in knowledge occurring in the brain. The increment in knowledge is seen as linked to a reduction in the quantum state, thus linking mind to the physical world. Mind is thus seen as entering into the physical world through the Heisenberg choice.
When the quantum state is reduced, a wave that extends over an indefinite amount of space is instantaneously reduced to a tiny local region. Stapp feels that this constitutes a representation of knowledge rather than a representation of matter. The wave before collapse is seen as a matter of potentiality or probabilities, which are themselves often conceived as ideas rather than realities. However, the quantum state pre-collapse evolves in line with the deterministic Schrödinger equation, giving the state some of the properties of the physical and thus in fact creating a sort of hybrid.
Stapp does not suggest that our conscious thoughts are completely unconstrained, but he does see our thoughts as a part of the causal structure of the mind-brain that is not dominated by the actions of the smallest components of the brain, but is also not a random effect. Our thoughts are seen not as linked to external objects, but instead linked to patterns of brain activity. Stapp points out that his theory has a place for an efficacious conscious mind linked to the physical processes of the brain. He suggests that the dynamic of the Schrödinger evolution, which is to produce an event that replicates the event that produced it, could somehow stand in for the later action of conscious minds. The identity theory of mind claims that each mental state is identical to some process in the brain. However, classical physics says that the entire causal structure of a physical system is determined by the microscopic level of the physical structure, so that larger scale effects such as consciousness cannot have any influence.
A potential problem with the whole Copenhagen-influenced interpretation of quantum theory is its possible dualism. Mathematics can be seen as a mental process instantiated in protein, which, in principle, cannot directly influence the external world. Somehow the mathematical description of the quantum waves is sitting out there in space, and then, as a result of a measurement, becomes a physical particle. In the Copenhagen interpretation, a mental concept external to the body seems to become physical with no explanation as to how the two could interact. The Copenhagen system has the additional problem of what was happening before human minds emerged to perform measurements, for which Stapp’s explanation appears rather sketchy. Consequently, a more detailed model is required to picture the inherent interaction between a more general form of consciousness as a measuring device in evolution (see later).
Roger Penrose: Consciousness and the Spacetime Geometry of Universe
Roger Penrose, 1989, 1994, 2004, is one of the very few thinkers to consider how consciousness could arise from first principles rather than merely trying to shoehorn it into nineteenth-century physics, and his ideas appear to be a good starting point from which to try to understand consciousness as a fundamental.
Penrose’s approach was a counter-attack on the functionalism of the late 20th century, which claimed that computers and robots could be conscious. He approached the question of consciousness from the direction of mathematics. The centrepiece of his argument is a discussion of Gödel’s theorem. Gödel demonstrated that any formal system or any significant system of axioms, such as elementary arithmetic, cannot be both consistent and complete. There will be statements that are undecidable: although they can be seen to be true, they are not provable in terms of the axioms.
Penrose’s controversial claim: The Gödel theorem as such is not controversial in relation to modern logic and mathematics, but the argument that Penrose derived from it has proved to be highly controversial. Penrose claimed that the fact that human mathematicians can see the truth of a statement that is not demonstrated by the axioms means that the human mind contains some function that is not based on algorithms, and therefore could not be replicated by a computer. This is because the functioning of computers is based solely on algorithms (a system of calculations). Penrose therefore claimed that Gödel had demonstrated that human brains could do something that no computer was able to do.
Arguments against Penrose’s position: Some critics of Penrose have suggested that while mathematicians could go beyond the axioms, they were in fact using a knowable algorithm present in their brains. Penrose contests this, arguing that all possible algorithms are defeated by the Gödel problem. In respect to arguments as to whether computers could be programmed to deal with Gödel propositions, Penrose accepts that a computer could be instructed as to the non-stopping property of Turing’s halting problem. Here, a proposition that goes beyond the original axioms of the system is put into a computation. However, this proposition is not part of the original formal system, but instead relies on the computer being fed with human insights, so as to break out of the difficulty. So apparently non-algorithmic insights are required to supplement the functioning of the computer in this instance.
An unknowable algorithm: Penrose further discusses the suggestion of an unknowable algorithm that enables mathematicians to perceive the truth of statements. He argues that there is no escape from the knowability of algorithms. An unknowable algorithm means an algorithm whose specification could not be achieved. But any algorithm is in principle knowable, because it depends on the natural numbers, which are knowable. Further, it is possible to specify natural numbers that are larger than any number needed to specify the algorithmic action of an organism, such as a human or a human brain.
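The halting-problem result that this argument leans on can itself be sketched in a few lines of code. Turing’s diagonal argument shows that no total `halts(program, input)` procedure can exist; the sketch below is the standard textbook construction (the function names are illustrative, not anything specific to Penrose’s text):

```python
def halts(program, data):
    """Hypothetical oracle deciding whether program(data) terminates.
    Turing's diagonal argument shows no correct implementation can exist,
    so this placeholder simply refuses to answer."""
    raise NotImplementedError("no algorithm can decide halting")

def diagonal(program):
    # Loops forever exactly when the oracle claims `program` halts on itself.
    if halts(program, program):
        while True:
            pass

# diagonal(diagonal) would halt if and only if it does not halt -- a
# contradiction. Hence `halts` cannot be realized by any algorithm, even
# though, as Penrose notes, any given algorithm is in principle knowable.
```

The contradiction lives entirely in the comments, of course: the point is that the mere assumption of a working `halts` lets one write `diagonal`, whose self-application is paradoxical.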
Mathematical robots: Penrose says that with a mathematical robot, it would not be practical to encode all the possible insights of mathematicians. The robot would have to learn certain truths by studying the environment, which in its turn is assumed to be based on algorithms. But to be a creative mathematician, the robot will need a concept of unassailable truth, that is, a concept that some things are obviously true.
This involves the mathematical robot having to perceive that a formal system ‘H’ implies the truth of its Gödel proposition, and at the same time perceiving that the Gödel proposition cannot be proved by the formal system ‘H’. It would perceive that the truth of the proposition follows from the soundness of the formal system, but the fact that the proposition cannot be proved by the axioms also derives from the formal system. This would involve a contradiction for the robot, since it would have to believe something outside the formal system that encapsulated its beliefs.
Amongst experts in this area who do not entirely reject Penrose’s argument, Feferman (1996) has criticized Penrose’s detailed argument, but is much closer to his position than to that of mainstream consciousness studies. Feferman makes common cause with Penrose in opposing the computational model of the mind, and considering that human thought, and in particular mathematical thought, is not achieved by the mechanical application of algorithms, but rather by trial-and-error, insight and inspiration, in a process that machines will never share with humans. Feferman finds numerous flaws in Penrose’s work, but at the end he informs his readers that Penrose’s case would not be altered by putting right the logical flaws that Feferman has spent much time discovering.
Feferman’s own position is that the computational-mind argument is misleading in terms of the weight that it places on the equivalence between Turing machines and formal systems. The model of mathematical thought in terms of formal systems is considered to be closer to the nature of human thought, and particularly mathematical thought, than to the functioning of Turing machines. The Turing machine model would assume that given a problem, human reason would plug away, applying the same algorithm indefinitely, in the hope of finding an answer. Feferman says that it is ridiculous to think that mathematics is performed in this way. Trial-and-error reasoning, insight and inspiration, based on prior experience, but not on general rules, are seen as the basis of mathematical success. A more mechanical approach is only appropriate, after an initial proof has been arrived at. Then this approach can be used for mechanical checking of something initially arrived at by trial-and-error and insight. He views mathematical thought as being non-mechanical. He says that he agrees with Penrose that understanding is essential to mathematical thought, and that it is just this area of mathematical thought that machines cannot share with us.
Fig. 12: Twistor theory as proposed by Penrose to find a basic structure for spacetime geometry, as is also attempted by string theories. Twistor theory was later applied by Witten in the universal string M-theory, to diminish the total number of required extra dimensions.
Penrose’s own take on the wave function collapse suggests that it is a real event. He sees superposition as a separation in the underlying space-time geometry. Each quantum is embedded in a bit of space, and as the superpositions grow further apart, a blister or separation appears in space-time. This can be viewed as the same thing as the beginning of the many-worlds view, but instead of going on to generate separate universes, if the separation between superpositions grows to more than the Planck length, the wave collapses and chooses one of the superposed alternatives.
Twistor theory (Fig. 12) in the context of space-time has been pioneered by Roger Penrose and others since the 1960s and is based on the association of a complex twistor space CP3 to the space of light rays in space-time. The name derives from the Robinson congruence which is the natural realization of a (non-null) twistor. Penrose thereby attempted to encode spacetime points, affording a quantized spacetime. Some appealing aspects of the theory are:
– twistor space becomes the basic space so that light rays are the fundamental objects from which space-time is derived;
– discrete quantities such as spin are represented in the discrete values obtained by contour integration;
– its evident elegance and simplicity.
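The link between twistor space and spacetime points in the first bullet can be made concrete. As a hedged illustration (this is the standard incidence relation of twistor theory, not quoted from the text above), a twistor \(Z^{\alpha} = (\omega^{A}, \pi_{A'})\) is incident with the spacetime point \(x\) precisely when

```latex
\omega^{A} \;=\; i\, x^{AA'} \pi_{A'}
```

so that a point of spacetime is recovered as the family of light rays (twistors) incident with it, which is the sense in which light rays, rather than points, become the fundamental objects.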
The normal quantum wave collapse is seen as an entirely random choice of the state of a quantum particle, from amongst the various superpositions of states. However, these collapses involve interaction with the environment. Penrose suggests that a quantum which does not interact with the environment will undergo objective reduction (OR) when the separation between superpositions begins to exceed the Planck length. He also suggests that while the normal collapse is totally random, OR is not totally random but involves a non-computable process. This is suggested because Penrose thinks that the brain manifests a non-computational aspect, and that the wave function collapse is the only place in the universe where such a thing can exist. Penrose also proposes that OR-based quantum computation occurs in the brain.
Penrose’s search for a non-algorithmic feature: Penrose went on to ask what it was in the human brain that was not based on algorithms. Physical law is described by mathematics, so it is not easy to come up with a process that is not governed by algorithms. The only plausible candidate that Penrose could find was the collapse of the quantum wave function, where the choice of the position of a particle is random, and therefore not the product of an algorithm. However, he considered that the very randomness of the wave collapse disqualifies it as a useful basis for the mathematical judgement or understanding in which he was initially interested.
The wave function: In respect of consciousness, it is Penrose’s attitude to the reality of the quantum wave function collapse that is the important area. In particular, he disagrees with the traditional Copenhagen interpretation, which says that the theory is just an abstract calculational procedure, and that the quanta only achieve objective reality when a measurement has been made. Thus in the Copenhagen approach reality somehow arises from the unreal or from abstraction, giving a dualist quality to the theory.
The discussion of quantum theory repeatedly comes back to the theme that Penrose regards the quantum world and the uncollapsed wave function as having objective existence. In Penrose’s view, the objective reality of the quantum world allows it to play a role in consciousness. Penrose emphasizes that the evolution of the wave function portrayed by the Schrödinger equation is both deterministic and linear. This aspect of quantum theory is not random. Randomness only emerges when the wave function collapses, and gives the choice of a particular position or other properties for a particle. Penrose discusses the various takes made on wave function collapse by physicists. Some would like everything to depend on the Schrödinger equation, but Penrose rejects this idea, because it is impossible to see how the mechanism of this equation could produce the transformation from the superposition of alternatives, as found in the quantum wave, to the random choice of a single alternative.
He also discusses the suggestion that the probabilities of the quantum wave that emerges into macroscopic existence arise from uncertainties in the initial conditions and that the system is analogous to chaos in macroscopic physics. This does not satisfy Penrose, who points out that chaos is based on non-linear developments, whereas the Schrödinger equation is linear.
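Penrose’s objection here is at bottom a linearity argument: a linear evolution maps a superposition of inputs to exactly the same superposition of the evolved outputs, so it cannot produce the sensitive, non-linear amplification that chaos requires. A toy numerical check, using a rotation matrix as a stand-in for one unitary Schrödinger step and the logistic map as a standard chaotic (non-linear) map (both choices are illustrative, not from the text):

```python
import numpy as np

# Linear, unitary evolution: a 2D rotation stands in for a Schrodinger step.
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
psi1 = np.array([1.0, 0.0])
psi2 = np.array([0.0, 1.0])
a, b = 0.6, 0.8   # superposition amplitudes (|a|^2 + |b|^2 = 1)

# Linearity: evolving a superposition equals superposing the evolutions.
lhs = U @ (a * psi1 + b * psi2)
rhs = a * (U @ psi1) + b * (U @ psi2)
print(np.allclose(lhs, rhs))   # True

# The chaotic logistic map, by contrast, fails the same linearity test:
f = lambda x: 4.0 * x * (1.0 - x)
print(np.isclose(f(a * 0.2 + b * 0.1), a * f(0.2) + b * f(0.1)))  # False
```

However many linear steps are composed, the first property is preserved, which is why Penrose holds that Schrödinger evolution alone cannot behave like macroscopic chaos.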
Important distinction between Penrose and Wigner
Penrose also disagrees with Eugene Wigner’s (see Wigner, 1992) suggestion that it is consciousness that collapses the wave function, on the basis that consciousness is only manifest in special corners of spacetime. Penrose himself advances the exact opposite proposal that the collapse of a special (objective) type of wave function produces consciousness. It is important to stress this difference between the Penrose and the Wigner position, as some commentators mix up Wigner’s idea with Penrose’s propositions on quantum consciousness, and then advance a refutation of Wigner, wrongly believing it to be a refutation of Penrose.
Penrose is also dismissive of the ‘many worlds’ version of quantum theory, which would have an endless splitting into different universes with, for instance, Schrödinger’s cat alive in one universe and dead in another universe. Penrose objects to the lack of economy and the multitude of problems that might arise from attempting such a solution, and in addition argues that the theory does not explain why the splitting has to take place, and why it is not possible to be conscious of superpositions.
Objective reduction, Consciousness, Spacetime and the Second law & Gravity
Penrose instead argues for some new physics, and in particular an additional form of wave function collapse. If the superpositions described by the quantum wave extended into the macroscopic world, we would in fact see superpositions of large-scale objects. As this does not happen, it is argued that something that is part of objective reality must take place to produce the reality that we actually see. This requirement for new physics is often criticized as unjustified. However, these criticisms tend to ignore the fact that while quantum theory provides many accurate predictions, there has never been satisfactory agreement about its interpretation, nor has its conflict with relativity been resolved.
Fig. 13: Neuronal tubule as the potential site for quantum mediated effects in the brain. Each tubulin is shown to have 9 rings representing 32 actual phenyl or indole rings per tubulin, with coupled, oscillating London-force dipole orientations among rings traversing ‘quantum channels’, aligning with rings in adjacent tubulins in helical pathways through microtubule lattices. On the right, superposition of alternative tubulin and helical pathway dipole states.
Penrose sees consciousness as not only related to the quantum level but also to spacetime. He discusses the spacetime curvature described in general relativity. He looks at the effect of singularities relative to two spacetime curvature tensors, Weyl and Ricci. Weyl represents the tidal effect of gravity, by which the part of a body nearest to the gravitational source falls fastest creating a tidal distortion in the body. Ricci represents the inward pull on a sphere surrounding the gravitational force. In a black hole singularity, the tidal distortion of Weyl would predominate over Ricci, and Weyl goes to infinity at the singularity.
However, in the early universe expanding from the Big Bang, the inward tidal distortion is absent, so Weyl=0, while it is the inward pressure of Ricci that predominates. So the early universe is seen to have had low entropy with Weyl close to zero. Weyl is related to gravitational distortions, and Weyl close to zero indicates a lack of gravitational clumping, just as Weyl at infinity indicated the gravitational collapse into a black hole. Weyl close to zero and low gravitational clumping therefore indicate low entropy at the beginning of the universe.
The fact that the Weyl curvature is constrained to zero is seen by Penrose as a function of quantum gravity. The whole theory is referred to as the Weyl curvature hypothesis. The question that Penrose now asks is why initial spacetime singularities have this structure. He thinks that quantum theory has to help with the problem of the infinity of singularities. This would be a quantum theory of the structure of spacetime, or in other words a theory of quantum gravity.
Penrose regards the problems of quantum theory in respect of the disjuncture between the Schrödinger equation’s deterministic evolution and the randomness in wave function collapse as fundamental. He thinks in terms of a time-asymmetrical quantum gravity, because the universe is time-asymmetric from low to high entropy. He argues that the conventional process of collapse of the wave function is time-asymmetric. He describes an experiment where light is emitted from a source and strikes a half-silvered mirror, with a resulting 50% probability that the light reaches a detector and 50% that it hits a darkened wall. This experiment cannot be time reversed, because if the original emitter now detects an incoming photon, there is not a 50% probability that it was emitted by the wall, but instead 100% probability that it was emitted by the other detecting/emitting device.
Penrose relates the loss of information that occurs in black holes to the quantum mechanical effects of the black hole radiation described by Stephen Hawking. This relates the Weyl curvature that is seen to apply in black holes and the quantum wave collapse. As Weyl curvature is related to the second law of thermodynamics, this is taken to show that the quantum wave reduction is related to the second law and to gravity. He proposes that in certain circumstances there could be an alternative form of wave function collapse. He called this objective reduction (OR). He suggests that as a result of the evolution of the Schrödinger wave, the superpositions of the quanta grow further apart. According to Penrose’s interpretation of general relativity, each superposition of the quanta is conceived to have its own spacetime geometry. The separation of the superpositions, each with its own spacetime geometry, constitutes a form of blister in space-time. However, once the blister or separation grows to more than the Planck length of 10^-35 meters, the separations begin to be affected by the gravitational force, the superposition becomes unstable, and it soon collapses under the pressure of its gravitational self-energy. As it does so, it chooses one of the possible spacetime geometries for the particle. This form of wave function collapse is proposed to exist in addition to the more conventional forms of collapse (see also Fig. 13).
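The quantitative form usually attached to this proposal (standard in Penrose’s later writings, though not spelled out in the passage above) estimates the lifetime of a superposition from the gravitational self-energy \(E_G\) of the difference between the two superposed mass distributions:

```latex
\tau \;\approx\; \frac{\hbar}{E_G}
```

On this estimate, the larger the superposed mass separation, the larger \(E_G\) and the faster the objective reduction: for an isolated electron \(\tau\) would exceed the age of the universe, while a macroscopic superposition would self-collapse almost instantly, which is why no large-scale superpositions are ever observed.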
Evidence for non-computational spacetime
In support of this, he points out that when the physicists Geroch and Hartle (1986) studied quantum gravity, they ran up against a problem in deciding whether two spacetimes were the same. The problem was solvable in two dimensions, but intractable in the four dimensions of the four-dimensional spacetime in which the superposition of quantum particles needs to be modeled. It has been shown that there is no algorithm for solving this problem in four dimensions.
Earlier, the mathematician A. Markov had shown there was no algorithm for such a problem, and that if such an algorithm did exist, it could solve the Turing halting problem, for which it had already been shown that there is no algorithm. The possibly non-computable nature of the structure of four-dimensional space-time is deemed to open up the possibility that wave function collapses could give access to this non-computable feature of fundamental space-time.
A long-term experiment is underway to test Penrose's hypothesis of objective reduction. This experiment is currently being run by Bouwmeester at the University of California, Santa Barbara, and involves mirrors only ten micrometres across, weighing only a few trillionths of a kilogram, and the measurement of their deflection by a photon. The experiment is expected to take ten years to complete, which means that theories of consciousness based on objective reduction are likely to remain speculative for at least that length of time. However, the ability to run an experiment that could falsify objective reduction at least qualifies it as a scientific theory.
Significance for consciousness
The significance of this for the study of consciousness is that, in contrast to the conventional idea of wave function collapse, this form of collapse is suggested to be non-random, and instead driven by a non-computable function at the most fundamental level of spacetime. Penrose argues that, in contrast to the conventional wave function form of collapse, there are indications that in this case, there is a decision process that is neither random nor computationally/algorithmically based, but is more akin to the ‘understanding’ by which Penrose claims the human brain goes beyond what can be achieved by a computer.
The road from physics to mental phenomena has been travelled before, most famously by Pauli and Jung and, under Pauli's influence, by Heisenberg. The interaction is not limited to the three-decades-long Jung–Pauli correspondence, and the reciprocal influences were profound. The founding role of Pauli's work in quantum physics needs no recalling (Pauli, 1994), and the effects of his quantum vision on the development of Jung's picture of the human mind (archetypes included) have been well explored. The title of Jung's essay in their co-authored volume, “Synchronizität als ein Prinzip akausaler Zusammenhänge” (Synchronicity as a Principle of Acausal Connections), could not indicate more clearly the influence of Pauli's quantum thinking on Jung's perception of reality (Jung and Pauli, 1955), and the interplay of the two great minds. Before them, the self-referentiality of the Euclidean approach to human consciousness had been narrated by Lewis Carroll in his “Through the Looking-Glass”.
Philosophically, Orch OR perhaps aligns most closely with Alfred North Whitehead, who viewed mental activity as a process of ‘occasions’: spatio-temporal quanta, each endowed, usually on a very low level, with mentalistic characteristics that were ‘dull, monotonous, and repetitious’. These seem analogous, in the Orch OR context, to ‘proto-conscious’ non-orchestrated OR events. Whitehead viewed high-level mentality, consciousness, as being extrapolated from temporal chains of such occasions; in his view, highly organized societies of occasions permit primitive mentality to become intense, coherent and fully conscious. These seem analogous to Orch OR conscious events. Shimony (2005), Stapp (2007) and Hameroff (1998) recognized that Whitehead's approach was potentially compatible with modern physics, specifically quantum theory, with quantum state reductions (actual events) appearing to represent ‘occasions’, namely Whitehead's high-level mentality, composed of ‘temporal chains … of intense, coherent and fully conscious occasions’ (Fig. 14), these being tantamount to sequences of Orch OR events.
These might possibly coincide with gamma synchrony, although on our current ‘beat frequency’ ideas, gamma synchrony might be more likely a beat effect than directly related to the OR reduction time τ. As Orch OR events are indeed quantum state reductions, Orch OR and Whitehead's process philosophy appear to be quite closely compatible. Whitehead's low-level ‘dull’ occasions of experience would seem to correspond to our non-orchestrated ‘proto-conscious’ OR events. According to this scheme, OR processes would be taking place all the time everywhere and, normally involving the random environment, would be providing the effective randomness that is characteristic of quantum measurement. Quantum superpositions will continually be reaching the threshold for OR in non-biological settings as well as in biological ones, and OR would usually take place in a purely random environment, such as in a quantum system under measurement. Nonetheless, in the Orch OR scheme, these events are taken to have a rudimentary subjective experience, which is undifferentiated and lacking in cognition, perhaps providing the constitutive ingredients of what philosophers call qualia. We term such un-orchestrated, ubiquitous OR events, lacking information and cognition, ‘proto-conscious’ (see Fig. 14).
Fig. 14: Quantizing Whitehead's spacetime by postulating units of experience, actual occasions that have a mental and a physical pole and carry elements of goals, satisfaction and beauty. They are built up from previous occasions and from societies of occasions (right insets).
In this regard, Orch OR has some points in common with viewpoints incorporating spiritualist, idealist and panpsychist elements, which argue that essential precursors of consciousness are intrinsic to the universe. It should be stressed, however, that Orch OR is strongly supportive of the scientific attitude, and it retains the conventional picture of neural electrochemical activity, accepting that non-quantum, membrane-level neural network functions might provide an adequate explanation of much of the brain's unconscious activity. Orch OR in microtubules inside neuronal dendrites and soma adds a deeper level for conscious processes.
Stuart Hameroff: Quantum coherence in brain tubules
Stuart Hameroff and Roger Penrose
Hameroff and Penrose (2011, 2013) classify all the mainstream approaches to consciousness as ‘classical functionalism’. Functionalism takes no account of what the brain is made of, or of anything finer grained than the level of neuron-to-neuron connections; it holds that these connections could be copied in another material, such as silicon, and that the resulting construct would be conscious. However, Hameroff argues that although axonal spikes and synaptic connections clearly play a key role in information processing in the brain, they may not be the main currency of consciousness; instead, he argues that quantum processing in microtubules within the dendrites, and gap junctions between dendrites, are the main currency of consciousness.
The main case against quantum processing in the brain has always been that any quantum coherence in the brain would decohere faster than the time taken for any useful biological process. Hameroff accepts that this is in principle a valid argument. However, Hameroff claims that the microtubules may be screened from their environment by a gelatinous non-liquid ordered state that arises in the neuronal interior.
A further objection to quantum processing is that even if it arose in one neuron, it would be difficult for it to communicate across the brain. This is countered by the suggestion that there could be quantum tunneling at gap junctions between neurons. In recent years, gap junctions have been discovered to be more widespread in the brain than was previously thought. They are also correlated with the 40 Hz gamma synchrony. This oscillation was at one time promoted by Crick and Koch as the most promising correlate of consciousness; however, the idea fell from favour with mainstream neuroscience when it was discovered that the gamma synchrony correlated with dendritic activity rather than axonal spiking.
In general, Hameroff argues that the emerging evidence of neurobiology has moved in favour of the Orch OR model over the last decade, notwithstanding the continued unpopularity of the theory. Hameroff summarizes his proposals in the early part of the chapter. He thinks that consciousness arises in the dendrites of neurons that are connected by gap junctions to form ‘hyperneurons’, and that these are related to the gamma synchrony. Axonal spikes and synapses are seen as making inputs to, and receiving outputs from, the microtubular process as part of an interactive system.
Hameroff touches on the famous Libet (2006) experiments that demonstrated a 500 ms time lag between a stimulus and the perception of it entering consciousness, although the subject is not aware of this lag, as a result of a so-called backward referral in time. The mainstream has tended to favour an interpretation resembling Dennett's ‘multiple drafts’ concept, which would involve an after-the-event reconstruction of what had happened. Hameroff, however, thinks that the backward referral in time should be taken seriously. This was also the view of Roger Penrose, who suggested that backward referral (Fig. 15) might be indicative of quantum activity.
Fig.15 : Backward referral of time as proposed by Libet et al.
Hameroff points out that changes in dendrites can lead to increased synaptic activity. This is basic to ideas about learning, memory and neural correlates of consciousness. The changes in dendrites involve the number and arrangement of receptors and the arrangement of dendritic spines and dendrite-to-dendrite connections. Axonal action potentials, or spikes, have been assumed to be the main basis of consciousness, but Hameroff suggests that there could be other candidates. Electrodes implanted into the brain detect mainly the activity of dendritic gap junctions plus inhibitory chemical synapses. Thus the detected synchrony derives from dendrites rather than axonal spikes.
The main function of dendrites is seen to be the handling of input signals into the neuron, which may eventually result in an axon spike. However, this is not the whole story, since many cortical neurons have dendrites but no axons; here dendrites interact with other dendrites. Also, there can be extensive dendritic activity with no spikes. The evidence suggests that there are complex logic functions in the dendrites, and these may oscillate over a wide area while remaining below the axon spiking threshold. Many post-synaptic receptors send signals into the dendritic cytoskeleton.
Gamma synchronies, in the 30-70 Hz range, have aroused interest as possible correlates of consciousness. Gray and Singer (1989) found coherent gamma oscillations in the brain that were dependent on visual stimulation. It was suggested that this synchrony could solve the binding problem: the problem of how the different inputs into the brain are bound together into a single conscious experience. It was suggested that the synchrony reflected the activity of a relevant assembly of neurons. Varela (1995) noted that synchrony operated whenever the processing of spatially separated parts of the brain was brought together in consciousness. Gamma synchrony has been demonstrated across cortical areas, hemispheres and the sensory/motor modalities. The synchrony is involved in a range of brain activities including perception of sound, REM dream sleep, attention, working memory, face recognition and somatic perception. Gamma also decreases during general anesthesia and returns on waking from it. Hameroff regards gamma synchrony as the best overall correlate of consciousness.
He further addresses the question of how the gamma synchrony is mediated. There is coherence over large areas of the brain, sometimes including multiple cortical areas and both hemispheres of the brain, with zero or near zero phase lag. If the synchrony was based on the axon/synapse system a considerable lag would be expected. In fact, the lack of coherence between the synchrony and axonal spike activity has led to a reduction in the amount of mainstream attention paid to the gamma synchrony.
Hameroff points to gap junctions as an alternative to synapses for connections between neurons. Neurons that are connected by gap junctions depolarize synchronously. Gap junctions play a more important role in the adult brain than was previously supposed. Numerous studies show that gap junctions mediate the gamma synchrony. A neuron may have many gap junction connections but not all of them are necessarily open at the same time. The opening and closing of the junctions may be regulated by the microtubules.
Hameroff suggests that cells connected by gap junctions may in fact constitute a cell assembly, with the added advantage of synchronous excitation. Cortical inhibitory neurons are heavily studded with gap junctions, possibly connecting each cell to 20 to 50 others. The axons of these neurons tend to form inhibitory GABA chemical synapses on the dendrites of other interneurons.
Fig. 16: Schematic representation of a brain microtubule, built up from tubulin proteins that can undergo rapid fluctuations in three-dimensional configuration, enabling the sensing and transmission of quantum information (qubits); see also Fig. 13.
Hameroff moves on to discuss the role of the cytoskeleton, which is seen to determine the structure, growth and function of neurons. Actin is the main constituent of dendritic spines and is present throughout the neuronal interior. Actin can polymerize into a dense meshwork, and when this happens the interior of the cell is converted from an aqueous solution into a gelatinous state. Furthermore, when this happens the whole of the cytoskeleton forms a negatively charged matrix around which water molecules are bound into an ordered state. It is noted that the neurotransmitter glutamate, binding to NMDA and AMPA receptors, causes gel states in actin spines.
The cytoskeleton of the dendrites is distinct both from that found in cells outside the brain and from the cytoskeleton found in the axons of neurons. The microtubules in dendrites are shorter than those in axons and have mixed, as opposed to uniform, polarity. This appears to be a sub-optimal arrangement from a purely structural point of view, and it is suggested that, in conjunction with microtubule-associated proteins (MAPs), this arrangement may be optimal for information processing rather than for supportive structural functions. These microtubule/MAP arrangements are connected to synaptic receptors on the dendrite membrane by a variety of calcium and sodium influxes, actin and other inputs. Alterations in the microtubule/MAP network in the dendrites correlate with the arrangement of dendritic synaptic receptors. Studies demonstrate that the cytoskeleton is also involved in signal transmission. It is suggested that the microtubule lattice is well designed to represent and process information (Fig. 17).
Tubulin was originally proposed to switch between two conformations (see Fig. 18). It is suggested that tubulin conformational states could interact with neighboring tubulins by means of dipole interactions, the dipole-coupled conformation of each tubulin being determined by the six surrounding tubulins. Hameroff describes protein conformation as a delicate balance between countervailing forces. Proteins are chains of amino acids that fold into three-dimensional conformations. Folding is driven by van der Waals forces between hydrophobic amino-acid groups, which can form hydrophobic pockets in some proteins. These pockets are critical to the folding and regulation of the protein. Amino-acid side groups in these pockets interact by van der Waals forces, and non-polar atoms and molecules can have instantaneous dipoles.
Fig.17 : An ‘integrate-and-fire’ brain neuron, and portions of other such neurons are shown schematically with internal microtubules. In dendrites and cell body/soma (left) involved in integration, microtubules are interrupted and of mixed polarity, interconnected by microtubule-associated proteins (MAPs) in recursive networks (upper circle, right). Dendritic–somatic integration (with contribution from microtubule processes) can trigger axonal firings to the next synapse. Microtubules in axons are unipolar and continuous. Gap junctions synchronize dendritic membranes, and may enable entanglement and collective integration among microtubules in adjacent neurons (lower circle right). In Orch OR, microtubule quantum computations occur during dendritic/somatic integration, and the selected results regulate axonal firings which control behavior
Hameroff discusses the process of anesthesia, which erases consciousness but leaves many non-conscious functions intact. Anesthetic gas molecules are soluble in a lipid-like hydrophobic environment. Such areas are present in the brain in the lipid regions of cell membranes and in hydrophobic pockets within proteins. It is suggested that anesthetic gas molecules interact with amino-acid groups via London forces, altering the normal action of London forces on the conformation of the protein.
Hameroff discusses quantum information processing. Quantum superpositions, in which the quantum wave represents multiple possibilities for the state of a particle, are known to persist until quanta are either measured or naturally interact with the rest of the environment. Hameroff takes the view that the original mainstream interpretation, the Copenhagen Interpretation, puts not only consciousness but the concept of reality itself outside physics. Alternative interpretations include the ‘many worlds’ view, in which there is no collapse but the superpositions continue in multiple worlds, and David Bohm's idea in which the quanta are guided by active information.
It is important to stress that quantum computing as such is not expected to generate consciousness. In quantum computers, which many researchers are now trying to develop, quantum collapse will occur as a result of measurement or interaction with the environment. It is only in the event of OR that non-computability and consciousness could be brought into play.
Hameroff goes on to look at some of the detail of the theory that he and Penrose developed as to how consciousness could be based in microtubules in the brain. It is suggested that quantum computations take place in microtubules, orchestrated by synaptic inputs via MAPs; hence the theory is known as Orch OR, for orchestrated objective reduction. The computations are suggested to persist for 25 ms, which would link them to the 40 Hz gamma synchrony, viewed as a correlate of consciousness even in more mainstream theories. The computations are terminated by objective reduction. It is proposed that in dendrites the tubulin sub-units of the microtubules interact by dipole coupling so as to process information. The tubulin conformation is governed by quantum London forces, so that the tubulins can exist as quantum superpositions of different conformations. In superposition the tubulins would be qubits in a quantum computer, computing by means of non-local entanglement with other tubulin qubits. This entanglement would extend not just to tubulins in the same microtubule, but to other microtubules in the same dendrite, and to other dendrites connected by gap junctions. Neurons connected by gap junctions can be viewed as a single hyperneuron, and the hyperneuron can be seen as a conventional neuron assembly.
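The quoted 25 ms figure can be checked against the collapse-time relation τ ≈ ℏ/E_G used in the Orch OR papers. A minimal sketch, using only the numbers quoted above; inverting the relation gives the gravitational self-energy a superposition would need in order to reach threshold on this timescale:

```python
hbar = 1.054571817e-34   # reduced Planck constant, J*s

tau = 0.025              # proposed duration of one Orch OR event: 25 ms
rate = 1.0 / tau         # event repetition rate implied by that duration
E_G = hbar / tau         # self-energy needed to collapse in 25 ms (tau ~ hbar/E_G)

print(rate)              # 40.0 -> the 40 Hz gamma frequency cited above
print(E_G)               # ~4.2e-33 J, an extremely small energy
```

The 40 Hz rate is simply the reciprocal of the 25 ms event duration, which is why the theory ties conscious moments to the gamma band.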
The dendritic interiors alternate between two states as a result of the polymerisation of actin protein. In the depolymerised form the interior of the neuron is aqueous, and microtubules signal and process information classically; there are synaptic inputs to the microtubules during this phase. When actin polymerises, the interior of the dendrite becomes a quasi-solid, gelatinous state, and water near the proteins becomes ordered as a result of the actin gelation. Debye layers of counterions may also shield the microtubules, due to the charged C-termini tails on the tubulins. This is suggested to make the microtubules sufficiently isolated from the environment for quantum superposition to occur in the tubulins. The geometry of a quantum computer lattice could be formed so as to be resistant to decoherence, and microtubules are suggested to have a structure which is particularly suitable for error correction. Coherent pumping of energy and quantum error correction may thus help to prevent decoherence. Quantum error correction involves a code that can detect and correct decoherence in a quantum system.
Hameroff claims to refute Tegmark's attempt to disprove the Penrose/Hameroff model (Hagan et al., 2002). This is significant, as Tegmark's criticism of Orch OR has been widely accepted as a completely satisfactory dismissal of the theory, and responses to Tegmark are habitually ignored. Tegmark calculated the microtubule decoherence time as 10^-13 seconds, which would certainly be much too short for any neural activity. However, he worked on the basis of his own model for quantum activity in microtubules, which was never proposed by Hameroff or anyone else, basing his calculation on a 24 nm separation of solitons from themselves along the microtubules, whereas Orch OR proposes a superposition separation distance six orders of magnitude smaller. For some reason, Tegmark did not choose to address the actual Penrose/Hameroff model. This invalidates his particular approach, whatever the truth about decoherence, but somehow it has not prevented his work from being quoted as an absolutely reliable refutation of Orch OR (Hagan et al., 2002).
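The scale of the disagreement is easy to illustrate. In scattering-based decoherence estimates, when the superposition separation is much smaller than the relevant environmental wavelengths, the decoherence rate grows roughly as the square of that separation. A toy sketch under that single assumption; this is not the full Hagan et al. recalculation, which adjusts several other parameters as well:

```python
tau_tegmark = 1e-13            # s, Tegmark's decoherence time for his model
a_tegmark = 24e-9              # m, the 24 nm soliton separation he assumed
a_orch_or = a_tegmark * 1e-6   # six orders of magnitude smaller (femtometre scale)

# naive scaling: decoherence time grows as the inverse square of separation
tau_scaled = tau_tegmark * (a_tegmark / a_orch_or) ** 2
print(tau_scaled)              # ~0.1 s under this crude scaling
```

Even as a crude bound, this shows why the assumed separation distance dominates the outcome of such decoherence calculations.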
A recent update of the Orch OR model
A recent review and update of this 20-year-old theory of consciousness, published in Physics of Life Reviews (2013), maintains the claim that consciousness derives from deeper-level, finer-scale activities inside brain neurons. The recent discovery of quantum vibrations in “microtubules” inside brain neurons corroborates this theory, according to review authors Stuart Hameroff and Sir Roger Penrose. This groundbreaking article, and some of the accompanying comments, are partly cited and summarized in the following:
“Hameroff and Penrose suggest that EEG rhythms (brain waves) also derive from deeper level microtubule vibrations, and that from a practical standpoint, treating brain microtubule vibrations could benefit a host of mental, neurological, and cognitive conditions. Orch OR was harshly criticized from its inception, as the brain was considered too “warm, wet, and noisy” for seemingly delicate quantum processes. However, evidence has now shown warm quantum coherence in plant photosynthesis, bird brain navigation, our sense of smell, and brain microtubules. The recent discovery of warm-temperature quantum vibrations in microtubules inside brain neurons by the research group led by Anirban Bandyopadhyay (2011) at the National Institute of Material Sciences in Tsukuba, Japan (and now at MIT), corroborates the pair's theory and suggests that EEG rhythms also derive from deeper level microtubule vibrations. In addition, work from the laboratory of Emerson and Eckenhoff et al. (2013), at the University of Pennsylvania, suggests that anesthesia, which selectively erases consciousness while sparing non-conscious brain activities, acts via microtubules in brain neurons.
“The origin of consciousness reflects our place in the universe, the nature of our existence. Did consciousness evolve from complex computations among brain neurons, as most scientists assert? Or has consciousness, in some sense, been here all along, as spiritual approaches maintain?” ask Hameroff and Penrose in their current review. “This opens a potential Pandora’s Box, but our theory accommodates both these views, suggesting consciousness derives from quantum vibrations in microtubules, protein polymers inside brain neurons, which both govern neuronal and synaptic function, and connect brain processes to self-organizing processes in the fine scale, ‘proto-conscious’ quantum structure of reality.”
After 20 years of skeptical criticism, “the evidence now clearly supports Orch OR,” continue Hameroff and Penrose. “Our new paper updates the evidence, clarifies Orch OR quantum bits, or “qubits,” as helical pathways in microtubule lattices, rebuts critics, and reviews 20 testable predictions of Orch OR published in 1998 – of these, six are confirmed and none refuted.” An important new facet of the theory is introduced. Microtubule quantum vibrations (e.g. in megahertz) appear to interfere and produce much slower EEG “beat frequencies.” Despite a century of clinical use, the underlying origins of EEG rhythms have remained a mystery. Clinical trials of brief brain stimulation aimed at microtubule resonances, with megahertz mechanical vibrations using transcranial ultrasound, have reportedly improved mood, and may prove useful against Alzheimer's disease and brain injury in the future.
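The ‘beat frequency’ idea itself is elementary wave superposition: two oscillations close in frequency produce an envelope at their difference frequency. A minimal sketch with illustrative numbers (the 10 MHz values are placeholders, not measured microtubule resonances):

```python
f1 = 10_000_000.0    # Hz, hypothetical microtubule vibration (placeholder)
f2 = 10_000_040.0    # Hz, a second vibration only 40 Hz higher
beat = abs(f2 - f1)  # envelope (beat) frequency of the summed signal
period = 1.0 / beat  # duration of one envelope cycle
print(beat, period)  # 40.0 Hz and 0.025 s: gamma-band EEG scale
```

Two megahertz-range vibrations only tens of hertz apart would thus produce a slow modulation in exactly the EEG frequency range, which is the mechanism the authors propose.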
Lead author Stuart Hameroff concludes, “Orch OR is the most rigorous, comprehensive and successfully-tested theory of consciousness ever put forth. From a practical standpoint, treating brain microtubule vibrations could benefit a host of mental, neurological, and cognitive conditions.”
The review is accompanied by eight commentaries from outside authorities, including an Australian group of Orch OR arch-skeptics. To all, Hameroff and Penrose respond robustly. They will engage skeptics in a debate on the nature of consciousness, and Bandyopadhyay and his team will couple microtubule vibrations from active neurons to play Indian musical instruments. “Consciousness depends on anharmonic vibrations of microtubules inside neurons, similar to certain kinds of Indian music, but unlike Western music, which is harmonic”.
Hameroff explained that consciousness depends on biologically ‘orchestrated’ coherent quantum processes in collections of microtubules within brain neurons, that these quantum processes correlate with, and regulate, neuronal synaptic and membrane activity. The continuous Schrödinger evolution of each such process is supposed to terminate in accordance with the specific Diósi–Penrose (DP) scheme of ‘objective reduction’ (‘OR’) of the quantum state. This orchestrated OR activity (‘Orch OR’) is taken to result in moments of conscious awareness and/or choice. The DP form of OR is related to the fundamentals of quantum mechanics and space–time geometry, so Orch OR suggests that there is a connection between the brainʼs biomolecular processes and the basic structure of the universe. The authors recently reviewed the Orch OR in light of criticisms and developments in quantum biology, neuroscience, physics and cosmology (Hameroff and Penrose, 2012).
The authors introduce a novel suggestion of ‘beat frequencies’ of faster microtubule vibrations as a possible source of the observed electro-encephalographic (EEG) correlates of consciousness. They conclude that consciousness plays an intrinsic role in the universe. The group of Bandyopadhyay (2011) has indeed discovered conductive resonances in single microtubules, observed when an alternating current is applied at specific frequencies in the gigahertz, megahertz and kilohertz ranges. Electron dipole shifts do have some tiny effect on nuclear positions via charge movements and Mössbauer recoil: a shift of one nanometer in electron position might move a nearby carbon nucleus a few femtometers (‘Fermi lengths’, i.e. 10^-15 m), roughly its diameter. The effect of electron spin/magnetic dipoles on nuclear location is less clear (Fig. 18).
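The quoted electron-to-nucleus displacement can be bounded with a simple two-body recoil estimate: if the electron and nucleus shift so as to conserve the centre of mass, the nuclear displacement is the electron displacement scaled by the mass ratio. This sketch is only an order-of-magnitude check, not the Mössbauer-style calculation the authors have in mind, and it lands at tens of femtometers rather than ‘a few’:

```python
m_e = 9.109e-31        # electron mass, kg
m_C = 12 * 1.6605e-27  # carbon-12 nuclear mass, kg (12 atomic mass units)
dx_e = 1e-9            # 1 nm shift in electron position, as quoted above

# centre-of-mass conservation: m_e * dx_e = m_C * dx_N
dx_N = dx_e * m_e / m_C
print(dx_N)            # ~4.6e-14 m, i.e. tens of femtometers
```

Either way, the estimate confirms the key point of the passage: nanometer-scale electron movements translate into only femtometer-scale nuclear movements.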
Recent Orch OR publications have cast tubulin bits (and quantum bits, or qubits) as coherent entangled dipole states acting collectively among the electron clouds of aromatic amino-acid rings, with only femtometer conformational change due to nuclear displacement (Fig. 13). As it turns out, femtometer displacement might be sufficient for Orch OR. Diósi–Penrose objective reduction (DP) is a particular proposal for an extension of current quantum mechanics, taking the bridge between quantum- and classical-level physics to be a ‘quantum-gravitational’ phenomenon. This is in contrast with the various conventional viewpoints, whereby this bridge is claimed to result, somehow, from ‘environmental decoherence’, or from ‘observation by a conscious observer’, or from a ‘choice between alternative worlds’, or some other interpretation of how the classical world of one actual alternative may be taken to arise out of fundamentally quantum-superposed ingredients.
Fig. 18: Early and current versions of the Orch OR qubit. (a) Schematic cartoon version of the Orch OR tubulin protein qubit used in Orch OR publications mainly from 1996 to 1998. On the left, tubulin oscillates between two states with 1 nanometer conformational flexing (10% of the tubulin diameter). On the right, both states exist in quantum superposition. (Irrespective of the schematic cartoon, the 1 nanometer displacement has never been implemented in Orch OR calculations.) The states are shown to correlate with electron locations (dipole orientations) in two adjacent phenyl (or indole) resonance rings in a non-polar ‘hydrophobic pocket’. (b) Schematic cartoon version of the Orch OR qubit developed since 2002 (following identification of the tubulin structure by electron crystallography). Each tubulin is shown with 9 rings representing the 32 actual phenyl or indole rings per tubulin, with coupled, oscillating London force dipole orientations among rings traversing ‘quantum channels’, aligning with rings in adjacent tubulins in helical pathways through microtubule lattices. On the right, superposition of alternative tubulin and helical-pathway dipole states. There is no conformational flexing; mechanical displacement occurs at the femtometer level of the tubulin atomic nuclei (not shown). Reimers et al. continually, and exclusively, criticize the obsolete, non-implemented version on the left (a), and ignore the actual Orch OR dipole pathway qubit version on the right (b).
The DP version of OR involves a different interpretation of the term ‘quantum gravity’ from what is usual. Current ideas of quantum gravity (see, for example, Smolin, 2004) normally refer, instead, to some sort of physical scheme that is to be formulated within the bounds of standard quantum field theory, although no particular such theory, among the multitude that have so far been put forward, has gained anything approaching universal acceptance, nor has any of them found a fully consistent, satisfactory formulation. ‘OR’ here refers to the alternative viewpoint that standard quantum (field) theory is not the final answer, and that the reduction R of the quantum state (‘collapse of the wave function’) adopted in standard quantum mechanics is an actual physical process which is not part of the conventional unitary formalism U of quantum theory (or quantum field theory). In the DP version of OR, the reduction R of the quantum state does not arise as some kind of convenience or effective consequence of environmental decoherence, etc., as the conventional U formalism would seem to demand, but is instead taken to be one of the consequences of melding together the principles of Einstein's general relativity with those of the conventional unitary quantum formalism U, and this demands a departure from the strict rules of U. According to this OR viewpoint, any quantum measurement, whereby the quantum-superposed alternatives produced in accordance with the U formalism become reduced to a single actual occurrence, is a real objective physical process, and it is taken to result from the mass displacement between the alternatives being sufficient, in gravitational terms, for the superposition to become unstable.
It is helpful to have a conceptual picture of quantum superposition in a gravitational context. According to modern accepted physical theories, reality is rooted in 3-dimensional space and a 1-dimensional time, combined together into a 4-dimensional space–time. This space–time is slightly curved, in accordance with Einsteinʼs general theory of relativity, in a way which encodes the gravitational fields of all distributions of mass density. Each different choice of mass density effects a space–time curvature in a different, albeit very tiny, way. This is the standard picture according to classical physics. On the other hand, when quantum systems have been considered by physicists, this mass-induced tiny curvature in the structure of space–time has been almost invariably ignored, gravitational effects having been assumed to be totally insignificant for normal problems in which quantum theory is important. Surprising as it may seem, however, such tiny differences in space–time structure can have large effects, for they entail subtle but fundamental influences on the very rules of quantum mechanics.
The initial part of each space–time is at the upper left of each individual space–time diagram, so the bifurcating space–time diagram on the right, moving downward and rightward, illustrates two alternative mass distributions evolving in time, their space–time curvature separation increasing. Quantum mechanically (so long as OR has not taken place), the ‘physical reality’ of this situation, as provided by the evolving wavefunction, is illustrated as an actual superposition of these two slightly differing space–time manifolds. The OR process is considered to occur when quantum superpositions between such slightly differing space–times take place, differing from one another by an integrated space–time measure that compares with the fundamental and extremely tiny Planck (4-volume) scale of space–time geometry. As remarked above, this is a 4-volume Planck measure, involving both time and space, so the time measure would be particularly tiny when the space-difference measure is relatively large (as with Schrödingerʼs hypothetical cat), but for extremely tiny space-difference measures, the time measure might be fairly long. For example, an isolated single electron in a superposed state (very low E_G) might reach OR threshold only after thousands of years or more, whereas if Schrödingerʼs (∼10 kg) cat were to be put into a superposition of life and death, this threshold could be reached in far less than even the Planck time of 10⁻⁴³ s (Fig. 19).
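The timing comparison in this passage follows from the Diósi–Penrose estimate, which can be written compactly; the second expression is the common back-of-envelope form for the gravitational self-energy of two superposed distributions of mass m displaced by a distance d, an approximation rather than a formula taken from the source text:

```latex
\tau \;\approx\; \frac{\hbar}{E_G},
\qquad
E_G \;\approx\; \frac{G m^{2}}{d}
```

A single electron gives a minuscule E_G and hence an enormously long lifetime τ, while a macroscopically displaced mass gives a large E_G and a lifetime far shorter than any relevant timescale, which is the asymmetry the electron and cat examples illustrate.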
Fig. 19: As the superposition curvature E_G reaches threshold, OR occurs and one particle location/curvature is selected and becomes classical; the other ceases to exist.
In the situations under consideration here, where we expect a conscious brain to be far from zero temperature, and because technological quantum computers typically require temperatures near absolute zero, it is very reasonable to question quantum brain activities. Nevertheless, it is now well known that superconductivity and other large-scale quantum effects can actually occur at temperatures very far from absolute zero. Indeed, biology appears to have evolved thermal mechanisms to promote quantum coherence. Ouyang and Awschalom (2003) showed that quantum spin transfer through phenyl ring π orbital resonance clouds (the same as those in protein hydrophobic regions, as illustrated in Fig. 14) is enhanced at increasingly warm temperatures. Spin-flip currents through microtubule pathways may be directly analogous.
In the past 6 years, evidence has accumulated that plants routinely use quantum coherent electron transport at ambient temperatures in photosynthesis (Engel et al., 2007; Hildner, 2013). Photons are absorbed in one region of a photosynthetic protein complex, and their energy is conveyed by electronic excitations through the protein to another region to be converted to chemical energy to make food. In this transfer, electrons utilize multiple pathways simultaneously, through π electron clouds in a series of chromophores (analogous to hydrophobic regions) spaced nanometers apart, maximizing efficiency (e.g. via so-called ‘exciton hopping’). Chromophores in photosynthesis proteins appear to enable electron quantum conductance precisely as aromatic rings are proposed to function in tubulin and microtubules in Orch OR.
Quantum conductance through photosynthesis proteins is enhanced by mechanical vibration, and microtubules appear to have their own set of mechanical vibrations (e.g. in the megahertz range, as suggested by Sahu et al., 2013). Megahertz mechanical vibration is ultrasound, and brief, low-intensity (sub-thermal) ultrasound administered through the skull modulates electrophysiology, behavior and affect, e.g. improved mood in patients suffering from chronic pain, perhaps by direct excitation of brain microtubules.
Further research has shown warm quantum effects in bird-brain navigation (Gauger et al., 2011), ion channels (Bernroider and Roy, 2005), the sense of smell (Turin, 1996), DNA (Rieper, 2011), protein folding (Luo and Lu, 2011) and biological water (Reiter, 2013); see also the reviews of Arndt (2009) and Lloyd (2011) on these aspects. What about quantum effects in microtubules? In the 1980s and 1990s, theoretical models predicted ‘Fröhlich’ gigahertz coherence and ferroelectric effects in microtubules. In 2001 and 2004, coherent megahertz emissions were detected from living cells and ascribed to microtubule dynamics (powered by mitochondrial electromagnetic fields) by the group of Jiří Pokorný in Prague.
Beginning in 2009, Anirban Bandyopadhyay and colleagues at the National Institute of Material Sciences in Tsukuba, Japan, were able to use nanotechnology to address electronic and optical properties of individual microtubules (Sahu et al., 2013a, b). The group has made a series of remarkable discoveries suggesting that quantum effects do occur in microtubules at biological temperatures. First, they found that electronic conductance along microtubules (normally extremely good insulators) becomes exceedingly high, approaching quantum conductance, at certain specific resonance frequencies of applied alternating current (AC) stimulation. These resonances occur in gigahertz, megahertz and kilohertz ranges, and are particularly prominent in the low megahertz range (e.g. 8.9 MHz). Conductances induced by specific (e.g. megahertz) AC frequencies appear to follow several types of pathways through the microtubule: helical, linear along the microtubule axis, and ‘blanket-like’ along/around the entire microtubule surface. Second, using various techniques, the Bandyopadhyay group also determined that AC conductance through 25-nm-wide microtubules is greater than through single 4-nm-wide tubulins, indicating cooperative, possibly quantum coherent effects throughout the microtubule, and that the electronic properties of microtubules are programmed within each tubulin. Their results also showed that conductance increased with microtubule length, indicative of quantum mechanisms (Fig. 20).
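The contrast being drawn here, between ordinary diffusive conduction, which worsens with length, and coherent quantum conductance, which does not, can be sketched numerically. The following is an illustrative sketch, not a model of the actual experiments; the conductivity and geometry numbers are arbitrary assumptions:

```python
E_CHARGE = 1.602_176_634e-19  # elementary charge, C
H_PLANCK = 6.626_070_15e-34   # Planck constant, J*s

# Conductance quantum: the per-channel ceiling for coherent (ballistic)
# transport, independent of conductor length.
G0 = 2.0 * E_CHARGE**2 / H_PLANCK  # ~7.75e-5 siemens

def diffusive_conductance(sigma: float, area: float, length: float) -> float:
    """Classical Ohmic conductance G = sigma*A/L: halves when length doubles."""
    return sigma * area / length

def ballistic_conductance(n_channels: int) -> float:
    """Ideal coherent conductance: n open channels, each contributing G0,
    with no dependence on length at all."""
    return n_channels * G0

# Doubling the length halves the classical value (illustrative numbers)...
g_short = diffusive_conductance(sigma=1e-3, area=1e-16, length=1e-6)
g_long = diffusive_conductance(sigma=1e-3, area=1e-16, length=2e-6)
print(g_short / g_long)  # 2.0
# ...while the ballistic value is length-independent, which is why conductance
# that does not fall (or even rises) with length is read as a quantum signature.
print(ballistic_conductance(1))
```

This is only meant to make the qualitative argument concrete: length-independent (or length-enhanced) conductance is incompatible with the classical G = σA/L scaling.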
Fig. 20: Top: Tentatively proposed picture of a conscious event by quantum computing in one of a vast number of microtubules all acting coherently, so that there is sufficient mass displacement for Orch OR to take place. Tubulins are in classical dipole states (yellow or blue), or in quantum superposition of both dipole states (gray). Quantum superposition/computation evolves during integration phases (1–3) in integrate-and-fire brain neurons, increasing the superposition energy E_G (gray tubulins) until threshold is met, at which time a conscious moment occurs, and tubulin states are selected which regulate firing and control conscious behavior. Middle: Corresponding alternative superposed space–time curvatures reaching threshold at the moment of OR and selecting one space–time curvature. Bottom: Schematic of a conscious Orch OR event showing U-like evolution of quantum superposition and increasing E_G until the OR threshold is met, and a conscious moment occurs.
The resonance conductance (‘Bandyopadhyay coherence’, ‘BC’) through tubulins and microtubules is consistent with the intra-tubulin aromatic ring pathways (Fig. 13), which can support Orch OR quantum dipoles, and in which anesthetics bind, apparently to selectively erase consciousness. Bandyopadhyayʼs experiments do seem to provide clear evidence for coherent microtubule quantum states at brain temperature. This said, the solid scientific evidence (microtubules and the rest) is not yet completely convincing, and one is left with the desire to contribute to the whole intellectual construction in order not to leave it in its present state. Anyhow, certain parts of the mosaic are particularly appealing: the fact, for instance, that anesthetic gases exert their effects on consciousness, and that actual evidence from genomics and proteomics points to anesthetic action in microtubules. As Faraday said, it is always better to have a partial vision of the facts than to have none.
Some could, on the contrary, be fully convinced of the existence and function of objective reductions of quantum states occurring in, and orchestrated by, biological structures. Of these, microtubules would represent the most efficient and evolutionarily winning example, consciousness being the most visible of its non-epiphenomenal phenotypes. As a novel suggestion relative to their previous studies, “beat frequencies” are introduced by Hameroff and Penrose as a possible source of the observed electro-encephalographic (EEG) correlates of consciousness.
Introducing quantum physics into the realm of biology entails another major positive aspect: room is made for Darwinism and chance-and-necessity reasoning. Biological structures such as microtubules evolved (well within Darwinian logic) that happened to cause objective reduction of the quantum state. Once Darwin enters the scene, everything becomes possible. Our mind provides the a posteriori verification. For a deeper look at this concept, the reader is referred to the elaboration of the terms “Ereignis” and “Ereignen” by Martin Heidegger. The basic Hameroff and Penrose assumption would in this case objectively become of paramount importance. The Hameroff–Penrose form of orchestrated objective reduction (Hameroff and Penrose, 2011, 2013) is related to the fundamentals of quantum mechanics and space–time geometry; hence the connection between the basic structure of the Universe and biomolecular processes. Relating these effects to neurons might appear an unjustified, self-inflicted limitation and, in this perspective, the general conclusion should not be avoided: consciousness is a property and a manifestation of life, and life is universal in principle. Thus, consciousness is in principle universal.
A note of caution: Roger Penrose himself recently said: “I donʼt see why we should take quantum mechanics as sacrosanct. I think thereʼs going to be something else which replaces it”. These words, if not taken out of context, find their explanation in the incompleteness of quantum theory. The awareness of this incompleteness is at the very basis of the Orch OR theory and reappears throughout this important essay.
Hiroomi Umezawa and Herbert Frohlich: Quantum Brain Dynamics
The basic concept in quantum brain dynamics (QBD) is that the electrical dipoles of the water molecules in the brain constitute a cortical field. The quanta of this field are described as corticons. The field interacts with quantum coherent waves propagating along the neuronal network. There is more than one view within QBD as to how this system supports or instantiates consciousness.
The ideas behind quantum brain dynamics (QBD) derived originally from the physicists Hiroomi Umezawa and Herbert Fröhlich in the 1960s. In the last 20 years, these ideas have been elaborated and given greater prominence by the combined efforts of the Japanese physicists Mari Jibu and Kunio Yasue (1992, 1993) and the Italian physicist Giuseppe Vitiello (1995, 2001).
Iain Stuart, Umezawa and Yasushi Takahashi (1978) proposed the idea of a cortical field in the brain. Water comprises 70% of the brain, and QBD proposes that rather than providing a passive background, water could be an active player in brain processes. Water molecules have a constant electric dipole, and are considered in QBD to be capable of interacting with waves generated by biomolecules that are also electrical dipoles.
In QBD, the totality of the water molecules in the brain is viewed as the best candidate for a cortical field, with the water’s electrical dipoles binding both to one another and to the biomolecules of the neuronal network. There are also suggested to be long-range waves within the cortical field. The quanta of the cortical field are given the name of corticons, and in Jibu and Yasue’s version of the theory, the interaction between the cortical field and the neuronal network, particularly the dendritic part of that network, is the basis of consciousness.
The other half of the theory concerns waves of biomolecular excitation propagating through the neuronal network, an idea deriving from the work of Fröhlich (1968). Fröhlich argued that it was not clear how order was sustained in living systems, given the likely disrupting effect of the fluctuations in biochemical processes (Fröhlich, 1985). His ideas relate mainly to the ordering of the neuronal network, on which the cortical field proposed by Umezawa is supposed to act.
Fröhlich saw the electric potential across the cell membrane as the macroscopic observable of an underlying quantum order. Fröhlichʼs studies claim to show that, with oscillating electrical charges in a thermal bath, a large number of quanta may become condensed into a single state, known as a Bose–Einstein condensate, allowing long-range correlations amongst the dipoles involved. He also proposed that biomolecules with a high electric dipole moment line up along the actin filaments, and that electric dipole oscillations propagate along these filaments in the form of quantum coherent waves. There is some support for these ideas, in the form of experimental confirmation that biomolecules with a high electric dipole moment show periodic oscillation (Gray and Singer, 1989).
Fig.21: The hypothesis of an individual double as created by our mind
Vitiello agrees with Fröhlich in arguing that living systems constitute ordered chains of chemical reactions, which could normally be expected to collapse in the random chemical environment of biological tissue. In Vitielloʼs view, stable ordering comes from the quantum level, but this is described by quantum field theory rather than quantum mechanics. He also claims that the folding of proteins, which is fundamental to the activity of cells, cannot be described by classical physics, but could be quantum ordered. Vitiello (1995, 2001) provides citations which he feels support a quantum dynamical view of biological tissue, notably studies of radiation effects on cell growth, on electromagnetic fields and stress, on dynamical response to external stimuli, on non-linear tunnelling, on coherent nuclear motion in membrane proteins, on optical coherence in biological systems, on weak radiation fields and biological systems (Popp, 1986), and on energy transfer via solitons and coherent excitations. QBD proposes that the cortical field not only interacts with, but also to a good extent controls, the neuronal network. It suggests that biomolecular waves propagate along the actin filaments, an important part of the cytoskeleton, particularly in the vicinity of the cell membrane and dendritic spines. The waves derive energy from ATP molecules stored in the membrane, and these in turn are controlled by calcium ions. These waves are also suggested to control the action of ion channels, which are crucial in the transmission of signals to the synapses. The neuronal membrane is further suggested to act as a Josephson junction, providing insulation between two layers of superconductivity. The superconducting current across the membrane can be controlled by the electrical potentials across the same membrane.
Vitiello also discusses the question of quantum decoherence. He claims that QBD only requires quantum oscillations to last 10–14 picoseconds, which should be much shorter than the time required for decoherence (Del Giudice, 1988, 2002). In common with Stuart Hameroff, he additionally argues that ordered water around protein molecules may shield them from the surrounding thermal bath.
A decisive further step in developing the approach has been achieved by taking dissipation into account. Dissipation is possible when the interaction of a system with its environment is considered. Vitiello (1995) describes how the system-environment interaction causes a doubling of the collective modes of the system in its environment (Fig.21). This yields infinitely many differently coded vacuum states, offering the possibility of many memory contents without overprinting. Finally, dissipation generates a genuine arrow of time for the system, and its interaction with the environment induces entanglement. In a recent contribution, Pessa and Vitiello (2003) have addressed additional effects of chaos and quantum noise.
Mari Jibu & Kunio Yasue: Quantum field concepts
Jibu and Yasue (1992, 1993) appear to see consciousness as simply a function of the interaction of the corticons, the energy quanta which are proposed to arise in the cortical field, with the biomolecular waves of the neuronal network. Vitiello, while thinking in terms of much the same quantum systems as Jibu and Yasue, proposes that these quantum states produce two poles, first a subjective representation of the external world and secondly a self, which opens itself to this representation of the external world. According to Vitiello’s version of the theory, consciousness is not strictly speaking in either the self or the external representation but between the two, in the opening of one to the other.
The concepts derive from the Japanese physicist Hiroomi Umezawa (1993), who speculated that understanding the processes of memory in the brain would involve quantum field theory. This led on to the idea that understanding consciousness would also involve quantum field theory. The first four chapters of their 1993 book provide a standard background to quantum theory, followed by some descriptive passages on the brain; as the style of the book is generally difficult and unnecessarily repetitive, readers without some grounding in either subject would be better advised to consult more standard textbooks or popularizations.
Getting beyond these introductory stages, the authors make the same point as others in stressing the estrangement between physics, where fundamental new views of nature emerged during the last hundred years, and neuroscience, which has remained largely wedded to 19th-century physics. In particular, physics has tended to think dynamically, in terms of controlled changes. Physics deals primarily with the inanimate, but the concepts of dynamics can be applied to living organisms, as they also undergo controlled changes. The authors suggest that the functions of the cortex might be better understood through the dendritic network, by which information enters cells. They stress that many neurons in the cortex do not have axons, but only dendrites. They think that the conventional processing scheme described by the axon–neurotransmitter–dendrite system may overlook other networks in the brain. Neurons without axons are the majority in the cortex, and the authors see these as the likely basis of consciousness.
Fig.22: Schematic representation of the synapse and synaptic cleft with the element of quantum tunneling of electrons(a) and dendritic network (b)
The authors discuss the dendritic network at length, pointing out that it is much more sophisticated than the axonal network (Fig. 18). The dendritic membrane comprises biomolecules with electric dipoles: the positive poles of the membrane are aligned on the inner surface and the negative poles on the outer surface. The negative poles on the outer surface attract positive ions, while the positive poles on the inner surface attract negative ions; the regions where these interactions occur are called Debye layers. The dendrites of several neurons are often entangled in a network. Chemical synapses are located on the tips of dendritic spines, with particular emphasis placed on the dendritic membranes. In such processes even quantum tunneling may play a significant role (Fig. 22).
Since the 1970s, Evan Harris Walker has proposed that quantum tunneling of electrons takes place across junctions between neurons. Stuart Hameroff says that “… gap junctions enable quantum tunneling among dendrites …”. According to the principles of modern physics, if a particle such as an electron encounters a barrier such as the synaptic junction, there is a finite probability that the particle will be found on the other side. From the point of view of Bohmʼs pilot wave theory, Peter R. Holland explains quantum tunneling by noting that the effective barrier potential is not the classical barrier potential, but the classical potential modified by the quantum potential. From the many-worlds point of view, quantum tunneling means that the electron is in a superposition of position states, some of which are on one side of the junction and some on the other. Quantum tunneling can therefore also allow quantum superposition states to extend from neuron to neuron across gap junctions.
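The "finite probability" mentioned here falls off exponentially with barrier width, which is why the distinction between narrow gap junctions and the much wider synaptic cleft matters. A toy estimate, assuming a simple rectangular barrier; the 1 eV barrier height and the two widths are illustrative assumptions, not values from the text:

```python
import math

HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s
M_E = 9.109_383_7e-31     # electron mass, kg
EV = 1.602_176_634e-19    # one electron volt, J

def tunneling_probability(barrier_ev: float, width_m: float) -> float:
    """WKB-style transmission T ~ exp(-2*kappa*d) for an electron meeting a
    rectangular barrier of height (V - E) = barrier_ev and width d."""
    kappa = math.sqrt(2.0 * M_E * barrier_ev * EV) / HBAR  # decay constant, 1/m
    return math.exp(-2.0 * kappa * width_m)

# Illustrative assumptions: a ~1 eV barrier over 1 nm (gap-junction scale)
# versus 20 nm (synaptic-cleft scale).
print(tunneling_probability(1.0, 1e-9))   # ~3.5e-5
print(tunneling_probability(1.0, 20e-9))  # effectively zero (of order 1e-89)
```

The exponential dependence makes the qualitative point: nanometer-scale gaps give tunneling probabilities that are small but non-negligible, while a full synaptic cleft suppresses them to practical impossibility.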
There is experimental confirmation that biomolecules of high electric dipole moment have a periodic oscillation (Fröhlich, 1968). The authors suggest that these oscillations are crucial to the functioning of the brain. This can be called wave cybernetics, because the wave or biomolecule oscillation is seen as the controlling factor in the brain.
Fröhlich proposed a theory in which biomolecules with high electric dipole moment line up along the actin filaments immediately below the cell membrane, while electric dipole oscillations propagate along each filament as coherent waves. These are maintained by electrons trapped in and moving along the protein molecules; such a wave is now known as a Fröhlich wave. These waves exchange energy with the electromagnetic field. Stuart, Umezawa and Takahashi (1978) proposed the idea of a cortical field. This interacts with the macroscopic dynamics of the main neural network, which in turn transmits signals to the body tissues. The filamentous strings found in the cells also extend outside the cells, forming an extracellular matrix that is also linked to the cell membrane, so the membrane proteins are linked both to the cytoskeleton and to the extracellular matrix.
Fig. 23: Cartoons of so-called Bose–Einstein condensates
The authors propose that Fröhlich waves propagate along the filamentous strings. The waves are produced by energy stored in ATP molecules at membrane protein sites, which are in turn controlled by calcium ions. The waves also affect the operation of ion channels, which control neural impulses. The authors suggest that this structure can give rise to macroscopic quantum phenomena, similar to superconductivity. They also regard the cell membrane as an insulating layer between two areas of superconductivity, otherwise known as a Josephson junction. This means that the superconducting current across the Josephson junction can be controlled by electric potential differences across the insulating layer.
The authors suggest that this quantum activity may facilitate the functioning of the brain, and in particular an interface between the proposed cortical field and the neuronal network. The cortical field is proposed to contain energy quanta behaving as particles, which the authors call corticons. Corticons are suggested to exist everywhere in the cerebral cortex. The interface between the cortical field and the neuronal network takes place in the waves propagating along the filamentous strings in the cytoskeleton and the extracellular matrix.
The authors emphasize the nature and importance of water within the brain. They suggest that water is not just a background substance, but is an active component in cell assemblies. This idea lies behind the original concept of the cortical field and corticons. The water molecule has a constant electrical dipole. It also has a symmetrical form that is invariant under reflection. The molecule rotates around its symmetry axis, which is the electrical dipole. Thus the molecule is a quantum mechanical spinning top, which interacts with the fields generated by biomolecules.
The totality of water molecules in the brain is seen as the best candidate for the sought-for cortical field. In water, one side of the molecule is negatively charged and the other positively charged, creating an electric dipole. This gives rise to an attraction between molecules known as hydrogen bonding. The attraction is both between water molecules and between water molecules and other molecules with electrical dipoles. Biomolecules such as proteins have constant electric dipoles and connect to water molecules.
The cortical field is identified with the water rotational field, created by the spinning dipoles of the water molecules. The field on the cytoskeleton and extracellular matrix is proposed to be a Bose field (Fig. 23), and the interaction between this Bose field and the corticons of the cortical field is seen as the basis of consciousness. Corticons are identified with the energy quanta of the water rotational field of the brain. The corticons interact with each other by emitting and absorbing the exchange bosons of the Bose field, and are themselves the energy quanta of the water rotational field. The water rotational field is a dipole field and therefore interacts with an electromagnetic field. There are also suggested to be long-range correlation waves in the water rotational field of the brain.
The brain structures described here are thought to be sensitive to, and to modify themselves in response to, information coming into the brain. The combined dynamics of the cortical field and the electromagnetic field comprise what the authors describe as quantum brain dynamics (QBD). The dynamics of the corticons is thought to be capable of controlling the dendritic and neural networks. The authors think that the creation and annihilation of corticons in QBD is what is called consciousness.
Unfortunately the authors do not explain why they think this, and therefore, as with more mainstream theories of consciousness, the actual consciousness seems to be created by fiat. There is no more apparent reason why consciousness should arise from this physical interaction than from the physical interaction of electrical potentials and chemicals in the synapses. The authors could have suggested that consciousness was a fundamental property of photons, or of the proposed corticons, or of particular fields, but they do not do this.
Johnjoe McFadden: Electromagnetic fields in the brain
McFadden starts by stating that synchronous firing in the brain correlates with awareness and perception, indicating that disturbances in the brainʼs electromagnetic field also correlate with these. This field is a representation of neuronal information, and its dynamics could be seen as a correlate of consciousness. McFadden (2001) views this field as the physical substrate of consciousness. Popper (1997) and Libet (2006) have both suggested that consciousness might derive from an overarching field that could integrate the processing of neurons, but they did not think that this could be any known physical field. At the same time, there has been considerable interest in synchronous firing of neurons. Awareness has been shown to correlate with synchrony of firing in the 40–80 Hz range, and this may bind together neurons involved in different aspects of the same visual perception, thus creating the unity of consciousness (Fig. 24).
The brainʼs electromagnetic field is induced by neuron firing, and also by the movement of ions involved in the fluctuation of electrical potential along the cell membrane. The structure of the cortex tends to amplify the induced field. Experiments in the olfactory bulb have demonstrated EEG activity in response to sensory stimuli; information about the stimuli was related to the spatial pattern of the EEG amplitude.
The author concludes that the brain contains a highly structured extracellular electromagnetic field. The field is weak, the transmembrane fields being about 3,000 times stronger. It is suggested that neurotransmission through gap junctions may be voltage dependent and therefore sensitive to local fields. However, McFadden prefers to concentrate on the voltage-gated ion channels in the cell membranes, because their role is better understood. Synchronous firing is thought to be due to a large number of spatially distributed neurons, and it is thought that many millions of neurons could be influenced by such firing. McFadden claims evidence for neuron communication via the electromagnetic field.
Fig. 24: The electromagnetic field theory of consciousness as part of an integral electromagnetic spectrum
The medical use of transcranial magnetic stimulation (TMS) is taken to indicate the sensitivity of the brain to weak electromagnetic fields, and as this has impacts on behavior, it is argued to affect neuronal computation and neuronal function. Even when fields are weaker than the surrounding noise, they can modulate neurons. The brainʼs electromagnetic field is argued to hold the same information as the neuron firing patterns. The wide spread of the electromagnetic field would help to explain the unity of consciousness.
Clusters of neurons in the visual cortex have been shown to fire in synchrony in response to particular stimuli. In insects, destruction of synchronous firing has been shown to reduce the ability to discriminate between stimuli. There is indirect evidence for a correlation between synchronous firing and attention and awareness in humans. The olfactory system of rabbits shows that sensory information is encoded in the spatial pattern of the EEG, and therefore of the electromagnetic field. This correlation also reflected what a particular smell meant to the rabbit, when it had been trained to associate particular things with a smell, suggesting that the shape of the electromagnetic field could be related to perception and meaning, and hence that consciousness is related to the electromagnetic field. Where there is habituation to a process, and therefore less conscious activity, there is a reduction in synchronous firing, so loss of awareness correlates with reduced disturbance in the brainʼs electromagnetic field. The theory predicts that only activity that acts on the motor neurons is conscious; this is testable, although there is no direct evidence. The EEG shows that activity increases during creative thinking, declines with sleep but revives with REM dreaming, so the amount of conscious activity correlates with the amount of electromagnetic activity.
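The quantitative point implicit in the synchrony argument, that N in-phase sources sum coherently (amplitude proportional to N) while the same sources at random phases only sum statistically (amplitude of order the square root of N), can be illustrated with a small phasor-sum sketch. The neuron count is an arbitrary assumption for illustration:

```python
import math
import random

def summed_amplitude(n_neurons: int, synchronous: bool, seed: int = 1) -> float:
    """Peak amplitude of the summed field from n identical oscillating dipole
    sources, estimated as the magnitude of the sum of their unit phasors.
    Synchronous firing means all sources share the same phase."""
    rng = random.Random(seed)
    re = im = 0.0
    for _ in range(n_neurons):
        phase = 0.0 if synchronous else rng.uniform(0.0, 2.0 * math.pi)
        re += math.cos(phase)
        im += math.sin(phase)
    return math.hypot(re, im)

n = 10_000  # illustrative assumption
print(summed_amplitude(n, synchronous=True))   # 10000.0 (coherent sum)
print(summed_amplitude(n, synchronous=False))  # of order sqrt(n), i.e. ~100
```

This is why synchronous firing, rather than overall firing rate, is what would make a neuronal population dominate the extracellular field in a cemi-style account.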
The high conductivity of the cerebrospinal fluid in the brain ventricles makes the brain into a kind of Faraday cage, insulating it from external electrical fields. However, it is much easier for magnetic fields to penetrate the brain and other tissues; moving magnetic fields, such as those used in TMS, do produce effects in the brain.
McFadden and the function of consciousness
McFadden sides with those who argue that consciousness must have a function or evolution would not have selected for it. Field effects that had an advantageous effect on the performance of ion channels would have been selected for. McFadden thinks that there is information transfer between neurons during synchronous firing. He proposes that the neural circuits involved in conscious and unconscious activity differ in their sensitivity to the electromagnetic field. The conscious will is claimed to be our experience of the electromagnetic field. He thinks that consciousness is not actually the electromagnetic field, but its ability to transmit information to neurons. He also points out the difficulty of trying to perform two conscious tasks or a conscious and unconscious task at the same time. The two interfere with each other, while unconscious multi-tasking is possible.
Consciousness is required for the laying down of long-term memories and for most learning. The cemi field theory proposes that the electromagnetic field in the brain fine-tunes the probabilities of neuron firings. The affected neurons may be part of large connected assemblies, and this leads to memory and learning. In simulated networks, non-synaptic neuronal interactions via the electromagnetic field, and also via gap junctions, enhance learning. Modulation of long-term potentiation by electromagnetic fields has also been demonstrated in vitro in rat hippocampal slices.
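The idea of a field "fine-tuning" firing probabilities can be sketched in a toy model. The sketch below assumes a sigmoidal firing probability with a small additive field-coupling bias; the function name and all parameter values are illustrative assumptions, not quantities from McFadden's papers.

```python
import math

def firing_probability(synaptic_input, field_coupling=0.0, threshold=1.0, gain=4.0):
    """Toy sigmoidal firing probability for a neuron near threshold.

    `field_coupling` is a hypothetical additive bias standing in for the
    influence of the brain's endogenous electromagnetic field; it is an
    illustrative assumption, not a parameter from the cemi field theory.
    """
    return 1.0 / (1.0 + math.exp(-gain * (synaptic_input + field_coupling - threshold)))

# A neuron sitting just below threshold: a small field bias noticeably
# shifts its probability of firing, while leaving a strongly driven or
# strongly inhibited neuron essentially unchanged.
p_without = firing_probability(0.95)
p_with = firing_probability(0.95, field_coupling=0.1)
```

The point of the sketch is that a weak global bias has its largest effect on neurons poised near threshold, which is where a field-mediated fine-tuning mechanism would plausibly act.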
McFadden and free will
The author claims that free will is the subjective experience of the influence of the cemi field on neurons. However, the influence of the cemi field is seen as entirely deterministic. The fluctuations in the field that are capable of modulating the firing of neurons would all be generated by changing patterns of electrical activity, while the neurons themselves induce the field. The author admits that there might be some element of random quantum fluctuations in the field, but this randomness is unsuitable for producing free will.
The author, in common with others in consciousness studies, tries to have it both ways at this point. The functioning of the brain is claimed to be entirely deterministic, yet something called ‘will’ is said to be active in driving our conscious actions. This appears to be a clear contradiction, since the whole idea of will is of an agent that initiates something of its own accord.
The cemi theory is trying to provide a plausible explanation of consciousness. The author could have said that consciousness is a fundamental property of electrical charge, or of individual charged particles, or of the photons that mediate it, thus making it a primitive or brute fact of the universe. But he does not do this. He says that our conscious will is our experience of the influence of the cemi field. This raises a host of questions and contradictions. If the cemi field isn’t conscious itself, who or what is experiencing its influence? This suggests a dualistic, non-physical entity that experiences the action of the field. Even if we are happy with this concept, it is not clear why this particular set of electromagnetic fields should produce this experience for this entity.
Like many before him, McFadden suddenly declares by fiat that one particular part of the otherwise ordinary material of the brain produces consciousness. Again, it is reasonable to say that evolution selected for a particular type of field that could fine-tune the neurons, but the additional production of a feeling of free will, which is false, has no demonstrable value.
Gustav Bernroider: Ion channel coherence
Ion channels are a crucial component in the axonal spiking/synaptic firing model of neuronal signaling and information processing. The axonal signal starts from the body of the neuron and proceeds down an extension called the axon, by means of a fluctuation in the difference in electrical potential across the membrane that forms the exterior of the axon. The membrane is formed by a double layer of lipids. The ion channels consist of protein molecules inserted through the lipid bi-layer. The axon fires when sodium (Na+) ions flow in through one set of ion channels, and subsequently returns to its resting state when potassium (K+) ions flow out through another set of ion channels. This process continues down the length of the axon until it reaches the synapse, which it triggers to fire, and thus communicates with other neurons. Ion channels are thus a key mechanism in the brain’s signaling and information processing (see Fig. 25).
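The classical description of this Na+/K+ spiking mechanism is the Hodgkin-Huxley model, which can be sketched in a few lines. The sketch below uses the standard squid-axon parameters and simple Euler integration; it is a minimal illustration of the conventional picture that Bernroider's quantum proposal sits on top of, not part of his model.

```python
import math

# Standard Hodgkin-Huxley parameters (squid giant axon; mV, ms, mS/cm^2)
C_M = 1.0
G_NA, G_K, G_L = 120.0, 36.0, 0.3
E_NA, E_K, E_L = 50.0, -77.0, -54.4

# Voltage-dependent gating rates for the K+ (n) and Na+ (m, h) channels
def alpha_n(v): return 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
def beta_n(v):  return 0.125 * math.exp(-(v + 65.0) / 80.0)
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))

def simulate(i_ext=10.0, t_max=50.0, dt=0.01):
    """Euler integration of the HH equations; returns the voltage trace (mV)."""
    v, n, m, h = -65.0, 0.317, 0.053, 0.596  # approximate resting state
    trace = []
    for _ in range(int(t_max / dt)):
        i_na = G_NA * m**3 * h * (v - E_NA)   # Na+ influx drives the spike
        i_k = G_K * n**4 * (v - E_K)          # K+ outflux restores rest
        i_l = G_L * (v - E_L)                  # passive leak
        v += dt * (i_ext - i_na - i_k - i_l) / C_M
        n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
        m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
        h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
        trace.append(v)
    return trace

trace = simulate()  # sustained 10 uA/cm^2 input produces repetitive spiking
```

With this level of drive the membrane potential swings from rest near -65 mV up past 0 mV on each spike, which is the macroscopic behavior that the K+ channel's filter and gate, discussed below, implement at the molecular level.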
Fig. 25 : The potassium channel structure that protects the K+-ion from decoherence (above) and the flow of quantum information through entangled series of channels.
Bernroider and Roy (2004, 2005) base their theory on recent studies of ion channels, made possible by advances in high-resolution atomic-level spectroscopy and accompanying molecular dynamics simulations. They draw particularly on the work of the MacKinnon group, and on studies of the potassium (K+) channel, especially of its closed state. The functioning of the K+ channel occurs in two stages: firstly, the selection of K+ ions in preference to any other species of ion, and secondly, voltage-gating that controls the flow of these favored K+ ions. The authors say that the traditional understanding of both functions has been altered by the recent studies. In its closed state, the channel is now seen to stabilise three K+ ions, two in the permeation filter of the ion channel and one in a water cavity to the intracellular side of this permeation path. In the case of the channel’s voltage gating, the electrical charges involved, which were previously thought to act independently of the surrounding proteins and lipids, are now seen to be coupled to these proteins and lipids, and are thus involved in the gating process.
Atomic-level spectroscopy has revealed the detailed structure of the K+ channel in its closed state. The filter region of the channel has a framework of five sets of four oxygen atoms, each of which is part of the carboxyl group of an amino-acid molecule in the surrounding protein. These sets are referred to as binding pockets; each trapped ion is coordinated by eight oxygen atoms in total, i.e. by two of the five pockets. Both ions in the channel oscillate between two configurations (Fig. 21).
Bernroider and Roy’s calculations lead them to claim that ion permeation can only be understood at the quantum level. Taking this as an initial assumption, they go on to ask whether the resulting model of the ion channel can be related to logic states. Their calculations suggest that the K+ ions and the carboxyl oxygen atoms of the binding pockets form two quantum-entangled sub-systems, and they equate this to a quantum computational mapping. The K+ ions that are destined to be expelled from the channel could, in the authors’ hypothesis, encode information about the state of the oxygen atoms in the axon membrane.
In a later paper, presented at the Quantum Mind conference, Bernroider (2007) proposed that different ion channels could be non-locally entangled, implying a quantum process extending over an area of the axon. Given the importance of the ion channels in brain functioning, this model would give quantum coherence and non-locality in the axon membrane an integral role in the brain’s signalling and information processing.
Further to this, Bernroider and Roy have pointed out a similarity between the structure of the K+ ion channel and some recent proposals for building quantum computers, in which ions are held in microscopic traps.
The authors argue that their model is well protected against decoherence, which has always been the most cogent criticism of quantum consciousness proposals. In particular, they claim that Tegmark’s calculations do not apply to their model. The authors agree that for ions moving freely in water, Tegmark’s decoherence time of about 10^-20 seconds would apply. However, they argue that the situation of the ions held in the permeation filter of the ion channel is markedly different, with a temperature about half the prevailing level for the brain, and the ions protected from decoherence by the binding pockets and the adjoining water cavity.
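The force of Tegmark's objection, and hence of the authors' need to escape it, is easiest to see as an order-of-magnitude comparison. The numbers below are the commonly quoted rough figures (Tegmark's ~10^-20 s estimate for ions in free solution, and ~1 ms as a characteristic neural timescale), used purely for illustration.

```python
# Orders-of-magnitude behind the decoherence objection.
tau_decoherence = 1e-20   # seconds: Tegmark's estimate for ions free in water
tau_neural = 1e-3         # seconds: characteristic neural firing timescale (~1 ms)

gap = tau_neural / tau_decoherence
# A quantum state of an ion in free solution would decohere roughly 1e17
# times faster than the neural processes it is supposed to influence.
# This is why Bernroider and Roy must argue that ions bound in the
# selectivity filter are in a physically different situation from ions
# diffusing freely in water.
```

Closing a seventeen-order-of-magnitude gap is a substantial claim, which is why the shielding role of the binding pockets and water cavity is central to their argument.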
Bernroider and Roy propose a quantum information system in the brain that is driven by the entangled ion states in the voltage-gated ion channels. These ion channels, situated in the neuron’s membrane, are a crucial component of the conventional neuroscience description of axon spiking leading to neural transmitter release at the synapses. The ion channels allow the flow of ions into and out of the cell, driving the fluctuation of electrical potential along the axon, which in turn provides the necessary signal to the synapse.
The authors concentrate their attention on the potassium (K+) channel and in particular the configuration of this channel when it is in the closed state. This channel is traditionally seen as having the function of resetting the membrane potential from a firing to a resting state. This is achieved by positively charged potassium (K+) ions flowing out of the neuron through the channel.
Recent progress in atomic-level spectroscopy of the membrane proteins that constitute the ion channels, and the accompanying molecular dynamics simulations, indicates that the organisation of the membrane proteins carries a logical coding potency, and also implies quantum entanglement within ion channels and possibly also between different ion channels. An increasing number of studies show that the lipids surrounding the channel proteins are associated with the probabilistic nature of the gating of the ion channels (Figs. 25 and 27).
Fig. 26 : Crystallographic X-ray determined structure of a potassium channel (a) and a schematic representation of it showing the polypeptide units (b)
The authors draw particularly on the work of MacKinnon and his group, notably his crystallographic X-ray work, see Fig. 26. The study shows that ions are coordinated by carboxyl-based oxygen atoms or by water molecules. An ion channel can be in either a closed or an open state, and in the closed state there are two ions confined in the permeation path. The authors regard this closed-gate arrangement as the essential feature of their analysis. The open gate presents very little resistance to the flow of potassium ions, but the closed gate is a stable ion-protein configuration.
The ion channel serves two functions: selecting K+ ions as the ones that will be given access through the membrane, and then voltage-gating the flow of the permitted K+ ions. In the authors’ view, recent studies also require a change in views of both the ion permeation and the voltage-gating process. A charge transfer carried by amino acids is involved in the gating process. In the traditional model the charges were completely independent, whereas in the new model there is coupling with the lipids that lie next to the channel proteins. This view, which came originally from MacKinnon, is now supported by other more recent studies. The authors think that the new gating models are more likely to support computational activity than the traditional models were.
As mentioned above, three potassium ions are involved in the ion channel’s closed configuration. Two of these are trapped in the permeation path of the protein when the channel gate is closed. The filter region of the ion channel is indicated by the recent studies to have five binding pockets, in the form of five sets of four carboxyl-related oxygen atoms. Each of the two trapped potassium ions is bound to eight of the oxygen atoms, i.e. each is bound to two of the five binding pockets. The authors’ calculations predict that the trapped ions will oscillate many times before the channel re-opens, and they also suggest an entangled state between the potassium ions and the binding oxygen atoms. This structure is seen as delicately balanced and sensitive to small fluctuations in the external field, a sensitivity that may be able to account for the observed variations in cortical responses.
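The entanglement claim can be illustrated with a deliberately simplified toy calculation. The sketch below treats the trapped ion's two oscillation configurations and the binding-pocket state as two qubits in a maximally entangled (Bell) state and computes the entanglement entropy of the ion subsystem; this is a generic textbook illustration of what "entangled sub-systems" means, not Bernroider and Roy's actual Hamiltonian or state.

```python
import numpy as np

# Toy model: qubit 1 = ion configuration, qubit 2 = binding-pocket state.
# A Bell state stands in for the ion-pocket entanglement; this is an
# illustrative assumption, not the authors' actual calculation.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

rho = np.outer(bell, bell.conj())                    # full 4x4 density matrix
# Partial trace over the pocket qubit: reshape to (ion, pocket, ion', pocket')
# and sum the diagonal over the pocket indices.
rho_ion = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

eigvals = np.linalg.eigvalsh(rho_ion)
eigvals = eigvals[eigvals > 1e-12]
entropy_bits = -np.sum(eigvals * np.log2(eigvals))   # 1 bit for a Bell state
```

A maximally mixed reduced state (entropy of one bit) is the signature of maximal entanglement: neither subsystem alone carries definite information, which is what would make the joint ion-pocket state usable as a quantum computational resource.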
Bernroider’s theory might be seen to represent even more of a challenge to conventional neuroscience than the other quantum consciousness theories, because it recruits as its basis the axon membrane and ion channels, which form a crucial part of the conventional neuroscience model, and then tries to remodel these core structures on a quantum-driven basis. It is hard to deny that if this theory were to become better substantiated, it would produce in neuroscience a revolution of the most profound kind.
The essential question was how selectivity could be maintained without compromising conductance. The interaction between ions, attracted water molecules and neighbouring oxygen atoms is considered to require a quantum description. This raises the question of whether quantum effects can propagate in the classical states of proteins. The access of ions to the pore gate is a relatively slow process not likely to require quantum processing.
However, the selectivity filter can change its conformation from permissive to non-permissive on a much shorter timescale. It appears that in the conditions of the selectivity filter the ion’s wave function can become highly delocalized over a significant part of the filter region.
Fig. 27 : Neurotransmission via a neuronal synapse with ion channels (a) and a neuronal network (b)
A New Theory of Quantum Consciousness?
Bernroider’s theory could potentially be a vehicle for transferring consciousness from the implicate into the explicate order of David Bohm. Bernroider differs from Penrose and Hameroff’s Orch OR model in his emphasis on the axons and membranes, as opposed to the dendrites and the cytoskeleton. However, there are similarities between the two models in that both propose quantum coherence, non-locality and subsequent wave function collapse linked to the brain’s macroscopic information processing activity. As it stands, Bernroider’s proposals only deal with information processing in the brain rather than consciousness as such. However, it appears possible that wave function collapse in the ion channels might link to Penrose’s proposed geometry of spacetime just as readily as wave function collapse in the cytoskeleton (Fig. 27).
Bernroider’s theory is distinct from all earlier quantum consciousness theories in locating its mechanism in structures that are central to mainstream theories of the brain’s information processing and production of consciousness. If future experimentation were to substantiate the Bernroider proposals, this would involve a revolution in neuroscience of the most profound kind.
Chris King: Cosmology, consciousness, chaos and fractal geometry
Chris King (1989, 2003, 2011, 2012, 2014) favors the approach of Chalmers over that of Dennett in looking at the problem of consciousness. He describes Dennett’s ‘multiple drafts’ concept as a description of how verbal reports of internal states are produced, but as lacking any explanation of how consciousness is achieved (Dennett, 2007). He reminds us of Chalmers’ comment that a theory of physics that does not explain consciousness is not a theory of everything. Furthermore, he argues that ultimately our knowledge of objective science is only available via our subjective conscious experience (Fig. 28).
Fig. 28 : Human consciousness as a template for a spectrum of common and transcendental experiences (from King 2012).
He cautions against the common tendency to try to discount quantum uncertainty as something that will be averaged out as a result of the very large number of quanta involved in any macroscopic state. In chaos theory, which may well have a role in brain processes, small fluctuations may be inflated into important differences, and quantum uncertainties may be among these small fluctuations. King goes on to look at the possible uses of quantum computation. He mentions that classical computing has a problem with the potentially unlimited time needed to check a range of possibilities.
King favors the transactional interpretation of EPR-type non-local quantum correlations. In the transactional interpretation of non-local events, when a measurement is made on an entangled particle, it sends a photon back in time to when it and the other entangled particle were emitted, and then forward in time to the second entangled particle. Thus the net time taken to send the quantum information about the measurement of the first particle is zero, and the effect of measurement on the second particle appears to be instantaneous, despite the spatial gap between them. The backward travel in time, which looks like an exotic feature, is allowed by the laws of physics as embodied in both the Maxwell and Schrödinger equations.
King (2014) thinks that the transactional interpretation of non-locality can be combined with quantum computing to give a spacetime-anticipating system, and that this may be basic to the way the brain works. He argues that the brain’s performance is not particularly impressive in terms of what classical computers are good at, but is impressive in terms of anticipating environmental and behavioral changes. Citing this article further: “The transactional interpretation visualizes an exchanged particle wave function as the interference of a retarded usual-time-direction offer wave and a time-reversed advanced confirmation wave. Time-symmetric interactions also occur in quantum field theories, where special relativity allows both advanced and retarded solutions because of the energy relation E = ±√(p² + m²). Virtual photons and electron-positron pairs deflect an electron in quantum electrodynamics. Since the photon is its own anti-particle, a negative energy photon traveling backwards in time is precisely a positive energy one traveling forwards. In quantum mechanics, not only are all probability paths traced in the wave function, but past and future are interconnected in a time-symmetric hand-shaking relationship, so that the final states of a wave-particle or entangled ensemble, on absorption, are boundary conditions for the interaction, just as the initial states that created them are. The transactional interpretation of quantum mechanics expresses this relationship neatly in terms of offer waves from the past emitter(s) and confirmation waves from the future absorbers, whose wave interference becomes the single or entangled particles passing between. When an entangled pair are created, each knows instantaneously the state of the other, and if one is found to be in a given state, e.g. of polarization or spin, the other is immediately in the complementary state, no matter how far away it is in space-time.”
This is the ‘spooky action at a distance’ which Einstein feared, because it violates local Einsteinian causality, in which particles cannot communicate faster than the speed of light.
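The energy relation King cites is the standard relativistic dispersion relation; written out in full (in natural units, c = 1), its two roots are:

```latex
% Relativistic energy-momentum relation (natural units, c = 1):
\begin{equation}
  E^{2} = p^{2} + m^{2}
  \qquad\Longrightarrow\qquad
  E = \pm\sqrt{p^{2} + m^{2}} .
\end{equation}
% The negative root, reinterpreted as a positive-energy solution running
% backwards in time (the Feynman-Stueckelberg prescription), corresponds
% to the "advanced" confirmation waves of the transactional interpretation.
```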
However, quantum entanglement cannot be used to make classical causal predictions, which would formally anticipate a future event, so the past-future handshaking lasts only as long as a particle or entangled ensemble persists in its wave function. Weak quantum measurement (WQM) is one way in which a form of quantum anticipation could arise. Weak quantum measurement (Aharonov et al., 2010) is a process in which a quantum wave function is not irreversibly collapsed by absorbing the particle; instead a small deformation is made in the wave function, whose effects become apparent later when the particle is eventually absorbed, e.g. on a photographic plate in a strong quantum measurement. Weak quantum measurement changes the wave function slightly mid-flight between emission and absorption, and hence before the particle meets the future absorber involved in eventual detection. A small change is induced in the wave function, e.g. by slightly altering its polarization along a given axis (Kocsis et al., 2011). This cannot be used to deduce the state of a given wave-particle at the time of measurement, because the wave function is only slightly perturbed, and is not collapsed or absorbed as in strong measurement, but one can build up a prediction statistically, over many repeated quanta, of the conditions at the point of weak measurement, once post-selection data is assembled after absorption.
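The statistical character of weak measurement can be sketched with a toy simulation: each trial yields the underlying (weak) value buried in large pointer noise, so a single readout is uninformative, while the post-selected ensemble mean converges on the value. The numbers below are illustrative assumptions, not data from any real weak-measurement experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy statistical picture of weak measurement: each trial returns the
# underlying weak value plus pointer noise much larger than the value
# itself, mimicking the tiny perturbation of the wave function per quantum.
weak_value = 0.3      # hypothetical quantity to be estimated
pointer_noise = 10.0  # per-trial readout noise, >> weak_value
n_trials = 200_000    # many repeats, as required for a statistical estimate

readouts = weak_value + pointer_noise * rng.standard_normal(n_trials)

single_shot = readouts[0]        # any one trial is dominated by noise
ensemble_mean = readouts.mean()  # the ensemble average converges to 0.3
```

This mirrors the point in the text: no single instance reveals anything (it looks like random experimental error), yet the accumulated post-selected statistics do carry the information.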
This suggests (Merali, 2010; Cho, 2011) that, in some sense, the future is determining the present, but in a way we can discover conclusively only by many repeats. Focus on any single instance and you are left with an effect with no apparent cause, which one has to put down to random experimental error. This has led some physicists to suggest that free will exists only in the freedom to choose not to make the post-selection(s) revealing the future.