One of the themes I write about in many of my books is the evolution of consciousness. Although the idea of an evolution of consciousness can be understood in a Darwinian way, that is not the way in which I understand it, nor is it the way in which the many philosophers, psychologists, and other thinkers whose work I draw on understand it.
Perhaps the simplest way to understand the difference between the two is that from a strictly Darwinian point of view, any evolution that consciousness may have undergone has been determined by strictly physical, material factors. That is to say, it has been the result of outer, external factors impinging on our inner world, that inner world itself being the result of previous external influence which, in some way that has yet to be explained, gave rise to what we experience as our “inside” – the mind and our consciousness of it and the world. How a physical world that can be measured, weighed, and perceived by the senses produced a non-physical, immaterial world that cannot – we can “weigh” thoughts only metaphorically – is as much a mystery now as when ancient philosophers first raised the question. This is true regardless of repeated announcements by strict materialist thinkers that they have nailed it – which is, of course, another metaphor.
In contrast to this, the view of the evolution of consciousness that I am partial to sees things rather differently. For it, the external world that is supposed to account for our inner one is itself a product or a result of factors in consciousness which precede it. That is to say, in this view consciousness or mind is prior to the external world that we perceive as something “outside” and “separate” from us. It is for this reason that the philosopher of language, Owen Barfield, whose work I draw on in several of my books, remarked that “interior is anterior,” meaning the “inner” precedes the “outer,” an idea I hope to flesh out further as we go along.
Although there are many variants to this idea, the generally accepted view of evolution is that it is a process governed by chance. However life first appeared – and contrary to much common opinion we still simply have no idea how that happened – once it did it was subject to influence from what we call “the environment,” coupled with random mutations, whose cause has never been satisfactorily explained, but which the chance theory of evolution must rely on to get the ball rolling. Mutations that are helpful and beneficial in the struggle for life – the well known battle of “the survival of the fittest” – are retained and aid life in its development, and the environment is a kind of funnel, guiding life in the direction it must travel in order to survive. Mutations that are unhelpful are weeded out – or, put more precisely, the organisms in which they occur are.
No doubt scientists trained in the accepted evolutionary theory will consider my statements here as examples of unhelpful mutations in our attempt to conceptualise evolution, and will welcome their weeding out. That may be so, but I think that in a general way I have got the basic idea.
So, what we have in this scenario is a protean but passive stuff – “life” – that is subject to influence from its environment, as well as from unexplained but necessary chance mutations occurring within itself. Good mutations help it carry on – “good” being defined as helpful to survival – while bad mutations don’t. That many critics of this view, beginning with Samuel Butler, who pointed out its flaws in a series of books in the late nineteenth century, have shown it to be untenable has not prevented it from acquiring a sacrosanct status in the popular mind, although, to be sure, specialists do recognise that it has some unhelpful mutations of its own that prevent it from being accepted at face value.
At bottom, what it amounts to is the idea that given enough time, enough helpful mutations will have occurred in order to produce us, and the enormous variety of other life forms with which we share this planet. The watch is made without the watchmaker, all respect to William Paley aside. This view is encapsulated in the physicist Sir Arthur Eddington’s remark that given enough time, an army of monkeys tapping on typewriters would produce all the books in the British Museum.
This, however, is untrue, as the monkeys would not be able to read what they had typed out. What they had finally produced would be as meaningless to them as what their first flurry of random typing had turned out. If this scenario is at all possible – and there is no reason to believe that it is – they would only have tapped all the letters in the proper sequence by accident, including chapter headings, footnotes, page numbers and all the rest. Without the mind behind the letters, what they had produced would not be books. And we would have to have the books properly written to compare to the monkeys’ results, in order to know what they had done. At best what these overactive monkeys would have produced was a copy. And for anyone to know this – and the monkeys surely wouldn’t – the original must be on hand.
I can’t recall the source, but one response to this idea is that it amounts to accepting that if enough junk metal parts were thrown together, the result would be a Cadillac. So far, this hasn’t happened.
The basic view from the other camp, the one I find myself in, is that rather than a passive, pliable “stuff” infinitely amenable to alteration by its environment and its own unstable character, life develops from the inside out. There is an active something within life that reaches out and takes advantage of the environment in order to further its own development. We can call this an “opportunistic” view of evolution, with an active desire to evolve – the anthropomorphism is unavoidable – finding the optimum available to it and going for it. This is a view that in different ways Jean-Baptiste Lamarck, Goethe, Henri Bergson, Bernard Shaw, Nietzsche, Hans Driesch and Darwin’s contemporary Alfred Wallace, with whom he shared credit for the “discovery” of evolution, among others, professed. It is generally vilified as “vitalism,” the idea that something else besides physical matter and chance is involved in the evolution of life, some inner drive to evolve, what Bergson called the élan vital and Shaw translated as the “life force.”
In both cases, the “force” involved was not conceptualised as a physical energy, like electricity, although, to be sure, not a few in the eighteenth and nineteenth centuries believed such a force existed; Mesmer’s “animal magnetism,” for example. It was something like an intention, a will, rather as I have the intention and will to write this essay. Of course for materialist thinkers, “intention” and “will” cannot be weighed or measured and so must not exist, or rather must really be “nothing but” a derivation from some material factor that does. This in the end amounts to the belief that a thought and the neurons in the brain associated with it are identical. I think it is safe to say they are not. If I cut open my head I – or someone else – may see my neurons, but they won’t see my thoughts.
This is also why we say someone has a “great mind” but not a “great brain.” Einstein, whom we all agree had a great mind, had a brain of less than average size.
So much for evolution. Now for consciousness.
A view about consciousness that is parallel to the “passive” theory of evolution is what we can call the “blank slate” picture of the mind. This is associated with the philosopher John Locke, but it was promoted a bit earlier by his predecessor René Descartes. For both, the mind is passive, in the same sense that for strict Darwinians life is passive. For both, the mind or consciousness reflects the outer world, as a mirror does our image. If nothing is in front of a mirror, it will reflect nothing. For Locke, the mind is a tabula rasa, a blank slate. He famously said that “nothing is in the mind that was not first in the senses.” This means that our consciousness is empty, blank, until it is written upon, or impressed upon, by stimuli coming from outside, and brought to it by the senses.
There are many reasons both came to this conclusion – Descartes actually did believe we came equipped with something he called “innate ideas,” but let’s not quibble – and in Locke’s case undermining the idea of the “divine right of kings,” by which monarchs claimed a holy dispensation for their rule, was part of the deal. It is Locke’s “blank slate” that is behind the “self-evident” truth in the United States’ Declaration of Independence that “all men are created equal.” Equally blank, that is.
There are many arguments against the blank slate view of consciousness, reaching back to Plato, none of which necessarily involve re-establishing the divine right of kings. One that obstetricians have observed is that babies still in the womb dream. If their senses have not yet brought stimuli in from the outer world to write upon their blank slates, what do they dream about? Granted that their eyes open while in the womb and that they are aware of sounds coming from outside, it is still the case that their waking and sleeping experience differs very little. The “world” they are aware of in either state is for all intents and purposes identical, so there is very little written on their blank slates for them to draw on when dreaming. What seems more likely is that they come equipped with some material already furnished.
It is also the case that for some time after birth – up to two years – babies exist in what the Jungian historian of consciousness Erich Neumann called an “ouroboric” state of oneness with the mother. Until this is broken, they have little awareness of a “world” other than themselves; they exist in what Barfield called a “participatory” state of consciousness. Rather than see themselves as we do, separate from a world that is “other,” they “participate” with it, with little sense of a difference between “inner” and “outer.” It seems that rather than arriving like an empty flat that they need to go to Ikea to fill up, babies arrive, as the poet Wordsworth said, “not in entire forgetfulness/And not in utter nakedness/But trailing clouds of glory...” which, sadly, we sooner or later lose sight of. As Wordsworth also points out, as we get older, “the glory and freshness of a dream” fades from view. But it was there at the beginning.
What we enter life equipped with are what Plato called the Forms and more recently the psychologist C. G. Jung called archetypes. Other thinkers have offered other suggestions. Edmund Husserl, the father of phenomenology, out of which existentialism emerged, argued persuasively, to my mind at least, that rather than reflect the world, consciousness reaches out and grabs it. It is more of a hand than a mirror. Plato would say we grab it through the Forms, which are a kind of stencil through which we fashion a picture of the world. Jung would say we do it via the archetypes, which are a kind of psychic blueprint that allows us to organise our experience. Immanuel Kant, on whom Jung often relies, talked about “categories,” which are rather like a pair of glasses we wear in order to have any experience at all.
There are many differences between these different ideas and many hairs have been split in spelling these out. But the main insight is that something already in the mind reaches out and fashions our picture of the world, and that without this the senses would have no world to perceive. Hence, “interior is anterior,” our inside precedes our outside.
This is not to say, as some may already be grumbling about, that “it’s all in the mind,” or that “we create reality” out of whole cloth. We don’t create “reality” per se. It is “really” there. But we do create our picture of reality. To use another metaphor, think of a television or radio tuner. We don’t create the programs on our TV sets or radios. But we can get good reception or bad. And we can extend this metaphor a bit to cover the contention that consciousness exists solely in our individual heads. The programs we watch on television don’t originate in our TV sets, but in a studio somewhere else. Our TVs pick up the broadcast which comes from somewhere outside. If I broke open my TV I wouldn’t find the actors in the show I was watching there, just as if I broke open my head I wouldn’t find my thoughts.
What all this suggests is that consciousness is an activity. It is something we do, rather than something we have. I can be more or less conscious, not in the sense that I have more or less of the stuff in my head, but in the sense that it is – or I am – more or less active. Not in the sense of physically active, but in the sense that I put more “into” being conscious, I put more of “myself” into it. In other words, I make more of an effort at it. Husserl spoke of the activity of consciousness as “intentionality,” an idea that the writer Colin Wilson drew on in his work in developing what he called a “new existentialism,” one that avoided the pessimistic dead-ends of old school existentialists such as Sartre and Heidegger. Intentionality is the activity of reaching out and grabbing the world, and for Wilson and Husserl, the more we reach out and grab it, and the stronger our grasp, the more “meaning” we perceive in it.
What led Sartre, and to a perhaps lesser degree Heidegger – he does have his mystical moments – to see the world as meaningless is that they lost sight of this and accepted the view of consciousness as a passive reflection. Hence their philosophies are ultimately founded on the idea that existence is meaningless. Wilson found that if we increase our intentionality by uncovering the sources of our “unconscious intentions,” we find that the world is much more meaningful than our more passive state can perceive. As he says, there is a “will to perceive” as well as perceptions. If we can increase our will to perceive, we increase our perception. Wilson devised methods of doing this, but the starting point is recognising that, as Husserl says, “perception is intentional.” We actively, albeit unconsciously, direct our attention at the world, rather than passively reflect it.
But it is not only existentialists who see the world as meaningless. Scientists, in whom we place more trust these days than we do in philosophers, see it this way too. At least the majority of the vocal ones do; there are some who disagree, but they are generally seen as suspect by their less optimistic peers, and because of this often keep a low profile. One example should suffice. In his book about the Big Bang, The First Three Minutes, the eminent physicist Steven Weinberg concludes his account of the creation of the universe on this dour note: “The more the universe seems comprehensible, the more it also seems pointless.”
We may purse our lips at a Sartre or Heidegger or their many epigones informing us that existence is meaningless. But when a scientist tells us this, we tend to listen.
This view of the meaninglessness of existence, however, is really a rather recent one. We can say that it got its start in the early seventeenth century with the rise of science and became pretty well established by the end of the nineteenth century, with its predominance more or less unchallenged ever since. Since then we have been, as the novelist Walker Percy put it, “lost in the cosmos.” The more we have learned about the universe, the more we have come to see that there seems no reason for it, nor for our appearance in it.
Yet people of earlier times did not feel this way. We may say that this is because people of an earlier time believed in something in which we do not: religion. That may be. But if so, why did they believe in it? Did they simply have some bad ideas that we have discarded? Or was something deeper at work?
Owen Barfield, whom I mentioned earlier, believed there was, and, in a general sense, his idea is shared and echoed by many other thinkers, some of whom I have written about extensively in my book A Secret History of Consciousness. What Barfield believed, and what we can find in other philosophers of consciousness such as Rudolf Steiner, Jean Gebser, and others, is that in earlier ages, human beings did not experience the world in the way that we do, as something radically separate from ourselves, as a strictly other “outside,” opposed to our subjective “inside.” As mentioned earlier, this opposition, between inside and outside, is something we do not experience for the first two years of our lives. Not to draw too strong an analogy, but through his study of the history of language, Barfield came to the conclusion that people of an earlier time lived in a much more “participatory” relationship to the world than we do, in a way akin to how we experience it in our first years. They were “in” it more than we are, part of it in a way that we are not, or at least do not experience ourselves to be. They felt “at home” in the cosmos in a very real sense. As Barfield suggests, they felt the world to be a kind of tapestry, into which they were woven, along with everything else, where we feel “in” the cosmos in the sense of it being a kind of “container” we find ourselves in, having no idea how we got here. (As Heidegger says, we are “thrown” into the world.) They “belonged” to the world in a way that we feel we do not.
We can say that our sense of “belonging” to the world began to loosen with the rise of reason and rationality, which we can place circa the sixth or fifth century BC, in the period the philosopher Karl Jaspers called the “Axial Age.” This was a time when a global shift in human consciousness took place, one that produced the spiritual, moral, and ethical beliefs that continue to guide mankind, although, to be sure, their power to persuade has lessened in recent times. But while religious inspirations arose in China, India, Persia, and the Holy Land, in Greece something different took place. There began the rise of what Husserl called “theoretical man,” the man of logical enquiry. Where Confucius, Lao-Tse, Zoroaster, and the Hebrew prophets were concerned with the question of how we should live – and the jury is still out on that one – in Greece, pre-Socratic philosophers like Thales, Anaximander, Heraclitus and others wanted to know what was “real,” what the universe was “made of.” These are the kinds of questions that eventually gave birth to what we know as science.
These are, of course, important questions, but in order to pursue them thoroughly, the older mythical understanding of the world had to be jettisoned. A mythical narrative about how the world came into existence does not answer the question of what basic “stuff” it is made of. The poetic view which told a story had to give way to the prosaic one which arrived at fact. Thus began what we have come to call the “disenchantment of the world,” the end result of which is the sense of the world’s meaninglessness and our sense of being lost in it.
This would be a sad ending, and one that many people, scientists and existentialists among them, feel compelled to accept. But Barfield, Wilson, and the other people I write about disagree. They accept that for something like the kind of independent thought, freedom, and rational enquiry that we cherish to arise, it was necessary to cut ourselves off from the sense of being “at home” in the universe in the old way, the ouroboric, participatory consciousness enjoyed by our ancestors. But this is only part of a process, the process of, as Barfield says, achieving self-consciousness. Our current consciousness is a product of time, of history, of evolution, not a final state of consciousness per se.
Just as we shifted from an earlier unconscious participation with the cosmos to our current “detached” relationship to it, Barfield and the other philosophers of consciousness I write about believe there is good reason to recognise that we are in the process of shifting to a further conscious state of participation. This is a consciousness that retains its independence and freedom of thought and will, but is not hamstrung by the sense of meaninglessness that informs many forms of contemporary philosophy – deconstructionism and postmodernism, for example – or the kind of gloomy pronouncements that physicists such as Steven Weinberg and others have made about our relation to reality. Evidence for this comes in the variety of “mystical” or “altered” states of consciousness that these thinkers examine and which I look at in my books.
I cannot go into detail about these here – space does not allow it. But in general the content of these experiences is a sense of meaning that is overpowering at times, so much so that we can begin to understand the brain not as a producer of consciousness – as many neuroscientists do – but as a kind of “reducing valve,” dampening down the meaning to a level we can appreciate without being overwhelmed by it, an idea that Bergson proposed in the late nineteenth century (and which again we can upgrade to the analogy of a TV and the programs it receives: we need to “tune in” to one, not have them all at once). But to follow this idea, I must engage in the kind of self-promotion that most writers of integrity find odious but which, at times, they must nevertheless pursue. That is to say, suggest that the reader of this essay seek out and read my books.
(On 15 September, at 1:00PM ET, I'll be speaking about the 'lost knowledge of the imagination', at the Future of Consciousness seminar, held from 25 August to 29 September. Others on the bill include Dean Radin, Paul Levy, Bernardo Kastrup, Richard Smoley and Jeffrey Kripal. Please use the very encouraging code GARYLACHMANFUTURE when plumping for tickets. This article, written a few years ago, gives an idea of one future consciousness may have in store.)