Rational Religion


Contact the author:
tuppennyprofet - at - aol - dot - com
(translate into a real email address)
THE ORIGINS OF A PHILOSOPHY


It is probably incumbent upon anyone with pretensions toward being a philosopher that she or he try to collect all of her or his random conclusions (never mind how transitory) into some sort of scannable whole.

This sort of a posteriori organization may be easier for persons under the age of 80, but I feel I should at least make the attempt.

I began setting down some thoughts, principally about religion, since it seems to be an almost universal concern among human beings who bother to think at all, and entitled the result (somewhat facetiously) Rational Religion.  I considered the title oxymoronic, but the deeper I got into exploring what I believed (feeling myself to be congenitally, perhaps even genetically, a-religious) the more I realized that “Rational Religion” is more of a paradox than an oxymoron.

An oxymoron is simply self-contradictory.  It may be considered a sort of paradox, but it is a trivial one, like a simple grammatical error.

A true paradox involves a more profound level of confusion, such as the concept of a photon existing as both a particle and a wave, and being measurable as both.  (There seems to be disagreement about this; some experimenters maintain that the process of measurement makes an “either/or” defining choice.)

The quantum mechanical world, as we presently understand it, is paradoxical.  It is demonstrable that certain things evidently are, and are not, so.

The key phrase in the preceding paragraph is “as we presently understand it.”  There are five individual English words in that phrase, and four of them demand careful definition and a good deal of granted intellectual latitude before we can even begin to discuss what the phrase means.

“It” is a pronoun referring back to “the quantum mechanical world.”  Simply defining that term fills several hundred thick and densely worded books.  Once we agree, for the sake of discussion, that “it” refers to a philosophical concept of modern physical theory which may or may not have any objective basis in physical reality, we can move back to parse the word “understand.”

This implies human thought processes, which – despite centuries of effort by thousands of the brightest people the human race has produced – remain beyond our collective ability to define or detect.  We have evolved elaborate techniques and complex machinery to try to capture human consciousness and the phenomenon of thought, and the best we have managed to do is to detect some electro-chemical “events,” and occasionally some apparently coordinated changes in blood flow in the brain, which seem to indicate geographical locations in the central nervous system where thoughts might be occurring.  Nobody has ever used the machinery to map out a thought – something the individual conscious human central nervous system does automatically, often with the attendant capability of expressing it in words (with more or less individual facility).

“Presently” is of course an attempt to define a time limit (actually a range of contemporary occurrences to which an event may be connected within the flow of history.)  It tries to imply that the subject event may in some important ways differ from events of the past and of the future.  The further implication is that we may have learned things we did not know “back then” and that we may learn further new things as we live longer and explore the subject more.

“We” is a particularly troublesome pronoun, the use of which involves one of the most profoundly unwarranted leaps of faith the human consciousness is capable of.  It is a leap which all of us make constantly in order to communicate with each other on all levels from the most basic to the most complex.  “We” implies that two (or more) individual human central nervous systems can join in a plural effort in order to achieve a common goal;  and it simply cannot be done!

How can I make such a statement when we have such myriad examples of people acting in concert to achieve all sorts of things; from building pyramids which endure thousands of years to fighting  wars which take millions of lives and change the entire course of history?

Or winning an NCAA basketball championship, in progress as I write?

These are all physical actions which demand that the participants subordinate their intellectual individuality to some common cause which they perceive to be worth pursuing.  Each of them has had to make an individual decision to follow a course of action (which may turn out to have been a terrible idea) because it promised to fulfill some desire, or may simply have seemed like a good idea at the time.

The notion that all of the cooperating participants were thinking the same thing is an illusion, produced by observing their concerted  physical effort.  

Despite the conjectures entertained in some science fiction stories, there is as yet no way to hard-wire two human intelligences into a true “we.”  Given the best modern research into cognitive science, it does not appear that this sort of neural connection will ever be possible.

Human central nervous systems are not analogous to computer hard drives and their “thoughts” are not reducible to a string of 1’s and 0’s;  or any string of symbols we can yet imagine.

I will make a bold prediction: that even if some unimaginably inventive technology ever manages to plug one human brain directly into another, the result will be less an amalgamation – a melding and sharing of consciousnesses – than a simple domination of one entity by the other.  I would expect it to be a gross physical process, like a dominant individual forcing a subservient one to behave as he is told.

This is because I do not see the prospect of an individual human intelligence being created by anything but a profoundly individual set of initial conditions and subsequent experiences.  Consider the example of identical twins.  They may seem very much alike; they may apparently have inexplicable intellectual and “psychic” connections; this is to be expected, because their “initial conditions” were demonstrably (or at least conceivably) very much the same.  But only a little bit of observation reveals that they are very different human beings in at least as many ways as they are alike.  This is because their subsequent experiences, including the firing of individual neurons in their separate central nervous systems,  cannot possibly have been as identical as their biological origin.

I have devoted several paragraphs to an in-depth examination of  a statement about Quantum Mechanics, a subject which I actually do not “understand” very well at all, in order to make a larger philosophical point.

Individual human intelligence is the most unitary entity that such an intelligence is capable of conceiving of.  It is, in fact, the ONLY entity that an individual human intelligence is capable of conceiving.  The first human being to notice this and to state it in understandable terms was probably René Descartes, whose “Cogito, ergo sum” may be the “truest” thing any human has ever said or written.  It may also be the only true thing ever said or written.

Of course, Descartes, being a philosopher, felt bound to expand his profound realization into an entire canon,  including a pretty tenuous attempt to “prove” the existence of god.

Since one has to “start” somewhere, I am fairly comfortable beginning my philosophy with my initially facetious and somewhat accidental  concept of Rational Religion.

The basis is that I can “know” nothing;  I can, at best, “believe.”

What I choose to believe is what seems rational to me.  Here is the first, or at least an early one, of several humongous stumbling blocks – or chasms – which must be “leaped” by “faith.”  Since my mind is so individually mine, I have no way to “understand” any new concept except by analogy.  That is, I must link it to something I already “know,” or think I know.

The problem is two-fold.  Whatever I am contemplating, at whatever depth and to whatever length, the chances are that I do not have complete information about it.  And I am trying to grasp it by comparing it to another set of information about which I am similarly uninformed, and which in any case can be no more than a set of broadly symbolic reference points which are bound to diverge ever more widely from the case at hand the more minutely the analogy is pursued.

This is the “reductio ad absurdum” which has been the black hole of philosophy since antiquity.

The whole process becomes a cartoon (Note: another analogy); “understandable,” perhaps, but like a line drawing in comparison to an oil painting, only by leaving a great deal out.  And the oil painting, of course, would only be a more elaborate analogy.

Because I realize I have incomplete information, and only the overwhelmingly subjective processes of analogy to deal with it, in order to remain rational I must keep all my conclusions temporary and subject to revision.  Therefore, I may allow none of my beliefs to harden into convictions.

Because I (try to) avoid convictions I am afraid I pay a price in the world of human endeavor, where there is a premium upon dynamic action and “assertiveness.”  Part of being a “successful” human being seems to be a willingness to charge on into the unknown on the strength of conviction.  Here, history tells us that convictions are certainly of unequal quality.  Most charges end in varying degrees of disaster.

Unfortunately, because I must avoid convictions of my own, I probably do not have all that much respect for the convictions of others.  Although it is probably incumbent upon me to respect the fact that they have convictions, simply because they are also human intelligences like myself, and I should not deny them the right to hold their beliefs in any manner they choose, I can still observe and identify the probability that they have – in specific regions of inquiry – stopped thinking.

I feel that I do not have to regard this as desirable, especially if the conclusions they seem to have reached before hardening their beliefs into convictions do not seem logical or even  rational to me.

What I do not feel I have any right to do is gratuitously point out the errors of their convictions, or to make any attempt to “convert” them to my way of thinking; even if this were possible.

Thus, for me, the First Commandment in any possible Decalogue is, “Mind your own business.”

If I am so lacking in conviction, how do I get anything done at all;   and why do I bother to write the first sentence of a personal philosophy?

Because I have normal human megalomaniacal tendencies, the product of several million years of cognitive evolution, which tend to make me feel that, somehow, in spite of all evidence to the contrary, my individual human intelligence MATTERS.

Never mind that on a Universal scale this is most probably an illusion (traceable to some atavistic impulse to remain alive long enough to reproduce); it is nevertheless a component of my consciousness.  I have to deal with it, and you may choose to do so also if you think you might find it instructive.

---------------------------------------------
Socrates, in the ancient world, evidently caught on to the probability that human intelligence can never have enough information about anything to decide that it “knows” the subject with absolute finality. (“Evidently,” because we principally have only Plato’s word that Socrates even “existed.”)

Therefore, Socrates’ technique was to ask a series of probing questions in order to make his students (and himself) think progressively more deeply and in more detail about Whatever, in order to eliminate as much as possible of what the subject wasn’t.

Also evidently, however, neither Socrates nor (almost certainly) Plato managed to ask the crucial question that Descartes came up with a couple of millennia later: whether such a thing as human intelligence existed at all.

For each of us, Descartes’ deduction is about as close to “proof” as we can expect to get; for all of us, collectively, “intelligence” is pretty demonstrably non-existent.

“I think; therefore I am.” Okay, maybe.

“We think; therefore we are…“ ????

So demonstrably different from one another that we can’t agree on the meaning of “Good morning”????

Descartes’ basic statement is probably “truer” if we turn it inside out:  I am; therefore I think.

This is certainly a good deal more “provable.”  If I cut myself I bleed.  I don’t just “seem” to bleed; I lose vital fluid.  If I cut myself severely enough and allow myself to bleed long enough my ability to think will terminate.  End of experiment;  end of argument.

For Socrates and Plato this would not have been acceptable.  They “understood” that the body is just meat.  Because it was mortal, and transitory, it was nothing but an illusion; a shadow in a cave.  There had to be something “outside” and “above” which was somehow more “real.”

There was obviously, after all, a human spirit; a soul which could search beyond the cave.

To find…what?

At which point in Western philosophy Darwin’s viewpoint takes over, despite the fact that he wasn’t to be born for some 2200 years.

The definition of life, itself, is not an either/or proposition.  Despite several hundred years of speculation and a couple of hundred of pretty industrious experimentation, we have nothing but a chaotic morass of ideas about how it might have gotten started, and even about who or what possesses it.

In the Western calendar year of 2010 there is still no agreement about the status of viruses.
They possess nucleic acid (RNA or DNA), which all living things seem to share; and they most certainly reproduce, because they seem to multiply exponentially to sicken and kill us.  But they don’t appear to be able to reproduce “on their own.”  They have to take over the machinery of somebody else’s individual cells and force those cells to reproduce the virus.

(For me, personally, the argument is silly.  Whether or not the “Original” RNA self-assembled in some pre-biotic accident a few billion years ago, then spontaneously differentiated and picked up chemical and physical hitchhikers from the environment until it somehow managed to become a self-replicating entity and eventually populate this particular planet, the fact remains that NOW we can identify specific strains of many different viruses by decoding their genetic sequences; many of them even by simply looking at them in an electron microscope.  We can watch them reproduce.  Whether they are “alive” is an obfuscating quibble.)

They very probably are not alive in the sense that human beings seem to be.  That is, they demonstrably “are” but they do not seem to be electrochemically complex enough to “think.”

But neither are most of the creatures we usually agree are definitely “living.”

There is currently a lot of scientific disagreement about the cognitive skills of creatures other than ourselves.  “Lower” primates, a few other mammals, some birds, and even octopi seem able to use external objects found in their environments as “tools” to obtain food.  This was for at least 200 years regarded as a defining element of solely human intelligence; since it now appears not to be so, the definition of intelligence, itself, comes into question.

It appears to be a spectrum.  A super-mollusc (the octopus) can “figure out,” in a laboratory experiment, how to unscrew the lid of a glass jar in order to eat the crab inside, or, in the wild, carry around a coconut shell to hide inside; but can it “think” about doing these things?

A chimpanzee can crack nuts with a carefully chosen rock upon an equally selected larger rock anvil, and evidently teach her children and clan-mates how to do it, but how good is she at recognizing an image in a mirror as herself?

These are all Socratic questions we humans ask ourselves in our continuing attempt to define our own intelligence; our awareness; our cognition; our spirit; our soul.

Whatever name we give it, and however much authority we invest in it, it appears to be the very center and basis of ourself, just as Descartes specified.

Probably we were aware of it for  many thousands of years before it occurred to us to question where it came from;  and no wonder we  then decided it must be something more than the bleeding meat which housed it.

I ask you, now, to consider the possibility that it is not; that in a sense the meat evolved first as part of a purely Darwinian survival of not necessarily always the “fittest” but a combination of the more adaptable and more fortunate, and that intelligence became more complex as part of that adaptability.  The smarter our ancestors were, the oftener they ate and the oftener they escaped being eaten.

Being smarter evidently involves having more cells in one’s central nervous system; in having it differentiated into various sections and discrete areas in order to accomplish different tasks; in maintaining complex connections between the areas so that the organism remains capable of co-ordinated activity within the environment.

At some point, possibly as a result of its complexity or the sheer number of communicating cells, the subject central nervous system slips over into awareness of itself.

This probably occurs as what we have come to recognize as a “phase shift,” wherein a slowly evolving spectrum of sequential conditions reaches a “tipping point” and suddenly becomes “something else.”  An earthquake is a phase shift.

Enough earthquakes, and you get the Atlantic Ocean.

Enough phase shifts in mammalian central nervous systems, and you get the idea of the Immortal Soul.

Never mind that a more fundamental cause of the Atlantic Ocean is a far more steady and non-eventful thermal convection current in the mantle of the Earth;

Or that once human cognition has evolved to a certain point it must necessarily begin to ask why it exists; even if it exists; and that the ancestral impulse of its enhousing organism (the meat)  to survive and reproduce makes it necessary to find a reason for it to continue to exist.

I have speculated elsewhere that a human Immortal Soul is most probably a megalomaniacal fiction.

Our megalomania is unavoidable.  It is the central nervous system’s translation of the basic imperative of life:  SURVIVE.

We think we are, individually and collectively, much more important than the Universal evidence indicates because if our ancestors hadn’t been convinced  of this we simply wouldn’t be here.  If we didn’t believe it now, our line would not continue.  

It is of no Universal consequence that this can matter only to us.  It does.

As long as there are any of us left to feel this way, that is as close to a fact as all of human awareness is capable of conceiving.

Socrates’ and Plato’s megalomania led them to believe that their cognition must have an existence beyond and outside of the meat.  Because they could not conceive of its simply ending, they could not feature how it might have begun; so perhaps it existed “always,” if not as an individual entity, then as part of something universal and eternal.  Much of what we “Westerners” define as “Eastern” thought takes the same path, leading to intermittent paroxysms of “enlightenment” as succeeding generations of “Westerners” rediscover “Eastern” religions.

Our megalomania has been called a lot of different things down through the centuries.  “Soul” is an ancient term with a lot of history.  Friedrich Nietzsche came up with a more complicated idea, the “Will to Power,” and the notion that even people who have no “real” or temporal power (which is most of us) still possess the “Will” – which leads to the invention of religion as a sort of booby prize in the competition for all the good stuff.  Okay, we can’t have the mansion in “this life,” but it will be ours in the “next one” if we behave ourselves and perform the correct rituals from time to time.

Whatever we call it, it’s still the same thing.  It’s not that we exist because we think we do; we think because we are, for a short span of years on a small planet in an inconceivably vast Universe, engaged in a closed process which we define as “living.”

It is a “closed” process because it begins at a definable point which we call “conception” (itself part of a larger and continuing process, also “closed” but of much longer duration, called “life”) and ends when the process can no longer sustain itself.

I am afraid that cognition, in all of its guises, cannot be demonstrated to be anything but part of a closed process, definable only as an individual human life.

Thus limited, it is still pretty damned spectacular!  I mean, just look at it!  What a hell of a ride!


FREE WILL

Among the silliest of our many silly debates is the question of Free Will.

As a (Christian) religious question, Free Will is a typically convoluted attempt to resolve a paradox:  How can an all-powerful and “good” god allow evil to exist in the world?

Like most religious paradoxes, this is not a logical paradox at all, but a simple non sequitur.

After you allow the proposition that god “exists,” and gives a damn, you have to define “good” and “evil” and treat both as independent entities.  None of this follows particularly well, even  if you postulate angels and demons.

This, however, works just fine for the purposes of the religion, which can set the better minds among its faithful to endless contemplation of the “mystery” and thus distract them from possibly heretical questions they might otherwise stray upon.

The dogma seems to be that, yes, god is all-powerful and good, but he gave human beings Free Will, which allows them to make bad choices.

And this was necessary, because…?

Well, of course!  Without the bad choices, we would have no need for Salvation.

If this leaves you with no further questions, peace be upon thee.

Free will as a question of scientific logic may be a bit harder to dispose of, but it is no less an obfuscation.

Here the questions seem to deal with the “nature” of time and the history of the Universe.

A graspably simple statement of the question might read, “Do human beings have the capacity to take actions and entertain thoughts which affect their lives and/or subsequent events in their environment?”

The answer, as is the case with many good questions, is another question; “At what scale?”

And we come, again, to the proposition of the existence of “closed systems.”  Within the closed system of Life on the Planet Earth and in Nearby Space it is manifestly obvious that we have Free Will.  We can do any cockamamie thing our imaginations and technical expertise can devise, and live with the consequences, if the consequences permit.  Outside that closed system the question is moot; meaningless.
-----------------------------------------

ON THE ORIGIN OF “PROCESSES”

I expect any philosophy major with more than a couple of semesters of the subject tucked into her or his central nervous system will identify my definition of what most people call “things” or “events” as “processes” as derived from the writings of a gentleman named Alfred North Whitehead.

I am sorry, but this is so far from the case as to be a bit of an embarrassment to me.  Less than a month shy of my 81st birthday, I have newly discovered that Dr. Whitehead evidently originated a whole “northwest 40” of philosophy which is classically labeled “Process Philosophy.”

That I have remained unaware of this so late in life is attributable to my (largely purposeful) lack of formal education in the “discipline” of philosophy, and my inescapable prejudice against “wasting my time” delving into the intricate depths of religious apology.

Although my latest, if cursory, investigation of Dr. Whitehead’s philosophy reveals that he was – at least in the beginning of his career – much more of a mathematical and “physical” thinker than a religious one, everything I had heard about him suggested that he was a mainstay of “modern” Christian intellectualism.

I therefore never bothered, until very recently, to try to understand what he had to say.

As I do so, I am both elated and somewhat crestfallen to discover that Dr. Whitehead’s Process Philosophy is quite similar to my own mental explorations of the nature of “reality.”  

While I cannot absolutely state that my use of the term “process” owes nothing at all to Dr. Whitehead’s long-ago-published writings (knowledge which is in the culture and widely available may intrude itself into one’s consciousness without any memory of how and when it might have happened), I think – given my surprise and embarrassment upon learning that “process” is far from a new concept in philosophy – that it is more likely that Dr. Whitehead and I independently derived the concept from the same general knowledge available to us within the culture; he, admittedly, deriving it the better part of a century before it occurred to me.  The use of the exact same English word to characterize the concept is not even coincidental.  It is really the best word available to describe an ongoing series of events which seem to be more or less related by cause and effect and apparently self-sustaining within a closed system.

Elsewhere I have compared an individual human life to the burning of a candle.  The first is incomparably more complex, but both are self-sustained physical processes involving, as a fundamental element, chemical oxidation.  As  long as the individual is not “killed” or the candle “blown out” the process, once begun, will probably continue until all the available oxidizable fuel is consumed. (Or, in the case of the living animal, until too many of the vital subsystems have become depleted and unable to function normally.)

I will now spend some more time investigating Alfred North Whitehead’s philosophy, especially since the current party line in the discipline is not to regard his serious and more secular efforts as all that significant to modern thought.

His religious philosophy I will probably continue to find incomprehensible.  I am afraid I regard “Christian philosophy” as a bit of an oxymoron; analogous, here in the United States in the year 2010, to the term “Fox News journalism.”

No matter how deep and clever the intellectual processes involved, “philosophy” proceeding from an initial ideological viewpoint will always be suspect as nothing more respectable than an attempt to present the subject viewpoint in a favorable light (even if the primary target of the propaganda is the philosopher’s individual self).

---------------------------

THE USES OF PHILOSOPHY

Of all the “serious” human endeavors, there is none of much less use than Philosophy.

Religion may be paradoxical in its attempt to provide explanations for the inexplicable  and comfort and reassurance for a short-lived animal whose quirky “awareness” includes the certain knowledge that it is in the process of dying;  but Religion has been fitted by the inexorable forces of evolution to provide just those explanations and just that comfort; so it is of incalculable value to the species.  (It is of highly variable use to individual human beings, depending not only upon their personal awarenesses, but too often upon the attitudes and rationalism of other more or less religious persons around them who find it difficult to Mind Their Own Business.)

Science is an equally paradoxical endeavor: an attempt to explore the Universe using nothing but that same flawed mammalian awareness, but minus its reflexive ability to ignore troublesome facts and a good deal of its genius for leaping logical chasms.  Since it takes conscious and constant effort to avoid these genetically mandated techniques, science is a lot more difficult than religion, and practiced by several orders of magnitude fewer people.  Its value, however, appears on the face of things to be considerable; but this may be an illusion.  There is a deep, atavistic reason for the pop-culture convention of the “mad scientist.”  There is always the danger that science will learn something that we should not know (or already has!); it is possible that it has negative survival value.

Art, and its more accessible and less rigorous subdivision, Entertainment, are both extremely useful and apparently less dangerous than either Science or Religion.  Aesthetics and simply “having fun” have produced no nuclear explosions, Inquisitions or Jihads that we know of, while contributing several thousands of years of emotional uplift and merciful distraction from the grimmer realities.  Their chief drawback appears to be encouragement for certain individuals to become distracted from the processes of “making a living” and to “waste a lot of time,” but the emotional rewards seem in most cases to at least balance the distractions and even the occasional hunger pang.

Business and Commerce are unavoidable extensions of the elementary need for sustenance and shelter which serves to maintain the gene pool.  While a small percentage of the species seems to actively enjoy economic pursuits, and to be good at them, the vast majority of us only join in because we cannot avoid it.  The various “isms” and their relative efficacy, so debated by politicians who do not wish to go on record as supporting or opposing any actual policy of substance, are all as poorly understood by the majority of the populace as religion or science.

The several extant iterations of Socialism (communism, Fabianism, the United States Social Security and Medicare programs, etc.) seem to “work” best when administered by people with strong ethical convictions and notions of fairness, dispassionate dispensation of justice and, ultimately, triage.  (There will never be enough money to supply all the deserving citizenry with everything they desire, so hard decisions have to be made by somebody.)

All this takes a strong governmental structure run by people who are both smart and honest (always an iffy combination) as well as consistently conscientious about discharging their assigned duties.  In the unlikely event all these stringent criteria are met, there remains the historical possibility that the effect on the gene pool of so many of its members being relieved, for several generations, of the evolutionary pressure which brought us to the point we occupy might prove degenerative.

Just as we have, over the millennia of wearing clothing and occupying easily heated dwellings, lost a lot of our ability to survive as naked apes, our relative security from the danger of dying of easily-forestalled diseases, or simply starving to death in impecunious old age, may contribute to our becoming a less-fit species.

A lot of people seem to worry about this; including a lot of people who actually don’t believe  in the concept of Darwinian evolution at all;  leading to the question of whether they are actually being as hard-headed and pragmatic as they think they are, or simply mean-spirited and selfish about principles of social justice as opposed to public tax policies.  (Nobody likes to pay taxes; nevertheless, the logic remains that individuals who reap the most benefit from the structures of society should be asked to pay the largest share of supporting them.)

Capitalism, in modern American society, suffers from the most egregious misunderstandings of its purpose and efficacy.  For many people it has become a Belief System almost as tyrannical as the Medieval Catholic Church, with the unwashed multitude forced to suffer its depredations while simultaneously being constrained to support it; and remaining willing to do so because they have been led to believe in its ideology.  (I am not gratuitously baiting the Church, here; only using it as an example of the power of faith over reason.  One could draw a similar analogy using the horrendous Indian caste system, but the Crusades-era Church is a more immediate example for most Americans.)

Capitalism, it cannot be denied, is a marvelous human invention.   It combines some of the most reflexive (and occasionally un-admirable) human traits and impulses;  such as acquisitiveness, lust for power over others, “rugged individualism,” and the simple instinct to demonstrate one’s relative value through competition; with some of the species’ most laudable and desirable traits; creativity, imagination, and a drive for innovation and excellence.

Capitalism has probably done more to raise the standard of living for more millions (billions!) of people than all the religions and sciences the world has ever developed.  (Art and entertainment don’t do anything to raise the standard of living;  they just make living bearable, at whatever  prevailing standard.)

But where Socialism bears the danger of slowing or even reversing the mechanisms of evolution;  Capitalism suffers the peril of becoming cancerous; simply getting out of hand and consuming everything in sight; including itself.

The only cure for either is a strong and judicious governmental structure exercising even-handed control over each “ism’s” excesses while keeping the reins loose enough to permit development.

Achieving that sort of structure in a democracy, even in a representative republic, remains problematical.  It demands a well-educated voting citizenry and more statesmen (and stateswomen) than politicians – a state of affairs which does not currently exist in the Republic.

A dozen years or so ago, as I was approaching 70, a brilliant 12-year-old student asked me what my religion was.

Because I had some clue that hers was a rather emphatic brand of Protestant Christianity and I did not wish to either rebuff her concerned  curiosity or prejudice her against me with  an off-putting conventional label such as “agnostic,” I told her, “Oh, Honey; I’m a philosopher.”  (To my relief, she nodded with approval.)

That was the first time in my life I had ever admitted it;  possibly the first time it had ever occurred to me.

I have never been a student of Philosophy, the discipline.  I have never taken a formal university course in the subject and, indeed, I never actually studied philosophy at all until I started listening to recorded lectures promulgated by The Teaching Company as a distraction from the tedium of my program of daily walks, undertaken to maintain my aging physiology after a late-60s heart attack.

It isn’t that I have consciously avoided the academic subject of philosophy.  I am afraid I simply never had all that much respect for it; which may seem an odd attitude, until you realize that, since I have been engaged in the practice of philosophy almost since my earliest memories of consciousness, I almost congenitally know how (and why) it is done, and it never impressed me enough to make me want to learn much about the conclusions it had historically led other people to formulate.

Philosophy, the practice, is nothing more nor less than an individual intelligence’s attempt to make sense of what it has learned, in a world which supplies too much information in areas which do not much matter, and far too little information in areas which are much more critical.  Some people are better at doing this than others; sometimes as a consequence of differences in raw intelligence, more often as a result of better critical evaluation of the information: what matters and what doesn’t.

In addition, I suspect a healthy genetic component.  Some people are simply inclined to philosophy, as some are inclined to perilous physical sports or to anti-social criminality, because of the entirely accidental interplay of their few tens of thousands of genes’ worth of deoxyribonucleic acid.

Most of the philosophically inclined will probably defer in their youth to the cumulative wisdom of their intellectual environment and undertake the formal study of the subject.  Thus they will learn the sequential history of the Great Ideas of their culture and be encouraged – even tasked – to pick and choose among these Great Ideas to identify the particular “school” of thought to which they, themselves, belong.  Along the way they will probably be counseled in the various techniques of logic, observation and inquiry, plus critical evaluation of information, which will tend to raise the quality of their conclusions.

May I suggest that this is rather profoundly a case of reversing the optimal positions of horse and cart.  

If one first learns that Plato and Aristotle were geniuses, and is then schooled in Aristotle’s and Plato’s conclusions and “discoveries,” plus an account of their influence on several thousand subsequent thinkers over the last 2500 years or so, any “original” ideas one might come up with are highly likely to reflect a continuance of that influence.  If one is not thereby discouraged from attempting any meaningful additions to the canon at all.

There is a waggish dictum in the study of the art of playwriting: that too early becoming an expert on the 30-odd plays of William Shakespeare is likely to discourage a particularly bright and promising young playwright from ever writing anything in the English language, since the Divine Will has very likely said everything pertinent about human inter-action, said it somewhat better than one feels personally capable of, and said it about 400 years ago.
 
It takes a personal megalomania bordering on psychosis for a young person to feel she or he has anything to say to rival René Descartes.

I, on the other hand, reached the advanced age of about 65 without knowing much about Descartes except “Cogito, ergo sum,” plus a lot of jokes based on that quotation.  I reached almost 75 without knowing much more about “Socratic Inquiry” than the notion that it involved asking sometimes uncomfortable questions, and without becoming aware of Plato’s celebrated “Allegory of the Cave” at all.

I learned in college, and had the information reinforced in graduate school, that Aristotle had some very specific things to say about drama which might be very useful to a person who was trying to write plays.  I fear I cannot tell you what those things might be because (although it was assigned in at least one classroom)  I never bothered to consult Aristotle directly on the subject.  (I have the impression I may have picked some of his principles up indirectly in studying the mechanics of playmaking.)  

The truth is that, although I have had pretensions to intellectualism since at least the age of 4 (That’s about as far back as I can remember), in my youth  I never had the slightest impetus toward scholarship.  (I usually got good grades in school, but I never worked  at it;  if it required work, I seldom did it;  as a result, after grade 12 I seldom got the best grades unless the subject was outwittable with a bit of creativity, usually in writing the English language; or unless it simply interested me enough to overcome my pervasive laziness.)

My reader is by now likely to have noticed that, in the process of attempting to explore the definitions of Philosophy, I make a good many references to myself.  Aside from uncomplicated egocentrism, this is because my several decades of living in and observing the world and my fellow beings have led me to a psychological opinion which is as close to a conclusion as my personal philosophy permits me to make.

Each of us lives in an “artificial” world which is ours, alone.  It may have more or less relevance to the “real” world, depending upon the accuracy of our sensory impressions, as filtered through the bewilderingly complex network of the Darwinian biological imperatives which we all share, and mediated, finally, by our own unique set of prejudices.  

I remarked early in my semi-serious attempt to codify a Rational Religion that although on a Universal scale “everything is physics,” on the individual human intelligence scale “everything is metaphysics.”  All of each of “us” is in our own wetware.  When it no longer oxidizes enough sugar to “keep the candle lit,” the individual “each” disappears.

Despite many thousands of years of effort by millions of human thinkers to deny this overwhelming probability, I have seen no evidence that they have succeeded in doing so; EXCEPT, of course, in each denier’s individual consciousness.

Where, I must assume, the effort has succeeded  in doing its evolution-mandated job of forestalling paralyzing confusions and providing comfort, or at least sufficient “reason for existence,” for as long as necessary.

In order to forestall unnecessary mental gymnastics on the part of anyone inclined to argue with my stated “conclusion,” above (and thus permit them to pursue better arguments, perhaps), let me parse the paragraph in terms which I can reasonably assume to be accurate, because they are my own.

My world is “artificial” because it is self-constructed out of what I have “observed” and “learned,”  as limited by my individual intelligence and the quality of my available information.  It includes, unavoidably, my impressions of any other such self-assembled artifices as I might become aware of—the other (largely) human creatures whom I interact with.  My awareness  of  objects, forces and entities “outside” of the realm of firing mammalian synapses is hard-won and severely limited.

So most of my individual artifice is constructed of imperfect assessments of a host of other artifices.

This is likely to be confusing, even if I am not quite aware of its confusion.  My central nervous system is not equipped to deal well with confusion;  I will therefore seek techniques to avoid it.

The most “rational” technique is to “decide” that what seems most nearly “real” to me is indeed reality.  As a result, all those other people seem to be at least a little misled.

For psychologists and cognitive scientists, a key element of human-level intelligence is the ability to understand that other people have minds – a “theory of mind” – so supposedly almost all of us can do that.

For me, the critical additional principle is to understand that none of those people – not even my own children or my closest friend, certainly not the long-suffering wife with whom I recently celebrated (in a subdued, comfortable, modest fashion) our 50th wedding anniversary – shares my mind.

The corollary realization is that, no matter how hard I try, with whatever degree of cooperation from them, I can never really know what anybody else’s world is like.  (We can each convince ourselves that we understand one another, but even the two of us might be able to “agree” that that is not quite the same experience.)

I know, or at least it appears to me, that many other intelligences would regard this dictum, applied to themselves, as horrendous.  The idea of being so abysmally alone!

For me, however,  it is simply unavoidable.  Like death, itself, it may even be a good idea.

Just as it also seems to be a good idea to avoid letting it affect our awarenesses,  and therefore dictate our lives, for as long as possible.

We are each unavoidably, profoundly alone within our own intelligences.

Our actual “contacts” with  other people – even with the independent physical Universe – are rare and probably largely illusory.

So they are marvelous!

We fall in love.  We form friendships.  We worship God.  We appreciate aesthetics.  We share emotions with thousands of other people in athletic stadiums and theatres.

They are horrendous!

We fight in wars.  We run in mobs.  We  follow fads and fashions.  We participate in  pogroms and genocides.   We vote for people who appeal to the “worst” elements of our individual natures.

We “lose”  ourselves in all of these.  But unless we die in them we also “recover.”

I have not spoken to most of my closest “friends” in at least a couple of years.  I cannot even pretend that my love for my wife is the same, or even a similar, emotion as the one I felt for her when we conceived our son. (Even though I believe I remember the exact occasion.)

A recent article in WIRED magazine explores the uses of Failure, the idea being that, since most “new” ideas and efforts are doomed from the outset, Failure is the expected outcome.  But only for an “outside” observer.  The striver himself expects success, or at least hopes for it with enough optimism to make the preparations and take the original plunge.  And what we “learn” from our failures may eventually lead us to a “success,” perhaps in an entirely different direction.

One peripheral idea in the article struck me as applicable to my relationship with Philosophy.  This was an exploration of the almost canonical notion that most revolutionary “discoveries” are made by young people.  The WIRED article amended this to include “persons who are new to the field.”

The accepted wisdom is that young intelligences and “newbies” may have learned enough to become familiar with the subject, but have not yet learned which things are supposed to be impossible; so the automatic time-saving governor in their brains (which has evolved specifically to keep “wrong” and outlandish notions from interfering with rational logic) doesn’t keep them from exploring areas which are likely to prove unproductive.

Such persons mostly come up with cockamamie ideas, but once in a very long while:  Bingo!


A key paradox (one of the real ones) seems to be how all our individual intellects, with all their barriers to communication, manage to come together and get anything done.

How does the “illusion” of a “community” come to be so demonstrably “real?”

The answer to that probably lies somewhere in the trees on the edge of a savannah in Africa, a very long time ago.  Follow me through a conjectural journey.

Human awareness probably emerged from a more elementary and “primitive” primate awareness, so the “community” most likely came first, as a simple intra-species cooperation which enhanced survival and contributed to the creation of a next generation.

By basic Darwinian development,  our ancestors grew gradually smarter and smarter, which is to say they got better and better at surviving in their environment, and so out-reproduced their neighbors and relatives.

(One element of their superior adaptation seems to have been the ability, perhaps the simple willingness, to leave a familiar environment when it became too crowded or otherwise lost its ability to support them comfortably, and to seek another which might be more amenable.  “Homo erectus” remains and tools have been found all over the “Old World,” from Africa to China, with some of the earliest “modern” discoveries on Asiatic islands which seem to demand an ability to travel across at least dozens of miles of sea water.  While there is no certainty that H. erectus was our direct ancestor – or could even be considered a single species – its skeletal similarity to us makes it the best candidate we have.)

Simple awareness of present conditions-- opportunities to eat; safe trees and holes to sleep in--became an ability to think about the future; how to find those necessities tomorrow.  Awareness gradually developed  into Imagination.

At which point, I suspect, the processes of cognition began to be troublesome.  Imagination can be scary.  Not only does experience teach us that a lot of very real bad things can happen,  imagination suggests that they very well might happen.  

Even worse, imagination can invent a lot of stuff which actually never happened to anybody, anywhere.  But there doesn’t seem to be any reliable way to tell the difference, purely by thinking about it, between the real perils and the imaginary ones.

The hyena eats one of your children; you have more, but some of them drown in a catastrophic flood.  

What’s the difference?  They are dead.  Something took them from you.  O.K., you can see the hyena and you can understand that it was simply hungry,  but why your child and not your brother’s? And there is no obvious reason  the valley you have lived safely in all your life should suddenly fill with a raging torrent, but something apparently wanted  your children’s lives.  The fact that you can’t easily identify it only makes it scarier in the imagination.

The flip side of Imagination is that it can work both ways.  Just as it can terrify us, it can reassure us that everything is going to turn out o.k.  

I suspect that there were many among our ancestors – actually among our ancestors’ relatives, since these individuals would never become anybody’s ancestor – who were so imaginatively fearful that they were effectively paralyzed in the face of predation or natural disaster.  They simply died.

Our ancestors might only have had better reflexes, but I suspect they also had evolved a peculiarly megalomaniacal  cognitive trait which we  call by a number of names, including “hope.”

Along with a growing imagination there probably evolved an ever-increasing ability to communicate.  From primitive mammalian “warning cries” (researchers have learned to tell whether a lookout monkey has seen an eagle, a snake or a leopard) there developed over the millennia the capacity to talk about ideas – those products of purely the individual imagination – with other individuals.

If one learns that another person has ideas, especially if a few of them seem to be similar to one’s own, that is likely to reinforce the conviction that one’s ideas are somehow “real.”

It doesn’t take much imagination to get from that point to “rocket science.”  It is an almost traceably cumulative process.  Oh, there were probably a number of “phase shifts,” such as when the first couple of especially smart and communicative anthropoid creatures started talking about their ideas, and a few of  their neighbors heard them.

That sort of thing could spread through the community pretty fast.  But it probably happened a few thousand times before enough people were eavesdropping  for the notion to “catch  on.”

Of course the “default” state of the human imagination probably took for granted that cognitive awareness was something more substantial than pure metaphysics.  Some of our earliest and best organized thinkers certainly thought so.  Socrates (Plato) realized that his body was going to die and decompose, and that it was emphatically subject to the law of gravity and such mundane physical processes as eating and elimination until it did.  But his mind could soar and conceive of wonders; so if forced to make a choice about which was more “real,” it was obvious that his mind, or some portion of it, had more lasting validity.

While our inherent megalomania leads us to the conclusion that each of us, as a thinking individual, must have something like an immortal soul, our common sense and a  bit of embarrassment at making such a claim must also make us aware that probably everybody else does, too.  This logically devolves into such notions as Universal Consciousness, of which each of ours is only a part, and on to a concept of The Supreme Consciousness.

God is probably a good deal older than the notion of Supreme Consciousness, which could only have been arrived at after many generations of talking about gods and souls.  The first gods were probably just powerful forces which our ancestors didn’t understand, but nevertheless were forced to deal with.  The bonobos may have gods, though I doubt if they talk about them much, and it’s possible that god cannot exist without being the subject of conversation.

WHY I AM A PHILOSOPHER

It is probable that I define myself as a philosopher for reasons not entirely flattering to my personality or cognitive condition.

In the first place, a philosopher is something which it is easy for me to be, and I have all my life gravitated toward doing things which are easy for me.  For many years I defined myself as a playwright, because I had learned how to do that and I had some success at it, even including some minor recognition and a few performances of my plays.

The organizational skills, or at least the discipline, of constructing a play of any length now seem beyond me.  This may itself be illusory.  When I was younger, and doing it a lot, whenever I was not actually writing a play I often wondered how I had ever managed to do it, and whether I would ever do it again.  So perhaps I could still manage at least a long one-act if I took it in stages, or actually worked at it the way most people seem to write (successfully).  What I seem to lack now is the motivation.

I don’t really believe I will ever again write a play that anyone will want to produce.  I have already written several that are worthy of production, and at least one that a very good amateur actor wants to star in.  But getting it actually on the boards would demand some effort from me that I don’t believe I am up to.

Writing any more plays seems to be nothing but an extended exercise in frustration.

When I was younger I had a fear of reaching the age I am now and not being able to have any more interesting ideas for plays.  That is not my excuse.  I have enough interesting ideas for plays that I have never written that I couldn’t dramatize them all if I could write at my present level of skill until I am well past 100.

I suspect it is just too damned much trouble.

When I was younger my innate egoism made me believe I had things to say which the world needed to know. The best way to attract the World’s attention to what I had to say seemed to be by constructing plays. Now about all I have left is enough egoism to set down my thoughts in a few random essays, which I (or somebody) can possibly collect into some semblance of a coherent personal philosophy.

To what end?

Well, probably, in all practical terms, only to get myself through the end of life with a personal world-view which makes my approach to leaving the world cognitively comfortable.

Each of us constructs the world she or he needs to feel comfortable in.  If we are unable to do this, especially as we near the end of life, we are probably fated to be pretty miserable creatures.

I suspect this is the reason many people engage in self-destructive behaviors, from “extreme” sports to alcoholism or drug addiction.  (I mean the reason they START doing things they know to be addictive; once into an addiction, personal choice becomes difficult to reassert.)

I evidently do not have an “addictive personality.”  (Even my lifelong relationship with sugar seems to be more a matter of self-indulgence than of physiological need.)  So I probably do not have an authoritative basis for judging others in this area; but I do think it is possible that the reason many people make such horrendous messes of their lives is that they have never learned what I consider to be two profound truths of human existence:

    1. Because we are each alone in the world, cognitively, we must each construct our own personal narrative and adapt it to physical and societal realities coherently enough to convince ourselves that it “works.”

    2. Through the optimal use of our fantastic imaginations we are each capable of constructing a world which does work, if we escape being mis-educated when we are too young and inexperienced to properly protect ourselves.

Most people, I have observed, do not so escape.  They are early on so indoctrinated into belief systems which are the product of other people’s imaginations (many of them long dead and with primitive information about the physical nature of the universe) that they have no clue where a properly informed and stimulated imagination might lead them.

This seems to be such a nearly universal human condition, that I am led to suspect that individuals such as myself, who seem to have escaped the crippling indoctrination in their “formative” years are at least partially the result of  random genetic variation.

This would be the same random genetic variation which  has made possible our Darwinian evolution out of the primeval ooze and into our various civilizations.

There seems to be an inherent, possibly pre-programmed, impulse to question authority and reject “received” wisdom;  to believe, in spite of almost everything we are told by our elders and the authorities of our societies, that we “know better” than they do; and to persist in this contrarian mind-set  despite many demonstrations that our elders and the authorities  are more than occasionally right.

Because they are obviously also, and occasionally disastrously, wrong.

Although my personality is rather profoundly in this contrarian camp, my experience has taught me that this is not necessarily adaptive.  

Since people tend to act in accordance with their beliefs, and since the overwhelming majority of human beings seem to regard contrarians as peculiar; even dangerous; I find myself  most familiarly in the predicament of being marginalized.  

Most of my life I consoled myself (and excused the boodleheadedness of others) with the conviction that I was simply smarter than most people.   With the wisdom of age and experience I have come to realize that many, many people “smarter” than I,  or at least as smart, are trapped in  fantastical systems of belief  which have curtailed their imaginations and crippled the construction of their personal narratives.

Their personal tragedies are palliated by the fact that their systems of belief, through centuries perhaps of refinement and self-protective adjustment, most usually also provide them with sufficient “reasons for living,” and even comfortable accommodations as they approach the end of it.

The ancillary discomforts, and even tragedies, that they may inflict upon other persons who happen not to share their fantasies, are another matter entirely.

I consider myself fortunate, both geographically and in the timing of my inadvertent emergence in the historical timeline of humanity.  In a different part of the world, or in an earlier or more primitive society, I might long ago have been imprisoned, burned at the stake or abandoned in the wilderness.

Meanwhile, a lot of those other people drink too much, or shoot heroin, or stand on street corners in their Sunday Best, carrying signs denigrating people who do not share their sexual orientation.

I call myself a philosopher; and mockingly denigrate philosophy as not of much use to anyone except the individual philosopher.  (Students of philosophy often do somewhat better; in optimal cases, they achieve Tenure.)

More objectively, I am just an old guy at a computer keyboard, setting down some relatively organized thoughts which likely have less chance of being read and intellectually processed by anyone else than a single lottery ticket has of winning the jackpot.

Somehow, it is enough.
--------------------------------------------