The Moon Illusion

The moon looks larger when it’s near the horizon than it does when it is high in the sky.  Sometimes, for example at this NASA website, this is phrased so as to imply that the view on the horizon is the one that’s illusory.

I once had the privilege of seeing this “illusion” in full force.  I was walking down a tree-lined city street, with the moon on the horizon, surrounded in my visual field by trees and houses.  The moon looked positively enormous – far larger than the entire city.

Guess what?  The moon is far larger than an entire city.  With proper cues available to clue the visual system in, this becomes more apparent.

It’s not always the grandiose view of an object that is illusory.  Sometimes it’s when we see something as small that we are misperceiving it.

Causation: what is it?

I just read a beautiful passage by Don Page, who in turn is commenting on a debate between physicist Sean Carroll and theologian William Lane Craig on the role of theistic explanations in cosmology.  There’s a lot to this passage, but don’t worry.  I’ll try to walk you through it.

I agree with you, Sean, that we learn our ideas of causation from the lawfulness of nature and from the directionality of the second law of thermodynamics that lead to the commonsense view that causes precede their effects (or occur at the same time, if Bill insists). But then we have learned that the laws of physics are CPT invariant (essentially the same in each direction of time), so in a fundamental sense the future determines the past just as much as the past determines the future. I agree that just from our experience of the one-way causation we observe within the universe, which is just a merely effective description and not fundamental, we cannot logically derive the conclusion that the entire universe has a cause, since the effective unidirectional causation we commonly experience is something just within the universe and need not be extrapolated to a putative cause for the universe as a whole.

Let’s start by breaking that first sentence into three parts.  Hey, I said I’d walk you through it, right?

Part 1 is The lawfulness of nature:  we have a bunch of mathematical formulae, like F=ma and F_g=G*m1*m2/r^2, that enable us to make reliable predictions.  Part 2 is The directionality of the second law of thermodynamics:  the second law concerns entropy.  It says that entropy does not decrease over time, but can increase.  Part 3 of Don Page’s first sentence in the above quote says that parts 1 and 2 lead to the commonsense view that causes precede effects.
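
To make “reliable predictions” concrete, here is a minimal sketch in Python (the rounded constants are standard textbook values; the example itself is mine, not Page’s or Carroll’s) that plugs numbers into those two formulae and checks the result against something observable:

```python
# Toy illustration of "the lawfulness of nature": plug numbers into
# F_g = G*m1*m2/r^2 and F = m*a to get a testable prediction.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
m_earth = 5.97e24    # mass of the Earth, kg
m_moon = 7.35e22     # mass of the Moon, kg
r = 3.84e8           # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r**2   # Newton's law of gravitation
a_moon = F / m_moon               # F = ma, solved for the Moon's acceleration

print(f"Force on the Moon: {F:.3e} N")
print(f"Moon's acceleration toward Earth: {a_moon:.6f} m/s^2")
# ~0.0027 m/s^2, which agrees with the centripetal acceleration implied by
# the Moon's ~27.3-day orbit -- the kind of check that makes the laws "reliable".
```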

OK, at this point, even if you are not a physicist, you can kind-of understand what Don Page said in that first sentence.  Kind-of, because you might not understand what “entropy” is other than “something that physicists study, and which seems to play an important role in physical theories,” but at this point that’s OK.  But understanding what the sentence says is far short of seeing why it is true.  I want you to see, at least at a very introductory level, why it is true.  Let’s learn some more about this entropy stuff.

As Sean Carroll says in the God and Cosmology debate, it’s an important fact that we observe that the early universe had low entropy.  Given that fact, any other time is likely to have higher entropy.  There’s a complication here, however, and to ponder it, we’ll need to split our thinking about time into two tracks.  We’ll call them quantum mechanical time, or t(qm), and thermodynamic time, or t(th).  Remember those mathematical formulae we called “laws of nature”?  We have a time parameter, t(qm), in quantum-mechanical equations like Schrödinger’s Equation.  And we have a time parameter t(th) in the Second Law of Thermodynamics, dS/dt(th) >= 0.  But why are we splitting “time” into two concepts?  Because we have promising physics models which require it:
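
For reference, here are the two equations with the two time labels written out (the subscripts qm and th are just this post’s labels, not standard physics notation):

```latex
% Schrödinger's equation, with the quantum-mechanical time parameter t_qm:
\[ i\hbar \, \frac{\partial \Psi}{\partial t_{\mathrm{qm}}} = \hat{H}\,\Psi \]

% The second law of thermodynamics, with the thermodynamic time parameter t_th:
\[ \frac{dS}{dt_{\mathrm{th}}} \ge 0 \]
```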

I [Don Page] myself have also favored a bounce model in which there is something like a quantum superposition of semiclassical spacetimes […], in most of which the universe contracts from past infinite time and then has a bounce to expand forever. In as much as these spacetimes are approximately classical throughout, there is a time in each that goes from minus infinity to plus infinity.

In this model, as in Sean’s, the coarse-grained entropy has a minimum at or near the time when the spatial volume is minimized (at the bounce), so that entropy increases in both directions away from the bounce. At times well away from the bounce, there is a strong arrow of time, so that in those regions if one defines the direction of time as the direction in which entropy increases, it is rather as if there are two expanding universes both coming out from the bounce. But it is erroneous to say that the bounce is a true beginning of time, since the structure of spacetime there (at least if there is an approximately classical spacetime there) has timelike curves going from a proper time of minus infinity through the bounce (say at proper time zero) and then to proper time of plus infinity.

In Don Page’s model, we can keep our Second Law of Thermodynamics as we previously understood it, provided that we use a new “time” parameter which points in one direction at quantum-times quantum-before the bounce, and in the other direction at quantum-times quantum-after the bounce.  In each case, thermodynamic-time points in the direction of higher entropy, i.e., higher-entropy parts of history are thermodynamically-future.
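
If a picture in code helps, here is a deliberately crude toy in Python (the parabolic entropy curve is my own placeholder, not Page’s actual model) showing what it means to define thermodynamic time as pointing in whichever direction entropy increases:

```python
# Toy bounce model: quantum-time t runs from minus to plus infinity, and the
# coarse-grained entropy is minimized at the bounce (t = 0).  The parabola is
# only a stand-in for "entropy has a minimum at the bounce".

def entropy(t):
    return t * t  # minimum at the bounce, increasing in both directions

def thermo_future_direction(t, dt=1e-6):
    """The thermodynamic future: the direction of quantum-time along which
    entropy increases."""
    return +1 if entropy(t + dt) > entropy(t - dt) else -1

for t in [-10.0, -1.0, 1.0, 10.0]:
    print(f"quantum-time {t:+.1f}: thermodynamic future points in "
          f"quantum-time direction {thermo_future_direction(t):+d}")
# Before the bounce (t < 0) the thermodynamic future points toward *smaller*
# quantum-time; after the bounce it points toward larger quantum-time --
# "two expanding universes both coming out from the bounce".
```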

You may have noticed that I haven’t explained anything yet.  It’s only gotten very complicated!  What these two physicists, Sean Carroll and Don Page, know about entropy but haven’t mentioned is that entropy always increases when a physical record of an event is made and “read”.  A physical record is an enduring result of an event, such as a dinosaur’s footprint fossilized in mud, or an expanding sphere of high intensity light from a supernova, or a tape recording made by Richard Nixon – the kind of thing that lets us know the event occurred.  Another type of physical record, of particular importance here, is the memories in your brain.  Like any other physical record, the process of recording and then recalling memories necessarily increases entropy.

So, given that at time t1 a physical record is made, and that at time t2 the record is read/recalled, we know that entropy is higher at t2 than at t1.  At time t1 a memory is laid down; at t2, the memory is recalled.  It follows that t2 is thermodynamically later than t1.  It follows that the psychological arrow of time lines up with the thermodynamic arrow of time, insofar as our experience of time is based on remembering the past, but not the future.  (Hat tip: Stephen Hawking, A Brief History of Time.)

But there is another aspect to the psychological arrow of time, which relates to our ability to act on systems and thereby control aspects of their future.  If I replace my worn spark-plug wires, I can improve the performance of my car’s engine tomorrow, but I can’t improve yesterday’s performance.  Why not?  Because the entropy of the universe yesterday was lower than the entropy today, and my interventions today cannot reliably affect lower-entropy states of the universe.  You cannot un-scramble an egg.  But you can scramble one.  By replacing the wires to the spark plugs, I will be increasing the entropy of my car engine in certain ways – making tiny scratches in the connectors, re-shaping various lumps of grease and dirt, and so on.  The new wires may be in a lower-entropy state than the old ones, but remember that the old ones have not been removed from the universe.  They’ve only been removed from my car.  The new wires also got slightly scratched and bent in the process.  So, after car maintenance, the new wires still exist but with slightly higher entropy, the old wires still exist at about the same entropy, and the rest of the engine has gained some entropy – not to mention the atmosphere that I breathed into and radiated some body heat into, etc., etc.

Every time we accomplish some objective, we increase the entropy of the universe.  That is why we cannot affect the past – or rather, cannot affect the parts of it we care about.  The parts of it we care about all involve thermodynamically irreversible processes, i.e., processes that increase entropy.  When my car was running yesterday, it burned gasoline in air and radiated heat like crazy; those operations cannot be undone in order to achieve better yesterday-performance.  We cannot un-scramble the eggs it would take to bring about a specific, lower-entropy, macroscopic event.

So, not only do we remember the past and not the future, we also control some macroscopic future events but no such events in the past.  These two aspects of the “psychological arrow of time” both line up, for deep physical reasons, with the thermodynamic arrow of time.

We are now in a position to understand:

we learn our ideas of causation from the lawfulness of nature and from the directionality of the second law of thermodynamics that lead to the commonsense view that causes precede their effects

Causes precede their effects in our experience, because when we deliberately cause things, those things are in the future, in thermodynamic time, and hence also in psychological time.

Yay hooray!  We understood one sentence!  Let’s go for two:

But then we have learned that the laws of physics are CPT invariant (essentially the same in each direction of time), so in a fundamental sense the future determines the past just as much as the past determines the future.

Wait, whaaaat?  Paul Torek just said that we only deliberately cause events that lie in our future, and now he quotes Don Page (with approval) saying the future determines the past just as much as the past determines the future??

Yes, but look closer.  That “deliberately” is important.  But first, we need a clearer concept of “causing”.  Let’s borrow from Judea Pearl’s book Causality.  We’ll just suppose that we can set the value of some variable, and see what happens to the probability of other variables.  For example if we want to know if smoking causes cancer, we set Do(smoking)=True, and see what happens to the probability of cancer.  If it goes up, the answer is yes.
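
Here is a toy sketch of that idea in Python (the causal setup and all the numbers are invented for illustration; this captures only the flavor of Pearl’s do() operator, not his formal machinery):

```python
import random

random.seed(0)

def simulate_person(do_smoking=None):
    """One person in a toy causal model.  Passing do_smoking overrides the
    variable's natural causes -- that's the 'do' intervention."""
    smokes = (random.random() < 0.3) if do_smoking is None else do_smoking
    # Invented numbers: baseline cancer risk 2%, smoking multiplies it by 5.
    p_cancer = 0.10 if smokes else 0.02
    return random.random() < p_cancer

def p_cancer_given_do(smoking, n=100_000):
    return sum(simulate_person(do_smoking=smoking) for _ in range(n)) / n

print("P(cancer | do(smoking=True))  =", round(p_cancer_given_do(True), 3))
print("P(cancer | do(smoking=False)) =", round(p_cancer_given_do(False), 3))
# The first probability comes out higher, so by the do()-style criterion the
# answer to "does smoking cause cancer?" in this toy world is yes.
```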

So what happens if we set Do(change-sparkplug-wires)=True?  Does the probability of Better Engine Performance Tomorrow go up?  Yes, quite a lot.  Does the probability of Better Engine Performance Yesterday go up?  No, not at all.  Does the probability of a particular ion being here, rather than there, a minute after the Big Bang, change?  Maybe!  It depends on which ion we have in mind, and where “here” and “there” are, exactly; but if we spell out the exact motions that Do(change-wires) involves, and so on, we could in principle derive new probabilities for the early-universe conditions, which would in some cases be higher or lower than in the scenario where Do(change-wires)=False.  Because as Don Page points out, the laws of physics are CPT-invariant, which means that if we reverse Charge, Parity, and Time, we get the same equations.

Actually, CPT invariance is more than we need to make the relevant point.  Given that CPT invariance is true, it’s pretty easy to see that if we can use the equations of physics to derive future conditions from past ones, we can just as easily use them to derive past conditions from future ones.  Let’s call that last claim “bidirectionality”.  CPT invariance implies bidirectionality, but the reverse is not true.
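
Here is a small illustration of bidirectionality in Python, with a frictionless oscillator standing in (my choice of example, not Page’s) for “the equations of physics”: run the same deterministic law forward to derive the future from the past, then run it with time reversed to recover the past from the future.

```python
# Time-reversible toy dynamics: a frictionless oscillator, x'' = -x, stepped
# with velocity-Verlet integration (which is itself time-reversible).

def step(x, v, dt):
    v_half = v + 0.5 * dt * (-x)
    x_new = x + dt * v_half
    v_new = v_half + 0.5 * dt * (-x_new)
    return x_new, v_new

def evolve(x, v, dt, n):
    for _ in range(n):
        x, v = step(x, v, dt)
    return x, v

past = (1.0, 0.0)                               # "past" conditions
future = evolve(*past, dt=0.01, n=1000)         # derive the future from the past
recovered = evolve(*future, dt=-0.01, n=1000)   # same law, time reversed

print("future state:  ", future)
print("recovered past:", recovered)   # matches (1.0, 0.0) up to rounding error
```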

So:  using a Pearl-esque definition of causality, we do indeed cause events in the past.  It’s just that none of the events we care about are among them!  So, sorry, we cannot make the Detroit Tigers win the 2006 World Series.  We cannot have you-yesterday make that witty comeback to your annoying colleague that you just thought of today.  All of those things – things you care about, things you (by utter non-coincidence!) remember – lie on the wrong side of a thermodynamic/entropic gradient, and you can’t touch them.  Alas.

We now understand two of the sentences from that beautiful passage I quoted to start, and there’s only one more.  I have only a brief comment on the third sentence:

I agree that just from our experience of the one-way causation we observe within the universe, which is just a merely effective description and not fundamental, we cannot logically derive the conclusion that the entire universe has a cause

Our one-way causation is at the macroscopic level, where we do our living.  And indeed, we cannot derive the conclusion that the entire universe has a cause.  But then, we couldn’t derive that, even if the one-way-ness were fundamental.  From

for all X, Y: (X causes Y) -> (X precedes Y)

it would not follow that

for all Y, there exists X: X causes Y.
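
If you want to see that non-entailment checked mechanically, here is a tiny Python sketch (the three-event world is invented purely for illustration): a world in which nothing causes anything makes the first schema true vacuously, while the second schema fails.

```python
from itertools import product

events = ["big_bang", "breakfast", "lunch"]
causes = set()                     # a world in which nothing causes anything
precedes = {("big_bang", "breakfast"), ("big_bang", "lunch"),
            ("breakfast", "lunch")}

# Premise: for all X, Y: (X causes Y) -> (X precedes Y)
premise = all((x, y) in precedes
              for x, y in product(events, repeat=2) if (x, y) in causes)

# Conclusion: for all Y, there exists X: X causes Y
conclusion = all(any((x, y) in causes for x in events) for y in events)

print("premise holds:   ", premise)      # True (vacuously)
print("conclusion holds:", conclusion)   # False, so the inference is invalid
```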

But to take a step back and look at the big picture, what Don Page seems to be getting at, is that people take their own experience, and project it onto the universe as a whole.  They reason something like the following.  I do stuff, making the future certain ways.  Maybe the whole universe is like that, and Someone made it happen!  I’ve never seen a time which didn’t have a time one second earlier than it – there couldn’t possibly be a beginning of time!  I use causal relations to exert control – therefore all causality is control!  I can control some future events I care about, but not the past – therefore causality and control run strictly from past to present to future!

All those inferences do seem to have something in common.

For anyone interested in the philosophy-of-physics issues that I’ve discussed here, if you have time to watch a video, I recommend Jenann Ismael’s talk at 0:54:00 – 1:39:00 or so in the conference recording.  You will need Microsoft Silverlight, a free download, which has an Apple OS compatible version.

Why “no ghost, no machine”?

A very common metaphysical view of human beings is that we are part ghost and part machine.   And I mean “metaphysical” in both main senses of the word:  the part of philosophy that is about the fundamental kinds of things that exist and how they relate to each other, and also the “spooky irrational beliefs” sense of “metaphysical”.  The ghost is supposed to be immaterial, spiritual, invisible and intangible.  The machine is supposed to be, well, a machine.  I don’t believe in the ghost or the machine.

The best answer to the ghost+machine metaphysic that I’ve ever seen is Bakunin’s, which I’ll quote at length:

Idealists of all schools, aristocrats and bourgeois, theologians and metaphysicians, politicians and moralists, religionists, philosophers, or poets, not forgetting the liberal economists – unbounded worshippers of the ideal, as we know – are much offended when told that man, with his magnificent intelligence, his sublime ideas, and his boundless aspirations, is, like all else existing in the world, nothing but matter, only a product of vile matter.

We may answer that the matter of which materialists speak, matter spontaneously and eternally mobile, active, productive, matter chemically or organically determined and manifested by the properties or forces, mechanical, physical, animal, and intelligent, which necessarily belong to it – that this matter has nothing in common with the vile matter of the idealists. The latter, a product of their false abstraction, is indeed a stupid, inanimate, immobile thing, incapable of giving birth to the smallest product, a caput mortuum, an ugly fancy in contrast to the beautiful fancy which they call God; as the opposite of this supreme being, matter, their matter, stripped by them of all that constitutes its real nature, necessarily represents supreme nothingness.

–Mikhail Bakunin, God and the State