An ancient Greek dialogue

I recently stumbled across the following fragment of ancient Greek dialogue. Or maybe I made it up.

Herodotus: The youth today have no respect for tradition. Why just today I visited the shrine to Eros, and mine was the only offering. Where is the gratitude for the boons the gods give us?

Eudokia: That’s because Eros isn’t real, and some people are beginning to notice! We need to cast aside myths like Erotic love and focus on real things, like friendship, or sex.

Alexander: Now hang on a minute. I don’t believe in Eros either, but that doesn’t mean erotic love isn’t a real phenomenon. A little casual observation will show you that some people have a special bond – whether a winged god shot them with an invisible arrow or not, that’s beside the point.

Herodotus and Eudokia, in unison: Verbal gymnastics! Semantic trickery!

Herodotus: Real Erotic love requires Eros! It’s in the name.

Eudokia: Surveys show that the vast majority of people believe in Eros. Therefore, the concept of a distinctive kind of love is inextricably tied to the myth, and must die without it.

Alexander: Doesn’t follow. As for Eros being in the name, that’s why when I write “erotic love” I start with a small e rather than capital E. And I predict that centuries from now, people will still do the same, though no one will believe in the winged god with the arrows. And everyone will know what they are talking about.

Herodotus (morosely): Nonsense! Without Eros, there is only sex.

Eudokia (triumphantly): Without Eros, there is only sex!

And there the fragment ends.


The cherry pion fallacy

“There can be no cherry pie without cherry pions.”

That’s the fallacy. A cherry pion, in case it isn’t obvious, would be an indivisible particle with an irreducibly cherry quality to it. I wish I could credit the internet comment that inspired me here, but I can’t find it. Let me just admit that I didn’t invent the metaphor all on my own (but I did coin the horribly punny title! so there!)

The alternative to the cherry pion fallacy is called Emergence. Yes, that’s a word with many uses, not all of them so innocent. But give “emergentists” an innocent-until-proven-guilty verdict, I plead. Many of them are just observing that some concepts aren’t applicable at the finest level of detail, but do apply at higher levels of organization.

Another inspiration for my coinage is Ronald Dworkin’s “morons” in Justice for Hedgehogs. Dworkin wrote:

If there are morons, and morons make moral claims true or false, then we might imagine that morons, like quarks, have colors. An act is forbidden only if there are red morons in the neighborhood, required only if there are green ones, and permitted only if there are yellow ones.

Dworkin’s “morons” are a deliberate straw man, i.e. an intentionally ridiculous version of some philosophical arguments about ethics. Not that I endorse Dworkin’s solution to the no-“morons” “problem”, mind you. But the caricature of the argument was good for a laugh.

Arrow Dynamics of time

A deep look at science shows that time and causality don’t work the way most of us intuitively think they do.  For example, some models of cosmology, such as the one advocated by Sean Carroll in From Eternity to Here, claim that at some time in our past the (ancestor of our) universe was at minimum entropy.  At times still further from ours, its entropy was larger than that minimum, and in the daughter universes on the other side of the minimum, entropy may grow as one goes further into (what we consider) the past.  So far, no big deal.  However, as Sean Carroll also argues, it appears that everything we experience as making time “flow” in one direction can be explained by the gradient of entropy.  As far as we know, it is physically possible that at some time intelligent beings exist(ed) in those daughter universes and perceive(d) time to flow in the opposite direction.  And their viewpoint is just as valid as ours.

Which direction the arrow of time points depends on where and when you sit.  Arrow dynamics.  I will go a long way for a pun.

This – and other strange and wonderful discoveries of science – obviously has serious potential to change some philosophical thinking.  The area of philosophy I am most interested in, in this connection, is “the problem of free will and determinism”.  Most of the classic statements of this problem assume things about causality that find no place in modern science.  So here I list some resources that shed light on these issues.

Carl Hoefer points out that well-known scientific deterministic theories are bidirectional in time: that is, they allow us to infer from the present or future to the past just as easily as from past to future.

Huw Price and Ken Wharton explain how a “retrocausal” QM theory can account for the observed violations of Bell inequalities.

Yakir Aharonov and Lev Vaidman discuss the Two State Vector Formalism (TSVF), an empirically equivalent formulation of standard QM that wears its time symmetry on its sleeve; and Aharonov et al apply TSVF to explain weak measurement experiments.  Guido Bacciagaluppi uses an alternative formalism to argue that a time-directed interpretation of probabilities, if adopted, should be both contingent and perspectival.

E. T. Jaynes partially explains the relationship between entropy and information.

Eric Lutz and Sergio Ciliberto discuss experiments on information storage and entropy changes.

Steven Savitt explores Being and Becoming in Modern Physics.

Larry Sklar says that “The great problem remains in trying to show that the entropic asymmetry is explanatorily adequate to account for all the other [time] asymmetries in the way that the gravitational asymmetry can account for the distinction of up and down.”

Craig Callender discusses the relationships between the thermodynamic (entropic) arrow of time, and other intuitively appealing arrows like epistemic (memory), mutability (our actions affect the future), and explanatory.

Mlodinow and Brun show that given plausible physical assumptions, recording and then reading a robust memory always proceeds in the direction of increasing entropy.  H. M. Doss places their work in a larger context.

In a tour de force, Jenann Ismael explains (0:55:00 – 1:38:00) why we see the past as fixed and the future as something we can bring about.  This one requires Microsoft Silverlight to view, which is a pain, but worth it.


The Moon Illusion

The moon looks larger when it’s near the horizon than it does when it is high in the sky.  Sometimes, for example at this NASA website, this is phrased so as to imply that the view on the horizon is the one that’s illusory.

I once had the privilege of seeing this “illusion” in full force.  I was walking down a tree-lined city street, with the moon on the horizon surrounded in my visual field by trees and houses.  The moon looked positively enormous – far larger than anything I’ve ever seen up-close and personal.  Since I’ve walked around it a lot, let’s say the biggest thing I’ve seen up close is my city.

Guess what?  The moon is far larger than an entire city.  With proper cues available to clue the visual system in, this becomes more apparent.

It’s not always the grand view of an object that is illusory.  Sometimes it’s when we see something as small that we are misperceiving it.

Causation: what is it?

I just read a beautiful passage by Don Page, who in turn is commenting on a debate between physicist Sean Carroll and theologian William Lane Craig on the role of theistic explanations in cosmology.  There’s a lot to this passage, but don’t worry.  I’ll try to walk you through it.

I agree with you, Sean, that we learn our ideas of causation from the lawfulness of nature and from the directionality of the second law of thermodynamics that lead to the commonsense view that causes precede their effects (or occur at the same time, if Bill insists). But then we have learned that the laws of physics are CPT invariant (essentially the same in each direction of time), so in a fundamental sense the future determines the past just as much as the past determines the future. I agree that just from our experience of the one-way causation we observe within the universe, which is just a merely effective description and not fundamental, we cannot logically derive the conclusion that the entire universe has a cause, since the effective unidirectional causation we commonly experience is something just within the universe and need not be extrapolated to a putative cause for the universe as a whole.

Let’s start by breaking that first sentence into three parts.  Hey, I said I’d walk you through it, right?

Part 1 is The lawfulness of nature:  we have a bunch of mathematical formulae, like F=ma and F_g=G*m1*m2/r^2, that enable us to make reliable predictions.  Part 2 is The directionality of the second law of thermodynamics:  the second law concerns entropy.  It says that entropy does not decrease over time, but can increase.  Part 3 of Don Page’s first sentence in the above quote says that parts 1 and 2 lead to the commonsense view that causes precede effects.
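To make the “reliable predictions” point concrete, here is a minimal Python sketch plugging standard textbook values (my own illustration, not anything from Page’s passage) into the second of those formulae:

```python
# Newton's law of gravitation, F = G * m1 * m2 / r^2, applied to the
# Earth-Moon system.  The numbers are standard textbook figures.
G = 6.674e-11        # gravitational constant, N m^2 / kg^2
m_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.342e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

F = G * m_earth * m_moon / r**2
print(f"Earth-Moon gravitational force: {F:.2e} N")  # about 2e20 N
```

Point the same few lines at any two masses and you get the same kind of dependable number – that is the “lawfulness” in Part 1.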

OK, at this point, even if you are not a physicist, you can kind-of understand what Don Page said in that first sentence.  Kind-of, because you might not understand what “entropy” is other than “something that physicists study, and which seems to play an important role in physical theories,” but at this point that’s OK.  But understanding what the sentence says is far short of seeing why it is true.  I want you to see, at least at a very introductory level, why it is true.  Let’s learn some more about this entropy stuff.

As Sean Carroll says in the God and Cosmology debate, it’s an important fact that we observe that the early universe had low entropy.  Given that fact, any other time will be likely to have higher entropy.  There’s a complication here, however, and to ponder it, we’ll need to split our thinking about time into two tracks.  We’ll call them quantum mechanical time, or t(qm), and thermodynamic time, or t(th).  Remember those mathematical formulae we called “laws of nature”?  We have a time parameter, t(qm), in quantum-mechanical equations like Schrödinger’s Equation.  And we have a time parameter t(th) in the Second Law of Thermodynamics, dS/dt(th) >= 0.  But why are we splitting “time” into two concepts?  Because we have promising physics models which require it:

I [Don Page] myself have also favored a bounce model in which there is something like a quantum superposition of semiclassical spacetimes […], in most of which the universe contracts from past infinite time and then has a bounce to expand forever. In as much as these spacetimes are approximately classical throughout, there is a time in each that goes from minus infinity to plus infinity.

In this model, as in Sean’s, the coarse-grained entropy has a minimum at or near the time when the spatial volume is minimized (at the bounce), so that entropy increases in both directions away from the bounce. At times well away from the bounce, there is a strong arrow of time, so that in those regions if one defines the direction of time as the direction in which entropy increases, it is rather as if there are two expanding universes both coming out from the bounce. But it is erroneous to say that the bounce is a true beginning of time, since the structure of spacetime there (at least if there is an approximately classical spacetime there) has timelike curves going from a proper time of minus infinity through the bounce (say at proper time zero) and then to proper time of plus infinity.

In Don Page’s model, we can keep our Second Law of Thermodynamics as we previously understood it, provided that we use a new “time” parameter which points in one direction at quantum-times quantum-before the bounce, and in the other direction at quantum-times quantum-after the bounce.  In each case, thermodynamic-time points in the direction of higher entropy, i.e., higher-entropy parts of history are thermodynamically-future.
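Here is a toy sketch of that bookkeeping – entirely my own, and vastly simpler than Page’s actual model.  Take any coarse-grained entropy curve with a minimum at the bounce (t(qm) = 0), and define the thermodynamic arrow at each quantum-time as the direction in which entropy increases.  The arrow then flips sign across the bounce:

```python
# Toy illustration (mine, not Page's model): coarse-grained entropy with a
# minimum at the bounce, t_qm = 0.  The thermodynamic arrow at each
# quantum-time is the direction in which entropy increases.

def entropy(t_qm):
    # Any smooth curve with a single minimum at t_qm = 0 will do for the sketch.
    return t_qm ** 2

def thermo_arrow(t_qm, dt=1e-6):
    # +1 if entropy increases toward larger t_qm, -1 if toward smaller t_qm.
    slope = (entropy(t_qm + dt) - entropy(t_qm - dt)) / (2 * dt)
    return 1 if slope > 0 else -1

print(thermo_arrow(+5.0))  # +1: on this side of the bounce, t(th) runs with t(qm)
print(thermo_arrow(-5.0))  # -1: on the other side, t(th) runs against t(qm)
```

Nothing depends on the quadratic: any entropy curve with a single minimum at the bounce produces the same flip, which is all the “two expanding universes” picture needs.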

You may have noticed that I haven’t explained anything yet.  It’s only gotten very complicated!  What these two physicists, Sean Carroll and Don Page, know about entropy but haven’t mentioned, is that entropy always increases when a physical record of an event is made and “read”.  A physical record is an enduring result of an event, such as a dinosaur’s footprint fossilized in mud, or an expanding sphere of high intensity light from a supernova, or a tape recording made by Richard Nixon – the kind of thing that lets us know the event occurred.  Another type of physical record, of particular importance here, is the memories in your brain.  Like any other physical record, the process of recording and then recalling memories necessarily increases entropy.

So, given that at time t1 a physical record is made, and that at time t2 the record is read/recalled, we know that entropy is higher at t2 than at t1.  At time t1 a memory is laid down; at t2, the memory is recalled.  It follows that t2 is thermodynamically later than t1.  It follows that the psychological arrow of time lines up with the thermodynamic arrow of time, insofar as our experience of time is based on remembering the past, but not the future.  (Hat tip: Stephen Hawking, A Brief History of Time.)

But there is another aspect to the psychological arrow of time, which relates to our ability to act on systems and thereby control aspects of their future.  If I replace my worn spark-plug wires, I can improve the performance of my car’s engine tomorrow, but I can’t improve yesterday’s performance.  Why not?  Because the entropy of the universe yesterday was lower than the entropy today, and my interventions today cannot reliably affect lower-entropy states of the universe.  You cannot un-scramble an egg.  But you can scramble one.  By replacing the wires to the spark plugs, I will be increasing the entropy of my car engine in certain ways – making tiny scratches in the connectors, re-shaping various lumps of grease and dirt, and so on.  The new wires may be in a lower-entropy state than the old ones, but remember that the old ones have not been removed from the universe.  They’ve only been removed from my car.  The new wires also got slightly scratched and bent in the process.  So, after car maintenance, the new wires still exist but with slightly higher entropy, the old wires still exist at about the same entropy, and the rest of the engine has gained some entropy – not to mention the atmosphere that I breathed into and radiated some body heat into, etc., etc.

Every time we accomplish some objective, we increase the entropy of the universe.  That is why we cannot affect the past – or rather, cannot affect the parts of it we care about.  The parts of it we care about all involve thermodynamically irreversible processes, i.e., processes that increase entropy.   When my car was running yesterday, it burned gasoline in air and radiated heat like crazy; those operations cannot be undone, in order to achieve better yesterday-performance.  We cannot un-scramble the necessary eggs, which it would take to bring about a specific, lower-entropy, macroscopic event.

So, not only do we remember the past and not the future, we also control some macroscopic future events but no such events in the past.  These two aspects of the “psychological arrow of time” both line up, for deep physical reasons, with the thermodynamic arrow of time.

We are now in a position to understand:

we learn our ideas of causation from the lawfulness of nature and from the directionality of the second law of thermodynamics that lead to the commonsense view that causes precede their effects

Causes precede their effects in our experience, because when we deliberately cause things, those things are in the future, in thermodynamic time, and hence also in psychological time.

Yay hooray!  We understood one sentence!  Let’s go for two:

But then we have learned that the laws of physics are CPT invariant (essentially the same in each direction of time), so in a fundamental sense the future determines the past just as much as the past determines the future.

Wait, whaaaat?  Paul Torek just said that we only deliberately cause events that lie in our future, and now he quotes Don Page (with approval) saying the future determines the past just as much as the past determines the future??

Yes, but look closer.  That “deliberately” is important.  But first, we need a clearer concept of “causing”.  Let’s borrow from Judea Pearl’s book Causality.  We’ll just suppose that we can set the value of some variable, and see what happens to the probability of other variables.  For example if we want to know if smoking causes cancer, we set Do(smoking)=True, and see what happens to the probability of cancer.  If it goes up, the answer is yes.

So what happens if we set Do(change-sparkplug-wires)=True?  Does the probability of Better Engine Performance Tomorrow go up?  Yes, quite a lot.  Does the probability of Better Engine Performance Yesterday go up?  No, not at all.  Does the probability of a particular quark being here, rather than there, a minute after the Big Bang, change?  Maybe!  It depends on which quark we have in mind, and where “here” and “there” are, exactly; but if we spell out the exact motions that Do(change-wires) involves, and so on, we could in principle derive new probabilities for the early-universe conditions, which would in some cases be higher or lower than in the scenario where Do(change-wires)=False.  Because as Don Page points out, the laws of physics are CPT-invariant, which means that if we reverse Charge, Parity, and Time, we get the same equations.
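Here is a minimal simulation of that Pearl-style intervention, using a made-up toy causal model (every probability below is mine, chosen purely for illustration): we force the value of smoking and compare the resulting frequencies of cancer.

```python
import random

# Toy structural causal model (all probabilities invented for illustration):
# genetics -> smoking, genetics -> cancer, smoking -> cancer.
def sample(do_smoking=None):
    genetics = random.random() < 0.3
    if do_smoking is None:
        smoking = random.random() < (0.6 if genetics else 0.2)
    else:
        smoking = do_smoking  # the Do(): cut smoking loose from its usual causes
    p_cancer = 0.01 + (0.10 if smoking else 0.0) + (0.05 if genetics else 0.0)
    return random.random() < p_cancer

def p_cancer_given(do_smoking, n=100_000):
    random.seed(0)  # fixed seed so the comparison is reproducible
    return sum(sample(do_smoking) for _ in range(n)) / n

p1 = p_cancer_given(True)
p0 = p_cancer_given(False)
print(p1, p0)  # P(cancer | do(smoke)) > P(cancer | do(don't)): smoking causes cancer
```

The key move is in the `else` branch: under Do(), we set the variable directly instead of letting its normal causes determine it, which is exactly what distinguishes intervening from merely observing.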

Actually, CPT invariance is more than we need to make the relevant point.  Given that CPT invariance is true, it’s pretty easy to see that if we can use the equations of physics to derive future conditions from past ones, we can just as easily use them to derive past conditions from future ones.  Let’s call the italicized part of that sentence “bidirectionality”.  CPT invariance implies bidirectionality, but the reverse is not true.
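Bidirectionality can be illustrated with a toy dynamical system (my own sketch, not drawn from any of the papers listed above): the leapfrog integrator for a harmonic oscillator is time-reversible, so given the final state we can recover the initial state just by running the same equations with the velocity reversed.

```python
# Bidirectionality sketch (my illustration): the leapfrog scheme for a
# harmonic oscillator is time-reversible, so future states determine past
# states exactly as well as past states determine future ones.

def leapfrog(x, v, dt, steps):
    for _ in range(steps):
        v += -x * dt / 2   # half kick (force = -x, unit mass and spring)
        x += v * dt        # drift
        v += -x * dt / 2   # half kick
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = leapfrog(x0, v0, dt=0.01, steps=1000)   # "predict the future"
# Flip the velocity and integrate again with the SAME equations:
# this "retrodicts the past" from the future state.
x2, v2 = leapfrog(x1, -v1, dt=0.01, steps=1000)
print(abs(x2 - x0), abs(-v2 - v0))  # both tiny: the initial state is recovered
```

In exact arithmetic the recovery is perfect; in floating point it is accurate to roughly machine precision.  The same equations run equally well in either direction, which is all “bidirectionality” asserts.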

So:  using a Pearl-esque definition of causality, we do indeed cause events in the past.  It’s just that none of the events we care about are among them!  So, sorry, we cannot make the Detroit Tigers win the 2006 World Series.  We cannot have you-yesterday make that witty comeback to your annoying colleague, that you just thought of today.  All of those things – things you care about, things you (by utter non-coincidence!) remember – lie on the wrong side of a thermodynamic/entropic gradient, and you can’t touch them.  Alas.

We now understand two of the sentences from that beautiful passage I quoted to start, and there’s only one more.  I have only a brief comment on the third sentence:

I agree that just from our experience of the one-way causation we observe within the universe, which is just a merely effective description and not fundamental, we cannot logically derive the conclusion that the entire universe has a cause

Our one-way causation is at the macroscopic level, where we do our living.  And indeed, we cannot derive the conclusion that the entire universe has a cause.  But then, we couldn’t derive that, even if the one-way-ness were fundamental.  From

for all X, Y: (X causes Y) -> (X precedes Y)

it would not follow that

for all Y, there exists X: X causes Y.
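That non-entailment can even be checked mechanically with a tiny finite countermodel (my own construction): three events in which every causal pair respects temporal precedence, yet some events have no cause at all.

```python
# A tiny finite countermodel: "every cause precedes its effect" holds,
# while "every event has a cause" fails.

events = ["a", "b", "c"]
time = {"a": 1, "b": 2, "c": 3}
causes = {("a", "b")}  # a causes b; nothing causes a or c

# Premise: for all X, Y: (X causes Y) -> (X precedes Y)
premise = all(time[x] < time[y] for (x, y) in causes)

# Conclusion: for all Y, there exists X: X causes Y
conclusion = all(any((x, y) in causes for x in events) for y in events)

print(premise, conclusion)  # True, False: the premise does not entail the conclusion
```

Events "a" and "c" are uncaused in this model, yet the precedence premise is vacuously satisfied for them – a one-screen witness that the inference fails.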

But to take a step back and look at the big picture, what Don Page seems to be getting at, is that people take their own experience, and project it onto the universe as a whole.  They reason something like the following.  I do stuff, making the future certain ways.  Maybe the whole universe is like that, and Someone made it happen!  I’ve never seen a time which didn’t have a time one second earlier than it – there couldn’t possibly be a beginning of time!  I use causal relations to exert control – therefore all causality is control!  I can control some future events I care about, but not the past – therefore causality and control run strictly from past to present to future!

All those inferences do seem to have something in common.

For anyone interested in the philosophy-of-physics issues that I’ve discussed here, if you have time to watch a video, I recommend Jenann Ismael’s talk at 0:54:00 – 1:39:00 or so in the conference recording.  You will need Microsoft Silverlight, a free download, which has an Apple OS compatible version.

Why “no ghost, no machine”?

A very common metaphysical view of human beings is that we are part ghost and part machine.   And I mean “metaphysical” in both main senses of the word:  the part of philosophy that is about the fundamental kinds of things that exist and how they relate to each other, and also the “spooky irrational beliefs” sense of “metaphysical”.  The ghost is supposed to be immaterial, spiritual, invisible and intangible.  The machine is supposed to be, well, a machine.  I don’t believe in the ghost or the machine.

The best answer to the ghost+machine metaphysic that I’ve ever seen is Bakunin’s, which I’ll quote at length:

Idealists of all schools, aristocrats and bourgeois, theologians and metaphysicians, politicians and moralists, religionists, philosophers, or poets, not forgetting the liberal economists – unbounded worshippers of the ideal, as we know – are much offended when told that man, with his magnificent intelligence, his sublime ideas, and his boundless aspirations, is, like all else existing in the world, nothing but matter, only a product of vile matter.

We may answer that the matter of which materialists speak, matter spontaneously and eternally mobile, active, productive, matter chemically or organically determined and manifested by the properties or forces, mechanical, physical, animal, and intelligent, which necessarily belong to it – that this matter has nothing in common with the vile matter of the idealists. The latter, a product of their false abstraction, is indeed a stupid, inanimate, immobile thing, incapable of giving birth to the smallest product, a caput mortuum, an ugly fancy in contrast to the beautiful fancy which they call God; as the opposite of this supreme being, matter, their matter, stripped of all that constitutes its real nature, necessarily represents supreme nothingness.

–Mikhail Bakunin, God and the State