Humean laws and Humean mosaic

David Hume’s modern successors sometimes speak of a “mosaic” of particular facts or events. In my last post, I wrote:

The “Humean mosaic” is the information about local particular facts – properties at particular spacetime locations. I’ll come back to that next time.

I should have said “properties and relations” or maybe even “properties and relations held by objects.” (However, David Hume himself was very skeptical about objects, leading to the first part of the quip by some of his critics that Hume’s philosophy amounted to “No matter, never mind.”)

So, modern Humeans want to analyze natural laws as being convenient summaries of – and therefore secondary to – matters of properties and relations occurring at particular spacetime locations. In philosophy jargon, Humeans say laws are supervenient upon the local matters of fact. For examples of local facts: the ignition of the stove at 5:00 pm and the boiling of water at 5:05 pm. Or the travel of a particular water molecule away from its neighboring water molecules at 5:05:00.0001 pm. Etc.

But wait a second. Independent of other events in spacetime and independent of laws, what does it mean to say “water” or “boiling” or “travel”? Put aside for a moment (but not forever!) the question of how we know that something is water. What would it mean to say that’s water – what would water-ness be – if we take away any implications about what happened at the previous moment and what happens at the next moment? What if a “water” molecule need not break down salt, need not attract other water molecules with its electrical dipole, need not do any of the things water does? If a water molecule could start doing what an elephant does, and vice versa, what on earth distinguishes “water” from “elephant”? It’s not like objects each contain a tiny label written in Mandarin stating what kind of object they are. Nor do properties and relations bear labels. Nor do locations in spacetime.

For a water molecule (or whatever) to travel from A to B, there must be a lot in common along the spacetime path from A to B, enough to let us trace the “water molecule” fact along this path. It’s not like there is a label on the molecule saying (translated from the Mandarin) “I am water molecule #72.” Well, some philosophers might want to go near there, but definitely not a Humean who wants a lean, mean, science-friendly ontology.

Instead of making laws supervenient upon, i.e. secondary to, local properties and relations – instead of playing these metaphysical penis envy games – we need a package deal of laws and properties. The phrase “package deal” is Barry Loewer’s, and apt – it puts laws and properties on a par both metaphysically and epistemically.

Humeanism about laws

In philosophy of science, a common view about laws of nature is Humean, i.e. inspired by David Hume. I should probably say “family of views”. Recently Jenann Ismael wrote an excellent paper explaining her disillusionment with Humeanism. In this post I’ll try to summarize it. In the next, I’ll give another reason to be suspicious of Humeanism about natural laws.

Hume claimed that there was “no necessary connection” between distinct events. Rather, we form habits of expectation that one event will be followed by another, and we say that the former event “caused” the latter. We formulate scientific laws such as that water boils at 212 degrees Fahrenheit (at 1 atmosphere), which implies that if you heat water to that temperature you’ll cause it to boil. Is there, then, a hidden Essence of Water that consists partly in a disposition to boil at 212? Is there an eternal Law Of Nature, standing apart from the goings-on in the universe, and ruling over them, compelling water to behave this way? (To parody “Aristotelian” and “Platonist” views, respectively.)

No, says Hume. No, say his modern descendants. Hume’s own view comes dangerously close to suggesting that causality is a projection, in an almost Freudian sense, of human thought onto the world. But recent advocates have a way to avoid subjectivism about natural laws. Most prominently, David Lewis’s “Best Systems Analysis” has it that laws are efficient compression rules to capture the regularities in the universe. Long story short, laws make a long story short. For example, in a deterministic (in both time-directions) universe, you don’t have to list every event in history to describe the universe. You “only” have to describe one instant, then list the natural laws. (For relativity buffs: this assumes the spacetime can be foliated.) That’s a vast reduction in descriptive complexity. Such laws have great explanatory power – which is a virtue in Best Systems Analysis, and also in the practice of actual scientists. This congenial fit is a good sign!

For more on Best Systems Analysis, see Terence Tomkow’s Computational Theory of the Laws of Nature. Tomkow compares laws to the compression rules in a computer’s compressed (zipped) version of a file (and in the program that makes the compressions). The original file corresponds to the actual universe in all its glory and all its boring repetitions.
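To make the analogy concrete, here’s a toy sketch (mine, not Tomkow’s code): a “universe” generated by a short rule compresses enormously, while a lawless one barely compresses at all.

```python
# Toy illustration of laws-as-compression (my sketch, not Tomkow's).
# A "universe" generated by a short rule zips down dramatically; a
# random, lawless one does not.
import random
import zlib

random.seed(0)
lawful = bytes((i * i) % 251 for i in range(100_000))           # short generating rule
lawless = bytes(random.randrange(256) for _ in range(100_000))  # no rule at all

for name, world in (("lawful", lawful), ("lawless", lawless)):
    packed = zlib.compress(world, 9)
    print(f"{name}: {len(world)} bytes -> {len(packed)} bytes")
```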

How does a Best Systems Analyst perform such Analysis? Here’s Jenann Ismael:

The idea was that science gathered a large and wide-ranging body of information about local matters of particular fact and systematized that body of fact using the methods that scientists actually use. … There were no relations among universals, no irreducible modal forces or anything added to the Humean mosaic to enforce laws. (pp. 43-44)

The “Humean mosaic” is the information about local particular facts – properties at particular spacetime locations. I’ll come back to that next time.

In addition to natural laws, Lewis had a theory of chances, where chances are supposed to be relatively objective facts about probability. Ismael writes:

Lewis … introduced the Principal Principle (PP) as an implicit definition of chance that identified chances by the role they play guiding belief. What the Principle said in its original formulation was that one should adjust one’s credence to the chances no matter what other information one has, except in the presence of inadmissible information:

PP: cr(A | 〈ch_t(A) = x〉 ∧ E) = x, provided that E is admissible with respect to 〈ch_t(A) = x〉

Where cr(A) is one’s credence in A at some time t and ch_t(A) is the chance of A at t. The restriction to admissible information was needed to discount cases where PP clearly becomes inapplicable; e.g., when one possesses information from the future of the sort one might get from a crystal ball or a privileged communication from God. (pp. 44-45)

Objective(ish) facts about probability, if there are any, must include future patterns of events as well as past ones. But for that reason among others, we don’t generally know what the chances are. After much review of philosophical history, Ismael suggests that Humeans generally favor this generalization of PP:

GPP: cr(A) := Σ_i cr(ch_i) · ch_i(A), where ch_i(A) is the chance assigned to A by epistemically possible theory of chance ch_i. (p. 47)

And most Humeans would be Bayesians about where the credences cr come from. As long as one doesn’t assign zero prior probability to any theory, the idea goes, with enough evidence eventually one will update so that approximately-true assignments of chances are given high credence.
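Here is a minimal sketch (mine, not Ismael’s) of that hope, with coin biases standing in for theories of chance:

```python
# A toy sketch of the Humean-Bayesian hope: give every candidate "theory
# of chance" a nonzero prior, update on evidence, and credence concentrates
# on the approximately true theory. Candidate theories here are coin biases.
import numpy as np

rng = np.random.default_rng(1)
theories = np.array([0.1, 0.3, 0.5, 0.7, 0.9])         # candidate chances of heads
posterior = np.full(len(theories), 1 / len(theories))  # nonzero prior on each

true_chance = 0.7
for heads in rng.random(1000) < true_chance:           # simulated coin flips
    posterior *= theories if heads else 1 - theories   # Bayes update
    posterior /= posterior.sum()

print(dict(zip(theories.tolist(), posterior.round(4))))  # mass piles onto 0.7
# GPP-style credence in heads on the next flip: sum_i cr(ch_i) * ch_i(heads)
print("cr(heads) =", float(posterior @ theories))
```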

Now we’re in a position to state Ismael’s objection to this picture.

We start with three premises:

(i) The set of possible mosaics is obtained by a combinatorial principle; any assignment [of] physical quantities to spacetime points represents a possible mosaic;

(ii) The laws and chances are determined by a global criterion applied to the mosaic; and

(iii) The mosaic is indefinitely extendible.

Indefinite extendibility means just what it sounds like. It means that the Humean mosaic is open-ended; it stretches indefinitely into the future. Note that it doesn’t entail that the Humean mosaic is infinite. It just means that there is no particular finite size that it is constrained to be. (pp. 49-50)

Premise (i) is Hume’s “absence of necessary connection” between distinct events. Premise (ii) is a core feature of both Best Systems Analysis and the Principal Principle. And here’s what Ismael says about premise (iii):

Why think the Humean mosaic is indefinitely extendible? There are two reasons. From a Humean perspective, to deny indefinite extendibility would be to hold that the existence of any collection of events was incompatible with the existence of some other. And that would be to deny Humeanism, because Humeanism was precisely the denial that there was any necessary connection between distinct existences. (p. 50)

And now we lose any reason to expect Bayesian convergence to approximate truth in our estimates of laws and chances. Any finite collection of evidence, such as is available to us now, is compatible with some far larger patch of Humean mosaic beyond our observations. And Humeanism tells us that the larger patch is unconstrained by our patch. Things might go very differently there, in ways that blow our favored theories of chances and laws out of the water. Out of all the finite possible ways the universe can be, even setting aside infinite ones, our patch has measure zero. (This last way of putting it is my own, but not a stretch.)

Ismael discusses a (verbal communication) response by Barry Loewer and David Albert to her argument. Their response is to restrict Bayesian priors to ones that favor induction. (For a relevant idea that I find appealing, see Solomonoff Induction.) Here’s Ismael’s diagnosis of the Loewer-Albert response:

Even though the metaphysics says that looking forward from any point in history, there are as many ways the world could be as we would get by assigning values of physical quantities to spacetime points in the future, the epistemology says that you must take as a pre-empirical assumption that the laws and chances derived from any large enough submanifold would reflect the laws and chances derived from a global systematization. This amounts [to] heavily weighting your priors to ignore all but a small sliver of Humeanly possible completions of the mosaic. Since the metaphysics is explicitly committed to combinatorial possibilities for the future, the only thing that keeps this from being flat-out inconsistent is that one reserves nominal possibility that the future might be among the vast majority of worlds whose overall systematization is different from that of the initial segment. (p. 57)

While not making Humeanism strictly inconsistent, Ismael says we have a better alternative. Namely, not to expect long-running correlations unless there are connections between (otherwise seemingly distinct) events. In other words, we can have our priors heavily leaning toward non-Humean metaphysics.

Can causation ever be symmetric?

TL;DR: it depends on what you mean by “causation”. But there is a best way to make “causation” precise, and on that way the answer is no.

In my previous post, I summarized Douglas Kutach’s carefully laid-out account of causality. The key points to remember here are:

Fundamentally, events are causally linked by determining (or by fixing probabilities for) each other. (p. 307)

A fundamental event c determines a fundamental event e iff the occurrence of c nomologically suffices for the occurrence of e (with e’s location relative to c being built into this relation). (p. 67)

Kutach, Causation and its Basis in Fundamental Physics

Location, in the above formula, means spatiotemporal location. Nomological means in accordance with natural laws. Nothing rules out that a past event can be “determined” by a present or future event in this sense. Arguably, the best interpretations of quantum mechanics and general relativity imply conservation of information. In that case, determination as defined by Kutach (and many other physicists and philosophers use the same definition) applies in both temporal directions. Thus, causation on this definition could routinely be symmetric, provided that spacetime has an appropriate foliation. The complete set of events at time t1 would determine those at time t2, and vice versa.

This isn’t quite as violent to common sense as it might sound. These reciprocal causes – the “complete set of events” at the given times – aren’t things in our everyday experience. They include microscopic details in enormous numbers like we never see – even, never can see. Meanwhile, back in the realm of human experience, it’s not like you can affect the outcome of a baseball game by cheering loudly for your team during a TV replay. At macroscopic scales, causality (on Kutach’s definition) is still a one-way street.

There are two other reasonably workable alternatives for talking about causality while respecting known physics. One involves insisting that causality is a one-way street, and any relationships which are not must be something else. Thus Sean Carroll excludes simple nomological relationships between fundamental particles from causality, precisely because the determination works in both directions. “Neither is a cause … there’s just a pattern that particles follow.” Causation only applies between macroscopic events involving thermodynamic (entropic) irreversibility. This definition respects the intuition that cause and effect are importantly different.

The third alternative also appeals to entropy, but only to establish a global arrow of time. Then we simply label everything at the lower-entropy times “causes” in cases where they are connected by natural law to later events. We label the later (higher-entropy-time) events their “effects”. We do this even for microscopic events that don’t contribute significantly to increasing entropy, or even if their process is completely reversible. Sabine Hossenfelder takes this route in her superdeterminism paper (see section 7). This has the advantage, if you consider it one, of attributing causality to simple microscopic processes. It has the disadvantage, if you consider it one, of making causality in principle external to the cause and the effect and their immediate relation to each other.

None of these alternatives allows causality to be (A) universal, applying to processes of any size or complexity, (B) asymmetric, and (C) internal to the spatiotemporal region connecting cause to effect (where “connection” is informed by the applicable laws of nature). That’s because nothing can satisfy all these demands, barring new physics we don’t know about.

Something’s gotta give. Which should it be? Or in other words, which way of speaking would be least confusing? Physicists are not likely to be very confused by any of the three choices should a consensus emerge around one of them. But laypeople will. Laypeople will bring their experience and intuitions to bear. For each intuition, the more that everyday experience bears on it, the stronger it will be. The weakest contender is clearly (A). We don’t have everyday experience of microscopic events. We do have experience of asymmetry, and of spatiotemporal confinement or locality (generally, a much more confined locality than Einstein’s, but the point remains).

Since physicists want to be able to talk to laypeople, I suggest that Sean Carroll’s definition of “causality” is the least troublesome option.

Causation and its Basis in Fundamental Physics, by Douglas Kutach

Background image: How to Philosophize with a Hammer by Randon Rosenbohm, from the cover, rotated 90 degrees to fit the page

Fundamentally, events are causally linked by determining (or by fixing probabilities for) each other. These relations amount to a comprehensive set of singular causal relations. Derivatively, events are causally linked by relations of probability-raising (or probability-lowering) understood as a form of difference-making or counterfactual dependence. We can say that contrastive events prob-influence coarse-grained events or that they fix other contrastive events. These prob-influence and fixing relations constitute a comprehensive set of general causal relations.

p. 307

There you have the bottom line. Now you’d probably like to know what some of the key words above are intended to mean. And/or, why is this an issue in the first place? Let’s start with the latter.

Causation is a widely discussed topic in the sciences. Yet, if we suspect that all objects and properties treated by the sciences are ultimately physical, and we look at the equations of fundamental physics, it can be hard to see where causation comes from. After all, causation is usually supposed (spoiler alert: but not by Kutach) to be irreflexive and asymmetric. However, our best candidates for fundamental physical equations allow us to mathematically derive past states from present ones, just as easily as we derive future states from present ones. This incongruity led Bertrand Russell to opine:

The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.

Bertrand Russell, ‘On the Notion of Cause’ (1913)

Philosophers mostly disagree with Russell, but propose various different, often conflicting, ways of locating causality in the real world. Kutach, in my view, gives a better analysis than any other philosopher I’ve encountered. (Physicist Sean M. Carroll is also excellent on the topic; their views differ terminologically, but are compatible in substance.)

OK, now for those key words: “Fundamental”, “determining”, “singular causal relations”, “coarse-grained events”, “contrastive events”, “probability-raising”, “difference-making or counterfactual dependence”. I have re-arranged the terms in the order I intend to present them.

Consider a world in which classical physics is correct, and which contains two rocks. The more massive rock moves slowly toward the left, contains 4 corpuscles, and is hot enough that one of its constituent corpuscles is moving to the right relative to the combined system’s rest frame. The less massive rock moves quickly to the right, is cold, and all 3 of its corpuscles move quickly to the right relative to that same frame. Suppose we calculate that each rock contains 0.5 Joules of mechanical energy, the small rock contains 0.5 Joules of thermal energy, and the large one contains 1.5 Joules of thermal energy. Well, that’s one way to look at it.

Alternatively, we can lump all the corpuscles together as one mechanical system, and treat each corpuscle’s motion relative to that combined system as embodying thermal energy. Suppose the weighted average velocity is zero. We will find that there are still 3 Joules of total energy, all of it now deemed thermal. Kutach proposes that it is reasonable to think of thermal and mechanical energy as derivative. He says “metaphysically” derivative – I think they should instead be regarded as epistemically derivative, as a feature of our scientific model; and I would use a different definition – but this need not worry us.
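Here is a toy version of the two decompositions (my numbers, invented for illustration; they don’t reproduce Kutach’s 0.5 J / 1.5 J figures):

```python
# Kinetic energy carved two ways: "mechanical" (bulk motion) plus "thermal"
# (motion relative to the bulk), either rock-by-rock or for the lumped
# system. The total comes out the same on both accountings.
import numpy as np

m = np.array([1.0, 1.0, 1.0, 1.0, 0.5, 0.5, 0.5])          # corpuscle masses
v = np.array([-0.25, -0.25, -0.25, 0.25, 1/3, 1/3, 1/3])   # velocities; total momentum = 0
rock = np.array([0, 0, 0, 0, 1, 1, 1])                     # rock membership

def mech_and_thermal(mask):
    """Bulk ('mechanical') vs relative ('thermal') kinetic energy of a subsystem."""
    M = m[mask].sum()
    v_cm = (m[mask] * v[mask]).sum() / M
    mech = 0.5 * M * v_cm**2
    therm = (0.5 * m[mask] * (v[mask] - v_cm)**2).sum()
    return mech, therm

total = (0.5 * m * v**2).sum()
print("total kinetic energy:", total)
print("per rock:", [mech_and_thermal(rock == i) for i in (0, 1)])    # sums to total
print("lumped:", mech_and_thermal(np.ones(7, dtype=bool)))           # mech = 0, all "thermal"
```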

Kutach’s definition of fundamental (p. 25):

  1. The way things are fundamentally is the way things really are.
  2. Fundamental reality is the only real basis for how things stand derivatively.
  3. Fundamental reality is as determinate as reality ever gets.
  4. Fundamental reality is consistent.

“Determining” (p. 67; note “nomologically” means according to natural laws)

A fundamental event c determines a fundamental event e iff the occurrence of c nomologically suffices for the occurrence of e (with e’s location relative to c being built into this relation).

Kutach uses/coins the word “terminance” rather than “causation” to talk about these determining relations. Part of the reason is his elaboration about e’s location relative to c, which other philosophers generally ignore. But a bigger reason is Bertrand Russell’s issue (p. 68):

determination is both reflexive and transitive, and it is compatible with being symmetric. Yet, philosophers tend to think of causation as irreflexive, non-symmetric, and not necessarily transitive.

This is just one of the many points at which Kutach avoids using familiar but controversial words, or proposes definitions of familiar words as suggestions for improving usage and clarity. This is a standard maneuver in philosophy, but applied repeatedly and remorselessly, it turns into a kind of superpower. At the end, Kutach is in a position to say something like: here is a set of relations which are clearly real, and which can explain a great deal of what we correctly believe about phenomena involving “causality”. Why not identify causality with these?

“Singular” causation refers to specific actual events, e.g. “Wind caused the collapse of the Tacoma Narrows bridge.” (p. 22) “Smoking causes cancer” would count as general causation.

“Coarse-grained” events are those which leave some fundamental physical details unspecified. For example, “if I stick a metal fork in this socket, I will be shocked” does not specify the exact number of iron atoms in the fork, but it may be true nonetheless. Kutach wisely leaves it up to scientists to specify how fundamental physical events can be usefully coarse-grained (pp. 201-2).

“Contrastive events” are each an ordered pair of coarse-grained events (p. 136), each of which has a probability distribution over all its fundamental members (p. 69). The probability need not be objectively fundamental (as QM is often interpreted). It can just be a reasonable distribution of expectations that a person assigns based on their available evidence. Kutach also sees “no reason my account cannot be adapted for more general probability-like concepts” (p. 69) that do not fit standard probability axioms. An example of contrastive events might be {Paul finishes this post in the next hour, Paul eats a meal in the next hour}. The events need not be mutually exclusive, but the contrastive event will be trivial unless some fundamental events count under one but not the other.

“Probability-raising” and “difference-making” are straightforward for contrastive events. Continuing the above example, the first event raises the probability of someone reading my blog an hour hence, and because it is a little harder to finish a post while eating, it slightly lowers the probability of there being fewer bananas in the house an hour hence. Assuming I do post, and you got a notification from WordPress, and then started reading, we would have an intuitively straightforward sense in which my posting made a difference to your reading.

In a later post, I’ll go into some implications of Kutach’s analysis. That includes whether Kutach’s “causation” is an antisymmetric relation in the real world, and if not, whether another terminological decision about what relations to call “causal” would be better.

Entropy, ignorance, and chaos

Two articles caught my eye lately. They’re only vaguely related, and yet… they both tell us a lot about how much ignorance we just have to live with.

In Physics Today, November 2021, Katie Robertson is concerned about the scientific and philosophical significance of entropy. Specifically, how and whether entropy depends on our human limitations, as contrasted to some hypothetical more-knowledgeable observer like Laplace’s demon. She writes:

But how should we understand probabilities in statistical mechanics? … Here we will narrow our attention to one dominant view, popularized by physicist Edwin Jaynes, which argues that the fundamental assumption of statistical mechanics stems from our ignorance of the microscopic details. Because the Jaynesian view emphasizes our own ignorance, it implicitly reinforces the idea that thermal physics is anthropocentric. We must assume each state is equally likely because we don’t know which exact microstate the system is in. Here we are confronted by our third and final philosophical specter: the demon first articulated by Pierre Simon Laplace in 1814 … Laplace’s demon is a hypothetical observer that knows the position and momentum of every molecule in the universe.

Katie Robertson, “The demons haunting thermodynamics”

She goes on to cite the Gibbs formula for entropy, which sums the quantity −p log p over all the possibilities. For Laplace’s demon, the probability of the known microstate is 1, so the demon calculates the Gibbs entropy to be zero.
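In symbols, the discrete form of the Gibbs formula (my transcription, not Robertson’s):

```latex
S \;=\; -k_B \sum_i p_i \ln p_i
% For Laplace's demon, the known microstate j has p_j = 1 and every other
% p_i = 0; with the convention p \ln p \to 0 as p \to 0, each term
% vanishes, so S = 0.
```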

She then proposes to banish Laplace’s demon:

Fortunately, it, too, can be exorcised by shifting to a quantum perspective on statistical mechanics. In classical statistical mechanics, probabilities are an additional ingredient added to the system’s microdynamics. … But in the quantum case, probabilities are already an inherent part of the theory, so there is no need to add ignorance to the picture.

But there is an apparent conflict between QM and statistical mechanics:

How can [quantum] probabilities give rise to the familiar probability distributions from statistical mechanics? That question is especially tricky because quantum mechanics assigns an isolated system a definite state known as a pure state. In contrast, statistical mechanics assigns such a system an inherently uncertain state known as a maximally mixed state, in which each possibility is equally likely. The distinctively quantum nature of entanglement holds the key to resolving that seeming conflict (see figure 5). Consider a qubit that is entangled with a surrounding heat bath. Because they are entangled, if one of the two systems is taken on its own, it will be in an intrinsically uncertain state known as a mixed state. Nonetheless, the composite system of the qubit taken together with the heat bath is in a pure state because when taken as a whole, it is isolated. Assuming that the surrounding environment—namely, the heat bath—is sufficiently large, then for almost any pure state that the composite system is in, the qubit will be in a state very, very close to the state it would be assigned by traditional statistical mechanics.

Figure 5 is the figure at the top of this post. The emphasis above is added, because I want to resist the claim that ignorance is not involved. The “almost any” qualifier reveals our ignorance. Because we don’t know which pure state the whole universe is in, and because all our simply formulable ways of describing the possibilities make her “almost any” statement true, it is a very good bet that statistical mechanics will guide us well. But it’s still a bet.
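The entanglement point is easy to verify numerically. A minimal numpy sketch (mine, not Robertson’s), with a two-qubit Bell pair standing in for the qubit-plus-bath composite:

```python
# The composite system is pure, yet either half on its own is maximally mixed.
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>) / sqrt(2)
rho = np.outer(bell, bell.conj())             # pure state of the composite

# Trace out the second qubit (the "bath"): reshape to (2,2,2,2) and
# contract the bath indices.
rho_qubit = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)

print(rho_qubit)                 # 0.5 * identity: maximally mixed
print(np.trace(rho @ rho).real)  # purity 1.0: the whole is still pure
```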

Note that this does not make thermal physics anthropocentric. There is nothing special about anthropoids here; any cognizer faces similar ignorance. As Robertson explains in her discussion of Maxwell’s demon (which I haven’t quoted; read her article for that), obtaining detailed knowledge of a system comes with an entropy cost. Laplace’s demon, tasked by his definition with obtaining such knowledge of the entire universe, runs out of room to dump the waste heat, and vanishes in a puff of logic. Laplace’s demon is physically impossible.

Now for the chaos.

In Aeon magazine, David Weinberger argues that “Our world is a black box: predictable but not understandable.” Machine learning algorithms, with their famous impenetrability, underlie his argument. In the excerpt below, MLM stands for Machine Learning Model:

But MLMs’ generalisations are unlike the traditional generalisations we use to explain particulars. We like traditional generalisations because (a) we can understand them; (b) they often enable deductive conclusions; and (c) we can apply them to particulars. But (a) an MLM’s generalisations are not always understandable; (b) they are statistical, probabilistic and primarily inductive; and (c) literally and practically, we usually cannot apply MLM generalisations except by running the machine learning model that resulted from them.

David Weinberger, “Learn from Machine Learning”

Weinberger says that rather than simply regarding these limitations as drawbacks, we should take them as clues to how the world actually works. Weinberger doesn’t directly discuss techniques to illuminate the sensitivities of neural networks, but he would probably point out that (a) – (c) above still apply, even after our best efforts along such lines.

Our encounter with MLMs doesn’t deny that there are generalisations, laws or principles. It denies that they are sufficient for understanding what happens in a universe as complex as ours. The contingent particulars, each affecting all others, overwhelm the explanatory power of the rules and would do so even if we knew all the rules.

Weinberger discusses a thought experiment that is basically a coin flip. If we wanted to know the exact final resting place and orientation of the coin, down to the smallest detail, we would need to be – you guessed it – Laplace’s demon.

That’s not a criticism of the pursuit of scientific laws, nor of the practice of science, which is usually empirical and sufficiently accurate for our needs – even if the degree of pragmatic accuracy possible silently shapes what we accept as our needs. But it should make us wonder why we in the West have treated the chaotic flow of the river we can’t step into twice as mere appearance, beneath which are the real and eternal principles of order that explain that flow. Why our ontological preference for the eternally unchanging over the eternally swirling water and dust?

Whaddayamean, “we”? There has always been a faction in Western thought that recognized chaos as real. Weinberger wants us to join that faction. Amen, brother.

Review: Carlo Rovelli, Reality is Not What it Seems

Carlo Rovelli is a big fan of loop quantum gravity, and of physics in general, and this book recaps the whole history of modern physics, at least partly in order to show how elegantly loop quantum gravity fits into place as a reasonable extrapolation. It’s an interesting and believable history, and the case for the plausibility of loop quantum gravity looks convincing to me. But then, I think I was an easy mark — since I already agreed with a series of strange (from the layperson’s point of view, at least) assertions Rovelli makes about known physics.

Rovelli inserts helpful diagrams every so often to summarize the history (and sometimes potential future) of “what there is” in the physical world according to physics. I can’t quite do justice to them so I use a table (please read it as one table).

Newton: Space, Time, Particles
Faraday & Maxwell: Space, Time, Fields, Particles
Einstein 1905: Spacetime, Fields, Particles
Einstein 1915: Covariant fields, Particles
Quantum mechanics: Spacetime, Quantum fields
Quantum gravity: Covariant quantum fields

In the transition from special relativity (1905) to general (1915), fields and spacetime are absorbed into “covariant fields”. This is because spacetime, Rovelli asserts (and I instinctively agree), is the gravitational field. So other fields like the electromagnetic field are covariant fields – fields that relate to each other in circumscribed ways. The curvature of spacetime depends on the energy (e.g. electromagnetic) present, and the behavior of electromagnetic fields depends on that curvature.

Rovelli likes to sum up some key features of each theory, and these summaries are very helpful. For QM, Rovelli lists three key principles:

  • Information is finite;
  • There is an elementary indeterminacy to the quantum state;
  • Reality is relational (QM describes interactions).

As a fan of Everettian QM, I don’t think we really need the indeterminacy principle. But it’s still true that we face an inevitable uncertainty every time we do a quantum experiment (it’s just that this is a kind of self-locating uncertainty).

Loop quantum gravity refines the “information is finite” principle to include spacetime as well. Not only are energy levels discrete; spacetime is also discrete. There is a smallest length and time scale. Rovelli identifies this as the Planck length (and time).

Rovelli explains loop quantum gravity as the quantization of gravity, deriving from the Wheeler-DeWitt equation. This equation can only be satisfied on closed lines aka loops. Where loops intersect, the points are called nodes, and the lines between nodes are called links. The entire network is called a graph, and also a “spin network” because the links are characterized by math familiar from the QM treatment of spin. Loop quantum gravity identifies the nodes with discrete indivisible volumes, and each link with the area of the surface dividing the two linked volumes.
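As bare-bones bookkeeping for the structure just described (my sketch; real spin networks label links with SU(2) representation data, not raw numbers):

```python
# Nodes carry quanta of volume; links carry quanta of area between the two
# volumes they divide. Plain numbers here are purely illustrative.
from dataclasses import dataclass, field

@dataclass
class SpinNetwork:
    volumes: dict = field(default_factory=dict)  # node -> volume quantum
    areas: dict = field(default_factory=dict)    # frozenset{node, node} -> area quantum

    def add_node(self, name, volume):
        self.volumes[name] = volume

    def link(self, a, b, area):
        self.areas[frozenset((a, b))] = area     # the surface dividing a and b

g = SpinNetwork()
g.add_node("n1", volume=1)   # two indivisible grains of space...
g.add_node("n2", volume=2)
g.link("n1", "n2", area=3)   # ...and the surface between them
```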

Rovelli is at pains to point out that the theory really says what it’s saying. For example: “photons exist in space, whereas the quanta of gravity constitute space itself. … Quanta of space have no place to be in, because they are themselves that place.” This warning might seem too obvious to be necessary, but that’s because I didn’t reproduce the graphs of spin networks in Rovelli’s book. (I lack the artistic talent and/or internet skillz.) You know, graphs that sit there in space for you to look at.

OK, that’s space, but what about time (and aren’t these still a spacetime)? This deserves a longish excerpt:

Space as an amorphous container of things disappears from physics with quantum gravity. Things (the quanta) do not inhabit space; they dwell one over the other, and space is the fabric of their neighboring relations. As we abandon the idea of space as an inert container, similarly we must abandon the idea of time as an inert flow, along which reality unfurls.

[…] As evidenced with the Wheeler-DeWitt equation, the fundamental equations no longer contain the time variable. Time emerges, like space, from the gravitational field.

Rovelli, chapter 7

Rovelli says loop quantum gravity hews closely to QM and relativity, so I assume we get a four-dimensional spacetime which obeys the laws of general relativity at macroscopic scales.

In a section of Chapter 11 called Thermal Time, Rovelli uses thermodynamics and information theory to explain why time seems to have a preferred direction, just as “down” seems to be a preferred direction in space near a massive body. When heat flows from a hot zone into the environment, entropy increases. Since entropy reductions of any significant size are absurdly improbable, these heat flows are irreversible processes. And since basically everything in the macroscopic world (and even cellular biology) involves irreversible processes, time “flows” for us. Nevertheless, at the elementary quantum level, where entropy is undefined (or trivially defined as zero – whichever way you want to play it) time has no preferred direction. All of this will be familiar to readers of my blog who slogged through my series on free will. This is the key reason scientific determinism isn’t the scary option-stealing beast that people intuitively think it is.

There was one small section in Chap. 10 on black holes that seemed to fail as an explanation. Or maybe I’m just dense. Since spacetime is granular and there is a minimal possible size, loop quantum gravity predicts that matter inside the event horizon of a black hole must bounce. The time dilation compared to the outside universe is very long, so an observer would see no effect for a very long time, but then the black hole would “explode”. But surely “explode” is not the right word? Intuitively it would seem that any bouncing energy should emerge at a comparable rate to that at which it entered, at least for matter entering during a period of relatively stable Schwarzschild radius. Maybe by “explode” Rovelli just means the black hole would “give off substantially more energy than the usual Hawking radiation”?

BBC botches physics in series on free will

The BBC recently came out with a three-part series on free will. Part 2 is about physics. If you’re going to infer lessons from physics, it helps to get the physics right. They don’t. Part 2 of the BBC series can be found here: https://www.bbc.com/reel/playlist/free-will?vpid=p086tg3m

The picture above analogizes a series of physical events to a chain of dominoes, in order to talk about cause and effect. But there’s something odd about this metaphor, if the dominoes are supposed to represent the physical universe: look at that first domino, in black. What makes it tip over? Something from outside the universe, a “god” so to speak, intervenes to set the whole thing in motion. We seem to have jumped from physics to theology.

This would just be a nit-pick, if the negligent treatment of the “start” in the model did not affect the conclusions drawn. But it does, as we will see.

But first let’s look at some additional physics mistakes in the video. Jim Al-Khalili says “When we think we’re making free choices, it’s just the laws of physics playing themselves out.” Well no, the laws of physics alone don’t cause anything. The laws of physics are rather abstract. If you want to understand how a concrete action came about, you need not just laws of physics but also what physicists call “boundary conditions”, AKA concrete reality. Especially bits of concrete reality that heavily interact with the action in question. For example, you. Of course, perhaps Al-Khalili didn’t mean “just the laws of physics” quite so literally. But it matters how you phrase things, especially when you accuse people of only thinking they’re making free choices. Your grounds for calling them mistaken had better not be based on distorted depictions of the physics.

From the “libertarian” side of the philosophical debate, Peter Tse makes a different mistake – or maybe just poorly worded statement: “Patterns of energy don’t obey the traditional laws of physics.” Unless he means “classical physics” (in which case: say “classical”), that’s not true. The Wikipedia article on Lagrangian mechanics is a good resource for seeing just how deeply physics treats patterns of energy. “The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant.”

Block universe as a loaf of bread, from BBC video

Since Einstein, physicists have known that space and time are not independent, but aspects of a single four-dimensional manifold, spacetime. For observers in different inertial reference frames, which direction counts as “time” will differ. A metaphor called the “block universe” is sometimes used to describe this, where we only depict two spatial dimensions and then repurpose the third to represent time. Jim Al-Khalili uses a loaf of bread, with different times being different slices.

The block universe is like a loaf. OK, let’s go with this metaphor: one end of the loaf is very hot (we call it the Big Bang) and the other is cold. There are certain patterns that stretch from one end of the loaf to the other. If we know the pattern (laws of physics) and we know the boundary conditions (full state of any slice) we can derive the state of any other slice. Why say that the hot end caused the cold end to be the way it is? Why not say that the cold end caused the state of the hot end? After all, the mathematical derivation works equally well in that direction. Better yet, why not admit that “causality” is a useless concept at the level of a complete description of the universe, and just look at the bidirectional laws of nature instead? Why not start your analysis in the middle (but nearer to the hot side), and work your way toward both ends? The last option is a lot more practical, since that middling point is where you are.
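To see the symmetry of “derivation” in miniature, here is a toy reversible dynamics (my sketch; a stand-in for time-symmetric laws, not a physical model):

```python
# With a reversible law, any slice plus the law yields any other slice,
# in either direction. The "law" here is just an invertible map on a
# tiny state.

def step(state):
    """One tick of a reversible 'law': rotate the triple and flip a bit."""
    x, y, z = state
    return (z ^ 1, x, y)

def unstep(state):
    """The same law, read in the other temporal direction."""
    x, y, z = state
    return (y, z, x ^ 1)

slice_now = (1, 0, 1)                      # full state of one slice
later = step(step(slice_now))              # derive a later slice...
assert unstep(unstep(later)) == slice_now  # ...or run the derivation backward
```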

The idea that the Big Bang is the Big Boss and we are just its slaves has no basis in science. Remember that “god” that tipped over the first domino? He’s creeping back in through the back door of Al-Khalili’s thinking. He thinks the Block Universe is dominated by its early times. You can only get such domination by swapping out a scientific view of time and causality, and sneaking in an intuitive picture of time and causality in its place.

Al-Khalili does that when he says “The past hasn’t gone … the future isn’t yet to be decided.” The narrator does that when she says “every single frame of that animation already exists and will exist forever.” Argh, no! Time is within the loaf! If you’re going to use a metaphor, stick with the structure you used to create it – don’t sneak your intuitive conception of time into the background while leaving scientific time in the foreground, now portrayed spatially.

Al-Khalili says “the future … is fixed, even though we don’t know it yet.” This conclusion would repeal the very laws of physics Al-Khalili was claiming to honor. The future is dependent on us because, to repeat myself, laws of physics must be applied to boundary conditions to derive a prediction about the future, and those boundary conditions include us.

Modern physics does destroy the traditional “solution” to the “problem of free will”. What these commentators don’t seem to notice is that it also destroys the traditional “problem” of free will. When you notice that your intuitive ideas of time and causality conflict with science, you need to figure out the full consequences of the science, not take one point from science and then re-apply your intuitive ideas. The future isn’t set in stone. It’s set in spacetime. And spacetime is lighter than air.

Free will, part 5 of 5-ish

Laws of nature / Causality / Determinism can be:

(A) Universal, applying to everything

(B) Unidirectional, making for controllers and the controlled

(C) Scientific.

But not more than two of (A)-(C). Causality is unidirectional and scientific, but not universal. Laws of nature are universal and scientific, but not unidirectional. Determinism as imagined in the Consequence Argument is universal and unidirectional, but not scientific. That’s why the Consequence Argument fails.

We think of the past as fixed and the future as open. Some people think science has shown that the fixed past is real and the open future is an illusion, but the truth is almost diametrically opposite. The idea that the whole past is fixed is an overgeneralization. It is a natural, and even rational, inference from our experiences as macroscopic beings, but still a mistake.

Even though (the evidence indicates) the past only depends microscopically on the present, what is advocated here is not a version of Lucretius and the “swerve”. It’s not that we get our freedom from microscopic past phenomena (such as quantum phenomena) in particular. The idea that freedom has to be handed down from past to present is wrongheaded to begin with. If in some particular case, a macroscopic past state did perfectly correlate with our macroscopic present action, that would still not be a problem: that macroscopic past state would then be up for grabs. (Aside for the really nerdy: This is why I am not a big fan of Christian List’s reply to the Consequence Argument, even though it may have a solid point. It concedes too much.)

Sourcehood arguments

An additional group of anti-free will arguments, vaguely similar to the Consequence Argument but different, are called sourcehood arguments. Let me just quote the first premise from the Stanford Encyclopedia of Philosophy article:

1. We act freely … only if we are the ultimate sources (originators, first causes) of at least some of our choices.

https://plato.stanford.edu/entries/incompatibilism-arguments/#SourArgu

This one wears its allegiance to a certain picture of time and causality on its sleeve. Why ultimate source? Why not just source? Because the proponent of the argument mistakenly thinks that physical events are in the general habit of bossing each other around, so that the only way we can avoid being controlled is to conjure something ex nihilo. Hopefully, we’ve covered this ground enough that the reader can see what’s wrong with that premise.

Moral Responsibility

People often do bad things when they could have done better things. Does that mean Retributivism is justified? (Hint: No.) Retributivism, on one definition, is the view that it’s intrinsically morally better that a wrongdoer suffer than that they do not, provided that they could have done otherwise.

Retributivism is not a metaphysical mistake. But in my view, it’s a moral mistake. Instead, punishment is justified when justifiable rules call for it, and discovering those rules depends on free and open moral dialogue among people who will be affected by the rule; people who are intent on reasoning together about how to get along. Others may not care to get along. We need a backstop to enforce livable social rules on those who would otherwise harm anyone who got in their way, and those who are a little more pro-social yet still go off the rails sometimes. But not everyone needs suffering to keep them in line, and those who do should not receive more than the minimum required.

There’s a more humane approach to justice that is common in many indigenous societies, and is making something of a comeback in ours. Here’s part of a transcript of an interview about restorative justice. Michel Martin is a show host, and Sujatha Baliga is a recent MacArthur Fellowship winner who works on restorative justice.

MARTIN: I’m glad you raised that as a crime of violence because I think many people may be familiar with a concept of restorative justice in connection with, you know, teenaged mischief, for example. Let’s say you deface somebody else’s football field before the big game, and they find out that you did it. And the consequence is you have to clean it up. In matters like this, in matters of serious crime and serious harm, where someone’s life is taken, where someone is seriously harmed, what, in your view, is the societal benefit of taking this approach?

BALIGA: Actually, restorative justice works best with more serious harms because we’re talking about people who are actually impacted. In that face-to-face dialogue, you can imagine it not having any heat or any value, really, in terms of the wake-up or the aha moments when we’re talking about graffiti versus when someone has actually entered someone’s home and taken their things, right? That’s a situation that calls for accountability, calls for a direct dialogue where someone takes responsibility for what they’ve done. So, to my mind, restorative justice – and it’s not just to my mind. There’s international data that shows that restorative justice is actually more effective with the more serious harms that people do to one another.

npr.org

Emphasis added. A humane approach to justice doesn’t depend on the denial of free will or moral responsibility. Quite the opposite, in this case.


Explaining away the “fixity” of the past (4 of 5ish)

Intuitively, we think of the future as open and the past as fixed. Meaning that the future is up to us, dependent on our actions, and the past is not; it’s independent of our actions. This way of thinking is very natural and goes deep. We think that being in the past makes those events fixed. But that’s wrong: it’s an oversimplification. It’s the fact that those events (the ones we are thinking of) represent a lower entropy state that makes them fixed. And a lower entropy state is a macrostate: it is realized by a large number of microscopic states which all count as the same state at some coarse-grained level, such as “the pressure of the air in this tire.”

Let us count the Ways

If all you know about “entropy” is that it’s related to “disorder” (true in a limited range of cases), the fact that entropy is only defined statistically will come as a surprise. But the classic definition of entropy given by Ludwig Boltzmann is S = k ln W. S is entropy, k is the Boltzmann constant, and W is the multiplicity: the count of the ways that the macroscopic state can be realized by various microscopic arrangements. The more ways a macrostate can be realized, the more probable it is. Because the numbers of microscopic states in question are enormous (18 grams of water contains 6 x 10^23 molecules, for example), the probabilities quickly become overwhelming for macroscopic systems. Ultimately, the increase of entropy is “merely” probabilistic. But those probabilities can come damn close to certainty.
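Here is a toy count of the Ways (my example): 100 gas “molecules” in a box, where the macrostate is how many sit in the left half.

```python
# Multiplicity W is a binomial coefficient, and S = k ln W, so entropy
# differences between macrostates are ln-ratios of multiplicities.
from math import comb, log

N = 100
W_even = comb(N, 50)   # ways to realize the 50/50 macrostate
W_skew = comb(N, 10)   # ways to realize the 90/10 macrostate

print(W_even / W_skew)       # ~6e15: the even macrostate utterly dominates
print(log(W_even / W_skew))  # entropy difference, in units of k: ~36
```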

Why are so many processes irreversible? By reversing a process, we mean: removing a present condition, to give the future a condition like the one had in the past.  For example, suppose I dropped an egg on the kitchen floor, making a mess.  Why can’t I undo that?  The molecules of egg shell and yolk are still there on the floor (and a few in the air), and they traced in-principle reversible paths (just looking at the micro-physics of molecular motion) to get there.  So why can’t I make an intact egg from this?

The answer is entropy, and therefore the count of the Ways.  There are many ways to get from a broken egg to a more-broken egg.  There are many orders of magnitude fewer ways to get from a broken egg to a whole egg.  One would have much better odds guessing the winning lottery number, rather than trying to find a manipulation that makes the egg whole.  There is some extremely narrow range of velocities of yolk and shell-bits such that if one launched the bits with just those velocities, molecules would in the immediate future bond to form whole egg-shell, with yolk inside – but finding those conditions, even aside from implementing them, is impossible in practice. Because the more-broken egg states so vastly outnumber the whole-egg states, our attempts to reverse the mess have vanishing probability of success.

On a local level, some macroscopic processes are reversible. I accidentally knock a book off a table; I pick it up and put it back. The room is unchanged, on a suitably coarse-grained analysis — but I have changed. I used up some glucose to do that mechanical work. I could eat some more food to get it back, but the growth of the relevant plants ultimately depends on thermodynamically irreversible processes in the sun. On a global analysis, even the restoration of the book to its place is an irreversible process.

The familiar part of the past is fixed …

Entropy thus explains why we can’t arrange the future to look just like the past.  The different problem of trying to affect the past faces similar obstacles.  The “immutability of the past” arises because the events we humans care about are human-sized, naturally enough, i.e. macroscopic.  Macroscopic changes in practice always involve entropy increases, and always leave myriad microphysical traces such as emitted sounds and reflected and radiated light and heat.  These go on to interact with large systems of particles, typically causing macroscopic consequences.  While phonons (quanta of sound) and photons follow CPT-reversible paths, that does not mean we can collect those microscopic energies and their macroscopic consequences in all the right places and arrange to have the past events that we want.  As in the broken egg case, even if we had the engineering skills to direct the energies, we face insurmountable information deficits.  We know neither where to put the bits, nor with what energy to launch them.

In addition to the time-asymmetry of control over macroscopic events, we have time-asymmetric knowledge, for closely related reasons.  Stephen Hawking connected the “psychological arrow of time”, based on memory, to the “entropic arrow of time”, which orients such that lower-entropy times count as past, and higher as future.  Mlodinow and Brun argue that if a memory system is capable of remembering more than one thing, and exists in an environment where entropy increases in one time-direction, then the recording of a memory happens at a lower-entropy time than its recall.  Our knowledge of the past is better than our knowledge of the future because we have memories of the past, which are records, and the creation of records requires increasing entropy.

Consider an example adapted from David Albert.  Suppose we now, at t1, observe the aftermath of an avalanche and want to know the position of a particular rock (call it r) an hour ago, at t0, the start of the avalanche.  We can attempt to retrodict it, using the present positions and shapes of r and all other nearby rocks, the shape of the remnant of the slope they fell down, the force of gravity, our best estimates of recent wind speeds, etc.  In this practically impossible endeavor, we would be trying to reconstruct the complete history of r between t0 and t1.  Or we might be lucky enough to have a photograph of r from t0, which has been kept safe and separate from the avalanche.  In that case our knowledge about r at t0 is independent of what happened to r after t0, although it does depend on some knowledge of the fate of the photograph.  As Albert writes [p. 57], “the fact that our experience of the world offers us such vivid and plentiful examples of this epistemic independence [of earlier events from later ones] very naturally brings with it the feeling of a causal and counterfactual independence as well.”

Contrast our knowledge of the future position of r an hour from now.  Here there are no records to consult, and prediction is our only option.  Almost any feature of r’s environment could be relevant to its future position, from further avalanches to freak weather events to meddling human beings.   The plenitude of causal handles on future events is what makes them so manipulable.

Note that it is not that our knowledge of the macroscopic past puts it beyond our control: we cannot keep past eggs from breaking even if we did not know about them.  Nor is it our ignorance of the future that gives us control over future macroscopic states (nor the illusion of control).  Rather, it is the increase of entropy over time, and the related fact that macroscopic changes typically leave macroscopic records at entropically-future times but not past times, that explains both the time-asymmetry of control and of memory.  A memory is a record of the past.  And a future macroscopic event (for example, a footprint) that we influence by a present act (walking in the mud) is a record of that act. If we could refer to a set of microphysical past events that did not pose insurmountable information deficits preventing us from seeing their relation to present events, might they become up to us? 

…But not the whole of the past is fixed

Yes, some microphysical arrangements, under a peculiar description, are up to us. We’ve been here before, in Betting on The Past, in the previous post in this series. There, you could guarantee that the past state of the world was such as to correspond, according to laws of nature, to your action to take Bet 2. You could do so just by taking Bet 2. Or you could guarantee that the microphysical states in question were those corresponding to your later action to take Bet 1. When you’re drawing a self-referential pie chart, you can fill it in however you like. Dealing with events specified in terms of their relation to you now is dealing in self-reference, regardless of whether those events are past, present, or future. Of course, you have no idea which microscopic events, described in microscopic terms, will have been different depending on your choice. But who cares? You have no need to know that in order to get what you want.

We’re used to the idea of asymmetric dependence relations between events, such as one causing another. And we’re used to the idea of independent events that have no link whatsoever. We’re not used to the idea of events and processes that are bidirectionally linked, with neither being master and neither being slave. But these bidirectional links are ubiquitous at the microscopic level. It is only by using our macroscopic concepts, and lumping together event-classes of various probabilities (various counts of microscopic ways to constitute the macroscopic properties), that we can find a unidirectional order in history.

There’s nothing wrong with attributing asymmetric causality to macroscopic processes – entropy and causality are reasonably well-defined there. But if we overgeneralize and attribute the asymmetry to all processes extending through time, we make a mistake. Indeed, following Hawking and Carroll [2010] and others, we can define “the arrow of time” as the direction in which entropy increases.

This gets really interesting when we consider cosmological theories which allow for times further from our time than the Big Bang, but at which entropy is higher than at the Big Bang. Don Page has a model like this for our universe. Sean Carroll and Jennifer Chen [2004] have a multiverse model with a similar feature, pictured below:

Carroll and Chen [2004] multiverse

The figure shows a parent universe spawning various baby universes. One of the ((great-(etc))grand)babies is ours. The parent universe has a timeline infinite in both directions, with a lowest (but not necessarily low!) entropy state in the middle. Observers in baby universes at the top of the diagram will think of the bottom of the diagram, including any baby universes and their occupants, as being in their past. And any observers in the babies at the bottom will return the favor. Each set of observers is equally entitled to their view. At the central time-slice, where entropy is approximately steady, there is no arrow of time. As one traverses the diagram from top to bottom, the arrow of time falters, then flips. Where the arrow of time points depends on where you sit. The direction of time and the flow of cause and effect are very different in modern physics than they are in our intuitions.

Another route to the same conclusion

So far we’ve effectively equated causation with entropy-increasing processes, where the cause is the lower-entropy state and the effect is the corresponding higher-entropy state. But there’s another way to approach causality, one which finds its roots in the way science and engineering investigations actually proceed. On Judea Pearl’s approach in his book Causality, an investigation starts with the delineation of the system being investigated. Then we construct directed acyclic graphs to try to model the system. For example, a slippery sidewalk may be thought to result from the weather and/or people watering their grass, as shown in the tentative causal model below, side (a):

[Figure: causal modeling example from Pearl, sides (a) and (b)]

Certain events and properties are considered endogenous, i.e. parts of the system (season, rain, …), and other variables are considered exogenous (the civil engineers investigating pedestrian safety, …). To test the model, and to determine causal relations within the system, we Do(X=x), where X is some system variable and x is one of its particular states. This Do(X=x), called an “intervention”, need not involve human action, despite the name. But it does need to involve an exogenous variable setting the value of X in a way that breaks any tendencies of other endogenous variables to raise or lower the probabilities of values of X. In side (b) of the diagram this shows up as the disappearance of the arrow from X1, season, to X3, sprinkler use. The usual effect, in which the season causes dry (wet) lawns and thus inspires sprinkler use (disuse), has been preempted by the engineer turning on a sprinkler to investigate pedestrian safety.
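
As a concrete illustration of what Do(X=x) does to the model, here is a minimal Python sketch of the sprinkler system, with invented probabilities and function names (a toy in the spirit of Pearl’s example, not code from Causality). Intervening on the sprinkler severs the arrow from season to sprinkler use, just as in side (b) of the diagram:

```python
import random

def sample(do_sprinkler=None):
    """Draw one state of the toy system; do_sprinkler models Do(Sprinkler=x)."""
    season = random.choice(["dry", "rainy"])
    rain = season == "rainy" and random.random() < 0.8
    if do_sprinkler is None:
        # Endogenous mechanism: people water their grass in the dry season.
        sprinkler = season == "dry" and random.random() < 0.7
    else:
        # Intervention: an exogenous agent fixes the value, erasing
        # the season -> sprinkler arrow (side (b) of the diagram).
        sprinkler = do_sprinkler
    wet = rain or sprinkler
    slippery = wet and random.random() < 0.9
    return {"season": season, "rain": rain, "sprinkler": sprinkler,
            "wet": wet, "slippery": slippery}

def prob_slippery(n=100_000, **kwargs):
    return sum(sample(**kwargs)["slippery"] for _ in range(n)) / n

print(prob_slippery())                   # observed: P(slippery)
print(prob_slippery(do_sprinkler=True))  # P(slippery) under Do(sprinkler=on)
```

The difference between the two printed numbers is the difference between observing the system and intervening on it: the second is computed in the mutilated graph, where the season no longer influences sprinkler use.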

As Pearl writes,

If you wish to include the entire universe in the model, causality disappears because interventions disappear—the manipulator and the manipulated [lose] their distinction. … The scientist carves a piece from the universe and proclaims that piece in – namely, the focus of the investigation. The rest of the universe is then considered out. …This choice of ins and outs creates asymmetry in the way we look at things and it is this asymmetry that permits us to talk about ‘outside intervention’ and hence about causality and cause-effect directionality.

Judea Pearl, Causality (2nd ed.): 419-420

It’s only by turning variables on and off from outside the system that we can put arrow-heads on the lines connecting one variable to another. In the universe as a whole, there is no “outside the system”, and we are left with undirected links.

In Judea Pearl’s exposition of the scientific investigation of causality, causality disappears at the whole-universe level. On the entropy-based definition of causality, causality doesn’t apply between fully (microscopically) specified descriptions of different times, because irreversibility applies only where the number of ways of making up the “effect” state is far greater than the number of ways of making up the “cause” state – and the number of ways to make up a fully specified state is exactly 1.

The bottom line

Laws of nature / Causality / Determinism can be:

(A) Universal, applying to everything

(B) Unidirectional, making for controllers and the controlled

(C) Scientific.

Choose not more than two.



References

Albert, David Z. After Physics. Cambridge: Harvard University Press, 2015.

Carroll, Sean M. From Eternity to Here: The Quest for the Ultimate Theory of Time. New York: Penguin, 2010.

Carroll, Sean M., and Jennifer Chen. “Spontaneous Inflation and the Origin of the Arrow of Time,” 2004. URL = <https://arxiv.org/abs/hep-th/0410270v1/>.

Hawking, Stephen. A Brief History of Time. New York: Bantam Books, 1988.

Mlodinow, Leonard, and Todd A. Brun. “Relation between the psychological and thermodynamic arrows of time,” Phys. Rev. E 89: 052102, 2014.

Page, Don. “Symmetric Bounce Quantum State of the Universe,” 2009. URL = <https://arxiv.org/abs/0907.1893v4/>.

Pearl, Judea. Causality: Models, Reasoning, and Inference. New York: Cambridge University Press, 2000 (2nd edition 2009).

Arrow Dynamics of Time

A deep look at science shows that time and causality don’t work the way most of us intuitively think they do. For example, some models of cosmology, such as the one advocated by Sean Carroll in From Eternity to Here, claim that at some time in our past the (ancestor of our) universe was at minimum entropy. At times still further from ours, its entropy was larger than that, and in its daughter universes on that other side of the minimum, entropy may grow as one goes further into (what we consider) the past. So far, no big deal. However, as Carroll also argues, it appears that everything we experience as making time “flow” in one direction can be explained by the gradient of entropy. As far as we know, it is physically possible that intelligent beings exist(ed) at some time in those daughter universes and perceive(d) time to flow in the opposite direction. And their viewpoint is just as valid as ours.

Which direction the arrow of time points depends on where and when you sit. Arrow dynamics. I will go a long way for a pun.

This – like other strange and wonderful discoveries of science – obviously has serious potential to change some philosophical thinking. The area of philosophy I am most interested in, in this connection, is “the problem of free will and determinism”. Most of the classic statements of this problem assume things about causality that find no place in modern science. So here I list some resources that shed light on these issues.

Carl Hoefer points out that well-known deterministic scientific theories are bidirectional in time: that is, they allow us to infer from the present or future to the past just as easily as from past to future.

Huw Price and Ken Wharton explain how “retrocausal” QM theories can account for the known violations of Bell inequalities.

Yakir Aharonov and Lev Vaidman discuss the Two State Vector Formalism (TSVF), an empirically equivalent formulation of standard QM that wears its time symmetry on its sleeve; and Aharonov et al. apply TSVF to explain weak measurement experiments. Guido Bacciagaluppi uses an alternative formalism to argue that a time-directed interpretation of probabilities, if adopted, should be both contingent and perspectival.

E. T. Jaynes partially explains the relationship between entropy and information.

Eric Lutz and Sergio Ciliberto discuss experiments on information storage and entropy changes.

Steven Savitt explores Being and Becoming in Modern Physics.

Larry Sklar says that “The great problem remains in trying to show that the entropic asymmetry is explanatorily adequate to account for all the other [time] asymmetries in the way that the gravitational asymmetry can account for the distinction of up and down.”

Craig Callender discusses the relationships between the thermodynamic (entropic) arrow of time and other intuitively appealing arrows, like the epistemic arrow (memory), the mutability arrow (our actions affect the future), and the explanatory arrow.

Mlodinow and Brun show that given plausible physical assumptions, recording and then reading a robust memory always proceeds in the direction of increasing entropy.  H. M. Doss places their work in a larger context.

In a tour de force, Jenann Ismael explains (0:55:00 – 1:38:00) why we see the past as fixed and the future as something we can bring about.  This one requires Microsoft Silverlight to view, which is a pain, but worth it.