Я считаю, что совершенно неважно, кто и как будет в партии голосовать; но вот что чрезвычайно важно, это кто и как будет считать голоса.
I regard it as completely unimportant who in the party will vote and how, but it is extremely important who will count the votes and how.
attributed to Stalin by Boris Bazhanov, Stalin’s former personal secretary
Today (Jan 6, 2021), Congress will feature a dispute over whether to approve the Electoral College results. It is a foregone conclusion, because Democrats control the House, and it would require a majority of both the House and Senate to override the validity of the Electors that were sent by the states. Because it’s a foregone conclusion, the number of Republicans joining this putsch will be much smaller than it otherwise might have been. In other words: the symptoms will not reveal the full power of the underlying disease.
The National Archives has a useful document on the rules of the process. Part of it reads:
Upon such reading of any such certificate or paper, the President of the Senate shall call for objections, if any. Every objection shall be made in writing, and shall state clearly and concisely, and without argument, the ground thereof, and shall be signed by at least one Senator and one Member of the House of Representatives before the same shall be received. When all objections so made to any vote or paper from a State shall have been received and read, the Senate shall thereupon withdraw, and such objections shall be submitted to the Senate for its decision; and the Speaker of the House of Representatives shall, in like manner, submit such objections to the House of Representatives for its decision; and no electoral vote or votes from any State which shall have been regularly given by electors whose appointment has been lawfully certified to according to section 6 of this title from which but one return has been received shall be rejected, but the two Houses concurrently may reject the vote or votes when they agree that such vote or votes have not been so regularly given by electors whose appointment has been so certified.
So, all a faction needs to install whomever it wants as the next President and Vice President, is a majority in both Houses of Congress, and enough gall to ignore the whole democracy thing. No court would seem to have any jurisdiction over the process. All the referees have skin in the game. This seems like a glaring flaw.
Right now, we only have one party with widespread preference for their favorite conspiracy theories over the actual tallies of votes by actual voters. And even in that party, there are plenty who strongly prefer democracy. But it is not obvious why the situation in both parties will not get worse. The media are still largely following policies that encourage a race to the bottom.
After today, the pundits will congratulate us and themselves, saying the system worked. Maybe, if by “the system worked” you mean that we got lucky this time. But I can hear the ghost of Stalin (or maybe it’s just Bazhanov; all Russian ghosts sound the same to me) laughing behind our backs.
Update Jan 7: Boy, was I barking at the wrong threat! I mean, we still have to fix this "Congress counts the votes" thing, but only after taking better measures to stop simple thuggery.
Joseph E Davis has a featured post in the Aeon/Psyche newsletter titled “Let’s avoid talk of ‘chemical imbalance’: it’s people in distress.” Davis argues that “chemical imbalance” is drastically oversimplified, and distracts from more personal and more effective treatments. I think he’s basically right. (Full disclosure: my wife is a psychologist.)
How could a treatment based in verbal exchange of fuzzy human concepts and memories outperform a treatment grounded in scientific studies of the brain? Surely I’m not denying that neurotransmitters make a difference to how a person feels and behaves? Well of course not: feelings and behaviors have to be implemented somewhere, and it’s not your left pinky toe! Even if you believe in an immaterial soul that controls what you feel and do, the control has to enter the body somewhere, and the brain is the only remotely plausible candidate (if any candidate is, which is debatable).
But then, personal encounters and verbal exchanges also affect the brain. Memories are laid down by changing the neural wiring, among other possible effects. Neurotransmitters bring about signaling across synapses, but learning affects where those signals go.
analyses of the published and the unpublished clinical trial data are consistent in showing that most (if not all) of the benefits of antidepressants in the treatment of depression and anxiety are due to the placebo response, and the difference in improvement between drug and placebo is not clinically meaningful and may be due to breaking blind by both patients and clinicians. … Other treatments (e.g., psychotherapy and physical exercise) produce the same benefits as antidepressants and do so without the side effects and health risks of the active drugs. Psychotherapy and placebo treatments also show a lower relapse rate than that reported for antidepressant medication.
It’s important to remember that a placebo effect IS an effect. It can be considerably better than nothing.
If psychotherapy is so great, why doesn’t it sell better? Davis writes:
[Jenna, a depressed patient] told me she welcomed the diagnosis of a neurobiological disorder, which confirmed her problem was ‘real’ – brought on by a physiological force external to her volition – and that it showed she’s not ‘just a slacker’. At the same time, Jenna was careful to distance her experience from that of people who are, in her words, ‘crazy’ or ‘nuts’. Their illness means a loss of control and ability to function. By contrast, she sees her problem as a common and minor glitch in neurochemistry. No one, she insisted, should mistake her for the mentally ill.
The stigmatization of mental problems is the problem. Ironically, as Davis explains but I won’t quote, the “chemical imbalance” story has if anything aggravated stigmatization.
He contrasts them – kinda. So do I, but for different reasons. Here are the two bottom lines from Justin Clarke-Doane’s paper “The ethics–mathematics analogy” in Philosophy Compass 2019:
This argument is a kind of radicalization of Moore’s Open Question Argument. … The point … is that an agent may know that A is F, for any property, F, whether descriptive or ethical, while failing to endorse A. … if the argument works, it works for any normative properties, whether ethical, epistemic, prudential, or all-things-considered.
In general, if one is an ethical anti-realist on the basis of epistemological considerations, then one ought to be a mathematical anti-realist too. And, yet, ethical and mathematical realism do not stand or fall together. Ethical questions, insofar as they are practical, cannot fail to be objective in a way that mathematical questions can.
But what does he mean by “practical” near the end of the second passage? Clarke-Doane repeatedly refers to “whether to do” what the ethical (or epistemic or prudential) norm says to do. Apparently a “practical” question is one that settles whether to do X, for some particular X.
Before we evaluate whether “whether to do X” questions can “fail to be objective”, I should explain how certain mathematical questions can fail to be objective, on Clarke-Doane’s view. That is because mathematical pluralism is true of at least some mathematical domains. (I know little about philosophy of mathematics, but I must say I find mathematical pluralism highly plausible.)
Clarke-Doane: “Just as Euclidean and hyperbolic geometries are equally true, albeit true of different structures, the mathematical pluralist maintains that foundational theories, like (pure) set theories, are too. It is as though the most uncompromising mathematical relativism were true.” And: “At first approximation, mathematical pluralism says that any (first-order) consistent mathematical theory is true of the entities of which it is about.” On this basis Clarke-Doane concludes that mathematics, if pluralists are correct, is truth-bearing but not objective. I’ll take this as partially definitive of what “objective” means here. So I guess this means: if you get to pick which theory to use, it’s not “objective”.
How might one conceive or defend an ethical pluralism comparable to mathematical pluralism? Clarke-Doane asks us to consider an “ethics-like” system ethics*, which has slightly different norms and as a result tells us not to do some particular X that ethics tells us to do. Then we might wonder whether to do what ethics tells us to do in the situation, or what ethics* tells us to do. As for why ethical pluralism might be defensible, Clarke-Doane suggests that Cornell Realism implies it, as do moral functionalism and Scanlon’s metaethical views. I call my own view “Cornell Constructivism”, but that’s for another time.
Of Clarke-Doane’s two bottom lines, I agree with the first and a small part of the second. The first was that one can accept that A is F, for any normative property F, and yet not endorse it. But this undercuts Clarke-Doane’s claim in the second bottom line that ethics is “practical” in his sense. Of course it may be practical for some people – ethical people. Highly ethical people may see no daylight between concluding that an act is right, and endorsing it and going for it. On the other hand, extremely sociopathic people might see no attraction at all in the ethical. And turning to philosophical thought-experiments, it seems easy to conceive a demon who regards the ethical as a property to be avoided at any cost.
Clarke-Doane might reply that you either do X, or do not, and that is what makes it objective. But that you do X (or not) does not imply that you ever evaluated X at all. I’m really not sure what Clarke-Doane is getting at, and I worry that I’ve overlooked a better interpretation. But I can find no interpretation that truly logically connects from “ethics is practical” to “it cannot fail to be objective” and also makes both plausible.
I agree all too much that “ethical and mathematical realism do not stand or fall together” – too much to have nearly as much patience with the ethics-mathematics analogy as Clarke-Doane does. Ethics is bound up with experience in ways that make the analogy a non-starter. Ethics is about how we can flourish and get along: we who address ethical reasoning and justifications to each other, we who accept or reject these reasons and justifications and propose alternatives. In order to determine whether our interlocutors can reasonably accept our proposals, we have to study and listen to them. In order to check whether we reasonably make the proposals, we have to study ourselves – and our common humanity will allow this to shed light on others.
Ethics isn’t a priori. It’s mired in empirical learning.
Justin Clarke-Doane has done philosophy an enormous favor by radicalizing – to the point of absurdity – Moore’s Open Question Argument. Even Moore’s own “simple non-natural property” of goodness fails to pass Moore’s own test. We can agree that an act has Moorean Goodness and still wonder whether to do it. But if no normative property can conceivably pass the test, this shows that the test is not an appropriate test of normativity. There is no pure normativity – “pure” meaning utterly empty of descriptive content – to be had in this or any other universe.
We can endorse an action as prudent, or ethical. We can endorse an inference as logical. We can endorse a theory as epistemically virtuous. In none of these cases are we simply saying “yay, action/inference/theory!” In none of them are we purely expressing approval, or an intention to act/infer/theorize. There is additional information we are implying.
We can of course just endorse. Endorse without an “as” (as ethical, as logical, etc.). Endorsing, that is, without any value judgement. But that’s not normativity.
Carlo Rovelli is a big fan of loop quantum gravity, and of physics in general, and this book recaps the whole history of modern physics, at least partly in order to show how elegantly loop quantum gravity fits into place as a reasonable extrapolation. It’s an interesting and believable history, and the case for the plausibility of loop quantum gravity looks convincing to me. But then, I think I was an easy mark — since I already agreed with a series of strange (from the layperson’s point of view, at least) assertions Rovelli makes about known physics.
Rovelli inserts helpful diagrams every so often to summarize the history (and sometimes potential future) of “what there is” in the physical world according to physics. I can’t quite do justice to them so I use a table (please read it as one table).
Covariant quantum fields
In the transition from special relativity (1905) to general (1915), fields and spacetime are absorbed into “covariant fields”. This is because spacetime, Rovelli asserts (and I instinctively agree), is the gravitational field. So other fields like the electromagnetic field are covariant fields – fields that relate to each other in circumscribed ways. The curvature of spacetime depends on the energy (e.g. electromagnetic) present, and the behavior of electromagnetic fields depends on that curvature.
Rovelli likes to sum up some key features of each theory, and these summaries are very helpful. For QM, Rovelli lists three key principles:
Information is finite;
There is an elementary indeterminacy to the quantum state;
Reality is relational (QM describes interactions).
As a fan of Everettian QM, I don’t think we really need the indeterminacy principle. But it’s still true that we face an inevitable uncertainty every time we do a quantum experiment (it’s just that this is a kind of self-locating uncertainty).
Loop quantum gravity refines the “information is finite” principle to include spacetime as well. Not only are energy levels discrete; spacetime is also discrete. There is a smallest length and time scale. Rovelli identifies this as the Planck length (and time).
Rovelli explains loop quantum gravity as the quantization of gravity, deriving from the Wheeler-DeWitt equation. This equation can only be satisfied on closed lines aka loops. Where loops intersect, the points are called nodes, and the lines between nodes are called links. The entire network is called a graph, and also a “spin network” because the links are characterized by math familiar from the QM treatment of spin. Loop quantum gravity identifies the nodes with discrete indivisible volumes, and each link with the area of the surface dividing the two linked volumes.
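The node/link/graph vocabulary can be made concrete with a toy data structure. This sketch is entirely my own illustration, not anything from Rovelli's book: it is only bookkeeping for the terms above (nodes carrying volumes, links carrying the areas of the dividing surfaces). The real theory labels links with SU(2) spin quantum numbers and derives areas and volumes from them, none of which is modeled here.

```python
# A schematic spin network: nodes carry discrete quanta of volume, links
# carry the areas of the surfaces separating linked volumes. Node names
# and all numeric values are made up for illustration.
class SpinNetwork:
    def __init__(self):
        self.volumes = {}   # node id -> quantum of volume
        self.areas = {}     # frozenset({a, b}) -> area of dividing surface

    def add_node(self, node, volume):
        self.volumes[node] = volume

    def add_link(self, a, b, area):
        self.areas[frozenset((a, b))] = area

    def neighbors(self, node):
        """Nodes sharing a dividing surface with `node`."""
        return {n for link in self.areas if node in link
                for n in link if n != node}

net = SpinNetwork()
for node, vol in [("A", 1.0), ("B", 2.0), ("C", 1.5)]:
    net.add_node(node, vol)
net.add_link("A", "B", 0.7)
net.add_link("B", "C", 0.4)
print(sorted(net.neighbors("B")))  # ['A', 'C']
```

Note what the structure does not contain: coordinates. The nodes are not "at" positions in some background space; the adjacency relations are all there is, which is the point of the "quanta of space have no place to be in" warning quoted below.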
Rovelli is at pains to point out that the theory really says what it’s saying. For example: “photons exist in space, whereas the quanta of gravity constitute space itself. … Quanta of space have no place to be in, because they are themselves that place.” This warning might seem too obvious to be necessary, but that’s because I didn’t reproduce the graphs of spin networks in Rovelli’s book. (I lack the artistic talent and/or internet skillz.) You know, graphs that sit there in space for you to look at.
OK, that’s space, but what about time (and aren’t these still a spacetime)? This deserves a longish excerpt:
Space as an amorphous container of things disappears from physics with quantum gravity. Things (the quanta) do not inhabit space; they dwell one over the other, and space is the fabric of their neighboring relations. As we abandon the idea of space as an inert container, similarly we must abandon the idea of time as an inert flow, along which reality unfurls.
[…] As evidenced with the Wheeler-DeWitt equation, the fundamental equations no longer contain the time variable. Time emerges, like space, from the gravitational field.
Rovelli, chapter 7
Rovelli says loop quantum gravity hews closely to QM and relativity, so I assume we get a four-dimensional spacetime which obeys the laws of general relativity at macroscopic scales.
In a section of Chapter 11 called Thermal Time, Rovelli uses thermodynamics and information theory to explain why time seems to have a preferred direction, just as “down” seems to be a preferred direction in space near a massive body. When heat flows from a hot zone into the environment, entropy increases. Since entropy reductions of any significant size are absurdly improbable, these heat flows are irreversible processes. And since basically everything in the macroscopic world (and even cellular biology) involves irreversible processes, time “flows” for us. Nevertheless, at the elementary quantum level, where entropy is undefined (or trivially defined as zero – whichever way you want to play it) time has no preferred direction. All of this will be familiar to readers of my blog who slogged through my series on free will. This is the key reason scientific determinism isn’t the scary option-stealing beast that people intuitively think it is.
There was one small section in Chap. 10 on black holes that seemed to fail as an explanation. Or maybe I’m just dense. Since spacetime is granular and there is a minimal possible size, loop quantum gravity predicts that matter inside the event horizon of a black hole must bounce. The time dilation compared to the outside universe is very long, so an observer would see no effect for a very long time, but then the black hole would “explode”. But surely “explode” is not the right word? Intuitively it would seem that any bouncing energy should emerge at a comparable rate to that at which it entered, at least for matter entering during a period of relatively stable Schwarzschild radius. Maybe by “explode” Rovelli just means the black hole would “give off substantially more energy than the usual Hawking radiation”?
In a recent interview with Nigel Warburton, neuroscientist Anil Seth mentions (around 3:30 in the video) John Locke’s distinction between primary and secondary qualities. For primary qualities, such as solidity and movement, the way a thing appears in our experience is pretty directly related to how it is in the world. For secondary qualities, such as colors, the relationship between what we experience and what’s out there is more indirect and requires the participation of the observer to generate that quality.
But then, at around 6:30 in the video, Anil Seth tells us that all perception works mainly in the top-down, or inside-out direction – from high-level descriptive guesses about the world “down” to details that then fit in to or revise that picture, and from the central nervous system “out” to the periphery. From what little I know of neurology, this inside-out direction of influence is indeed quite important. But that observation threatens, or perhaps we should say trivializes, the primary/secondary distinction. (Seth may well understand this; I’m not sure. The mention of primary/secondary may only be made in order to move beyond it.)
If our perceptions of solidity and of motion are indeed primarily driven from the inside of the brain outward to the periphery, what sense can we make of the idea that our “experience is pretty directly related to how things are in the world”? Our experience is driven from guesses in the central cortical region outward, in both color and solidity experiences. It would seem that all qualities are secondary qualities.
But then, all qualities are also primary qualities, if all it takes to be a primary quality is that it can be specified without reference to an observer. For example, we can define three zones of spectral radiance, one centered at 420 nm, one at 530 nm, and one at 560 nm, each giving less weight to other wavelengths as one gets further from that peak. We can then define “red” things as those whose radiance in that highest-wavelength band bears sufficiently high ratios to the radiance in the other two bands. Of course, I had to lean on human experience of colors to get those wavelength numbers. Yet I would equally have to lean on human experience of solidity before I could attempt to define that. The alleged primary/secondary distinction is not to be found here.
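To show that this observer-free definition really can be written down, here is a minimal sketch. The three band centers come from the paragraph above; the Gaussian bandwidth and the dominance threshold are hypothetical numbers of my own choosing, added only to make the idea concrete.

```python
import math

# Band centers (nm) are from the post; BANDWIDTH and RED_RATIO are
# assumed values, purely illustrative.
BAND_CENTERS = (420.0, 530.0, 560.0)
BANDWIDTH = 40.0    # assumed Gaussian width, nm
RED_RATIO = 2.0     # assumed dominance threshold

def band_radiance(spectrum, center):
    """Total radiance weighted toward a band center; wavelengths
    farther from the peak get exponentially less weight."""
    return sum(r * math.exp(-0.5 * ((wl - center) / BANDWIDTH) ** 2)
               for wl, r in spectrum)

def is_red(spectrum):
    short, mid, long_ = (band_radiance(spectrum, c) for c in BAND_CENTERS)
    # "Red" iff the longest-wavelength band sufficiently dominates the others.
    return long_ > RED_RATIO * short and long_ > RED_RATIO * mid

# A spectrum concentrated near 650 nm, and a flat one, on a 1-nm grid.
grid = range(380, 701)
reddish = [(wl, math.exp(-0.5 * ((wl - 650) / 30) ** 2)) for wl in grid]
flat = [(wl, 1.0) for wl in grid]
print(is_red(reddish), is_red(flat))  # True False
```

No observer appears anywhere in the classification, which is the sense in which "red" can be made as "primary" as solidity.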
Seth points out that the solidity of a bus can impact you even when you’re not observing it. OK, but a bacterium which photosynthesizes using only rhodopsin will flourish in green light more easily than in red light of the same total intensity – regardless of whether anyone is looking. Again, no difference here.
Did you hear the latest news from the courts? They’re overhauling the rules for legal arguments in front of a jury. The rules for lawyers will be much looser. Want to ask a witness an irrelevant question? One that lacks foundation? One that the witness has already answered, but you didn’t like the answer? Want to skip the questions and just testify to the courtroom on behalf of your side? Go right ahead!
The other side can object, of course, and the objections will be noted for the record. But then the questioning, or testifying, can go on as if nothing happened.
Badgering the witness? Go for it! Hearsay? No problem! Lay witness testifying about a subject he has no expertise in? Let the jury beware!
Expert witnesses also need no particular qualifications any more. If the witnesses are good enough for one side, they’re good enough for the court. It’s strictly He said, She said, from here on out. The court will not attempt to instruct the jurors regarding which witnesses are credible or have genuine expertise. Jurors will be on their own regarding whom to believe.
OK, relax. I’m just kidding. This isn’t going to happen. But if it did, it would be a disaster. Lawyers would race to the bottom to use underhanded tricks to con jurors onto their side. Truth and evidence would largely go out the window. It’s widely known that the legal rules of evidence and argument are there to prevent just such a disaster, and there is no massive wrecking ball on the horizon headed toward destroying these rules.
OK, don’t relax. Indeed, low-grade panic would be appropriate. This isn’t going to happen to the courts, but it has already happened to the press. The mainstream US print, radio, and TV media, with the exception of a few open partisans, treat “objectivity” as if it demanded a courtroom without any rules. More precisely, with only one rule: that “both sides” will get a chance to speak. And never mind how the number of sides gets magically reduced to two. Journalists have become stenographers or videographers. Fact checking is relegated to a special segment, if it exists at all. And news outlets are embarrassed if some important figures are found to be stating falsehoods on a regular basis, especially if that looks “unbalanced”.
In recent years there has been a lot of well justified hand-wringing about our post-truth society. “How did we get here?” authors ask. To me the mystery is rather: why did it take so long?
The BBC recently came out with a three-part series on free will. Part 2 is about physics. If you’re going to infer lessons from physics, it helps to get the physics right. They don’t. Part 2 of the BBC series can be found here: https://www.bbc.com/reel/playlist/free-will?vpid=p086tg3m
The picture above analogizes a series of physical events to a chain of dominoes, in order to talk about cause and effect. But there’s something odd about this metaphor, if the dominoes are supposed to represent the physical universe: look at that first domino, in black. What makes it tip over? Something from outside the universe, a “god” so to speak, intervenes to set the whole thing in motion. We seem to have jumped from physics to theology.
This would just be a nit-pick, if the negligent treatment of the “start” in the model did not affect the conclusions drawn. But it does, as we will see.
But first let’s look at some additional physics mistakes in the video. Jim Al-Khalili says “When we think we’re making free choices, it’s just the laws of physics playing themselves out.” Well no, the laws of physics alone don’t cause anything. The laws of physics are rather abstract. If you want to understand how a concrete action came about, you need not just laws of physics but also what physicists call “boundary conditions”, AKA concrete reality. Especially bits of concrete reality that heavily interact with the action in question. For example, you. Of course, perhaps Al-Khalili didn’t mean “just the laws of physics” quite so literally. But it matters how you phrase things, especially when you accuse people of only thinking they’re making free choices. Your grounds for calling them mistaken had better not be based on distorted depictions of the physics.
From the “libertarian” side of the philosophical debate, Peter Tse makes a different mistake – or maybe just poorly worded statement: “Patterns of energy don’t obey the traditional laws of physics.” Unless he means “classical physics” (in which case: say “classical”), that’s not true. The Wikipedia article on Lagrangian mechanics is a good resource for seeing just how deeply physics treats patterns of energy. “The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant.”
Since Einstein, physicists have known that space and time are not independent, but aspects of a single four-dimensional manifold, spacetime. For observers in different inertial reference frames, which direction counts as “time” will differ. A metaphor called the “block universe” is sometimes used to describe this, where we only depict two spatial dimensions and then repurpose the third to represent time. Jim Al-Khalili uses a loaf of bread, with different times being different slices.
The block universe is like a loaf. OK, let’s go with this metaphor: one end of the loaf is very hot (we call it the Big Bang) and the other is cold. There are certain patterns that stretch from one end of the loaf to the other. If we know the pattern (laws of physics) and we know the boundary conditions (full state of any slice) we can derive the state of any other slice. Why say that the hot end caused the cold end to be the way it is? Why not say that the cold end caused the state of the hot end? After all, the mathematical derivation works equally well in that direction. Better yet, why not admit that “causality” is a useless concept at the level of a complete description of the universe, and just look at the bidirectional laws of nature instead? Why not start your analysis in the middle (but nearer to the hot side), and work your way toward both ends? The last option is a lot more practical, since that middling point is where you are.
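The claim that the derivation runs equally well in either direction is easy to demonstrate with a toy reversible law. This is my own illustration, not anything from the BBC video: a second-order cellular automaton standing in for the laws of physics, with each tuple of bits standing in for a slice of the loaf.

```python
# A toy "loaf" with a reversible law. Given the law and any two adjacent
# slices ("boundary conditions"), every other slice follows, and the
# derivation runs equally well toward either end of the loaf.

def step(prev, curr):
    """The 'law': each cell of the next slice is the cell two slices
    back, XORed with its current neighbors. Because XOR is its own
    inverse, the very same rule also recovers earlier slices."""
    n = len(curr)
    return tuple(prev[i] ^ curr[(i - 1) % n] ^ curr[(i + 1) % n]
                 for i in range(n))

# Two adjacent slices, then derive five more toward the "cold" end.
slices = [(0, 0, 1, 0, 0, 0), (0, 1, 0, 1, 0, 0)]
for _ in range(5):
    slices.append(step(slices[-2], slices[-1]))

# Now start from the far end and derive back toward the "hot" end.
curr, nxt = slices[-2], slices[-1]
for _ in range(5):
    curr, nxt = step(nxt, curr), curr

print((curr, nxt) == (slices[0], slices[1]))  # True: both directions agree
```

Neither end of this toy loaf "causes" the other; the same rule, applied to any two adjacent slices anywhere, yields the rest.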
The idea that the Big Bang is the Big Boss and we are just its slaves has no basis in science. Remember that “god” that tipped over the first domino? He’s creeping back in through the back door of Al-Khalili’s thinking. He thinks the Block Universe is dominated by its early times. You can only get such domination by swapping out a scientific view of time and causality, and sneaking in an intuitive picture of time and causality in its place.
Al-Khalili does that when he says “The past hasn’t gone … the future isn’t yet to be decided.” The narrator does that when she says “every single frame of that animation already exists and will exist forever.” Argh, no! Time is within the loaf! If you’re going to use a metaphor, stick with the structure you used to create it – don’t sneak your intuitive conception of time into the background while leaving scientific time in the foreground, now portrayed spatially.
Al-Khalili says “the future … is fixed, even though we don’t know it yet.” This conclusion would repeal the very laws of physics Al-Khalili was claiming to honor. The future is dependent on us because, to repeat myself, laws of physics must be applied to boundary conditions to derive a prediction about the future, and those boundary conditions include us.
Modern physics does destroy the traditional “solution” to the “problem of free will”. What these commentators don’t seem to notice is that it also destroys the traditional “problem” of free will. When you notice that your intuitive ideas of time and causality conflict with science, you need to figure out the full consequences of the science, not take one point from science and then re-apply your intuitive ideas. The future isn’t set in stone. It’s set in spacetime. And spacetime is lighter than air.
(A) Universal, applying to everything
(B) Unidirectional, making for controllers and the controlled
(C) Scientific, grounded in the actual laws of physics
But not more than two of (A)-(C). Causality is unidirectional and scientific, but not universal. Laws of nature are universal and scientific, but not unidirectional. Determinism as imagined in the Consequence Argument is universal and unidirectional, but not scientific. That’s why the Consequence Argument fails.
We think of the past as fixed and the future as open. Some people think science has shown that the fixed past is real and the open future is an illusion, but the truth is almost diametrically opposite. The idea that the whole past is fixed is an overgeneralization. It is a natural, and even rational, inference from our experiences as macroscopic beings, but still a mistake.
Even though (the evidence indicates) the past only depends microscopically on the present, what is advocated here is not a version of Lucretius and the “swerve”. It’s not that we get our freedom from microscopic past phenomena (such as quantum phenomena) in particular. The idea that freedom has to be handed down from past to present is wrongheaded to begin with. If in some particular case, a macroscopic past state did perfectly correlate with our macroscopic present action, that would still not be a problem: that macroscopic past state would then be up for grabs. (Aside for the really nerdy: This is why I am not a big fan of Christian List’s reply to the Consequence Argument, even though it may have a solid point. It concedes too much.)
An additional group of anti-free-will arguments, vaguely similar to the Consequence Argument but distinct, is called sourcehood arguments. Let me just quote the first premise from the Stanford Encyclopedia of Philosophy article:
1. We act freely … only if we are the ultimate sources (originators, first causes) of at least some of our choices.
This one wears its allegiance to a certain picture of time and causality on its sleeve. Why ultimate source? Why not just source? Because the proponent of the argument mistakenly thinks that physical events are in the general habit of bossing each other around, so that the only way we can avoid being controlled is to conjure something ex nihilo. Hopefully, we’ve covered this ground enough that the reader can see what’s wrong with that premise.
People often do bad things when they could have done better things. Does that mean Retributivism is justified? (Hint: No.) Retributivism, on one definition, is the view that it’s intrinsically morally better that a wrongdoer suffer than that they do not, provided that they could have done otherwise.
Retributivism is not a metaphysical mistake. But in my view, it’s a moral mistake. Instead, punishment is justified when justifiable rules call for it, and discovering those rules depends on free and open moral dialogue among people who will be affected by the rule; people who are intent on reasoning together about how to get along. Others may not care to get along. We need a backstop to enforce livable social rules on those who would otherwise harm anyone who got in their way, and those who are a little more pro-social yet still go off the rails sometimes. But not everyone needs suffering to keep them in line, and those who do should not receive more than the minimum required.
There’s a more humane approach to justice that is common in many indigenous societies, and is making something of a comeback in ours. Here’s part of a transcript of an interview about restorative justice. Michel Martin is a show host, and Sujatha Baliga is a recent MacArthur Fellowship winner who works on restorative justice.
MARTIN: I’m glad you raised that as a crime of violence because I think many people may be familiar with a concept of restorative justice in connection with, you know, teenaged mischief, for example. Let’s say you deface somebody else’s football field before the big game, and they find out that you did it. And the consequence is you have to clean it up. In matters like this, in matters of serious crime and serious harm, where someone’s life is taken, where someone is seriously harmed, what, in your view, is the societal benefit of taking this approach?
BALIGA: Actually, restorative justice works best with more serious harms because we’re talking about people who are actually impacted. In that face-to-face dialogue, you can imagine it not having any heat or any value, really, in terms of the wake-up or the aha moments when we’re talking about graffiti versus when someone has actually entered someone’s home and taken their things, right? That’s a situation that calls for accountability, calls for a direct dialogue where someone takes responsibility for what they’ve done. So, to my mind, restorative justice – and it’s not just to my mind. There’s international data that shows that restorative justice is actually more effective with the more serious harms that people do to one another.
Emphasis added. A humane approach to justice doesn’t depend on the denial of free will or moral responsibility. Quite the opposite, in this case.
Intuitively, we think of the future as open and the past as fixed. That is, the future is up to us, dependent on our actions, while the past is not; it’s independent of our actions. This way of thinking is very natural and goes deep. We think that being in the past is what makes those events fixed. But that’s wrong: it’s an oversimplification. It’s the fact that those events (the ones we are thinking of) represent a lower entropy state that makes them fixed. And talk of entropy presupposes coarse-graining: a large number of microscopic states all count as the same state at some coarse-grained level of description, such as “the pressure of the air in this tire.”
Let us count the Ways
If all you know about “entropy” is that it’s related to “disorder” (true in a limited range of cases), the fact that entropy is only defined statistically may come as a surprise. But the classic definition of entropy, given by Ludwig Boltzmann, is S = k ln W. S is entropy, k is the Boltzmann constant, and W is the count of the ways that the macroscopic state can be realized by various microscopic arrangements. Because the numbers of microscopic states in question are enormous (18 grams of water contains 6 x 10^23 molecules, for example), the probabilities quickly become overwhelming for macroscopic systems. Ultimately, the increase of entropy is “merely” probabilistic. But those probabilities can come damn close to certainty.
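The count of the Ways can be made concrete with a toy model. Here is a minimal Python sketch (the two-cell model and the choice of units k = 1 are my own illustration, not from the text): N molecules each sit in one of two cells, and a macrostate simply reports how many are in the left cell.

```python
import math

# Toy model: N "molecules", each in one of two cells (left/right).
# A macrostate is "exactly k molecules in the left cell"; the number
# of microstates realizing it is W = C(N, k).
# Boltzmann: S = k_B ln W (using k_B = 1 for illustration).

def ways(n, k):
    """Count of microscopic arrangements realizing the macrostate k-of-n."""
    return math.comb(n, k)

def entropy(n, k):
    """S = ln W, in units of the Boltzmann constant."""
    return math.log(ways(n, k))

N = 100
print(entropy(N, 50))        # balanced macrostate: very many ways, high S
print(entropy(N, 0))         # all-on-one-side: exactly one way, S = 0
print(ways(N, 50) / 2**N)    # probability of the balanced macrostate
```

Even at N = 100, the balanced macrostate has about 10^29 realizations while the all-on-one-side macrostate has exactly one; at 10^23 molecules the disparity becomes the near-certainty the text describes.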
Why are so many processes irreversible? By reversing a process, we mean: removing a present condition, so that the future contains a condition like the one that obtained in the past. For example, suppose I dropped an egg on the kitchen floor, making a mess. Why can’t I undo that? The molecules of eggshell and yolk are still there on the floor (and a few in the air), and they traced in-principle reversible paths (just looking at the micro-physics of molecular motion) to get there. So why can’t I make an intact egg from this?
The answer is entropy, and therefore the count of the Ways. There are many ways to get from a broken egg to a more-broken egg. There are many orders of magnitude fewer ways to get from a broken egg to a whole egg. One would have much better odds guessing the winning lottery number than finding a manipulation that makes the egg whole. There is some extremely narrow range of velocities of yolk and shell-bits such that, if one launched the bits with just those velocities, molecules would in the immediate future bond to form a whole eggshell with yolk inside – but finding those conditions, even aside from implementing them, is impossible in practice. Because the more-broken-egg states so vastly outnumber the whole-egg states, our attempts to reverse the mess have vanishing probability of success.
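The point that reversal is improbable rather than impossible can be illustrated with a simulation. Below is a short Python sketch of the Ehrenfest urn model (my choice of illustration, not from the text): the low-entropy starting state, once left, essentially never recurs, because the mixed macrostates are realized by vastly more microstates.

```python
import random

# Ehrenfest urn model: N balls distributed over urns A and B. Each
# step, one ball chosen at random hops to the other urn. Start with
# all balls in urn A (the "whole egg"): mixing happens immediately,
# and the initial state essentially never recurs.
random.seed(0)
N = 50
in_A = N                      # start: every ball in urn A
returns = 0
for step in range(100_000):
    if random.random() < in_A / N:
        in_A -= 1             # a randomly chosen ball hops A -> B
    else:
        in_A += 1             # a randomly chosen ball hops B -> A
    if in_A == N:
        returns += 1
print(returns)                # recurrences of the start state: almost certainly 0
```

The dynamics are perfectly reversible step by step, yet the all-in-A state has one realization against roughly 2^50 total, so waiting for it to come back is hopeless on any practical timescale.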
On a local level, some macroscopic processes are reversible. I accidentally knock a book off a table; I pick it up and put it back. The room is unchanged, on a suitably coarse-grained analysis — but I have changed. I used up some glucose to do that mechanical work. I could eat some more food to get it back, but the growth of the relevant plants ultimately depends on thermodynamically irreversible processes in the sun. On a global analysis, even the restoration of the book to its place is an irreversible process.
The familiar part of the past is fixed …
Entropy thus explains why we can’t arrange the future to look just like the past. The different problem of trying to affect the past faces similar obstacles. The “immutability of the past” arises because the events we humans care about are human-sized, naturally enough, i.e. macroscopic. Macroscopic changes in practice always involve entropy increases, and always leave myriad microphysical traces such as emitted sounds and reflected and radiated light and heat. These go on to interact with large systems of particles, typically causing macroscopic consequences. While phonons (quanta of sound) and photons follow CPT-reversible paths, that does not mean we can collect those microscopic energies and their macroscopic consequences in all the right places and arrange to have the past events that we want. As in the broken egg case, even if we had the engineering skills to direct the energies, we face insurmountable information deficits. We know neither where to put the bits, nor with what energy to launch them.
In addition to the time-asymmetry of control over macroscopic events, we have time-asymmetric knowledge, for closely related reasons. Stephen Hawking connected the “psychological arrow of time”, based on memory, to the “entropic arrow of time”, which orients such that lower-entropy times count as past, and higher as future. Mlodinow and Brun argue that if a memory system is capable of remembering more than one thing, and exists in an environment where entropy increases in one time-direction, then the recording of a memory happens at a lower-entropy time than its recall. Our knowledge of the past is better than our knowledge of the future because we have memories of the past, which are records, and the creation of records requires increasing entropy.
Consider an example adapted from David Albert. Suppose we now, at t1, observe the aftermath of an avalanche and want to know the position of a particular rock (call it r) an hour ago, at t0, the start of the avalanche. We can attempt to retrodict it, using the present positions and shapes of r and all other nearby rocks, the shape of the remnant of the slope they fell down, the force of gravity, our best estimates of recent wind speeds, etc. In this practically impossible endeavor, we would be trying to reconstruct the complete history of r between t0 and t1. Or we might be lucky enough to have a photograph of r from t0, which has been kept safe and separate from the avalanche. In that case our knowledge about r at t0 is independent of what happened to r after t0, although it does depend on some knowledge of the fate of the photograph. As Albert writes [p. 57], “the fact that our experience of the world offers us such vivid and plentiful examples of this epistemic independence [of earlier events from later ones] very naturally brings with it the feeling of a causal and counterfactual independence as well.”
Contrast our knowledge of the future position of r an hour from now. Here there are no records to consult, and prediction is our only option. Almost any feature of r’s environment could be relevant to its future position, from further avalanches to freak weather events to meddling human beings. The plenitude of causal handles on future events is what makes them so manipulable.
Note that it is not that our knowledge of the macroscopic past puts it beyond our control: we cannot keep past eggs from breaking even if we did not know about them. Nor is it our ignorance of the future that gives us control over future macroscopic states (nor the illusion of control). Rather, it is the increase of entropy over time, and the related fact that macroscopic changes typically leave macroscopic records at entropically-future times but not past times, that explains both the time-asymmetry of control and of memory. A memory is a record of the past. And a future macroscopic event (for example, a footprint) that we influence by a present act (walking in the mud) is a record of that act. If we could refer to a set of microphysical past events that did not pose insurmountable information deficits preventing us from seeing their relation to present events, might they become up to us?
…But not the whole of the past is fixed
Yes, some microphysical arrangements, under a peculiar description, are up to us. We’ve been here before, in Betting on The Past, in the previous post in this series. There, you could guarantee that the past state of the world was such as to correspond, according to laws of nature, to your action to take Bet 2. You could do so just by taking Bet 2. Or you could guarantee that the microphysical states in question were those corresponding to your later action to take Bet 1. When you’re drawing a self-referential pie chart, you can fill it in however you like. Dealing with events specified in terms of their relation to you now is dealing in self-reference, regardless of whether those events are past, present, or future. Of course, you have no idea which microscopic events, described in microscopic terms, will have been different depending on your choice. But who cares? You have no need to know that in order to get what you want.
We’re used to the idea of asymmetric dependence relations between events, such as one causing another. And we’re used to the idea of independent events that have no link whatsoever. We’re not used to the idea of events and processes that are bidirectionally linked, with neither being master and neither being slave. But these bidirectional links are ubiquitous at the microscopic level. It is only by using our macroscopic concepts, and lumping together event-classes of various probabilities (various counts of microscopic ways to constitute the macroscopic properties), that we can find a unidirectional order in history.
There’s nothing wrong with attributing asymmetric causality to macroscopic processes – entropy and causality are reasonably well-defined there. But if we overgeneralize and attribute the asymmetry to all processes extending through time, we make a mistake. Indeed, following Hawking and Carroll and others, we can define “the arrow of time” as the direction in which entropy increases.
This gets really interesting when we consider cosmological theories which allow for times further from our time than the Big Bang, but at which entropy is higher than at the Big Bang. Don Page has a model like this for our universe. Sean Carroll and Jennifer Chen have a multiverse model with a similar feature, pictured below:
The figure shows a parent universe spawning various baby universes. One of the ((great-(etc))grand)babies is ours. The parent universe has a timeline infinite in both directions, with a lowest (but not necessarily low!) entropy state in the middle. Observers in baby universes at the top of the diagram will think of the bottom of the diagram, including any baby universes and their occupants, as being in their past. And any observers in the babies at the bottom will return the favor. Each set of observers is equally entitled to their view. At the central time-slice, where entropy is approximately steady, there is no arrow of time. As one traverses the diagram from top to bottom, the arrow of time falters, then flips. Where the arrow of time points depends on where you sit. The direction of time and the flow of cause and effect are very different in modern physics than they are in our intuitions.
Another route to the same conclusion
So far we’ve effectively equated causation with entropy-increasing processes, where the cause is the lower-entropy state and the effect is the corresponding higher-entropy state. But there’s another way to approach causality, one which finds its roots in the way science and engineering investigations actually proceed. On Judea Pearl’s approach in his book Causality, an investigation starts with the delineation of the system being investigated. Then we construct directed acyclic graphs to try to model the system. For example, a slippery sidewalk may be thought to be the result of the weather and/or people watering their grass, as shown in the tentative causal model below, side (a):
Certain events and properties are considered endogenous, i.e. parts of the system (season, rain…), and other variables are considered exogenous (civil engineers investigating pedestrian safety…). To test the model, and determine causal relations within the system, we Do(X=x), where X is some system variable and x one of its particular states. This Do(X=x), called an “intervention”, need not involve human action, despite the name. But it does need to involve an exogenous variable setting the value of X in a way that breaks any tendencies of other endogenous variables to raise or lower the probabilities of values of X. In side (b) of the diagram this shows up as the disappearance of the arrow from X1, season, to X3, sprinkler use. The usual effect of season causing dry (wet) lawns and thus inspiring sprinkler use (disuse) has been preempted by the engineer turning on a sprinkler to investigate pedestrian safety.
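The intervention idea can be sketched in code. Here is a minimal Python toy version of the sprinkler example (the probabilities and function names are my own illustration, not Pearl’s): observationally, sprinkler use depends on the season, but under Do(sprinkler=on) that incoming arrow is cut.

```python
import random

# Toy structural model of Pearl's sprinkler example.
# Season influences rain and (observationally) sprinkler use;
# either rain or the sprinkler makes the sidewalk wet and slippery.

def sample(do_sprinkler=None):
    season = random.choice(["dry", "rainy"])
    rain = season == "rainy" and random.random() < 0.8
    if do_sprinkler is None:
        # observational regime: season sets the sprinkler's probability
        sprinkler = season == "dry" and random.random() < 0.5
    else:
        # intervention Do(sprinkler=x): the arrow from season is cut
        sprinkler = do_sprinkler
    wet = rain or sprinkler
    slippery = wet
    return season, sprinkler, slippery

random.seed(1)
# Observationally, the sprinkler only runs in the dry season...
obs = [sample() for _ in range(10_000)]
# ...but under Do(sprinkler=True), season no longer matters:
intervened = [sample(do_sprinkler=True) for _ in range(10_000)]
frac_slippery = sum(s for _, _, s in intervened) / len(intervened)
print(frac_slippery)  # 1.0: every intervened sample is wet, hence slippery
```

The contrast between the two sample sets is the whole point: conditioning on observed sprinkler use mixes in information about the season, while the intervention severs that dependence and exposes the sprinkler’s own downstream effect.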
As Pearl writes,
If you wish to include the entire universe in the model, causality disappears because interventions disappear—the manipulator and the manipulated [lose] their distinction. … The scientist carves a piece from the universe and proclaims that piece in – namely, the focus of the investigation. The rest of the universe is then considered out. …This choice of ins and outs creates asymmetry in the way we look at things and it is this asymmetry that permits us to talk about ‘outside intervention’ and hence about causality and cause-effect directionality.
Judea Pearl, Causality (2nd ed.): 419-420
It’s only by turning variables on and off from outside the system that we can put arrow-heads on the lines connecting one variable to another. In the universe as a whole, there is no “outside the system”, and we are left with undirected links.
In Judea Pearl’s exposition of the scientific investigation of causality, causality disappears at the whole-universe level. In the entropy-based definition of causality, causality doesn’t apply between fully (microscopically) specified descriptions of different times, because irreversibility requires that the number of ways of making up the “effect” state far exceed the number of ways of making up the “cause” state – and the number of ways to make up a fully specified state is exactly 1.
The bottom line
Laws of nature / Causality / Determinism can be:
(A) Universal, applying to everything
(B) Unidirectional, making for controllers and the controlled
Choose not more than two.
Albert, David Z. After Physics. Cambridge: Harvard University Press, 2015.
Carroll, Sean M. From Eternity to Here: the Quest for the Ultimate Theory of Time. New York: Penguin, 2010.