Causation and its Basis in Fundamental Physics, by Douglas Kutach

Background image: How to Philosophize with a Hammer by Randon Rosenbohm, from the cover, rotated 90 degrees to fit the page

Fundamentally, events are causally linked by determining (or by fixing probabilities for) each other. These relations amount to a comprehensive set of singular causal relations. Derivatively, events are causally linked by relations of probability-raising (or probability-lowering) understood as a form of difference-making or counterfactual dependence. We can say that contrastive events prob-influence coarse-grained events or that they fix other contrastive events. These prob-influence and fixing relations constitute a comprehensive set of general causal relations.

p. 307

There you have the bottom line. Now you’d probably like to know what some of the key words above are intended to mean. And/or, why is this an issue in the first place? Let’s start with the latter.

Causation is a widely discussed topic in the sciences. Yet, if we suspect that all objects and properties treated by the sciences are ultimately physical, and we look at the equations of fundamental physics, it can be hard to see where causation comes from. After all, causation is usually supposed (spoiler alert: but not by Kutach) to be irreflexive and asymmetric. However, our best candidates for fundamental physical equations allow us to mathematically derive past states from present ones, just as easily as we derive future states from present ones. This incongruity led Bertrand Russell to opine:

The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.

Bertrand Russell, ‘On the Notion of Cause’ (1913)

Philosophers mostly disagree with Russell, but propose various different, often conflicting, ways of locating causality in the real world. Kutach, in my view, gives a better analysis than any other philosopher I’ve encountered. (Physicist Sean M. Carroll is also excellent on the topic; their views differ terminologically, but are compatible in substance.)

OK, now for those key words: “Fundamental”, “determining”, “singular causal relations”, “coarse-grained events”, “contrastive events”, “probability-raising”, “difference-making or counterfactual dependence”. I have re-arranged the terms in the order I intend to present them.

Consider a world in which classical physics is correct, and which contains two rocks. The more massive one moves slowly toward the left, contains 4 corpuscles, and is hot enough that one of its constituent corpuscles is moving to the right relative to the rest frame. The less massive object moves quickly to the right, is cold, and all 3 of its corpuscles move quickly to the right relative to the rest frame. Suppose we calculate that each rock contains 0.5 Joules of mechanical energy, the small rock contains 0.5 Joules of thermal energy, and the large one contains 1.5 Joules of thermal energy. Well, that’s one way to look at it.

Alternatively, we can lump all the corpuscles together as one mechanical system, and treat each corpuscle’s motion relative to that combined system as embodying thermal energy. Suppose the weighted average velocity is zero. We will find that there are still 3 Joules of total energy, all of it now deemed thermal. Kutach proposes that it is reasonable to think of thermal and mechanical energy as derivative. He says “metaphysically” derivative – I think they should instead be regarded as epistemically derivative, as a feature of our scientific model; and I would use a different definition – but this need not worry us.
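To make the bookkeeping concrete, here is a minimal sketch of the calculation, with made-up corpuscle masses and velocities (these are not Kutach's numbers). The point is just that the split into mechanical and thermal energy depends on how we group the corpuscles, while the total comes out the same either way.

```python
# Minimal sketch with made-up numbers (not Kutach's): the mechanical/thermal
# split depends on how the corpuscles are grouped; the total does not.

def kinetic(mass, velocity):
    return 0.5 * mass * velocity**2

def decompose(particles):
    """Split a group's kinetic energy into 'mechanical' energy (motion of the
    center of mass) plus 'thermal' energy (motion relative to the center of mass)."""
    total_mass = sum(m for m, _ in particles)
    v_cm = sum(m * v for m, v in particles) / total_mass
    mechanical = kinetic(total_mass, v_cm)
    thermal = sum(kinetic(m, v - v_cm) for m, v in particles)
    return mechanical, thermal

big_rock = [(1.0, -0.5), (1.0, -0.5), (1.0, -0.5), (1.0, 0.9)]  # 4 corpuscles, one hot straggler
small_rock = [(0.5, 0.4), (0.5, 0.4), (0.5, 0.4)]               # 3 corpuscles moving together ("cold")

# Grouping 1: two rocks, each with its own mechanical and thermal energy.
for name, rock in [("big rock", big_rock), ("small rock", small_rock)]:
    mech, therm = decompose(rock)
    print(f"{name}: mechanical {mech:.3f} J, thermal {therm:.3f} J")

# Grouping 2: one combined system. With these numbers the total momentum is zero,
# so the combined center of mass is at rest and all the energy counts as thermal.
mech_all, therm_all = decompose(big_rock + small_rock)
print(f"combined: mechanical {mech_all:.3f} J, thermal {therm_all:.3f} J")

# Either way of grouping yields the same grand total.
total_grouped = sum(sum(decompose(rock)) for rock in (big_rock, small_rock))
assert abs(total_grouped - (mech_all + therm_all)) < 1e-9
```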

Kutach’s definition of fundamental (p. 25):

  1. The way things are fundamentally is the way things really are.
  2. Fundamental reality is the only real basis for how things stand derivatively.
  3. Fundamental reality is as determinate as reality ever gets.
  4. Fundamental reality is consistent.

“Determining” (p. 67; note “nomologically” means according to natural laws):

A fundamental event c determines a fundamental event e iff the occurrence of c nomologically suffices for the occurrence of e (with e’s location relative to c being built into this relation).

Kutach uses/coins the word “terminance” rather than “causation” to talk about these determining relations. Part of the reason is his elaboration about e’s location relative to c, which other philosophers generally ignore. But a bigger reason is Bertrand Russell’s issue (p. 68):

determination is both reflexive and transitive, and it is compatible with being symmetric. Yet, philosophers tend to think of causation as irreflexive, non-symmetric, and not necessarily transitive.

This is just one of the many points at which Kutach avoids using familiar but controversial words, or proposes definitions of familiar words as suggestions for improving usage and clarity. This is a standard maneuver in philosophy, but applied repeatedly and remorselessly, it turns into a kind of superpower. At the end, Kutach is in a position to say something like: here is a set of relations which are clearly real, and which can explain a great deal of what we correctly believe about phenomena involving “causality”. Why not identify causality with these?

“Singular” causation refers to specific actual events, e.g. “Wind caused the collapse of the Tacoma Narrows bridge.” (p. 22) “Smoking causes cancer” would count as general causation.

“Coarse-grained” events are those which leave some fundamental physical details unspecified. For example “if I stick a metal fork in this socket, I will be shocked” does not specify the exact number of iron atoms in the fork, but it may be true nonetheless. Kutach wisely leaves it up to scientists to specify how fundamental physical events can be usefully coarse-grained (201-2).

“Contrastive events” are each an ordered pair of coarse-grained events (136), each of which has a probability distribution over all its fundamental members (69). The probability need not be objectively fundamental (as QM is often interpreted). It can just be a reasonable distribution of expectations that a person assigns based on their available evidence. Kutach also sees “no reason my account cannot be adapted for more general probability-like concepts” (69) that do not fit standard probability axioms. An example of a contrastive event might be {Paul finishes this post in the next hour, Paul eats a meal in the next hour}. The two events need not be mutually exclusive, but the contrastive event will be trivial unless some fundamental events count under one but not the other.

“Probability-raising” and “difference-making” are straightforward for contrastive events. Continuing the above example, the first event raises the probability of someone reading my blog an hour hence, and because it is a little harder to finish a post while eating, it slightly lowers the probability of there being fewer bananas in the house an hour hence. Assuming I do post, and you got a notification from WordPress, and then started reading, we would have an intuitively straightforward sense in which my posting made a difference to your reading.
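As a toy numerical illustration (my numbers, and a crude stand-in for Kutach's actual prob-influence formalism), probability-raising for the contrastive pair is just a comparison of the probabilities the effect gets under each member of the contrast:

```python
# Toy illustration with made-up numbers (not Kutach's): how much does the first
# member of the contrastive pair raise or lower the probability of each
# coarse-grained effect, relative to the second member?

p_effect_given_finish_post = {"someone reads the blog": 0.30, "fewer bananas in the house": 0.10}
p_effect_given_eat_meal = {"someone reads the blog": 0.05, "fewer bananas in the house": 0.20}

for effect in p_effect_given_finish_post:
    change = p_effect_given_finish_post[effect] - p_effect_given_eat_meal[effect]
    verb = "raises" if change > 0 else "lowers"
    print(f"Finishing the post {verb} P({effect}) by {abs(change):.2f} relative to eating a meal.")
```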

In a later post, I’ll go into some implications of Kutach’s analysis. That includes whether Kutach’s “causation” is an antisymmetric relation in the real world, and, if not, whether another terminological decision about what relations to call “causal” would be better.

Fake tautologies

A tautology is a statement that’s true just in virtue of logic and/or the definitions of its terms. For example, “If p and q are propositions and neither p nor q is true, then not-p is true and not-q is true.” Or “all bachelors are unmarried”. Or “all square things are square.”
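The first example can even be checked mechanically: a brute-force pass over all truth-value assignments (a quick sketch of my own, not something the definition requires) confirms it is true in every case.

```python
from itertools import product

def implies(antecedent, consequent):
    # Material implication: false only when the antecedent is true and the consequent false.
    return (not antecedent) or consequent

# "If neither p nor q is true, then not-p is true and not-q is true" holds
# under all four assignments of truth values to p and q, so it is a tautology.
assert all(
    implies(not (p or q), (not p) and (not q))
    for p, q in product([True, False], repeat=2)
)
print("True under every assignment: a tautology.")
```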

Tautologies often sound redundant to the point of silliness, as in “all square things are square.” But if you look at how we actually use the concepts involved, the situation becomes a little more sobering. “Square” and other words for shapes have multiple uses. For example, we take ourselves to be able to judge shapes visually, at least under ideal conditions such as with good lighting, viewing the surface from a direction normal to the surface, etc. And we also take ourselves to be able to judge shapes in a tactile way. For example, we could break a stick so that it is the same length as one side of the maybe-square, and then see how it feels when lined up to the other three sides. Then we could break a stick so that it is the same length as one diagonal, then test it against the opposite one.

We could be very careful and define “square” such that an object counts as square only if it passes both visual and tactile tests (and any others we may want to throw in). But in practice, this is not what we usually do. We infer squareness from a single test and confidently expect that the supposedly square object would pass the other one. “All visually square objects are tactilely square” is definitely NOT a tautology. But when we tell ourselves that all square objects are square, that is implicitly included (at least, outside of bizarre philosophy discussions).

And it’s not just squareness that has this multifaceted character. Virtually all of our concepts are, in this way, package deals. They track clusters of properties and relations that in our experience have gone together. For example, look at how I talk about “the object” (the one that might be square) in the previous two paragraphs. How do we know it’s just one object? What the hell is “an object”?

Why does this matter? In philosophy of mind, philosophers of a dualist or panpsychist bent often portray our understanding of physical objects and properties as if it contained just a few empirical scientific laws, and a whole lot of tautological consequences. David Chalmers is famous for the following “zombie argument”; roughly stated:

(1) Zombies are conceivable.
(2) Therefore, the totality of microphysical facts does not a priori entail the facts about consciousness.

(3) Therefore, there is no metaphysical reduction of consciousness to the microphysical facts.

Summarized by Ram Neta, “Chalmers’s Frontloading Argument for A Priori Scrutability”

Now, why on earth would anyone think that (2) implies (3)? Clearly, Chalmers thinks that ordinary physical facts – there is water boiling on the stove, there is a door in the front wall of the house – are a priori entailed by the microphysical facts.

At this point, we need to worry about potential differences between “a priori” and “tautological”. Probably the plurality definition of “a priori” truths is that they are those that can be known without appealing to any experience other than whatever experience is necessary to understand the concepts involved. Kant famously argued that there were a priori truths which are not tautological, but rather express conditions we impose upon experience which are prerequisite for us to make sense of experience. Chalmers however does not seem to be going down that road, nor would I want to. So instead, let’s focus on “whatever experience is necessary to understand the concepts.” This brings us back to the fact that most concepts, like “square”, are package deals, and we have multiple modes of access to the concept.

If we want an inference to be a priori, we’ll have to be clear about which aspects of a common phenomenon we want to build into a definition. For example, we could require both visual and tactile conditions before something counts as “square”, or we could require tactile conditions only. But we cannot leave it ambiguous.

How much do we get to include? Can we add “becomes ice when sufficiently cooled” and “becomes steam when sufficiently heated” to our definition of “water”? Can we add this:

“becomes part of a painful process, when combined with 10^28 others of its kind along with 6*10^28 quarks in the following arrangement: …”

to the definition of “electron”? After all, we are licensing ourselves to make package deal concepts out of phenomena that we always find occurring together. And human beings in certain configurations feel pain, and are known to contain electrons which are conducted along nerves to facilitate that pain.

Of course in practice, we’re not going to do that. We like our concepts to balance simplicity and fertility. A concept of electrons which included all the known things they can do in large numbers with special arrangements would fail miserably on simplicity. But such a definition is coherent, and seems to be instantiated in the real world. And this presents a problem for Chalmers.

Here is a related, and punchier, thought experiment:

A shombie is a creature that is micro-physically identical to me, has conscious experience, and is completely physical.

Richard Brown, “Deprioritizing the A Priori Arguments against Physicalism,” citing Martin 1998

Shombies are conceivable. They are completely physical, and their physicality suffices for conscious experience. But then, since the shombie and I are physically identical, my physicality also suffices for conscious experience.

Of course, it is open to Chalmers to deny that shombies are conceivable. Or he might admit that they are conceivable in some weak sense, in which conceivability is not sufficient for metaphysical possibility, yet not conceivable in a stronger sense which does suffice for metaphysical possibility. But this last analysis is exactly what I and many other physicalists would say about Chalmers’s “conceivable” zombies (I also doubt that there is any “conceivability” worthy of the name that suffices for metaphysical possibility, but that seems beside the point). And we are off to a game of burden-of-argument tennis.

We feel that we have a deep understanding of how various physical objects and properties relate to each other. The conceptual relationships between them seem reasonably clear. In contrast, those between physical and mental properties do not. But our clarity about physical relations comes from deep experience of the ways in which various physical properties relate to each other – so deep that our very words and concepts already imply that they belong together; that they are in fact just aspects of the same thing. Square things (visually) are square (tactilely). And an impressive amount of the time, they are aspects of the same thing, and we are exactly right. (Sometimes – time and causality being two of my favorite topics – we overgeneralize and get something wrong, and then finding a way out of our conceptual mess gets painfully difficult.) We lack such deep experience of mental/physical relations, and can only probe a few connections in the lab. But that doesn’t mean that the properties fall into metaphysically separate domains.

References

  • Richard Brown (2010) “Deprioritizing the A Priori Arguments against Physicalism”, Journal of Consciousness Studies 17 (3-4)
  • Peter Martin (1998) “Zombies versus Materialists: The Battle for Conceivability”, Southwest Philosophy Review 14 (1): 131-138

DO panic

In early January 2021, Trump asked the Justice Department to send a letter to Georgia declaring that there was evidence of widespread fraud and corruption in Georgia’s presidential election, and suggesting that Georgia send a replacement slate of Electors to the Congress to vote for Trump. In a meeting that included acting Attorney General Jeffrey Rosen, acting Deputy Attorney General Richard Donoghue, acting Assistant Attorney General for the Civil Division Jeffrey Clark, and others, Trump floated the idea of replacing his honest and competent Justice Department head, Rosen, with the unqualified toady Clark. Rosen and Donoghue threatened to resign, but told only a few others about this. Donoghue testified:

We didn’t expand the circle until the late afternoon of January 3rd, but we wanted to keep a close hold because, frankly, I thought — we thought it would create friction and maybe even panic within the leadership of the Department.

Richard Donoghue, January 6th Hearings

So, the Department of Justice was on the verge of being turned into a sham, which would try to block the actual election results from deciding the next President in the manner specified by the Constitution. That sounds like an excellent time to panic. People who work in the Justice Department should have known about this so that they could make contingency plans, e.g. to resign as well. Moreover, perhaps one of them would have had the courage to blow the whistle, informing Congressional leadership at a minimum, although informing the press and the public would be appropriate as well.

“Panic” has multiple meanings or connotations. “Sudden uncontrollable fear or anxiety, often causing wildly unthinking behavior,” says Google, citing Oxford Languages. Let’s take the part before the comma as the core definition. There’s nothing wrong with panic in that sense. Fear is your friend. It’s trying to keep you (and your loved ones, and that can include your country) safe. If information suddenly becomes available, which often happens, and the information reveals a dire threat, sudden fear is appropriate. Fear is uncomfortable, but, like pain, it’s often a good thing that you can’t just turn it off without addressing the underlying issue (i.e., fear and pain are “uncontrollable”).

If you hear a backfire – at least, it was probably a backfire, but it could also have been a gunshot – and you live in a neighborhood where a gunshot is a substantial possibility, run like hell. Running away from a backfire is embarrassing. Dodging one bullet is worth hundreds of embarrassments.

Emotions are your friends. They are a vital part of System 1, in Kahneman’s useful albeit slightly fuzzy classification of thought-processes into two main groups. System 1 is quick, automatic, and subject to some predictable biases. System 2 is slow and careful, consciously directed, and still not guaranteed to be correct. Arguably, every inference you make in a System 2 logical deduction is itself supplied by System 1 (although it can be inspected, typed, and compared to known reliable types (but typing is a System 1 function (I think I’ll stop parenthesizing now))).

System 1 is not only quicker than System 2; in some sense it has more raw intellectual firepower. This is where emotion-laden thinking shines. It allows you to pick up on things that haven’t yet made it into explicit consciousness, or maybe even are too complex for routine System 2 analysis. Emotions, especially panic, can jump you out of your routine. An extraordinary effort to understand, and to protect yourself until you do, can help. Ignoring your emotions makes about as much sense as putting electrical tape over your “check engine” light because you don’t like the glare.

Maybe it really is your check engine light (or the corresponding function in your car’s computer) that is broken. But you’d better carefully investigate first, and obtain really good evidence of that, before going the denial route.

Related: The Straw Vulcan model of rationality.

Survey: what it feels like to think

… People with aphantasia are unable to form mental pictures in the mind’s eye. They are also not able to visualize images of past experiences. How vividly people visualize images varies greatly between individuals. … People with aphantasia seem to have different brain signaling and activity in certain regions of the brain

https://www.goodrx.com/health-topic/neurological/aphantasia-symptoms-spectrum

Not everyone with aphantasia knows they’re any different from most people in this respect. I note this because I have a-[…]-ia with regard to the content of my thoughts. And maybe I’m unusual. Or maybe not. Hence, I’m asking for comments. But first let me describe how it is for me.

I can definitely tell the difference between “I believe it!” and “I have no idea!” and “I lean toward that view, but just a little” – and so on. But it feels basically the same to believe “I am downstairs” and “the cat is on my computer”. The aforementioned difference in feeling is about the strength of belief, not its content.

I will, of course, say different words in my head, and conjure different images, when I think these thoughts — if I say words in my head, or conjure images. But I often don’t say words in my head, and usually don’t conjure images, when I have a thought. I just have the thought. I often don’t even explicitly know I had the thought, until someone asks me, or I run into a complex reasoning task the thought bears upon.

Also, some thoughts scare me, while others are pleasing, or neutral. But some equally scary thoughts are about entirely different things. The emotional reaction is not unique to the content of any given thought.

What I don’t seem to have is this:

According to the cognitive experience view, thinking is an experience that has phenomenal character: there is something it is like to think a thought. In particular, thinking has non-sensory phenomenal character: it has a kind of phenomenal character lacked by sensory perception, broadly construed to include bodily sensation, perceptual imagery, and inner speech. The view holds that there exists a kind of phenomenal character had by cognitive experiences like thoughts. What’s more, on the cognitive experience view, the phenomenal character of thought partially determines what the thought is about.

https://philpapers.org/archive/LENCPI.pdf

So, what about you? Does it feel any different to believe the cat is on the computer, versus that you are downstairs?

Process ontology and substance ontology – a verbal difference?

The Western philosophical tradition has mostly constructed ontologies in terms of substances, their properties and relations, and the times at which these substances exist and these properties and relations apply. Process philosophy, perhaps most famously associated with Alfred North Whitehead, prefers to talk about actions, changes, and well, processes. Is there a dispute here? That depends how much is claimed by each camp. Consider these “tasks of process philosophy” outlined in the Stanford Encyclopedia of Philosophy entry on the topic:

Given its current role as a rival to the dominant substance-geared paradigm of Western metaphysics, process philosophy has the overarching task of establishing the following three claims:

Claim 1: The basic assumptions of the ‘substance paradigm’ (i.e., a metaphysics based on static entities such as substances, objects, states of affairs, or instantaneous stages) are dispensable theoretical presuppositions rather than laws of thought.

Claim 2: Process-based theories perform just as well or better than substance-based theories in application to the familiar philosophical topics identified within the substance paradigm.

Claim 3: There are other important philosophical topics that can only be addressed within a process metaphysics.

Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/process-philosophy/#ThreTaskProcPhil

I regard claims 1 and 2 as spot-on, while 3 is highly dubious. And if you constructed parallel claims on behalf of the “substance paradigm”, I would again accept the first two and reject the third. While one way of talking or the other may be more felicitous for discussing a particular question, I do not see any untranslatable statements in either one.

Here is one area in which the author apparently means to suggest that Claim 3 would apply:

Quantum physics brought on the dematerialization of physical matter—matter in the small could no longer be conceptualized as a Rutherfordian planetary system of particle-like objects. The entities described by the mathematical formalism seemed to fit the picture of a collection of fluctuating processes organized into apparently stable structures by statistical regularities—i.e., by regularities of comportment at the level of aggregate phenomena.

SEP article

OK yes, “particles” are an emergent approximation applicable to certain (extremely common and important) interactions among fluctuating fields. But there are still fields, which seem to fit the bill of “substance” without an absurd amount of forcing: they just have a very (maximally!) extensive spatial location. They take on properties at certain times and places, and relate to other fields.

There’s another spate of over-claiming going on under the surface, on “both sides” of the “debate”. You can see it in the author’s (fair, IMO) portrayal of some traditional metaphysics:

Western metaphysics has long been obsessed with describing reality as an assembly of static individuals whose dynamic features are either taken to be mere appearances or ontologically secondary and derivative.

SEP article (emphasis added)

Ontologically secondary and derivative? What the hell is that supposed to mean, you might ask. Good question. I doubt that such invidious metaphysical comparisons make sense. Some terms successfully refer to something (“thing” in the broadest possible sense) – and all those things are ontologically equal. They exist: the highest and only prize in ontology.

“Substances” are primarily ontology’s concession to noun-phrases. James Grier Miller said: ‘Ontology recapitulates philology’ (h/t W. V. O. Quine, Word and Object). Calvin said “Verbing weirds language.” Process philosophers want us to focus more on the action, which is what verbs express. By all means, let’s verb some areas of philosophy and weird them. I just don’t see why we have to stop nouning them too.

Comments welcome.

Against one-dimensional ethics

totalizing: treating disparate parts as having one character, principle, or application. (Oxford Languages / Google Dictionary) “Totalizing ethics” might be a better way to say what I’m against here.

Philosophical ethics is often divided into three leading approaches: virtue ethics, consequentialism, and deontology. This division leaves options out, especially if you take each of these three approaches to be totalizing: to claim, as they often do, that all of ethics can be founded on the roots identified by that approach.

For virtue ethics, the roots of ethics lie in certain characteristics of the ethical agent. These include virtues, “practical wisdom”, which roughly amounts to knowing how to apply and balance virtues, and in most theories “flourishing”. Flourishing – and this quickly gets murky and controversial – is about how well an agent’s life goes. Here, “how well it goes” is already an ethically laden concept, not defined entirely by some simple metric such as how the agent rates it or how pleasant the agent’s life is.

For consequentialists, the roots of ethics lie in good consequences. Actions are right if they lead to good consequences and avoid bad ones, and likewise virtues are worth cultivating if they lead to good consequences and not bad. Consequentialisms differ over what counts as good consequences – happiness? Preference satisfaction? But set those differences aside for now. The most important point is that all other moral verdicts are to be derived from an overall score of good achieved and bad results avoided.

Deontological ethics is focused on duty. It attempts to divide actions into categories of morally prohibited, required, or permitted. Within the permitted actions, some are usually regarded as “supererogatory”, meaning morally good but not required; above and beyond the call of duty. The good results that consequentialists are concerned with usually find their main home here in deontological theories. And of course, character development can be considered a duty by the deontologist.

If this seems like a false trilemma to you, I say: exactly. When we are reasoning together about how to live with each other, there is no need to choose between virtues, good consequences, and rules of behavior. We need all three. Nor is it possible to pick any one of these foundations and adequately explain the other two in a purely derivative way. I won’t even try to survey a good sample of major attempts here – that would take a series of books. I’ll just say that I find them unconvincing. You can, for example, cook up a list of deontological duties that pays sufficient attention to good consequences and virtue development, but then it just looks like an attempt to sneak consequences and virtues in by the back door. Or you can try to derive everything from the Categorical Imperative, which would give ethics a crystalline unity, except for one minor detail: it doesn’t work. You can’t get here (ethics as we know and love it) from there. And equally importantly, why should you try?

Maybe this is a reason people try: if we don’t derive ethics from a single underlying principle, how are we supposed to resolve uncertainties and disagreements? I have a two part answer, the first of which is: sometimes you don’t. Ethics doesn’t have infinite precision, and sometimes there isn’t a uniquely right answer to an ethical question (and it’s not that there is one but we don’t know what it is, although that too can happen).

The other part I already gave: When we are reasoning together about how to live with each other, we are doing ethics. We propose virtues, goals/consequences, and rules, among other things, and listen carefully to objections and counter-proposals, looking for mutually acceptable resolutions. Mutually acceptable meaning acceptable to those who are interested in dialogue and getting along – not necessarily acceptable to psychopaths. We try to watch out for con artists who propose items that advantage themselves with no actual regard for anyone else. We rinse and repeat, ad nauseam and ad infinitum.

The dialogue is never done, both because agreement is incomplete and because of the possibility of error. We might fail to reason correctly, missing opportunities to make everyone in our society better off. We might unjustly impose a biased structure that benefits our own sub-group, telling ourselves that we heard the other sub-groups but their objections were incoherent.

Ethics does depend on the members of society who will be bound by it, but only after such errors are corrected. Ethics is complex because people are. This implies something that might be called a kind of “moral relativism” – insofar as different groups of people generally have differences that make different ways of relating reasonable for them – but not the idea that usually goes by that term. The latter is the idea that whatever a given society says is automatically moral, for them. No; ethics is a social construct, but it can be well or poorly constructed. Usually, it still needs some work.

Entropy, ignorance, and chaos

Two articles caught my eye lately. They’re only vaguely related, and yet… they both tell us a lot about how much ignorance we just have to live with.

In Physics Today, November 2021, Katie Robertson is concerned about the scientific and philosophical significance of entropy. Specifically, how and whether entropy depends on our human limitations, as contrasted to some hypothetical more-knowledgeable observer like Laplace’s demon. She writes:

But how should we understand probabilities in statistical mechanics? … Here we will narrow our attention to one dominant view, popularized by physicist Edwin Jaynes, which argues that the fundamental assumption of statistical mechanics stems from our ignorance of the microscopic details. Because the Jaynesian view emphasizes our own ignorance, it implicitly reinforces the idea that thermal physics is anthropocentric. We must assume each state is equally likely because we don’t know which exact microstate the system is in. Here we are confronted by our third and final philosophical specter: the demon first articulated by Pierre Simon Laplace in 1814 … Laplace’s demon is a hypothetical observer that knows the position and momentum of every molecule in the universe.

Katie Robertson, “The demons haunting thermodynamics”

She goes on to cite the Gibbs formula for entropy, which sums (or integrates) over all the possible microstates, weighting the probability of each by its logarithm. For Laplace’s demon, the probability of the known microstate is 1, so the demon calculates the Gibbs entropy to be zero.
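In symbols, a standard discrete form of the Gibbs entropy (my rendering, not a quotation from Robertson's article) makes the demon's verdict immediate:

```latex
S \;=\; -\,k_B \sum_i p_i \ln p_i
\qquad\text{so that}\qquad
S_{\text{demon}} \;=\; -\,k_B \,\bigl(1 \cdot \ln 1\bigr) \;=\; 0 .
```

Every other term drops out because the demon assigns those microstates probability 0 (with the usual convention that 0 ln 0 = 0).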

She then proposes to banish Laplace’s demon:

Fortunately, it, too, can be exorcised by shifting to a quantum perspective on statistical mechanics. In classical statistical mechanics, probabilities are an additional ingredient added to the system’s microdynamics. … But in the quantum case, probabilities are already an inherent part of the theory, so there is no need to add ignorance to the picture.

But there is an apparent conflict between QM and statistical mechanics:

How can [quantum] probabilities give rise to the familiar probability distributions from statistical mechanics? That question is especially tricky because quantum mechanics assigns an isolated system a definite state known as a pure state. In contrast, statistical mechanics assigns such a system an inherently uncertain state known as a maximally mixed state, in which each possibility is equally likely. The distinctively quantum nature of entanglement holds the key to resolving that seeming conflict (see figure 5). Consider a qubit that is entangled with a surrounding heat bath. Because they are entangled, if one of the two systems is taken on its own, it will be in an intrinsically uncertain state known as a mixed state. Nonetheless, the composite system of the qubit taken together with the heat bath is in a pure state because when taken as a whole, it is isolated. Assuming that the surrounding environment—namely, the heat bath—is sufficiently large, then for almost any pure state that the composite system is in, the qubit will be in a state very, very close to the state it would be assigned by traditional statistical mechanics.

Figure 5 is the figure at the top of this post. The emphasis above is added, because I want to resist the claim that ignorance is not involved. The “almost any” qualifier reveals our ignorance. Because we don’t know which pure state the whole universe is in, and because all our simply formulable ways of describing the possibilities make her “almost any” statement true, it is a very good bet that statistical mechanics will guide us well. But it’s still a bet.
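Here is a minimal numerical sketch of the mechanism in the quoted passage, scaled down to a toy case (mine, not Robertson's figure 5): a qubit maximally entangled with a one-qubit "environment" is, taken on its own, exactly the maximally mixed state, even though the composite is pure.

```python
import numpy as np

# Toy sketch (not Robertson's figure 5): a qubit entangled with a one-qubit
# "environment". The composite state is pure, but the reduced state of the
# qubit alone is maximally mixed, with von Neumann entropy ln 2.

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # Bell state (|00> + |11>)/sqrt(2)
rho = np.outer(psi, psi.conj())                      # composite density matrix (pure)

print("purity of composite:", np.trace(rho @ rho))   # 1.0: a pure state

# Partial trace over the environment: reshape to (qubit, env, qubit', env')
# and sum over matching environment indices.
rho_qubit = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print("reduced state of the qubit:\n", rho_qubit)    # 0.5 * identity

eigenvalues = np.linalg.eigvalsh(rho_qubit)
entropy = -sum(p * np.log(p) for p in eigenvalues if p > 1e-12)
print("von Neumann entropy:", entropy, "= ln 2 =", np.log(2))
```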

Note that this does not make thermal physics anthropocentric. There is nothing special about anthropoids here; any cognizer faces similar ignorance. As Robertson explains in her discussion of Maxwell’s demon (which I haven’t quoted; read her article for that), obtaining detailed knowledge of a system comes with an entropy cost. Laplace’s demon, tasked by his definition with obtaining such knowledge of the entire universe, runs out of room to dump the waste heat, and vanishes in a puff of logic. Laplace’s demon is physically impossible.

Now for the chaos.

In Aeon magazine, David Weinberger argues that “Our world is a black box: predictable but not understandable.” Machine learning algorithms, with their famous impenetrability, underlie his argument. MLM stands for Machine Learning Model, below:

But MLMs’ generalisations are unlike the traditional generalisations we use to explain particulars. We like traditional generalisations because (a) we can understand them; (b) they often enable deductive conclusions; and (c) we can apply them to particulars. But (a) an MLM’s generalisations are not always understandable; (b) they are statistical, probabilistic and primarily inductive; and (c) literally and practically, we usually cannot apply MLM generalisations except by running the machine learning model that resulted from them.

David Weinberger, “Learn from Machine Learning”

Weinberger says that rather than simply regarding these limitations as drawbacks, we should take them as clues to how the world actually works. Weinberger doesn’t directly discuss techniques to illuminate the sensitivities of neural networks, but he would probably point out that (a) – (c) above still apply, even after our best efforts along such lines.

Our encounter with MLMs doesn’t deny that there are generalisations, laws or principles. It denies that they are sufficient for understanding what happens in a universe as complex as ours. The contingent particulars, each affecting all others, overwhelm the explanatory power of the rules and would do so even if we knew all the rules.

Weinberger discusses a thought experiment that is basically a coin flip. If we wanted to know the exact final resting place and orientation of the coin, down to the smallest detail, we would need to be – you guessed it – Laplace’s demon.
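A few lines of arithmetic make the same point (my toy example, not Weinberger's coin): in a chaotic system, an initial uncertainty of one part in a billion swamps the prediction within a few dozen steps, so tracking the exact outcome is demon's work.

```python
# Toy illustration (not Weinberger's example): sensitive dependence on initial
# conditions in the logistic map at r = 4. Two starting points that differ by
# one part in a billion soon follow completely different trajectories.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-9
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}   y = {y:.6f}   |x - y| = {abs(x - y):.1e}")
```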

That’s not a criticism of the pursuit of scientific laws, nor of the practice of science, which is usually empirical and sufficiently accurate for our needs – even if the degree of pragmatic accuracy possible silently shapes what we accept as our needs. But it should make us wonder why we in the West have treated the chaotic flow of the river we can’t step into twice as mere appearance, beneath which are the real and eternal principles of order that explain that flow. Why our ontological preference for the eternally unchanging over the eternally swirling water and dust?

Whaddayamean, “we”? There has always been a faction in Western thought that recognized chaos as real. Weinberger wants us to join that faction. Amen, brother.

Hard Fact, not Hard Problem

Gaute Einevoll wrote “For me it seems a priori impossible to derive an inside-out perspective (what it feels like to be me) from the outside-in perspective inherent [in] physics-type descriptions.” That sounds a lot like David Chalmers’s “Hard Problem”:

. . .even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

To make the unanswered puzzlement more specific, why is the sight of red accompanied by this experience? Why does a cold surface feel like this? That answer really is, I suspect, impossible to derive from the outside-in perspective inherent in physics descriptions. Let’s work an example (which I wrote about some years ago).

Carol puts her left hand in a bucket of hot water, and lets it acclimate for a few minutes.  Meanwhile her right hand is acclimating to a bucket of ice water.  Then she plunges both hands into a bucket of lukewarm water.  The lukewarm water feels very different to her two hands.  To the left hand, it feels very chilly.  To the right hand, it feels very hot.  When asked to tell the temperature of the lukewarm water without looking at the thermocouple readout, she doesn’t know.  Asked to guess, she’s off by a considerable margin.

Next Carol flips the thermocouple readout to face her (as shown), and practices.  Using different lukewarm water temperatures of 10-35 C, she gets a feel for how hot-adapted and cold-adapted hands respond to the various middling temperatures.  Now she makes a guess – starting with a random hand, then moving the other one and revising the guess if necessary – each time before looking at the thermocouple.  What will happen?  I haven’t done the experiment, but human performance on similar perceptual learning tasks suggests that she will get quite good at it.

We bring Carol a bucket of 20 C water (without telling) and let her adapt her hands first as usual.  “What do you think the temperature is?” we ask.  She moves her cold hand first.  “Feels like about 20,” she says.  Hot hand follows.  “Yup, feels like 20.”

“Wait,” we ask. “You said feels-like-20 for both hands.  Does this mean the bucket no longer feels different to your two different hands, like it did when you started?”

“No!” she replies.  “Are you crazy?  It still feels very different subjectively; I’ve just learned to see past that to identify the actual temperature.”

In addition to reports on the external world, we perceive some internal states that typically (but not invariably) can serve as signals about our environment. Why would evolution build beings that sense their internal states?  Why not just have the organism know the objective facts of survival and reproduction, and be done with it?  One thought is that it is just easier to build a brain that does both, rather than one that focuses relentlessly on objective facts.  But another is that this separation of sense-data into “subjective” and “objective” might help us learn to overcome certain sorts of perceptual illusion – as Carol does, above.  And yet another is that some internal states might be extremely good indicators and promoters of survival or reproduction – like pain, or feelings of erotic love. 

Internal state sensations are often impossible to derive from the outside-in perspective inherent in physics descriptions. Call that the Hard Fact. But don’t call it the Hard Problem, unless you can identify someone whose problem it is. The $64,000 question is: are there any philosophical views that predict that the Hard Fact would be false? (Actually that amount would be wholly inadequate to cover the books and papers dedicated to almost-nobody’s Problem, but never mind.)

I can think of exactly one, relatively minor, philosophical view that stumbles on the Hard Fact, thereby making it a Problem. To wit, analytic functionalism. OK, so all three of those philosophers have a Problem. The rest all respect cognitive science and neuroscience too much. As a result, they will accept the evidence that humans form percepts based on particular sense modalities engaged in various encounters with the world. Percepts and concepts founded in proprioception are radically different from those formed on the basis of hearing, for example. And those founded in looking at a brain scan of someone feeling cold will be radically different from those founded in touching something cold. So of course you can’t derive one viewpoint from the other.

Know what you also can’t do? You can’t refute a hypothesis by pointing to a successful prediction that it makes.

…Except Rambo!

Sam Kinison had a comedy routine with the recurring punchline “except Rambo”. For example, Sam’s sergeant would explain that a certain grenade would spew metal fragments at high velocity killing everything within 100 yards … except … Rambo! Kinison was making fun of Hollywood script-writing. I think we should use the same idea to make fun of a bunch of philosophizing, about consciousness especially, though not exclusively.

Ironically, it doesn’t matter if you make Rambo especially immune to incoming fire, or especially powerless to affect the world outside himself. It’s equally implausible either way.

Philosophers often like to make consciousness powerless. Consciousness is the anti-Rambo. Other things can affect it, but it cannot affect the world. The most prominent philosopher of this stripe is David Chalmers. The view comes out in his “zombie argument” where he argues that subjective sensations (qualia) cannot be physical processes or properties:

In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this (so Chalmers argues) it follows that such a world is metaphysically possible.

https://en.wikipedia.org/wiki/Philosophical_zombie

A philosophical zombie is supposed to be physically identical with an ordinary human being, down to the last neuron and quark and speech and act, yet somehow not experience pain, joy, or any other sensation. Now put aside the absurdity of deriving the structure of the world based on what we can imagine from our armchairs, and see what Chalmers implies.

If conscious pain sensation is distinct from the physical properties that explain why we say “ouch” and grab onto our stubbed toes, it’s not actually doing any work. Conscious sensation is the anti-Rambo. Conscious pains can’t even inspire you to avoid stubbing your toes in the future. It’s your physical neural memories that do that, according to Chalmers’s logic. Thirst – and here I mean the feeling, not just the water concentration in your blood or cerebrospinal fluid – can’t explain your drinking, and hunger can’t explain your eating.

Misled by Analysis

“Conceptual analysis” has been the dominant paradigm in English-language philosophy for much of the twentieth century, and remains influential. Traditionally, a conceptual analysis provides necessary and sufficient conditions for the proper application of a concept. For example, Porphyry defined man as a “mortal rational animal”. A little less restrictively, a modern philosopher might just attempt to lay down a few necessary conditions, or a few sufficient ones. Additionally, this conceptual analysis is supposed to be a matter of pure thought – no experiments required. In a common image: it can be done from the armchair. For a great summary and critique of conceptual analysis, see Ahlstrom-Vij’s book review of McGinn.

This has been a terrible wrong turn, in my view. Although some valuable techniques are used along the way – such as finding counterexamples to proposed generalizations – the odds of achieving true conceptual analysis are minuscule. Here I want to sketch some cognitive-psychological evidence for my skepticism, mainly so that I can refer back to it later. In other words, here I’m doing some metaphilosophy which I’ll lean on when I discuss more typical philosophical topics. A lot of my readers may find this boring, in which case by all means skip it, at least until I lean on it later and you question my reasoning. (Notice how I just implied that I have a lot of readers? Clever and subtle, huh?)

Two of the dominant cognitive-psychological models of categorization are exemplar theory and prototype theory. Take the category of “birds” for example. Both theories focus on similarity, but the exemplar theory supposes that some short list of known birds (robins, pigeons, hawks, …) provides the standard, while the prototype theory proposes that the average characteristics of all known birds provides the standard. Both theories imply that some birds (e.g. penguins) are harder to recognize as birds than others (robins). Prototype theory regards category membership not as an all-or-nothing affair, but as more of a web of interlocking categories which overlap. Exemplar theory is less committed to the existence of vague or borderline category membership, but seems at least compatible with the idea that some animals – Archaeopteryx for example – might be borderline cases of birds.
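A toy sketch of the contrast (my made-up feature vectors, and a deliberately crude nearest-exemplar rule standing in for real exemplar models, which aggregate similarity across many stored exemplars): both approaches classify by similarity, but they can disagree about borderline cases such as the penguin.

```python
import numpy as np

# Toy sketch, not the actual psychological models: classify a new animal as
# "bird" or "mammal" by (a) distance to the category prototype (the average of
# known members) or (b) distance to the nearest stored exemplar. Feature
# vectors are invented: [body mass (kg), feathers (thousands), airspeed (m/s)].

birds = np.array([
    [0.08, 3.0, 10.0],   # robin-ish
    [0.35, 5.0, 15.0],   # pigeon-ish
    [1.00, 7.0, 20.0],   # hawk-ish
])
mammals = np.array([
    [4.0, 0.0, 0.0],     # cat-ish
    [30.0, 0.0, 0.0],    # dog-ish
    [70.0, 0.0, 0.0],    # human-ish
])

def prototype_label(x):
    d_bird = np.linalg.norm(x - birds.mean(axis=0))
    d_mammal = np.linalg.norm(x - mammals.mean(axis=0))
    return "bird" if d_bird < d_mammal else "mammal"

def exemplar_label(x):
    d_bird = np.linalg.norm(birds - x, axis=1).min()
    d_mammal = np.linalg.norm(mammals - x, axis=1).min()
    return "bird" if d_bird < d_mammal else "mammal"

robin = np.array([0.1, 3.0, 12.0])
penguin = np.array([4.5, 8.0, 0.0])   # heavy, feathered, flightless: a hard case

for name, animal in [("robin", robin), ("penguin", penguin)]:
    print(f"{name}: prototype says {prototype_label(animal)}, exemplar says {exemplar_label(animal)}")
```

With these invented numbers the robin is easy for both models, while the penguin splits them, which is the flavor of the "harder to recognize" point above.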

In order to be useful for thought and communication, a category need not have necessary and sufficient conditions for membership, at least not in any usable form. (A long list of features and their weightings – feathers, flight, talons, beak, etc., etc., along with numerical weightings – might work mathematically, but seems psychologically unrealistic, and certainly not discoverable from the armchair.) Instead, a workable category has short distances in similarity-space between members, compared to the distance between the category and other nearby categories. Crucially, this depends on what things happen to populate the world the speakers live in. The reptiles, mammals, and other animals we contrast to birds help make birds a useful category, because the differences between categories are relatively large. Perhaps in a galaxy far away, there is a planet with many birdlike and mammal-like animals that can be lined up in a gradually differing order with no breaks. If there are intelligent beings on that planet, it is a good bet that just one of their concepts encompasses all of those species.

Another way of saying this is: Things are clustered in the (mathematical) space of properties. The “dimensions” of this space are length, mass, number of feathers, flight speed in air, etc. – any property the thinker in question can measure or observe. And the similarity/distance metric in this space is whatever is salient to the perceiver. As members of the same species, we can usually count on each other’s similarity metrics to be largely commensurate with our own. But if we ever communicate with beings on that distant planet I speculated about in the previous paragraph – the planet of the birdmammals – we need to be more circumspect. They might perceive a glaring gap in the birdmammal spectrum that we just can’t see. Clustering is a feature of the external world X cognizer(s) interaction, not of the external world alone. External here refers to what is outside the cognizers — of course, the world as a whole includes them, which is a point philosophers could stand to remember more often.

Another reason to be skeptical about the prospects for conceptual analysis is that competence far outruns explicit reflective knowledge. We can recognize instances of a concept – pictures of cats on the internet for example – far more easily than we can lay down rules by which cats should be recognized, much less define “cat”. The fact that it took computer programmers decades to achieve Machine Vision systems that can recognize cats (and other categories) at least as reliably as humans can is evidence of how hard it is to construct such rules. And those programmers had computers to do the grunt work of logical and mathematical computation; they didn’t do it from their armchairs. And those programmers still don’t know the rules for recognizing cats – rather, they know the rules for building neural networks and the rules for training neural networks.

Two other philosophy-of-language-related essays I really like: