Against one-dimensional ethics

totalizing: treating disparate parts as having one character, principle, or application. (Oxford Languages / Google Dictionary) “Totalizing ethics” might be a better way to say what I’m against here.

Philosophical ethics is often divided into three leading approaches: virtue ethics, consequentialism, and deontology. This division leaves options out, especially if you take each of these three approaches to be totalizing: to claim, as they often do, that all of ethics can be founded on the roots identified by that approach.

For virtue ethics, the roots of ethics lie in certain characteristics of the ethical agent. These include virtues, “practical wisdom”, which roughly amounts to knowing how to apply and balance virtues, and in most theories “flourishing”. Flourishing – and this quickly gets murky and controversial – is about how well an agent’s life goes. Here, “how well it goes” is already an ethically laden concept, not defined entirely by some simple metric such as how the agent rates it or how pleasant the agent’s life is.

For consequentialists, the roots of ethics lie in good consequences. Actions are right if they lead to good consequences and avoid bad ones, and likewise virtues are worth cultivating if they lead to good consequences and not bad. Consequentialisms differ over what counts as a good consequence – happiness? preference satisfaction? – but set those differences aside for now. The most important point is that all other moral verdicts are to be derived from an overall score of good results achieved and bad results avoided.

Deontological ethics is focused on duty. It attempts to divide actions into categories of morally prohibited, required, or permitted. Within the permitted actions, some are usually regarded as “supererogatory”, meaning morally good but not required; above and beyond the call of duty. The good results that consequentialists are concerned with usually find their main home here in deontological theories. And of course, character development can be considered a duty by the deontologist.

If this seems like a false trilemma to you, I say: exactly. When we are reasoning together about how to live with each other, there is no need to choose between virtues, good consequences, and rules of behavior. We need all three. Nor is it possible to pick any one of these foundations and adequately explain the other two in a purely derivative way. I won’t even try to survey a good sample of major attempts here – that would take a series of books. I’ll just say that I find them unconvincing. You can, for example, cook up a list of deontological duties that pays sufficient attention to good consequences and virtue development, but then it just looks like an attempt to sneak consequences and virtues in by the back door. Or you can try to derive everything from the Categorical Imperative, which would give ethics a crystalline unity, except for one minor detail: it doesn’t work. You can’t get here (ethics as we know and love it) from there. And equally importantly, why should you try?

Maybe this is a reason people try: if we don’t derive ethics from a single underlying principle, how are we supposed to resolve uncertainties and disagreements? I have a two-part answer, the first of which is: sometimes you don’t. Ethics doesn’t have infinite precision, and sometimes there isn’t a uniquely right answer to an ethical question (and it’s not that there is one but we don’t know what it is, although that too can happen).

The other part I already gave: When we are reasoning together about how to live with each other, we are doing ethics. We propose virtues, goals/consequences, and rules, among other things, and listen carefully to objections and counter-proposals, looking for mutually acceptable resolutions. Mutually acceptable meaning acceptable to those who are interested in dialogue and getting along – not necessarily acceptable to psychopaths. We try to watch out for con artists who propose items that advantage themselves with no actual regard for anyone else. We rinse and repeat, ad nauseam and ad infinitum.

The dialogue is never done, both because agreement is incomplete and because of the possibility of error. We might fail to reason correctly, missing opportunities to make everyone in our society better off. We might unjustly impose a biased structure that benefits our own sub-group, telling ourselves that we heard the other sub-groups but their objections were incoherent.

Ethics does depend on the members of the society who will be bound by it, but only after such errors are corrected. Ethics is complex because people are. This implies something that might be called a kind of “moral relativism” – insofar as different groups of people generally have differences that make different ways of relating reasonable for them – but not the idea that usually goes by that name. The latter is the idea that whatever a given society says is automatically moral, for that society. No; ethics is a social construct, but it can be well or poorly constructed. Usually, it still needs some work.

Entropy, ignorance, and chaos

Two articles caught my eye lately. They’re only vaguely related, and yet… they both tell us a lot about how much ignorance we just have to live with.

In Physics Today, November 2021, Katie Robertson is concerned about the scientific and philosophical significance of entropy. Specifically, how and whether entropy depends on our human limitations, as contrasted to some hypothetical more-knowledgeable observer like Laplace’s demon. She writes:

But how should we understand probabilities in statistical mechanics? … Here we will narrow our attention to one dominant view, popularized by physicist Edwin Jaynes, which argues that the fundamental assumption of statistical mechanics stems from our ignorance of the microscopic details. Because the Jaynesian view emphasizes our own ignorance, it implicitly reinforces the idea that thermal physics is anthropocentric. We must assume each state is equally likely because we don’t know which exact microstate the system is in. Here we are confronted by our third and final philosophical specter: the demon first articulated by Pierre Simon Laplace in 1814 … Laplace’s demon is a hypothetical observer that knows the position and momentum of every molecule in the universe.

Katie Robertson, “The demons haunting thermodynamics”

She goes on to cite the Gibbs formula of entropy, which integrates over the log of probabilities of all the possibilities. For Laplace’s demon, the probability of the known microstate is 1, so the demon calculates the Gibbs entropy to be zero.
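In its discrete form (with k_B Boltzmann’s constant and p_i the probability of microstate i), the Gibbs entropy reads:

```latex
S_G = -k_B \sum_i p_i \ln p_i
```

For Laplace’s demon, the known microstate has probability 1 and every other microstate has probability 0, so each term vanishes (1 · ln 1 = 0), and S_G = 0 – which is just the calculation Robertson describes.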

She then proposes to banish Laplace’s demon:

Fortunately, it, too, can be exorcised by shifting to a quantum perspective on statistical mechanics. In classical statistical mechanics, probabilities are an additional ingredient added to the system’s microdynamics. … But in the quantum case, probabilities are already an inherent part of the theory, so there is no need to add ignorance to the picture.

But there is an apparent conflict between QM and statistical mechanics:

How can [quantum] probabilities give rise to the familiar probability distributions from statistical mechanics? That question is especially tricky because quantum mechanics assigns an isolated system a definite state known as a pure state. In contrast, statistical mechanics assigns such a system an inherently uncertain state known as a maximally mixed state, in which each possibility is equally likely. The distinctively quantum nature of entanglement holds the key to resolving that seeming conflict (see figure 5). Consider a qubit that is entangled with a surrounding heat bath. Because they are entangled, if one of the two systems is taken on its own, it will be in an intrinsically uncertain state known as a mixed state. Nonetheless, the composite system of the qubit taken together with the heat bath is in a pure state because when taken as a whole, it is isolated. Assuming that the surrounding environment—namely, the heat bath—is sufficiently large, then for almost any pure state that the composite system is in, the qubit will be in a state very, very close to the state it would be assigned by traditional statistical mechanics.

Figure 5 is the figure at the top of this post. The emphasis above is added, because I want to resist the claim that ignorance is not involved. The “almost any” qualifier reveals our ignorance. Because we don’t know which pure state the whole universe is in, and because all our simply formulable ways of describing the possibilities make her “almost any” statement true, it is a very good bet that statistical mechanics will guide us well. But it’s still a bet.

Note that this does not make thermal physics anthropocentric. There is nothing special about anthropoids here; any cognizer faces similar ignorance. As Robertson explains in her discussion of Maxwell’s demon (which I haven’t quoted; read her article for that), obtaining detailed knowledge of a system comes with an entropy cost. Laplace’s demon, tasked by his definition with obtaining such knowledge of the entire universe, runs out of room to dump the waste heat, and vanishes in a puff of logic. Laplace’s demon is physically impossible.

Now for the chaos.

In Aeon magazine, David Weinberger argues that “Our world is a black box: predictable but not understandable.” Machine learning algorithms, with their famous impenetrability, underlie his argument. In the excerpts below, “MLM” stands for “machine learning model”:

But MLMs’ generalisations are unlike the traditional generalisations we use to explain particulars. We like traditional generalisations because (a) we can understand them; (b) they often enable deductive conclusions; and (c) we can apply them to particulars. But (a) an MLM’s generalisations are not always understandable; (b) they are statistical, probabilistic and primarily inductive; and (c) literally and practically, we usually cannot apply MLM generalisations except by running the machine learning model that resulted from them.

David Weinberger, “Learn from Machine Learning”

Weinberger says that rather than simply regarding these limitations as drawbacks, we should take them as clues to how the world actually works. Weinberger doesn’t directly discuss techniques to illuminate the sensitivities of neural networks, but he would probably point out that (a) – (c) above still apply, even after our best efforts along such lines.

Our encounter with MLMs doesn’t deny that there are generalisations, laws or principles. It denies that they are sufficient for understanding what happens in a universe as complex as ours. The contingent particulars, each affecting all others, overwhelm the explanatory power of the rules and would do so even if we knew all the rules.

Weinberger discusses a thought experiment that is basically a coin flip. If we wanted to know the exact final resting place and orientation of the coin, down to the smallest detail, we would need to be – you guessed it – Laplace’s demon.

That’s not a criticism of the pursuit of scientific laws, nor of the practice of science, which is usually empirical and sufficiently accurate for our needs – even if the degree of pragmatic accuracy possible silently shapes what we accept as our needs. But it should make us wonder why we in the West have treated the chaotic flow of the river we can’t step into twice as mere appearance, beneath which are the real and eternal principles of order that explain that flow. Why our ontological preference for the eternally unchanging over the eternally swirling water and dust?

Whaddayamean, “we”? There has always been a faction in Western thought that recognized chaos as real. Weinberger wants us to join that faction. Amen, brother.

Hard Fact, not Hard Problem

Gaute Einevoll wrote “For me it seems a priori impossible to derive an inside-out perspective (what it feels like to be me) from the outside-in perspective inherent [in] physics-type descriptions.” That sounds a lot like David Chalmers’s “Hard Problem”:

. . .even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?

To make the unanswered puzzlement more specific: why is the sight of red accompanied by this experience? Why does a cold surface feel like this? Those answers really are, I suspect, impossible to derive from the outside-in perspective inherent in physics descriptions. Let’s work an example (which I wrote about some years ago).

Carol puts her left hand in a bucket of hot water, and lets it acclimate for a few minutes.  Meanwhile her right hand is acclimating to a bucket of ice water.  Then she plunges both hands into a bucket of lukewarm water.  The lukewarm water feels very different to her two hands.  To the left hand, it feels very chilly.  To the right hand, it feels very hot.  When asked to tell the temperature of the lukewarm water without looking at the thermocouple readout, she doesn’t know.  Asked to guess, she’s off by a considerable margin.

Next Carol flips the thermocouple readout to face her (as shown), and practices.  Using different lukewarm water temperatures of 10-35 C, she gets a feel for how hot-adapted and cold-adapted hands respond to the various middling temperatures.  Now she makes a guess – starting with a random hand, then moving the other one and revising the guess if necessary – each time before looking at the thermocouple.  What will happen?  I haven’t done the experiment, but human performance on similar perceptual learning tasks suggests that she will get quite good at it.

We bring Carol a bucket of 20 C water (without telling) and let her adapt her hands first as usual.  “What do you think the temperature is?” we ask.  She moves her cold hand first.  “Feels like about 20,” she says.  Hot hand follows.  “Yup, feels like 20.”

“Wait,” we ask. “You said feels-like-20 for both hands.  Does this mean the bucket no longer feels different to your two different hands, like it did when you started?”

“No!” she replies.  “Are you crazy?  It still feels very different subjectively; I’ve just learned to see past that to identify the actual temperature.”

In addition to reports on the external world, we perceive some internal states that typically (but not invariably) can serve as signals about our environment. Why would evolution build beings that sense their internal states?  Why not just have the organism know the objective facts of survival and reproduction, and be done with it?  One thought is that it is just easier to build a brain that does both, rather than one that focuses relentlessly on objective facts.  But another is that this separation of sense-data into “subjective” and “objective” might help us learn to overcome certain sorts of perceptual illusion – as Carol does, above.  And yet another is that some internal states might be extremely good indicators and promoters of survival or reproduction – like pain, or feelings of erotic love. 

Internal state sensations are often impossible to derive from the outside-in perspective inherent in physics descriptions. Call that the Hard Fact. But don’t call it the Hard Problem, unless you can identify someone whose problem it is. The $64,000 question is: are there any philosophical views that predict that the Hard Fact would be false? (Actually that amount would be wholly inadequate to cover the books and papers dedicated to almost-nobody’s Problem, but never mind.)

I can think of exactly one, relatively minor, philosophical view that stumbles on the Hard Fact, thereby making it a Problem. To wit, analytic functionalism. OK, so all three of those philosophers have a Problem. The rest all respect cognitive science and neuroscience too much. As a result, they will accept the evidence that humans form percepts based on particular sense modalities engaged in various encounters with the world. Percepts and concepts founded in proprioception are radically different from those formed on the basis of hearing, for example. And those founded in looking at a brain scan of someone feeling cold will be radically different from those founded in touching something cold. So of course you can’t derive one viewpoint from the other.

Know what you also can’t do? You can’t refute a hypothesis by pointing to a successful prediction that it makes.

…Except Rambo!

Sam Kinison had a comedy routine with the recurring punchline “except Rambo”. For example, Sam’s sergeant would explain that a certain grenade would spew metal fragments at high velocity killing everything within 100 yards … except … Rambo! Kinison was making fun of Hollywood script-writing. I think we should use the same idea to make fun of a bunch of philosophizing, about consciousness especially, though not exclusively.

Ironically, it doesn’t matter if you make Rambo especially immune to incoming fire, or especially powerless to affect the world outside himself. It’s equally implausible either way.

Philosophers often like to make consciousness powerless. Consciousness is the anti-Rambo. Other things can affect it, but it cannot affect the world. The most prominent philosopher of this stripe is David Chalmers. The view comes out in his “zombie argument” where he argues that subjective sensations (qualia) cannot be physical processes or properties:

In fact we can conceive of a world physically indistinguishable from our world but in which there is no consciousness (a zombie world). From this (so Chalmers argues) it follows that such a world is metaphysically possible.

A philosophical zombie is supposed to be physically identical with an ordinary human being, down to the last neuron and quark and speech and act, yet somehow not experience pain, joy, or any other sensation. Now put aside the absurdity of deriving the structure of the world based on what we can imagine from our armchairs, and see what Chalmers implies.

If conscious pain sensation is distinct from the physical properties that explain why we say “ouch” and grab onto our stubbed toes, it’s not actually doing any work. Conscious sensation is the anti-Rambo. Conscious pains can’t even inspire you to avoid stubbing your toes in the future. It’s your physical neural memories that do that, according to Chalmers’s logic. Thirst – and here I mean the feeling, not just the water concentration in your blood or cerebrospinal fluid – can’t explain your drinking, and hunger can’t explain your eating.

Misled by Analysis

“Conceptual analysis” has been the dominant paradigm in English-language philosophy for much of the twentieth century, and it remains influential. Traditionally, a conceptual analysis provides necessary and sufficient conditions for the proper application of a concept. For example, Porphyry defined man as a “mortal rational animal”. A little less restrictively, a modern philosopher might attempt to lay down just a few necessary conditions, or a few sufficient ones. Additionally, this conceptual analysis is supposed to be a matter of pure thought – no experiments required. In a common image: it can be done from the armchair. For a great summary and critique of conceptual analysis, see Ahlstrom-Vij’s book review of McGinn.

This has been a terrible wrong turn, in my view. Although some valuable techniques are used along the way – such as finding counterexamples to proposed generalizations – the odds of achieving true conceptual analysis are minuscule. Here I want to sketch some cognitive-psychological evidence for my skepticism, mainly so that I can refer back to it later. In other words, here I’m doing some metaphilosophy which I’ll lean on when I discuss more typical philosophical topics. A lot of my readers may find this boring, in which case by all means skip it, at least until I lean on it later and you question my reasoning. (Notice how I just implied that I have a lot of readers? Clever and subtle, huh?)

Two of the dominant cognitive-psychological models of categorization are exemplar theory and prototype theory. Take the category of “birds” for example. Both theories focus on similarity, but the exemplar theory supposes that some short list of known birds (robins, pigeons, hawks, …) provides the standard, while the prototype theory proposes that the average characteristics of all known birds provides the standard. Both theories imply that some birds (e.g. penguins) are harder to recognize as birds than others (robins). Prototype theory regards category membership not as an all-or-nothing affair, but as more of a web of interlocking categories which overlap. Exemplar theory is less committed to the existence of vague or borderline category membership, but seems at least compatible with the idea that some animals – Archaeopteryx for example – might be borderline cases of birds.
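The contrast between the two models can be sketched in a few lines of code. This is a minimal illustration, not a serious psychological model: the feature vectors, the Euclidean similarity metric, and the example animals are all invented for the purpose.

```python
# Sketch of exemplar vs. prototype categorization in a toy similarity space.
# Features: (wingspan in m, mass in kg, flight ability 0-1); values are made up.
import math

def dist(a, b):
    """Euclidean distance: one possible similarity metric among many."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Hypothetical stored bird examples.
exemplars = [(0.3, 0.08, 1.0),   # robin-like
             (0.7, 0.35, 1.0),   # pigeon-like
             (1.2, 1.00, 1.0)]   # hawk-like

def exemplar_similarity(item):
    """Exemplar theory: similarity to the nearest stored example."""
    return -min(dist(item, e) for e in exemplars)

# Prototype theory: similarity to the average of all known examples.
prototype = tuple(sum(f) / len(exemplars) for f in zip(*exemplars))

def prototype_similarity(item):
    return -dist(item, prototype)

robin = (0.32, 0.09, 1.0)
penguin = (0.8, 4.0, 0.0)  # heavy and flightless: a harder case

# On either theory, the robin scores as more bird-typical than the penguin.
assert exemplar_similarity(robin) > exemplar_similarity(penguin)
assert prototype_similarity(robin) > prototype_similarity(penguin)
```

Both theories agree on the easy cases; they differ in what they store (a short list of examples versus one average), which matters for borderline cases like the penguin.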

In order to be useful for thought and communication, a category need not have necessary and sufficient conditions for membership, at least not in any usable form. (A long list of features and their weightings – feathers, flight, talons, beak, etc., etc., along with numerical weightings – might work mathematically, but seems psychologically unrealistic, and certainly not discoverable from the armchair.) Instead, a workable category has short distances in similarity-space between members, compared to the distance between the category and other nearby categories. Crucially, this depends on what things happen to populate the world the speakers live in. The reptiles, mammals, and other animals we contrast to birds help make birds a useful category, because the differences between categories are relatively large. Perhaps in a galaxy far away, there is a planet with many birdlike and mammal-like animals that can be lined up in a gradually differing order with no breaks. If there are intelligent beings on that planet, it is a good bet that just one of their concepts encompasses all of those species.

Another way of saying this is: Things are clustered in the (mathematical) space of properties. The “dimensions” of this space are length, mass, number of feathers, flight speed in air, etc. – any property the thinker in question can measure or observe. And the similarity/distance metric in this space is whatever is salient to the perceiver. As members of the same species, we can usually count on each other’s similarity metrics to be largely commensurate with our own. But if we ever communicate with beings on that distant planet I speculated about in the previous paragraph – the planet of the birdmammals – we need to be more circumspect. They might perceive a glaring gap in the birdmammal spectrum that we just can’t see. Clustering is a feature of the interaction between the external world and the cognizer(s), not of the external world alone. “External” here refers to what is outside the cognizers — of course, the world as a whole includes them, which is a point philosophers could stand to remember more often.

Another reason to be skeptical about the prospects for conceptual analysis is that competence far outruns explicit reflective knowledge. We can recognize instances of a concept – pictures of cats on the internet, for example – far more easily than we can lay down rules by which cats should be recognized, much less define “cat”. The fact that it took computer programmers decades to achieve machine-vision systems that recognize cats (and other categories) at least as reliably as humans do is evidence of how hard it is to construct such rules. And those programmers had computers to do the grunt work of logical and mathematical computation; they didn’t do it from their armchairs. And those programmers still don’t know the rules for recognizing cats – rather, they know the rules for building neural networks and the rules for training neural networks.

Two other philosophy-of-language-related essays I really like:

Why experimental philosophy?

Experimental philosophy is an endeavor at the intersection of psychology and philosophy. In practice, it often looks like a series of surveys asking about hypothetical situations, and asking respondents to agree or disagree that in the situation, the protagonist knows a certain fact, or is conscious, or acted freely, etc. Seen uncharitably, this can look like an attempt to settle philosophical questions by popularity contest.

That’s the criticism Sean Carroll made of a brief summary by Paul Cousin of Thibaut Giraud’s work on how people think about free will.

I don’t think that’s what experimental philosophy does, at least not usually. (I speak no French so I can’t evaluate Giraud’s work.) At a minimum, experimental philosophy can act as a caution against traditional philosophical arguments which rely too glibly on the intuitions of the philosopher at some crucial point in the argument. Such experiments can support a “negative program”, in the words of Knobe and Nichols in the linked article (click on the words “Experimental philosophy” in paragraph one). By showing that the philosopher’s intuitions are not universally shared, they can raise doubts about the traditional arguments.

But I think experimental philosophy can do more than that. It can show how we got to certain “common sense” beliefs which philosophy (often with the help of science) puts into question. By carefully reconstructing the path we have taken, we can see where we went wrong, and precisely which later deductions are thrown into doubt and which are not. If scientific discoveries are involved, of course we have to understand those correctly too. Reconstructing our cognitive-developmental history is psychology. But it is also philosophy, when philosophically important ideas are at stake.

It’s easy to overlook parts of the thought-trail that got us into some “dilemma” and thus misdiagnose our problems. As I’ve argued in my previous posts on free will, free will is traditionally opposed to “determinism” because “determinism” was thought to imply universal causality, where “causality” is a one-way relationship from past events to present and future ones. But this supposed equation between determinism and universal causality is itself a scientific mistake. The traditional “problem of free will” is imaginary, and the traditional “solution” of a nonphysical mind intervening upon physics is an imaginary solution to an imaginary problem.

The “because” in my previous paragraph states a claim about the thought-trail that got us here. It is exactly the kind of claim best evaluated with a large helping of experimental philosophy.

There’s another use for experimental philosophy – less important, but still significant. Many philosophical problems are about how to reconcile/adapt the “manifest image” to the “scientific image”, to use Wilfrid Sellars’s phrases. In other words, supposing that we understand what science is actually telling us, how best can we state the upshots in everyday language? To know that, we have to understand how people actually use everyday language like “is conscious”, “knows”, and “acted freely”. Just asking them to set down definitions is not a good approach. (Try asking people to define “chair”, and note how few of them allow something like a beanbag chair to count.) But surveying people about hypothetical (preferably not wildly hypothetical) scenarios is a perfectly reasonable approach. If you’re interested in accurate and efficient communication – as everyone who seeks truth is – choosing the right words matters.

AI (AGI) will probably have values

Convergent evolution of the eye in vertebrates (L) and cephalopods (R). Source: Wikipedia

Some AI safety researchers (e.g. Stuart Russell) refer to an “alignment problem”. That is, they take it as highly likely that an Artificial General Intelligence (AGI) will have values, and want to ensure that those values are well aligned with human values. Here I use “AGI” to mean an AI that can do most intelligence tasks at least as well as a typical human being. In a recent Ask Me Anything podcast, Sean Carroll questioned this assumption behind the “alignment” problem: maybe AI won’t have values. There are probably many ways of achieving high performance on some tasks, he suggested; why assume that the methods implemented in AI will involve values?

I want to give two reasons why it is indeed likely that AI will have features recognizable as values, or goals. (It doesn’t matter for this purpose if what the AI has are “really” values in some deep metaphysical sense; the fact that they consistently function in a similar way is sufficient to raise and define a safety concern.) First, convergent evolution in biology has led to values at least twice. Second, the definitions of the tasks that humans will want AI to perform generally require values to understand.

The “camera”-style eyes of vertebrates, cephalopods, and jellyfish are a well-known example of convergent evolution. The fact that multiple lineages of organisms developed the same basic solution to a problem is a good indication that the solution is a particularly good one. It would be reasonable to expect that, were yet another lineage to evolve independently into a niche where sight is highly useful, it too might develop a camera-style eye. This would be particularly likely if no alternatives, such as compound eyes, had also evolved.

In addition to eyes, both vertebrates and cephalopods developed complex nervous systems with remarkable levels of intelligence. And both types of animals have recognizable values. They avoid danger, seek food and mate(s), and explore and play, mostly in that order of priority. Although different animals have different variations on these values – I wrote “mate(s)” for a reason – they all count as values. And there are no animals which we consider intelligent which are not guided by values in their behavior.

Maybe we’re cheating, appointing ourselves the arbiters of which animals “we consider intelligent.” But if so, we’ll also be the judges of which AI we consider intelligent, so we’re cheating fair and square.

If AGI were being developed primarily to solve, say, abstract mathematical proofs with no known applications to human life, my second argument would not get off the ground. But it’s not. AGI is being developed to assist humans with our life tasks, whether it be daily life for an elderly or injured person who needs plenty of assistance, or scientific research, or engineering, or corporate planning. Or, for a really scary thought, military strategy. But to do well on these tasks in a flexible and intelligent way, the AI has to understand what the humans want. After all it is humans, either the user or the programmer or (one hopes) both, who define what doing well on the task means. From a certain point of view – one that abstracts away from human values and just tries to describe the world “objectively” – what humans want is a very narrow and peculiar range of outcomes. And to consistently match the human-desired outcomes, the AI has to track the performance of various optional actions it could take and how well they score on these measures. There is a word for a pattern of intelligent behavior that tracks certain outcomes and makes sure that they happen. It’s called “goal” seeking. Operationally speaking, this AI will “care” about achieving these “goals.”
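The operational sense of “goal seeking” above can be sketched very simply. Everything in this toy example is hypothetical – the actions, the prediction model, the target – but it shows the pattern: score candidate actions by how well their predicted outcomes match a human-specified target, and pick the best.

```python
# Toy sketch of operational goal-seeking: the agent "cares" about an outcome
# only in the sense that it scores candidate actions and picks the best.
# The task, the model, and the numbers are all invented for illustration.

def score(outcome, target):
    """Higher is better: negative distance from the human-specified target."""
    return -abs(outcome - target)

def choose_action(actions, predict, target):
    """Pick whichever action's predicted outcome best matches the target."""
    return max(actions, key=lambda a: score(predict(a), target))

# A hypothetical thermostat-like agent: actions are heater power levels,
# the predicted outcome is room temperature, the human goal is 21 degrees C.
actions = [0.0, 0.5, 1.0]                 # heater settings
predict = lambda power: 15 + 10 * power   # crude model: temperature from power
best = choose_action(actions, predict, target=21.0)
print(best)  # the agent consistently tracks the 21-degree goal
```

Nothing here requires feelings or inner longing; the “goal” is just the outcome the behavior reliably tracks – which is exactly what makes it a safety-relevant feature.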

To repeat, for this discussion I don’t care whether the AI “really” cares, if that means for example feeling subjective emotional longing for the goal. As Edsger Dijkstra once said, “The question of whether machines can think is about as relevant as the question of whether submarines can swim.” That may not be the right attitude in all AI related areas, but when it comes to safety, it is. Whether submarines can swim or not, they can still sink your battleship.

I haven’t surveyed the reasons why AGI would not be designed with values. Maybe my readers can supply some in the comments.

Who will count the votes

Я считаю, что совершенно неважно, кто и как будет в партии голосовать; но вот что чрезвычайно важно, это кто и как будет считать голоса.

I regard it as completely unimportant who in the party will vote and how, but it is extremely important who will count the votes and how.

attributed to Stalin by Boris Bazhanov, Stalin’s former personal secretary

Today (Jan 6, 2021), Congress will feature a dispute over whether to approve the Electoral College results. It is a foregone conclusion, because Democrats control the House, and it would require a majority of both the House and Senate to override the validity of the Electors that were sent by the states. Because it’s a foregone conclusion, the number of Republicans joining this putsch will be much smaller than it otherwise might have been. In other words: the symptoms will not reveal the full power of the underlying disease.

The National Archives has a useful document on the rules of the process. Part of it reads

Upon such reading of any such certificate or paper, the President of the Senate shall call for objections, if any. Every objection shall be made in writing, and shall state clearly and concisely, and without argument, the ground thereof, and shall be signed by at least one Senator and one Member of the House of Representatives before the same shall be received. When all objections so made to any vote or paper from a State shall have been received and read, the Senate shall thereupon withdraw, and such objections shall be submitted to the Senate for its decision; and the Speaker of the House of Representatives shall, in like manner, submit such objections to the House of Representatives for its decision; and no electoral vote or votes from any State which shall have been regularly given by electors whose appointment has been lawfully certified to according to section 6 of this title from which but one return has been received shall be rejected, but the two Houses concurrently may reject the vote or votes when they agree that such vote or votes have not been so regularly given by electors whose appointment has been so certified.

p. 13

So all a faction needs to install whomever it wants as the next President and Vice President is a majority in both Houses of Congress, and enough gall to ignore the whole democracy thing. No court would seem to have any jurisdiction over the process. All the referees have skin in the game. This seems like a glaring flaw.

Right now, we only have one party with widespread preference for their favorite conspiracy theories over the actual tallies of votes by actual voters. And even in that party, there are plenty who strongly prefer democracy. But it is not obvious why the situation in both parties will not get worse. The media are still largely following policies that encourage a race to the bottom.

After today, the pundits will congratulate us and themselves, saying the system worked. Maybe, if by “the system worked” you mean that we got lucky this time. But I can hear the ghost of Stalin (or maybe it’s just Bazhanov; all Russian ghosts sound the same to me) laughing behind our backs.

Update Jan 7: Boy, was I barking at the wrong threat! I mean, we still have to fix this “Congress counts the votes” thing, but only after taking better measures to stop simple thuggery.

Psychology, structures, and chemistry

Joseph E Davis has a featured post in the Aeon/Psyche newsletter titled “Let’s avoid talk of ‘chemical imbalance’: it’s people in distress.” Davis argues that “chemical imbalance” is drastically oversimplified, and distracts from more personal and more effective treatments. I think he’s basically right. (Full disclosure: my wife is a psychologist.)

How could a treatment grounded in verbal exchange of fuzzy human concepts and memories outperform a treatment based on scientific studies of the brain? Surely I’m not denying that neurotransmitters make a difference to how a person feels and behaves? Well of course not: feelings and behaviors have to be implemented somewhere, and it’s not your left pinky toe! Even if you believe in an immaterial soul that controls what you feel and do, the control has to enter the body somewhere, and the brain is the only remotely plausible candidate (if any candidate is, which is debatable).

But then, personal encounters and verbal exchanges also affect the brain. Memories are laid down by changing the neural wiring, among other possible effects. Neurotransmitters bring about signaling across synapses, but learning affects where those signals go.

Davis cites Irving Kirsch, “Placebo Effect in the Treatment of Depression and Anxiety,” whose abstract states:

analyses of the published and the unpublished clinical trial data are consistent in showing that most (if not all) of the benefits of antidepressants in the treatment of depression and anxiety are due to the placebo response, and the difference in improvement between drug and placebo is not clinically meaningful and may be due to breaking blind by both patients and clinicians. … Other treatments (e.g., psychotherapy and physical exercise) produce the same benefits as antidepressants and do so without the side effects and health risks of the active drugs. Psychotherapy and placebo treatments also show a lower relapse rate than that reported for antidepressant medication.

It’s important to remember that a placebo effect IS an effect. It can be considerably better than nothing.

If psychotherapy is so great, why doesn’t it sell better? Davis writes:

[Jenna, a depressed patient] told me she welcomed the diagnosis of a neurobiological disorder, which confirmed her problem was ‘real’ – brought on by a physiological force external to her volition – and that it showed she’s not ‘just a slacker’. At the same time, Jenna was careful to distance her experience from that of people who are, in her words, ‘crazy’ or ‘nuts’. Their illness means a loss of control and ability to function. By contrast, she sees her problem as a common and minor glitch in neurochemistry. No one, she insisted, should mistake her for the mentally ill.

The stigmatization of mental problems is the problem. Ironically, as Davis explains but I won’t quote, the “chemical imbalance” story has if anything aggravated stigmatization.

Clarke-Doane on ethics and mathematics

He contrasts them – kinda. So do I, but for different reasons. Here are the two bottom lines from Justin Clarke-Doane’s paper “The ethics–mathematics analogy” in Philosophy Compass 2019:

This argument is a kind of radicalization of Moore’s Open Question Argument. … The point … is that an agent may know that A is F, for any property, F, whether descriptive or ethical, while failing to endorse A. … if the argument works, it works for any normative properties, whether ethical, epistemic, prudential, or all-things-considered.

In general, if one is an ethical anti-realist on the basis of epistemological considerations, then one ought to be a mathematical anti-realist too. And, yet, ethical and mathematical realism do not stand or fall together. Ethical questions, insofar as they are practical, cannot fail to be objective in a way that mathematical questions can.

But what does he mean by “practical” near the end of the second passage? Clarke-Doane repeatedly refers to “whether to do” what the ethical (or epistemic or prudential) norm says to do. Apparently a “practical” question is one that settles whether to do X, for some particular X.

Before we evaluate whether “whether to do X” questions can “fail to be objective”, I should explain how certain mathematical questions can fail to be objective, on Clarke-Doane’s view. That is because mathematical pluralism is true of at least some mathematical domains. (I know little about philosophy of mathematics, but I must say I find mathematical pluralism highly plausible.)

Clarke-Doane: “Just as Euclidean and hyperbolic geometries are equally true, albeit true of different structures, the mathematical pluralist maintains that foundational theories, like (pure) set theories, are too. It is as though the most uncompromising mathematical relativism were true.” And: “At first approximation, mathematical pluralism says that any (first-order) consistent mathematical theory is true of the entities of which it is about.” On this basis Clarke-Doane concludes that mathematics, if pluralists are correct, is truth-bearing but not objective. I’ll take this as partially definitive of what “objective” means here. So I guess this means: if you get to pick which theory to use, it’s not “objective”.

How might one conceive or defend an ethical pluralism comparable to mathematical pluralism? Clarke-Doane asks us to consider an “ethics-like” system ethics*, which has slightly different norms and as a result tells us not to do some particular X that ethics tells us to do. Then we might wonder whether to do what ethics tells us to do in the situation, or what ethics* tells us to do. As for why ethical pluralism might be defensible, Clarke-Doane suggests that Cornell Realism implies it, as do moral functionalism and Scanlon’s metaethical views. I call my own view “Cornell Constructivism”, but that’s for another time.

Of Clarke-Doane’s two bottom lines, I agree with the first and a small part of the second. The first was that one can accept that A is F, for any normative property F, and yet not endorse it. But this undercuts Clarke-Doane’s claim in the second bottom line that ethics is “practical” in his sense. Of course it may be practical for some people – ethical people. Highly ethical people may see no daylight between concluding that an act is right, and endorsing it and going for it. On the other hand, extremely sociopathic people might see no attraction at all in the ethical. And turning to philosophical thought-experiments, it seems easy to conceive a demon who regards the ethical as a property to be avoided at any cost.

Clarke-Doane might reply that you either do X, or do not, and that is what makes it objective. But that you do X (or not) does not imply that you ever evaluated X at all. I’m really not sure what Clarke-Doane is getting at, and I worry that I’ve overlooked a better interpretation. But I can find no interpretation that genuinely gets from “ethics is practical” to “it cannot fail to be objective” while making both claims plausible.

I agree all too much that “ethical and mathematical realism do not stand or fall together” – too much to have nearly as much patience with the ethics-mathematics analogy as Clarke-Doane does. Ethics is bound up with experience in ways that make the analogy a non-starter. Ethics is about how we can flourish and get along. We who address ethical reasoning and justifications to each other. We who accept or reject these reasons and justifications, and propose alternatives. In order to determine whether our interlocutors can reasonably accept our proposals, we have to study and listen to them. In order to check whether we reasonably make the proposals, we have to study ourselves – and our common humanity will allow this to shed light on others.

Ethics isn’t a priori. It’s mired in empirical learning.

Justin Clarke-Doane has done philosophy an enormous favor by radicalizing – to the point of absurdity – Moore‘s Open Question Argument. Even Moore’s own “simple non-natural property” of goodness fails to pass Moore’s own test. We can agree that an act has Moorean Goodness and still wonder whether to do it. But if no normative property can conceivably pass the test, this shows that the test is not an appropriate test of normativity. There is no pure normativity – “pure” meaning utterly empty of descriptive content – to be had in this or any other universe.

We can endorse an action as prudent, or ethical. We can endorse an inference as logical. We can endorse a theory as epistemically virtuous. In none of these cases are we simply saying “yay, action/inference/theory!” In none of them are we purely expressing approval, or an intention to act/infer/theorize. There is additional information we are implying.

We can of course just endorse. Endorse without an “as” (as ethical, as logical, etc.). Endorsing, that is, without any value judgement. But that’s not normativity.