Why oh why computationalism?

A mammal does a million things at once. She sniffs the air. She moves her head and eyes. (She has and maintains a head and eyes.) She absorbs photons of different wavelengths and registers them differently. She metabolizes glucose. She increases entropy. She gets curious about sounds. She forms beliefs and desires. She also computes.

Why seize on the last fact and say “aha, here is where all mental aspects lie!”? Why that fact to the exclusion of all others? It just looks arbitrary.

It’s worth distinguishing between computationalism and functionalism. As David Lewis’s classic version of functionalism maintained, mental states are identified by their relations to each other and to observations by, and behaviors of, the organism in question. Or as Gualtiero Piccinini (PDF) puts the functionalist thesis, “the mind is (an aspect of) the functional organization of the organ of cognition.” Meanwhile, computing mechanisms “are those mechanisms whose teleological function is manipulating medium-independent vehicles in accordance with a rule.” On Piccinini’s definitions, computationalism is a specific version of functionalism – and Piccinini advocates non-computational functionalism.

As I noted in my post Only Two Cheers for Functionalism, functionalism makes good sense for cognition and (Searle to the contrary notwithstanding) for intentionality. But I don’t see computationalism as adequate even in those domains. And in my view not even functionalism looks promising for distinguishing various aspects of phenomenal consciousness.

Philosophers have a common definition of “computation”, which Piccinini rightly criticizes in another article:

If there are two descriptions of a system, a physical description and a computational description, and if the computational description maps onto the physical description, then the system is a physical implementation of the computational description and the computational description is the system’s software. The problem with this view is that it turns everything into a computer. (p. 14)

Let’s not accept panpsychism on the basis that everything is a computer and computation is mindfulness. Piccinini suggests a different definition of computation:

program execution is understood informally as a special kind of activity pertaining to special mechanisms (cf. Fodor 1968b, 1975; Pylyshyn 1984). Computers are among the few systems whose behavior we normally explain by invoking the programs they execute. (p. 19)

The definition of “function” is also potentially problematic for the functionalism / computationalism distinction. Piccinini has a plausible approach to defining functions (pp. 23ff) that I won’t recap here.

One more worry about computationalism. This thought is inspired by Scott Aaronson in conversation. Computation is normally understood as a causal process spread over time. But computer science is a branch of mathematics, and it’s easy to see that the same mathematical relationships could be realized over a spatially extended structure. Who needs time? Time is a prerequisite for causality, but who needs causality?

But then, who needs space, or matter, or energy? Arguably, mathematical structures like the proof of Fermat’s last theorem exist regardless of whether anyone discovers them or writes them down. (Are there numbers between 6 and 9? Then there are numbers.) If my conscious life is a complex computation, I as a physical being may be redundant – depending on one’s philosophy of mathematics.

Humean laws and Humean mosaic

David Hume’s modern successors sometimes speak of a “mosaic” of particular facts or events. In my last post, I wrote:

The “Humean mosaic” is the information about local particular facts – properties at particular spacetime locations. I’ll come back to that next time.

I should have said “properties and relations” or maybe even “properties and relations held by objects.” (However, David Hume himself was very skeptical about objects – hence the first part of the quip, by some of his critics, that Hume’s philosophy amounted to “No matter, never mind.”)

So, modern Humeans want to analyze natural laws as being convenient summaries of – and therefore secondary to – matters of properties and relations occurring at particular spacetime locations. In philosophy jargon, Humeans say laws are supervenient upon the local matters of fact. For examples of local facts: the ignition of the stove at 5:00 pm and the boiling of water at 5:05 pm. Or the travel of a particular water molecule away from its neighboring water molecules at 5:05:00.0001 pm. Etc.

But wait a second. Independent of other events in spacetime and independent of laws, what does it mean to say “water” or “boiling” or “travel”? Put aside for a moment (but not forever!) the question of how we know that something is water. What would it mean to say that’s water – what would water-ness be – if we take away any implications about what happened at the previous moment and what happens at the next moment? What if a “water” molecule need not break down salt, need not attract other water molecules with its electrical dipole, need not do any of the things water does? If a water molecule could start doing what an elephant does, and vice versa, what on earth distinguishes “water” from “elephant”? It’s not like objects each contain a tiny label written in Mandarin stating what kind of object they are. Nor do properties and relations bear labels. Nor do locations in spacetime.

For a water molecule (or whatever) to travel from A to B, there must be a lot in common along the spacetime path from A to B, enough to let us trace the “water molecule” fact along this path. It’s not like there is a label on the molecule saying (translated from the Mandarin) “I am water molecule #72.” Well, some philosophers might want to go near there, but definitely not a Humean who wants a lean, mean, science-friendly ontology.

Instead of making laws supervenient upon, i.e. secondary to, local properties and relations – instead of playing these metaphysical penis envy games – we need a package deal of laws and properties. The phrase “package deal” is Barry Loewer’s, and apt – it puts laws and properties on a par both metaphysically and epistemically.

Humeanism about laws

In philosophy of science, a common view about laws of nature is Humean, i.e. inspired by David Hume. I should probably say “family of views”. Recently Jenann Ismael wrote an excellent paper explaining her disillusionment with Humeanism. In this post I’ll try to summarize it. In the next, I’ll give another reason to be suspicious of Humeanism about natural laws.

Hume claimed that there was “no necessary connection” between distinct events. Rather, we form habits of expectation that one event will be followed by another, and we say that the former event “caused” the latter. We formulate scientific laws such as that water boils at 212 degrees Fahrenheit (at 1 atmosphere), which implies that if you heat water to that temperature you’ll cause it to boil. Is there, then, a hidden Essence of Water that consists partly in a disposition to boil at 212? Is there an eternal Law Of Nature, standing apart from the goings-on in the universe, and ruling over them, compelling water to behave this way? (To parody “Aristotelian” and “Platonist” views, respectively.)

No, says Hume. No, say his modern descendants. Hume’s own view comes dangerously close to suggesting that causality is a projection, in an almost Freudian sense, of human thought onto the world. But recent advocates have a way to avoid subjectivism about natural laws. Most prominently, David Lewis’s “Best Systems Analysis” has it that laws are efficient compression rules to capture the regularities in the universe. Long story short, laws make a long story short. For example, in a deterministic (in both time-directions) universe, you don’t have to list every event in history to describe the universe. You “only” have to describe one instant, then list the natural laws. (For relativity buffs: this assumes the spacetime can be foliated.) That’s a vast reduction in descriptive complexity. Such laws have great explanatory power – which is a virtue in Best Systems Analysis, and also in the practice of actual scientists. This congenial fit is a good sign!

For more on Best Systems Analysis, see Terence Tomkow’s Computational Theory of the Laws of Nature. Tomkow compares laws to the compression rules in a computer’s compressed (zipped) version of a file (and in the program that makes the compressions). The original file corresponds to the actual universe in all its glory and all its boring repetitions.
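To make the compression analogy concrete, here is a toy sketch (mine, not Tomkow’s) in Python: a repetitive “universe history” compresses to a small fraction of its size, and the compressed file plus the decompression rule plays the role of initial conditions plus laws.

import zlib

# A toy "Humean mosaic": a long history full of boring repetitions.
history = ("stove ignites; five minutes later, water boils. " * 2000).encode()

compressed = zlib.compress(history, 9)

print(f"raw history:       {len(history):>7} bytes")
print(f"compressed:        {len(compressed):>7} bytes")
print(f"compression ratio: {len(history) / len(compressed):.0f}x")

# Decompressing recovers the entire history, just as laws plus boundary
# conditions are supposed to recover the whole mosaic.
assert zlib.decompress(compressed) == history

The more lawlike the universe, the better the ratio; a history of pure noise would barely compress at all.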

How does a Best Systems Analyst perform such Analysis? Here’s Jenann Ismael:

The idea was that science gathered a large and wide-ranging body of information about local matters of particular fact and systematized that body of fact using the methods that scientists actually use. … There were no relations among universals, no irreducible modal forces or anything added to the Humean mosaic to enforce laws. (pp. 43-44)

The “Humean mosaic” is the information about local particular facts – properties at particular spacetime locations. I’ll come back to that next time.

In addition to natural laws, Lewis had a theory of chances, where chances are supposed to be relatively objective facts about probability. Ismael writes:

Lewis … introduced the Principal Principle (PP) as an implicit definition of chance that identified chances by the role they play guiding belief. What the Principle said in its original formulation was that one should adjust one’s credence to the chances no matter what other information one has, except in the presence of inadmissible information:

PP: cr(A / 〈ch_t(A) = x〉E) = x, provided that E is admissible with respect to 〈ch_t(A) = x〉

Where cr(A) is one’s credence in A at some time t and ch_t(A) is the chance of A at t. The restriction to admissible information was needed to discount cases where PP clearly becomes inapplicable; e.g., when one possesses information from the future of the sort one might get from a crystal ball or a privileged communication from God. (pp. 44-45)

Objective(ish) facts about probability, if there are any, must include future patterns of events as well as past ones. But for that reason among others, we don’t generally know what the chances are. After much review of philosophical history, Ismael suggests that Humeans generally favor this generalization of PP:

GPP: cr(A) := ∑_i cr(ch_i) · ch_i(A), where ch_i(A) is the chance assigned to A by epistemically possible theory of chance ch_i. (p. 47)

And most Humeans would be Bayesians about where the credences cr come from. As long as one doesn’t assign zero prior probability to any theory, the idea goes, with enough evidence eventually one will update so that approximately-true assignments of chances are given high credence.
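Here is a toy sketch of that convergence story (my own construction, with coin flips standing in for the mosaic and a handful of made-up chance theories ch_i):

import random

random.seed(0)

# Hypothetical "epistemically possible theories of chance" ch_i for one
# repeated event type: a biased coin landing heads.
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
credence = {h: 1.0 / len(hypotheses) for h in hypotheses}  # nonzero priors

TRUE_CHANCE = 0.7  # unknown to the agent

def gpp(credence):
    """GPP-style credence in 'heads next': sum over theories of
    cr(ch_i) * ch_i(heads)."""
    return sum(cr * h for h, cr in credence.items())

for _ in range(1000):
    heads = random.random() < TRUE_CHANCE
    # Bayes: reweight each theory by the likelihood it assigns the outcome.
    for h in hypotheses:
        credence[h] *= h if heads else (1.0 - h)
    total = sum(credence.values())
    for h in hypotheses:
        credence[h] /= total

print("posterior:", {h: round(cr, 4) for h, cr in credence.items()})
print("GPP credence in heads:", round(gpp(credence), 4))

With enough flips, nearly all credence lands on the hypothesis closest to the true chance – provided, of course, that future flips keep behaving like past ones.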

Now we’re in a position to state Ismael’s objection to this picture.

We start with three premises:

(i) The set of possible mosaics is obtained by a combinatorial principle; any assignment of physical quantities to spacetime points represents a possible mosaic;

(ii) The laws and chances are determined by a global criterion applied to the mosaic; and

(iii) The mosaic is indefinitely extendible.

Indefinite extendibility means just what it sounds like. It means that the Humean mosaic is open-ended; it stretches indefinitely into the future. Note that it doesn’t entail that the Humean mosaic is infinite. It just means that there is no particular finite size that it is constrained to be. (pp. 49-50)

Premise (i) is Hume’s “absence of necessary connection” between distinct events. Premise (ii) is a core feature of both Best Systems Analysis and the Principal Principle. And here’s what Ismael says about premise (iii):

Why think the Humean mosaic is indefinitely extendible? There are two reasons. From a Humean perspective, to deny indefinite extendibility would be to hold that the existence of any collection of events was incompatible with the existence of some other. And that would be to deny Humeanism, because Humeanism was precisely the denial that there was any necessary connection between distinct existences. (p. 50)

And now we lose any reason to expect Bayesian convergence to approximate truth in our estimates of laws and chances. Any finite collection of evidence, such as is available to us now, is compatible with some far larger patch of Humean mosaic beyond our observations. And Humeanism tells us that the larger patch is unconstrained by our patch. Things might go very differently there, in ways that blow our favored theories of chances and laws out of the water. Out of all the finite possible ways the universe can be, even setting aside infinite ones, our patch has measure zero. (This last way of putting it is my own, but not a stretch.)

Ismael discusses a (verbal communication) response by Barry Loewer and David Albert to her argument. Their response is to restrict Bayesian priors to ones that favor induction. (For a relevant idea that I find appealing, see Solomonoff Induction.) Here’s Ismael’s diagnosis of the Loewer-Albert response:

Even though the metaphysics says that looking forward from any point in history, there are as many ways the world could be as we would get by assigning values of physical quantities to spacetime points in the future, the epistemology says that you must take as a pre-empirical assumption that the laws and chances derived from any large enough submanifold would reflect the laws and chances derived from a global systematization. This amounts to heavily weighting your priors to ignore all but a small sliver of Humeanly possible completions of the mosaic. Since the metaphysics is explicitly committed to combinatorial possibilities for the future, the only thing that keeps this from being flat-out inconsistent is that one reserves nominal possibility that the future might be among the vast majority of worlds whose overall systematization is different from that of the initial segment. (p. 57)

While this keeps Humeanism from being strictly inconsistent, Ismael says we have a better alternative. Namely, not to expect long-running correlations unless there are connections between (otherwise seemingly distinct) events. In other words, we can have our priors heavily leaning toward non-Humean metaphysics.

Trick Questions

I like certain sorts of trick questions, but there’s one kind I hate: the kind that is tricky only because of deliberately misleading wording. For example: my friend can predict the score of any NFL game when it starts; how does he do it? (Answer: don’t read “when it starts” to refer to the time of the prediction; read it to refer to the time at which we look at the scoreboard, which reads 0 to 0.) In contrast, misleading wording that we naively introduce because we are using the wrong concepts, or making false assumptions, is fair game. I think most philosophical questions are of the latter sort. Herewith, two trick questions of my own devising, and one from David Velleman.

Q1: Six strong, but not necessarily smart, people are standing on the ground and pushing in a NNE direction against a very heavy, stationary, passive object, when it begins to move due to their force. In what direction does it begin to move?

Q2: Which came first, the chicken or the egg? Let’s say that insect eggs don’t count. Just to have a precise cutoff, let’s say that the first organism to have 99.5% of its DNA in common with the typical (modal, on each gene) modern chicken counts as the right sort of chicken/egg.

Q3 (hat tip to David Velleman): You are a pollster for Gallup covering the election in the nation of Bandwagonia. Your data shows that 31% of voters are solid supporters of the fascist candidate, 29% are solidly behind the liberal candidate, and 40% are get-on-the-bandwagon voters: they vote for the candidate they think is going to win. (The margin of error on these numbers is 1%.) Bandwagonians take the Gallup poll extremely seriously and scoff at other polls. Regardless of what kind of politicians they favor, 80% of Bandwagonians believe that the Gallup prediction will be correct, while the rest base their expectations on the intentions of another random voter. Your boss will be happy if your prediction is within 5% of the actual vote split. What outcome should you predict?

(No hint for this one. Even if you are stuck in the philosophical paradigm this question is meant to combat, the question should jolt you out of it.)

Answers/spoilers welcome below! Also, more trick questions if you got ’em!
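Since spoilers are welcome: here is a minimal numerical sketch of Q3 (my own, and only one reading of the setup – I assume the 80/20 split applies uniformly within the bandwagon bloc, and that the 20% who consult a random voter end up voting, at equilibrium, in proportion to the final vote shares):

def fascist_share(gallup_predicts_fascist: bool) -> float:
    """Iterate to a self-consistent fascist vote share f.

    31% solid fascist, 29% solid liberal, 40% bandwagoners.
    80% of bandwagoners trust the Gallup prediction and vote for its
    predicted winner; the other 20% vote the way a randomly sampled
    voter ends up intending to vote (probability f for the fascist).
    """
    gallup = 1.0 if gallup_predicts_fascist else 0.0
    f = 0.5  # initial guess
    for _ in range(100):  # the map is a strong contraction; converges fast
        f = 0.31 + 0.40 * (0.8 * gallup + 0.2 * f)
    return f

for predicts_fascist in (True, False):
    f = fascist_share(predicts_fascist)
    prediction = "fascist" if predicts_fascist else "liberal"
    winner = "fascist" if f > 0.5 else "liberal"
    print(f"Gallup predicts {prediction}: fascist share {f:.1%}, winner {winner}")

On this reading, either prediction makes itself true – which is presumably the jolt the question was built to deliver.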

“Objectivity” means keeping facts from the public

–for the New York Times (but by no means only for that news outlet). Here’s a case in point that illustrates the problem in glaring, disgusting detail. This is based on an interview conducted by NPR’s “On the Media”; the transcript is here.

This inadequate news report occurred in July 1981, in the early days of the AIDS epidemic. Press outlets popular in gay communities had already begun talking about (what would later be known as) AIDS, and the cancers that immune-suppressed patients suffer. Lawrence K. Altman was a practicing doctor at a public hospital treating many poor patients, and simultaneously a New York Times reporter. The Morbidity and Mortality Weekly Report (MMWR), a public-health trade publication, had just published a report by doctors working in cities with large gay populations, describing the symptoms of 41 gay men.

Fast forward to today. On The Media reporter Kai Wright asks:

Q: At this stage, people weren’t seeing beyond gay men. What about yourself? What were you seeing at that time? The report you wrote was about the 41 men? Could you see more than that?

A: Yes, because I had the experience at Bellevue, and we had women who had been former IV drug users, or injecting drug users, and they had the same generalized swollen lymph nodes that men had. To me, I didn’t see that it would be limited to the gay men population.

Dr. Lawrence K. Altman

Kai Wright: That’s not what he reported. I asked him why he didn’t write about what he was seeing in the newspaper.

Q: What do you think, if in the newsroom of 1981, if you had said, “No, I can see it’s more than these 41 gay men, and I want to write about women who are drug addicted that I’ve seen in the past”? How do you think that would’ve been received amongst your editors?

A: I think they would have to want to know how that fit into a bigger picture. Was this just an oddity? If it’s an oddity, I don’t think the Times would’ve been interested. If you could show that it was part of a broader pattern, then they presumably would’ve been interested, but we didn’t have the evidence then, nobody was reporting it. There was no data reported. Yes, it would be in my mind, but we weren’t reporting theory. We were trying to report the facts of what was known.

Q: Do you wrestle at all with the limitation of reporting on what the CDC is establishing versus being able to raise questions about what you were seeing at Bellevue that you couldn’t quite prove, but that you were like, “Something else is going on here too.”?

A: We weren’t writing personal opinion. We were reporters. I was a reporter. That kind of journalism didn’t exist at that time. I wasn’t using the word I and writing first-person accounts. It was coming off the news and explaining what was going on.

So, according to this code of “objectivity”, a reporter cannot report what he witnessed with his own eyes! The official narrative of selected groups is all that can be reported. Medicine, on a very very good day, is a science. Observations by doctors, nurses, and researchers are its bread and butter. Earth to NYT: you can tell us when a reliable source observes a pattern of symptoms in different populations! It’s OK! Especially when, as in this case, many of our lives will depend on it, directly or — with infectious diseases this next part is very important — indirectly.

Of course, it’s possible for all I’ve shown you that Lawrence K. Altman misinterpreted the NYT policies on objectivity. Except, if you’re at all familiar with the NYT, you know he didn’t. The paper will not even report that fossil fuels are the primary cause of global warming (in a piece on the topic) because a major political party disputes it.

Against metaphysical penis envy

Karen Crowther writes in “Levels of Fundamentality in the Metaphysics of Physics”:

Within physics there are two ways of establishing the relative fundamentality of one theory compared to another, via two senses of reduction: “inter-level” and “intra-level” (Crowther, 2018). The former is standardly recognised as roughly correlating with the chain of ontological dependence (i.e., the phenomena described by theories of macro-physics are typically supposed to be ontologically dependent on the entities/behaviour described by theories of micro-physics), and thus has been of interest to naturalised metaphysics

https://philpapers.org/archive/CROLOF.pdf

Philosophers are strangely fond of constructing metaphysical hierarchies. Note, these are not merely explanatory or conceptual hierarchies – which by their nature would be audience-relative. The order in which Andromedans most easily understand the structures of the universe might differ from the order in which humans understand such things. This would not mean that one species perceives more clearly than the other; only that they are different. I have nothing against talk of “levels” of theory-building or concept-forming. No, the problematic hierarchies supposedly pertain to the universe itself, apart from any particular perspective.

By “hierarchy” I mean a system arranged in ranks, or “levels”. Metaphysical envy is the opposite of socioeconomic status envy, in a way: the bottom level is the most prized. Hashtag #mine’s smaller! This level is called “fundamental”. I didn’t title this post “against fundamentalism” though, because one could reasonably say that on my view, there is only one level, and everything is fundamental.

Crowther argues that the “intra-level” theoretical reductions provide equal reason for attributing “relative metaphysical fundamentality” to the reducing theory. The “level” here refers to a size scale and associated types of objects. Examples of intra-level reduction include quantum electrodynamics as more fundamental than classical electrodynamics, and special relativity as more fundamental than Newtonian mechanics. However, she notes that the multiplicity of levels can be avoided by either an eliminativist view or a reductionist view.

The eliminativist denies the existence of any objects, states, and processes on any but the allegedly fundamental level. Assuming “fundamentality” is established by explanatory priority, this would suggest there are no atoms, no spectrometers, no weight scales, and no physicists. And as Crowther points out,

This becomes more disturbing with the recognition that physicists do not consider our current theories as fundamental [Crowther 2019], and so according to this view we’d have reason to believe that nothing currently described by physics exists.

ibid.

On a “reductionist” view, which I favor, we acknowledge that objects on many size and energy scales exist, from photons to the Universe. We “reduce” the number of ontological levels down to one (but probably do not reduce the number of types of objects). Nothing is “more fundamentally real” or “derivatively real”, it is just real. This is not to say that objects don’t stand in relations of part to whole – it’s only to deny that the relation is invidious. Under one influential formalization of part/whole relations, Leśniewski’s General Extensional Mereology, any part can be specified as the overlap between two wholes, as easily as building a whole from its parts. A set of relations so symmetric cannot justify an invidious ontology of levels.

Some plausible physical ontologies are markedly holistic. For instance, Mad Dog Everettianism holds that the fundamental explanans (thing that explains other things) is the wave function of the universe. This makes for an interesting, if not downright cyclical, pattern of metaphysical grounding, for those who think levels-of-explanation can be leveraged into levels-of-reality. The wavefunction of the universe explains photons, leptons, and baryons, which explain stars, dust, and planets, which explain galaxies, which explain clusters and superclusters. A similar pattern presumably applies to dark matter. But here we have the ingredients of the universe; can we not explain the universe with them?

without Foundation

Asimov’s Foundation series revolves around psychohistory. The Wikipedia summary gets the central points:

Psychohistory depends on the idea that, while one cannot foresee the actions of a particular individual, the laws of statistics as applied to large groups of people could predict the general flow of future events. Asimov used the analogy of a gas: An observer has great difficulty in predicting the motion of a single molecule in a gas, but with the kinetic theory can predict the mass action of the gas to a high level of accuracy. Asimov applied this concept to the population of his fictional Galactic Empire, which numbered one quintillion. The character responsible for the science’s creation, Hari Seldon, established two axioms:

  • the population whose behaviour was modelled should be sufficiently large to represent the entire society.
  • the population should remain in ignorance of the results of the application of psychohistorical analyses because if it is aware, the group changes its behaviour.
https://en.wikipedia.org/wiki/Psychohistory_(fictional)

Can one predict the mass action of a gas to a high level of accuracy? Only under special conditions. Once a gas flow (or other fluid flow) becomes turbulent, it becomes chaotic. Tiny measurement errors, or even correct but less-than-perfectly-precise measurements, soon lead to drastically wrong predictions in a computational fluid dynamics model. Take meteorology for example. If we vary the inputs to the model and apply statistics, then beyond a horizon of a couple of weeks we arrive at predictions no better than what one can get by consulting historical records. The weather in New York City in June will probably be much like it was in the ten or twenty previous Junes. No matter how powerful our supercomputers, that is the best we will ever do.

It’s also not possible to make a reliable long-range prediction that narrows down the possibilities for the behavior of a chaotic system to some meaningful subset. That’s because of topological mixing – the tendency of states that are close to each other in a small region to evolve into more and more widespread conditions, until they are thoroughly mixed in with states that evolved from very different starting points.
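Here is a minimal illustration of that sensitivity (my own toy example, using the logistic map – a standard chaotic system – rather than an actual weather model):

# Two trajectories of the chaotic logistic map x -> 4x(1-x), started
# one part in a million apart. The gap grows until it is as large as
# the states themselves: sensitive dependence in miniature.
def step(x: float) -> float:
    return 4.0 * x * (1.0 - x)

x, y = 0.400000, 0.400001
for n in range(51):
    if n % 10 == 0:
        print(f"iteration {n:2d}: |x - y| = {abs(x - y):.6f}")
    x, y = step(x), step(y)

Within a few dozen iterations the two histories are as different as two randomly chosen ones.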

It’s not like human beings are any more predictable than the weather, either. For one thing, we both influence the weather and are influenced by it. But more importantly, seemingly trivial differences in one’s path can affect whom one meets, and later whom one marries, and who populates the next generation. Many scientific and engineering breakthroughs were sparked by exchanges of information between two or more people who, in slightly different scenarios, might never have said more than hello, or even have met.

Having a large subsample of the human population to base one’s predictions on will not help. Well-studied initial conditions plus chaotic dynamics equals chaotic results. Odd behavior does not average out – it ramifies over and over again.

Ironically, flouting Hari Seldon’s second “axiom” may be the best hope for predicting human behavior despite the chaos. People who hear a prediction might rebel against it, but they might instead embrace it, especially if the prediction is carefully crafted. Self-fulfilling prophecy is a thing. Make a prophecy that lots of people like – lots of powerful people, especially – and your odds of looking foresighted improve dramatically.

Only Two Cheers for Functionalism

TLDR: Intentionality? Yay! Consciousness? Hooray! Particular sensations? What makes you think functionalism captures them?

Functionalism in the philosophy of mind is probably best understood first in relation to behaviorism. The Stanford Encyclopedia of Philosophy entry says:

It seemed to the critics of behaviorism, therefore, that theories that explicitly appeal to an organism’s beliefs, desires, and other mental states, as well as to stimulations and behavior, would provide a fuller and more accurate account of why organisms behave as they do.

Functionalism (SEP)

But another point of contrast, in addition to behaviorism, is identity theory.

The identity theory of mind holds that states and processes of the mind are identical to states and processes of the brain.

The Mind/Brain Identity Theory (SEP)

We’ll see more below about why prototypical functionalists didn’t want to embrace identity theory as an account of what “beliefs, desires, and other mental states” amount to.

In this post I want to talk about three major issues in philosophy of mind. From professional philosophers, we have disputes about “intentionality” (plain English: reference; what thoughts or desires are about) and “qualia” (plain English: subjective experience, especially sensation). From the grass roots up as well as academia, we have the issue of which beings are or could be conscious.

Intentionality/Reference

The word intentionality comes into philosophy of mind from the Latin word intentio, meaning “concept”, and its root tendere, meaning “directed toward something” (SEP again). The question, then, is what words/concepts/thoughts point at, and how they do it. Ideally, this “how” explanation should also satisfy us as to why a thought with the cited features would point at the thing(s) it does.

An important dividing point is whether one thinks that words/sentences take priority, and thoughts and desires borrow their reference from there (Noam Chomsky seems to hold such a view), or whether organisms’ thoughts and desires take priority and bestow reference on linguistic items. I take the latter view, and won’t argue for it here. It makes a functionalist account of reference significantly harder than a language-first approach would. But I still think such an account is extremely promising. Here are a few thoughts on why.

A concept typically has important relationships both to other concepts and to things in the world outside the mind. Take for example the concept named with “whale” – and try to project yourself into the situation of an 18th-century thinker. A whale is supposed to be a very large fish, which often surfaces and “blows”, and parts of its body can be processed into whale oil for lamps. These are conceptual role relations for “whale”. Moreover, there are experts (fishers, whalers, and naturalists) who have observed whales, and readily agree on recognizing new ones. These are world-mind interactions that do much to fix the reference of “whale”. Note that some of the conceptual role assumptions can contain errors – on the best precisification of “fish”, a whale does not count as one – and yet the reference can succeed anyway. Also, perhaps a few large sharks were misidentified as whales, yet that need not alter the reference of “whale”.

For an explanation of how “whale” could mean whale despite the errors just mentioned, see Roche and Sober’s paper “Hypotheses that Attribute False Beliefs − a Two-Part Epistemology (Darwin+Akaike)”. It rebuts certain criticisms of functionalist-friendly accounts of reference. But that’s a bit of a digression; here we want positive reasons for thinking functionalism can explain the reference of mental states.

Look again at the concept-to-concept and world-to-concept relationships illustrated above. These are a perfect fit to some major themes in functionalist philosophy of mind. David Lewis used Ramsey sentences to capture the idea:

To construct the Ramsey-sentence of this “theory”, the first step is to conjoin these generalizations, then to replace all names of different types of mental states with different variables, and then to existentially quantify those variables, as follows:

∃x∃y∃z∃w (x tends to be caused by [O] & x tends to produce states y, z, and w & x tends to produce [B]).

SEP on Functionalism, sec. 3.2

Thus, states y, z, and w would be other mental states, such as other concepts, O would be a particular impingement of the world on the organism (for example an observation), and B would be behavior(s). Additional logical formulae would have to be added, of course, for the other concepts y, z, and w, listing their characteristic world-to-organism and organism-to-world regularities. (Confession: I changed the example; it was originally about pain. That’s fair, though, since Lewis would give the same analysis for belief states like “there’s a whale”. And we should be willing to entertain the thought that such an analysis might work better for some mental states than for others.)
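To make the structure vivid, here is a toy rendering (entirely my own illustration; the state names are hypothetical): the functionalist identifies a mental state with whatever internal state occupies the right causal role, so differently built creatures can literally share the state.

# A toy rendering of the Ramsey-sentence idea. A mental state is *whatever*
# internal state occupies a given causal role; the realizer's name and
# hardware don't matter.
WHALE_BELIEF_ROLE = {
    "caused_by": "observation: spout on the horizon",      # the [O] slot
    "produces_states": frozenset({"expectation-of-oil"}),  # the y, z, w slots
    "produces_behavior": "lower the boats",                # the [B] slot
}

# Two creatures with different hardware but a state playing the same role.
sailor = {"neural-state-17": WHALE_BELIEF_ROLE}
robot = {"register-state-9": WHALE_BELIEF_ROLE}

def occupants(creature: dict, role: dict) -> list[str]:
    """Find which internal states (if any) occupy the role."""
    return [name for name, r in creature.items() if r == role]

print(occupants(sailor, WHALE_BELIEF_ROLE))  # ['neural-state-17']
print(occupants(robot, WHALE_BELIEF_ROLE))   # ['register-state-9']

On this picture, sailor and robot both have the whale-belief, because each has some state – however realized – that plays the role.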

Thus, functionalist approaches to the reference of concepts and words seem to be barking up exactly the right trees. One cheer for functionalism!

Consciousness

To call a being conscious presumably implies both that it can perceive and/or desire things in the world (reference) AND that it has internal states that mean something to it: subjective experiences. Philosophers often use the phrase “something it is like to be that creature”, but that doesn’t seem very helpful. I think we can do better at extracting a relatively philosophy-neutral characterization of subjective experience, by focusing on certain sensations. Here’s an experiment.

Put your left hand in a bucket of hot water, and let it acclimate for a few minutes. Meanwhile let your right hand acclimate to a bucket of ice water. Then plunge both hands into a bucket of lukewarm water. The lukewarm water feels very different to your two hands. Asked to tell the temperature of the lukewarm water without looking at a readout, you probably can’t; asked to guess, you’ll be off by a considerable margin.

Next, practice, practice, practice. I haven’t done the experiment, but human performance on similar perceptual learning tasks suggests that you will get very good at estimating the temperature of a bucket of water. After you hone your skill, we bring out a bucket of 20 C water (without telling you), and you put your cold hand in first. “Feels like 20 Celsius.” Your hot hand follows. “Yup, feels like 20,” you say.

“Wait,” we ask. “You said feels-like-20 for both hands.  Does this mean the bucket no longer feels different to your two different hands, like it did when you started?” The answer, of course, is no. Evidently, there is a feeling-of-cold and a feeling-of-hot that go beyond (though they may inform) the judgement about the water outside your hands. These, and other sensations that bear imperfect correlations to external world conditions, will be our paradigm examples of subjective sensations. The taste of banana, the smells of roses, the mellowness of a tone, the pain of a scratch and the even worse feeling of a certain itch – there are many and diverse examples. To focus on these sensations may be a little narrower than what most philosophers use the word “qualia” for. But that’s OK in this context, because (A) these are the go-to examples for philosophers who attack functionalism for utterly leaving out experience, and (B) I want to suggest that functionalism gives us a good idea for telling which creatures have subjective sensations (or when it might be indeterminate).

So what’s good about functionalism here? It’s the very diversity of the sensations that makes it doubtful that a single type of brain process accounts for all and only them. Functionalism can handle this diversity because all these sensations have a certain role in common: they give us a second angle on our experiences. We not only know (or have a best guess on) what the external world was doing at the time, but we have information about how we were affected. (The additional survival value of the latter should not be too hard to imagine, I think.)

Of course if Global Workspace Theories are right(ish), then global network activation marks all conscious mental activity. But that’s broader than sensation; it includes thoughts and concepts which go beyond any sensation that may be associated with them. The content of a thought depends on its reference. In claiming that reference goes beyond sensation, I’m denying the phenomenal intentionality theory, and I’m confident in doing so. (Those theorists need to wrestle with the later works of Wittgenstein, I’d say, and I predict they’ll lose.)

So functionalism looks promising not only for telling us which creatures are conscious, but even for suggesting a definition of which mental processes count as qualia (narrowly conceived). Another cheer!

Itches

So how about a particular sensation – say, itches? The standard functionalist strategy would appeal to the stimuli and behavior, as well as other mental states associated with itching. The trouble is that there are an awful lot of stimulus conditions that lead to itching. Dry skin, acne, and insect bites, of course. But also healing, certain drugs, perfectly normal hair follicle development … the list seems almost endless and absurdly diverse. Surely a better explanation of why all these count as itches is not: that they are on this list, but rather: because we recognize a similarity of our internal reaction. Much as we can truly say that “this 30 Celsius water feels cold to my left hand!” without thereby implying that all 30 Celsius water counts as cold.

Itches causally promote other mental states, like grumpiness. But these effects aren’t very large and don’t distinguish itches from other phenomena like pains.

Perhaps behavior is a better route. People scratch itches, while they usually avoid all contact with a painful area. Except when they apply heating pads or ice packs to painful areas. Come to think of it, heating pads or ice can calm some itches. And I’ve had pains that ease up with a gentle scratch. And scratching is supposed to worsen some itches, like poison ivy. A person with much repeated experience with poison ivy might lose even the desire to scratch the area. Scratching usually works to help itches, and usually doesn’t work on pains, and that suffices to explain the correlation. Drawing up a list of behaviors (with or without a list of stimuli) is likely to get the boundaries of itching wrong.

Besides the behaviors and stimuli – and with a perfect rather than loose correlation – all the paradigm examples of itching I can think of involve mammals. The hypothesis that itching is a process in the mammalian neural architecture (and perhaps extending beyond mammals to, say, vertebrates) jumps out as a strong contender. In other words, perhaps it’s time to move on from functionalism, when it comes to particular sensations, and embrace mind-brain identity theory.

Couldn’t a theory be both functionalist and an identity theory at the same time? In a very expansive sense of “functionalist”, yes. But:

However, if there are differences in the physical states that satisfy the functional definitions in different (actual or hypothetical) creatures, such theories – like most versions of the identity theory – would violate a key motivation for functionalism, namely, that creatures with states that play the same role in the production of other mental states and behavior possess, literally, the same mental states.

SEP on Functionalism, sec. 3.5

I suggest however that we embrace “chauvinism” instead. If hypothetical Martians lack the relevant neurology that explains our itches, they don’t itch. That doesn’t mean that we don’t respect the Martians, or that we think we are superior because we can itch and they can’t. Calling this view “chauvinism” misses most of what actual chauvinism (e.g., male chauvinism) is actually about.

Now, does this kind of identity theory make qualia (sensations) ineffable, and inaccessible to third party observation? Not at all. They’re as effable as any movie star. You might have to put a creature inside an fMRI scanner (or do even more invasive research), but its mental processes are in principle knowable. Of course, you might not be able to undergo those sensations yourself. And it might not be able to undergo yours.

But then, a Martian might digest its food in a very different way than humans do. It cannot undergo enzymatic digestion. But it can understand your enzymatic digestion just fine.

Suppose you “upload” your mind into a system that lacks the neurology underlying itches. And for the sake of argument, waive any difficulties about “uploaded-you” being you. When uploaded-you remembers your itches – here, define “remembers” as “accesses reliable information laid down at an earlier time and transformed by reliable rules” – “you” will not be in the same state you would be had you not uploaded. But again, this is no more remarkable than the fact that uploaded-you doesn’t digest food the same way.

No cheer for functionalism regarding individual sensations.

How to tell you’re in a simulation

Suppose you’re in a group of people who are in mortal danger, and someone proposes splitting up and exploring in different directions, and everyone acts like that’s a good idea. This is a glaring sign that you’re in a simulation. A little while later, you (or you and your best buddy) will be the only one(s) left, further confirming that you are a character in a horror movie.

If thousands of bullets have been fired at you, with none or only one hitting, and (if one hit) you were as good as new a few days later, you are in a simulation. You’re a character in an action movie.

Speaking of character, if you and one to a few buddies are the only ones with any character and personality, and everyone else around has the individuality of a cardboard cutout, you’re probably in a simulation. Probably a video game, possibly a really bad movie. When you meet a ridiculously evil and malicious boss, it’s case closed.

Has a disembodied voice narrated everything that happened to you lately? You’re in a simulation.

Has your country, or whole species, faced severe problems with obvious though difficult solutions, but your leaders refuse to tackle them because their rich buddies would lose money? Sadly, this is reality.

On a serious philosophical note, the “simulation hypothesis” is supposed to be more plausible than your garden variety epistemological skeptical hypothesis. Descartes’s evil demon (who gives you false sensations of an external world) is the classic example. But, while in some sense we can’t rule out the evil demon, we have absolutely no reason to take it seriously, either. Computer simulations, on the other hand, do happen. But to the extent that they portray human life, they are systematically different from real life. This looks likely to continue.

Steven Strogatz vs David Chalmers on Fading Qualia

OK, I’m cheating a bit with the title. It’s actually Miguel Ángel Sebastián and Manolo Martínez who are making the argument I’m summarizing here. Their title is “Gradualism, Bifurcation, and Fading Qualia” and their article, forthcoming in Analysis, can be found at https://philarchive.org/archive/SEBGBA. But their argument turns on mathematics that is well explained by Strogatz*, and is applicable to a wide variety of physical systems. Including, arguably, the brain.

*Strogatz, Steven (2001). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Westview Press.

Chalmers’s fading/flickering qualia argument

Chalmers is best known for his property dualism, but he also has some commitments that sound a lot like those of a physicalist. The fading qualia argument is intended to show that if two systems share the same fine-grained functional organization, the stuff they are made of (e.g. meat vs silicon) cannot matter.

Consider the brain of some human, let’s call them Geppetto, currently enjoying a perceptual experience as of a red patch. Assume that we have a sufficiently fine-grained functional specification of this brain’s activity. … Consider also a robot, GPTto, whose sensory processing happens in a silicon-based computer meeting the exact same specification. We also assume that GPTto is not enjoying an experience as of red. Finally, a sorites series is launched, in which at each step Geppetto’s brain is rewired so that one of its neurons is replaced by its silicon analog in GPTto.

Sebastián and Martínez, p. 4.

What happens to Geppetto’s consciousness during this series of operations? It seems there are only 3 alternatives. 1: Nothing happens – the robot and the human were equally conscious all along. 2: Geppetto’s consciousness gradually fades to nil, but, by the stipulations of the thought-experiment, they never notice this, or they do but somehow can’t manage to complain about it. (GPTto doesn’t complain about fading consciousness, and by stipulation they behave identically.) 3: At some point, perhaps after some very minor, unnoticeable degradation, Geppetto’s consciousness suddenly vanishes. They still go through the motions, but there’s nobody home.

Chalmers thinks option 3 is ruled out by two lines of reasoning. First, gradually changing causes should have gradually changing effects. (Mathematically, continuity should apply to the effects if it applies to the causes.) Second, if sudden consciousness loss is possible, we should be able to suddenly restore it by adding back that critical neuron, then make consciousness flicker on and off with a tiny change, which seems absurd.

But this is too fast. Often we are not much interested in a system’s instantaneous behavior, but in its stable fixed points. Sebastián and Martínez point out that in nonlinear dynamical systems, the stable points or regions can shift permanently and drastically in response to a tiny change in one variable. Their objection turns on so-called subcritical pitchfork bifurcations (Strogatz 2001, ch. 3).

Consider a system with at least two properties of interest, H and L, which are governed by the following equation:

dH/dt = L · H + H^3 − H^5 (eq. 3 in Sebastián and Martínez)

Suppose we are interested in the fixed points of this system, i.e. points where dH/dt = 0 for a given pair of values L and H. Let’s also divide the fixed points into stable and unstable ones, where stable points are ones the system returns to after a tiny fluctuation in L or H. Then Figure 3 of their paper represents the behavior of the system of eq. (3), with L on the horizontal axis and H on the vertical, with stable fixed points shown as thick lines and unstable fixed points shown with dashes. (Points shown as lines? Well, think of the lines as emerging from the fact that the points are packed close to other similar points.)

The arrows tell a story of the way a particular system might evolve. It starts just to the right of the point L-sub-S, and L is gradually increased. H = 0, and this will not change until the origin is reached. For values of L > 0, the system must jump to either of the two lines above or below the origin; let’s suppose it jumps up. From this point, if L is gradually increased or decreased, H will change slowly and gradually, even if L goes below zero. The system exhibits hysteresis: some changes in a variable are resistant to changing back. This continues unless L is decreased below the value labeled L-sub-S. When that happens, H must suddenly drop down to zero, landing on the horizontal axis. There are two jumps in the value of H in this history, and they do not occur at the same value of L.
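To see that story concretely, here is a minimal numerical sketch (mine, not from the paper): integrate eq. (3) with Euler steps while slowly sweeping L up and then back down. The floor value eps stands in for the tiny physical fluctuations that let the instability at L > 0 express itself.

import numpy as np

def settle(L_values, H_start, dt=0.01, steps_per_L=5000, eps=1e-6):
    """Integrate dH/dt = L*H + H**3 - H**5 while stepping L slowly,
    recording the (approximately settled) H at each value of L."""
    H = H_start
    out = []
    for L in L_values:
        H = max(H, eps)  # a tiny fluctuation, so instabilities can grow
        for _ in range(steps_per_L):
            H += dt * (L * H + H**3 - H**5)
        out.append(H)
    return np.array(out)

L_up = np.linspace(-0.2, 0.2, 41)     # sweep L upward through 0
L_down = np.linspace(0.2, -0.35, 56)  # then back down past L_s = -1/4

H_up = settle(L_up, H_start=0.0)
H_down = settle(L_down, H_start=H_up[-1])

for L, H in [(L_up[0], H_up[0]), (L_up[-1], H_up[-1]),
             (L_down[30], H_down[30]), (L_down[-1], H_down[-1])]:
    print(f"L = {L:+.3f} -> H = {H:.3f}")

With these settings, H jumps up shortly after L crosses 0 (delayed slightly by the slow growth of the instability), stays on the upper branch as L comes back down past 0, and collapses to zero only below L = −1/4: two jumps, at two different values of L.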

Sebastián and Martínez point out that subcritical pitchfork bifurcations are observed in actual physical systems, such as aeroelastic flutter, and in the dynamics of nerve cells.

How does this blow a hole in Chalmers’s Fading Qualia and Flickering Qualia arguments? Suppose that L is a measure of the number of active networked neurons in Geppetto’s brain, rescaled so that L = 0 at, say, a few million neurons. And let the absolute value of H represent the vividness of their experience (the sign of H has no significance in this model). If Geppetto is described by equation (3), their experience will indeed fall off a cliff – vanish – at a critical number of neurons. This can happen even though all quantities of the physical system are continuous. So Fading Qualia are not guaranteed by continuous physics. Moreover, adding that single critical neuron back will not restore Geppetto’s consciousness. So Flickering Qualia doesn’t follow.

Sebastián and Martínez don’t claim that the sorites thought-experiment of Geppetto/GPTto is useless, or the argument irreparable:

we do not dispute the implausibility of qualia depending only on neuronal activity, all the while behavior is sensitive to functional organization implemented in both neuronal and silicon-based activity. Our point is that this implausibility cannot be spelled out as the claim that suddenly disappearing qualia are naturalistically unacceptable, because there is no nomological mechanism that could account for them.

p. 14

Why do I love their paper? Because it demonstrates how important it is to take actual physics seriously in philosophical thought-experiments. Or conversely, how hazardous it is to rely on physical intuition when physics matters in philosophical arguments. If that sounds familiar, you may have been reading some of my other posts here.