Why oh why computationalism?

A mammal does a million things at once. She sniffs the air. She moves her head and eyes. (She has and maintains a head and eyes.) She absorbs photons of different wavelengths and registers them differently. She metabolizes glucose. She increases entropy. She gets curious about sounds. She forms beliefs and desires. She also computes.

Why seize on the last fact and say, “aha, here is where all mental aspects lie!”? Why that fact to the exclusion of all others? It just looks arbitrary.

It’s worth distinguishing between computationalism and functionalism. As David Lewis’s classic version of functionalism maintained, mental states are identified by their relations to each other and to observations by, and behaviors of, the organism in question. Or as Gualtiero Piccinini (PDF) puts the functionalist thesis, “the mind is (an aspect of) the functional organization of the organ of cognition.” Meanwhile, computing mechanisms “are those mechanisms whose teleological function is manipulating medium-independent vehicles in accordance with a rule.” On Piccinini’s definitions, computationalism is a specific version of functionalism – and Piccinini advocates non-computational functionalism.

As I noted in my post Only Two Cheers for Functionalism, functionalism makes good sense for cognition and (Searle to the contrary notwithstanding) for intentionality. But I don’t see computationalism as adequate even in those domains. And in my view not even functionalism looks promising for distinguishing various aspects of phenomenal consciousness.

Philosophers have a common definition of “computation” which Piccinini rightly criticizes in another article:

If there are two descriptions of a system, a physical description and a computational description, and if the computational description maps onto the physical description, then the system is a physical implementation of the computational description and the computational description is the system’s software. The problem with this view is that it turns everything into a computer. (p. 14)
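To see how the mapping view overgenerates, here is a toy sketch of my own (the rock and the counter are invented for illustration; this is not Piccinini’s example). On the simple mapping view, implementing a computation requires only that some mapping exist between physical states and computational states – and such a mapping can always be manufactured:

```python
# Toy illustration of the triviality worry (my example, not Piccinini's):
# under the simple mapping view, ANY sequence of distinct physical states
# can be paired off with the state sequence of some computation.

def trivial_mapping(physical_states, computational_states):
    """Map physical states to computational states by position."""
    return dict(zip(physical_states, computational_states))

# A rock drifting through four arbitrary thermal microstates...
rock = ["microstate_a", "microstate_b", "microstate_c", "microstate_d"]

# ...and a two-bit counter computing the sequence 0, 1, 2, 3.
counter = ["00", "01", "10", "11"]

print(trivial_mapping(rock, counter))
# {'microstate_a': '00', 'microstate_b': '01', 'microstate_c': '10',
#  'microstate_d': '11'} -- so the rock "implements" the counter.
```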

Let’s not accept panpsychism on the basis that everything is a computer and computation is mindfulness. Piccinini suggests a different definition of computation:

program execution is understood informally as a special kind of activity pertaining to special mechanisms (cf. Fodor 1968b, 1975; Pylyshyn 1984). Computers are among the few systems whose behavior we normally explain by invoking the programs they execute. (p. 19)

The definition of “function” is also potentially problematic for the functionalism/computationalism distinction. Piccinini has a plausible approach to defining functions (pp. 23ff) that I won’t recap here.

One more worry about computationalism. This thought is inspired by Scott Aaronson in conversation. Computation is normally understood as a causal process spread over time. But computer science is a branch of mathematics, and it’s easy to see that the same mathematical relationships could be realized over a spatially extended structure. Who needs time? Time is a prerequisite for causality, but who needs causality?
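To make the time-versus-space point concrete, here is a minimal sketch of my own (not Aaronson’s): the same mathematical structure can be realized as a causal process unfolding in time, or laid out all at once as a static spatial structure.

```python
# One mathematical structure, two realizations (my own toy example).

def fib_over_time(n):
    """Realized in time: a causal process that unfolds step by step."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # each step causally depends on the previous one
    return a

# Realized in "space": the same web of relationships laid out at once,
# a static structure with no temporal unfolding at all.
FIB_IN_SPACE = (0, 1, 1, 2, 3, 5, 8, 13, 21, 34)

assert fib_over_time(7) == FIB_IN_SPACE[7]  # same structure either way
```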

But then, who needs space, or matter, or energy? Arguably, mathematical structures like the proof of Fermat’s last theorem exist regardless of whether anyone discovers them or writes them down. (Are there numbers between 6 and 9? Then there are numbers.) If my conscious life is a complex computation, I as a physical being may be redundant – depending on one’s philosophy of mathematics.

Only Two Cheers for Functionalism

TLDR: Intentionality? Yay! Consciousness? Hooray! Particular sensations? What makes you think functionalism captures them?

Functionalism in the philosophy of mind is probably best understood first in relation to behaviorism. The Stanford Encyclopedia of Philosophy entry says:

It seemed to the critics of behaviorism, therefore, that theories that explicitly appeal to an organism’s beliefs, desires, and other mental states, as well as to stimulations and behavior, would provide a fuller and more accurate account of why organisms behave as they do.

Functionalism (SEP)

But another point of contrast, in addition to behaviorism, is identity theory.

The identity theory of mind holds that states and processes of the mind are identical to states and processes of the brain.

The Mind/Brain Identity Theory (SEP)

We’ll see more explanation below of why prototypical functionalists didn’t want to embrace identity theory as an account of what “beliefs, desires, and other mental states” amount to.

In this post I want to talk about three major issues in philosophy of mind. From professional philosophers, we have disputes about “intentionality” (plain English: reference; what thoughts or desires are about) and “qualia” (plain English: subjective experience, especially sensation). From the grass roots as well as from academia, we have the issue of which beings are or could be conscious.

Intentionality/Reference

The word intentionality comes into philosophy of mind from the Latin word intentio, meaning concept, and its root tendere, meaning directed toward something (SEP again). The question, then, is what words/concepts/thoughts point at, and how they do it. This “how” explanation should also, ideally, make clear why a thought which had the cited features would therefore point at the thing(s) it does.

An important dividing point is whether one thinks that words/sentences take priority, and thoughts and desires borrow their reference from there (Noam Chomsky seems to hold such a view), or whether organisms’ thoughts and desires take priority and bestow reference on linguistic items. I take the latter view, and won’t argue for it here. This makes a functionalist account of reference significantly harder than a language-first approach would. But I still think it is extremely promising. Here are a few thoughts on why.

A concept typically has important relationships both to other concepts and to things in the world outside the mind. Take for example the concept named by “whale” – and try to project yourself into the situation of an 18th-century thinker. A whale is supposed to be a very large fish, which often surfaces and “blows”, and parts of its body can be processed into whale oil for lamps. These are conceptual role relations for “whale”. Moreover, there are experts (fishers, whalers, and naturalists) who have observed whales, and readily agree on recognizing new ones. These are world-mind interactions that do much to fix the reference of “whale”. Note that some of the conceptual role assumptions can contain errors – on the best precisification of “fish”, a whale does not count as one – and yet the reference can succeed anyway. Also, perhaps a few large sharks were misidentified as whales, yet that need not alter the reference of “whale”.

For an explanation of how “whale” could mean whale despite the errors just mentioned, see Roche and Sober’s paper “Hypotheses that Attribute False Beliefs – a Two-Part Epistemology (Darwin + Akaike)”. It rebuts certain criticisms of functionalist-friendly accounts of reference. But that’s a bit of a digression; here we want positive reasons for thinking functionalism can explain the reference of mental states.

Look again at the concept-to-concept and world-to-concept relationships illustrated above. These are a perfect fit to some major themes in functionalist philosophy of mind. David Lewis used Ramsey sentences to capture the idea:

To construct the Ramsey-sentence of this “theory”, the first step is to conjoin these generalizations, then to replace all names of different types of mental states with different variables, and then to existentially quantify those variables, as follows:

∃x∃y∃z∃w (x tends to be caused by [O] & x tends to produce states y, z, and w & x tends to produce [B]).

SEP on Functionalism, sec. 3.2

Thus, states y, z, and w would be other mental states, such as other concepts, O would be a particular impingement of the world on the organism (for example an observation), and B would be behavior(s). Additional logical formulae would have to be added, of course, for the other concepts y, z, and w, listing their characteristic world-to-organism and organism-to-world regularities. (Confession: I changed the example; it was originally about pain. That’s fair, though, since Lewis would give the same analysis for belief states like “there’s a whale”. And we should be willing to entertain the thought that such an analysis might work better for some mental states than for others.)
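For concreteness, here is a toy Ramsification of a two-state mini-theory of the whale belief. The mini-theory and its predicates are invented for illustration; Lewis’s real examples are much richer:

```latex
% Toy Ramsification (invented mini-theory, for illustration only).
% Mini-theory T: a spout sighting (observation O) causes the whale-belief W;
% W together with the oil-desire D produces pursuit behavior B.
T:\quad (W \text{ tends to be caused by } O) \;\&\; (W \text{ and } D \text{ tend to produce } B)

% Ramsey sentence: replace the mental-state names W and D with variables
% and existentially quantify, leaving O and B in place:
\exists x\,\exists y\,\big[(x \text{ tends to be caused by } O) \;\&\; (x \text{ and } y \text{ tend to produce } B)\big]
```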

Thus, functionalist approaches to the reference of concepts and words seem to be barking up exactly the right trees. One cheer for functionalism!

Consciousness

To call a being conscious presumably implies both that it can perceive and/or desire things in the world (reference) AND that it has internal states that mean something to it: subjective experiences. Philosophers often use the phrase “something it is like to be that creature”, but that doesn’t seem very helpful. I think we can do better at extracting a relatively philosophy-neutral characterization of subjective experience, by focusing on certain sensations. Here’s an experiment.

Put your left hand in a bucket of hot water, and let it acclimate for a few minutes. Meanwhile let your right hand acclimate to a bucket of ice water. Then plunge both hands into a bucket of lukewarm water. The lukewarm water feels very different to your two hands. Asked to tell the temperature of the lukewarm water without looking at a readout, you probably can’t say. Asked to guess, you’re off by a considerable margin.

Next, practice, practice, practice. I haven’t done the experiment, but human performance on similar perceptual learning tasks suggests that you would get very good at estimating the temperature of a bucket of water. After you hone your skill, we bring in a bucket of 20 °C water (without telling you), and you move your cold hand first. “Feels like 20 Celsius.” Your hot hand follows. “Yup, feels like 20,” you say.

“Wait,” we ask. “You said feels-like-20 for both hands. Does this mean the bucket no longer feels different to your two hands, like it did when you started?” The answer, of course, is no. Evidently, there is a feeling-of-cold and a feeling-of-hot that go beyond (though they may inform) the judgement about the water outside your hands. These, and other sensations that bear imperfect correlations to external-world conditions, will be our paradigm examples of subjective sensations. The taste of banana, the smell of roses, the mellowness of a tone, the pain of a scratch and the even worse feeling of a certain itch – there are many and diverse examples. To focus on these sensations may be a little narrower than what most philosophers use the word “qualia” for. But that’s OK in this context, because (A) these are the go-to examples for philosophers who attack functionalism for utterly leaving out experience, and (B) I want to suggest that functionalism gives us a good idea for telling which creatures have subjective sensations (or when it might be indeterminate).

So what’s good about functionalism here? It’s the very diversity of the sensations that makes it doubtful that a single type of brain process accounts for all and only them. Functionalism can handle this diversity because all these sensations have a certain role in common: they give us a second angle on our experiences. We not only know (or have a best guess on) what the external world was doing at the time, but we have information about how we were affected. (The additional survival value of the latter should not be too hard to imagine, I think.)

Of course if Global Workspace Theories are right(ish), then global network activation marks all conscious mental activity. But that’s broader than sensation; it includes thoughts and concepts which go beyond any sensation that may be associated with them. The content of a thought depends on its reference. In claiming that reference goes beyond sensation, I’m denying the phenomenal intentionality theory, and I’m confident in doing so. (Those theorists need to wrestle with the later works of Wittgenstein, I’d say, and I predict they’ll lose.)

So functionalism looks promising not only for telling us which creatures are conscious, but even for suggesting a definition of which mental processes count as qualia (narrowly conceived). Another cheer!

Itches

So how about a particular sensation – say, itches? The standard functionalist strategy would appeal to the stimuli and behavior, as well as other mental states, associated with itching. The trouble is that there are an awful lot of stimulus conditions that lead to itching. Dry skin, acne, and insect bites, of course. But also healing, certain drugs, perfectly normal hair follicle development … the list seems almost endless and absurdly diverse. Surely a better explanation of why all these count as itches is not that they are on this list, but that we recognize a similarity in our internal reaction. Much as we can truly say “this 30 Celsius water feels cold to my left hand!” without thereby implying that all 30 Celsius water counts as cold.

Itches causally promote other mental states, like grumpiness. But these effects aren’t very large and don’t distinguish itches from other phenomena like pains.

Perhaps behavior is a better route. People scratch itches, while they usually avoid all contact with a painful area. Except when they apply heating pads or ice packs to painful areas. Come to think of it, heating pads or ice can calm some itches. And I’ve had pains that ease up with a gentle scratch. And scratching is supposed to worsen some itches, like poison ivy. A person with much repeated experience of poison ivy might lose even the desire to scratch the area. Scratching usually helps itches, and usually doesn’t help pains, and that suffices to explain the loose correlation. Drawing up a list of behaviors (with or without a list of stimuli) is likely to get the boundaries of itching wrong.

Besides the behaviors and stimuli – and correlating perfectly rather than loosely – all the paradigm examples of itching I can think of involve mammals. The hypothesis that itching is a process in the mammalian neural architecture (perhaps extending beyond mammals to, say, vertebrates) jumps out as a strong contender. In other words, perhaps it’s time to move on from functionalism, when it comes to particular sensations, and embrace mind-brain identity theory.

Couldn’t a theory be both functionalist and an identity theory at the same time? In a very expansive sense of “functionalist”, yes. But:

However, if there are differences in the physical states that satisfy the functional definitions in different (actual or hypothetical) creatures, such theories – like most versions of the identity theory – would violate a key motivation for functionalism, namely, that creatures with states that play the same role in the production of other mental states and behavior possess, literally, the same mental states.

SEP on Functionalism, sec. 3.5

I suggest, however, that we embrace “chauvinism” instead. If hypothetical Martians lack the relevant neurology that explains our itches, they don’t itch. That doesn’t mean that we don’t respect the Martians, or that we think we are superior because we can itch and they can’t. Calling this view “chauvinism” misses most of what actual chauvinism (e.g., male chauvinism) is about.

Now, does this kind of identity theory make qualia (sensations) ineffable, and inaccessible to third party observation? Not at all. They’re as effable as any movie star. You might have to put a creature inside an fMRI scanner (or do even more invasive research), but its mental processes are in principle knowable. Of course, you might not be able to undergo those sensations yourself. And it might not be able to undergo yours.

But then, a Martian might digest its food in a very different way than humans do. It cannot undergo enzymatic digestion. But it can understand your enzymatic digestion just fine.

Suppose you “upload” your mind into a system that lacks the neurology underlying itches. And for the sake of argument, waive any difficulties about “uploaded-you” being you. When uploaded-you remembers your itches – here, define “remembers” as “accesses reliable information laid down at an earlier time and transformed by reliable rules” – “you” will not be in the same state you would have been in had you not uploaded. But again, this is no more remarkable than the fact that uploaded-you doesn’t digest food the same way.

No cheer for functionalism regarding individual sensations.

AI Narcissism?

David Bentley Hart claims at Aeon magazine that AI is just a shiny mirror in which some people see themselves. They are like Narcissus seeing his reflection in a pond and mistaking the image for a(nother) real person.

Sure enough, humans have overactive “person” detectors. A tree’s shadow moving in your hall makes you think there’s an intruder in your house. Blake Lemoine thinks LaMDA AI is sentient. But Hart’s critique goes far beyond current or near future AI and its admirers. He rejects functionalism in the philosophy of mind. You need a living mind, Hart says (why? – he’s not telling).

Hart writes:

To describe the mind as something like a digital computer is no more sensible than describing it as a kind of abacus, or as a library. In the physical functions of a computer, there is nothing resembling thought: no intentionality or anything remotely analogous to intentionality, no consciousness, no unified field of perception, no reflective subjectivity. Even the syntax that generates coding has no actual existence within a computer. To think it does is rather like mistaking the ink, paper and glue in a bound volume for the contents of its text.

I wonder: can we run this argument against a living brain? If not, why not? If you look at individual neurons, you won’t see subjectivity either. Nor is it fathomable how the coordinated activity of billions of those neurons gives rise to subjectivity.

Hart’s bio says he’s the author of Tradition and Apocalypse: An Essay on the Future of Christian Belief. Aha, maybe that’s where the ticket is supposed to be: in a nonphysical soul. (How the heck that would help, I have no idea.)

Still, functionalism is not the only possible non-religious answer to how various features of consciousness arise from atoms and fields. After all, brains do many things besides compute that are also vital to our subjective experience. They consume glucose. They consume oxygen. They send and receive electrical signals. They send and receive chemical signals. They are brains, and all paradigmatically conscious creatures have brains. Why pick one of these things – computation – and say here is the magic essence? Why ignore all the others?