Only Two Cheers for Functionalism

TLDR: Intentionality? Yay! Consciousness? Hooray! Particular sensations? What makes you think functionalism captures them?

Functionalism in the philosophy of mind is probably best understood first in relation to behaviorism. The Stanford Encyclopedia of Philosophy entry says:

It seemed to the critics of behaviorism, therefore, that theories that explicitly appeal to an organism’s beliefs, desires, and other mental states, as well as to stimulations and behavior, would provide a fuller and more accurate account of why organisms behave as they do.

Functionalism (SEP)

But another point of contrast, in addition to behaviorism, is identity theory.

The identity theory of mind holds that states and processes of the mind are identical to states and processes of the brain.

The Mind/Brain Identity Theory (SEP)

We’ll see below why prototypical functionalists didn’t want to embrace identity theory as an account of what “beliefs, desires, and other mental states” amount to.

In this post I want to talk about three major issues in philosophy of mind. From professional philosophers, we have disputes about “intentionality” (plain English: reference; what thoughts or desires are about) and “qualia” (plain English: subjective experience, especially sensation). From the grass roots as well as from academia, we have the issue of which beings are or could be conscious.

Intentionality/Reference

The word intentionality comes into philosophy of mind from the Latin word intentio, meaning concept, whose root tendere means to be directed toward something (SEP again). The question, then, is what words/concepts/thoughts point at, and how they do it. Ideally, this “how” explanation should also make clear why a thought with the cited features would therefore point at the thing(s) it does.

An important dividing point is whether one thinks that words/sentences take priority, and thoughts and desires borrow their reference from there (Noam Chomsky seems to hold such a view), or whether organisms’ thoughts and desires take priority and bestow reference on linguistic items. I take the latter view, and won’t argue for it here. This makes a functionalist account of reference significantly harder than would a language-first approach. But I still think it is extremely promising. Here are a few thoughts on why.

A concept typically has important relationships both to other concepts and to things in the world outside the mind. Take for example the concept named with “whale” – and try to project yourself into the situation of an 18th-century thinker. A whale is supposed to be a very large fish, which often surfaces and “blows”, and parts of its body can be processed into whale oil for lamps. These are conceptual role relations for “whale”. Moreover, there are experts (fishers, whalers, and naturalists) who have observed whales, and readily agree on recognizing new ones. These are world-mind interactions that do much to fix the reference of “whale”. Note that some of the conceptual role assumptions can contain errors – on the best precisification of “fish”, a whale does not count as one – and yet the reference can succeed anyway. Also, perhaps a few large sharks were misidentified as whales, yet that need not alter the reference of “whale”.

For an explanation of how “whale” could mean whale despite the errors just mentioned, see Roche and Sober’s paper “Hypotheses that Attribute False Beliefs − a Two-Part Epistemology (Darwin+Akaike)”. It rebuts certain criticisms of functionalist-friendly accounts of reference. But that’s a bit of a digression; here we want positive reasons for thinking functionalism can explain the reference of mental states.

Look again at the concept-to-concept and world-to-concept relationships illustrated above. These are a perfect fit to some major themes in functionalist philosophy of mind. David Lewis used Ramsey sentences to capture the idea:

To construct the Ramsey-sentence of this “theory”, the first step is to conjoin these generalizations, then to replace all names of different types of mental states with different variables, and then to existentially quantify those variables, as follows:

∃x∃y∃z∃w(x tends to be caused by [O] & x tends to produce states y, z, and w & x tends to produce [B]).

SEP on Functionalism, sec. 3.2

Thus, states y, z, and w would be other mental states, such as other concepts, O would be a particular impingement of the world on the organism (for example an observation), and B would be behavior(s). Additional logical formulae would have to be added, of course, for the other concepts y, z, and w, listing their characteristic world-to-organism and organism-to-world regularities. (Confession: I changed the example; it was originally about pain. That’s fair, though, since Lewis would give the same analysis for belief states like “there’s a whale”. And we should be willing to entertain the thought that such an analysis might work better for some mental states than for others.)
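To make this concrete, here is a sketch (my own illustration under the whale example, not Lewis’s actual formula) of how a Ramsey-style clause for the 18th-century whale concept might look, with the bracketed items standing in for world-to-organism inputs and organism-to-world behaviors:

```latex
\exists x \, \exists y \, \exists z \,\big(
  x \text{ tends to be caused by } [\text{sighting a huge spouting sea creature}]
  \;\land\; x \text{ tends to produce } y \ (\text{``it is a very large fish''})
  \;\land\; x \text{ tends to produce } z \ (\text{``its parts yield lamp oil''})
  \;\land\; x \text{ tends to produce } [\text{pursuit or reporting behavior}]
\big)
```

On this sketch, “whale” refers to whatever actually occupies the x role in us – whichever internal state really does stand in those causal relations to observation, to the other states y and z, and to behavior.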

Thus, functionalist approaches to the reference of concepts and words seem to be barking up exactly the right trees. One cheer for functionalism!

Consciousness

To call a being conscious presumably implies both that it can perceive and/or desire things in the world (reference) AND that it has internal states that mean something to it: subjective experiences. Philosophers often use the phrase “something it is like to be that creature”, but that doesn’t seem very helpful. I think we can do better at extracting a relatively philosophy-neutral characterization of subjective experience, by focusing on certain sensations. Here’s an experiment.

Put your left hand in a bucket of hot water, and let it acclimate for a few minutes. Meanwhile let your right hand acclimate to a bucket of ice water. Then plunge both hands into a bucket of lukewarm water. The lukewarm water feels very different to your two hands. When asked to tell the temperature of the lukewarm water without looking at a temperature readout, you probably don’t know. Asked to guess, you’re likely off by a considerable margin.

Next, practice, practice, practice. I haven’t done the experiment, but human performance on similar perceptual learning tasks suggests that you will get very good at estimating the temperature of a bucket of water. After you hone your skill, we bring a bucket of 20 C water (without telling you its temperature), and you test with your cold hand first. “Feels like 20 Celsius.” Your hot hand follows. “Yup, feels like 20,” you say.

“Wait,” we ask. “You said feels-like-20 for both hands. Does this mean the bucket no longer feels different to your two different hands, like it did when you started?” The answer, of course, is no. Evidently, there is a feeling-of-cold and a feeling-of-hot that go beyond (though they may inform) the judgement about the water outside your hands. These, and other sensations that bear imperfect correlations to external world conditions, will be our paradigm examples of subjective sensations. The taste of banana, the smell of roses, the mellowness of a tone, the pain of a scratch and the even worse feeling of a certain itch – there are many and diverse examples. To focus on these sensations may be a little narrower than what most philosophers use the word “qualia” for. But that’s OK in this context, because (A) these are the go-to examples for philosophers who attack functionalism for utterly leaving out experience, and (B) I want to suggest that functionalism gives us a good idea for telling which creatures have subjective sensations (or when it might be indeterminate).

So what’s good about functionalism here? It’s the very diversity of the sensations that makes it doubtful that a single type of brain process accounts for all and only them. Functionalism can handle this diversity because all these sensations have a certain role in common: they give us a second angle on our experiences. We not only know (or have a best guess on) what the external world was doing at the time, but we have information about how we were affected. (The additional survival value of the latter should not be too hard to imagine, I think.)

Of course if Global Workspace Theories are right(ish), then global network activation marks all conscious mental activity. But that’s broader than sensation; it includes thoughts and concepts which go beyond any sensation that may be associated with them. The content of a thought depends on its reference. In claiming that reference goes beyond sensation, I’m denying the phenomenal intentionality theory, and I’m confident in doing so. (Those theorists need to wrestle with the later works of Wittgenstein, I’d say, and I predict they’ll lose.)

So functionalism looks promising not only for telling us which creatures are conscious, but even for suggesting a definition of which mental processes count as qualia (narrowly conceived). Another cheer!

Itches

So how about a particular sensation – say, itches? The standard functionalist strategy would appeal to the stimuli and behavior, as well as other mental states associated with itching. The trouble is that there are an awful lot of stimulus conditions that lead to itching. Dry skin, acne, and insect bites, of course. But also healing, certain drugs, perfectly normal hair follicle development … the list seems almost endless and absurdly diverse. Surely a better explanation of why all these count as itches is not that they appear on this list, but that we recognize a similarity in our internal reaction. Much as we can truly say that “this 30 Celsius water feels cold to my left hand!” without thereby implying that all 30 Celsius water counts as cold.

Itches causally promote other mental states, like grumpiness. But these effects aren’t very large and don’t distinguish itches from other phenomena like pains.

Perhaps behavior is a better route. People scratch itches, while they usually avoid all contact with a painful area. Except when they apply heating pads or ice packs to painful areas. Come to think of it, heating pads or ice can calm some itches. And I’ve had pains that ease up with a gentle scratch. And scratching is supposed to worsen some itches, like poison ivy. A person with much repeated experience with poison ivy might lose even the desire to scratch the area. Scratching usually works to help itches, and usually doesn’t work on pains, and that suffices to explain the correlation. Drawing up a list of behaviors (with or without a list of stimuli) is likely to get the boundaries of itching wrong.

Besides the behaviors and stimuli – and with a perfect rather than loose correlation – all the paradigm examples of itching I can think of involve mammals. The hypothesis that itching is a process in the mammalian neural architecture (and perhaps extending beyond mammals to, say, vertebrates) jumps out as a strong contender. In other words, perhaps it’s time to move on from functionalism, when it comes to particular sensations, and embrace mind-brain identity theory.

Couldn’t a theory be both functionalist and an identity theory at the same time? In a very expansive sense of “functionalist”, yes. But:

However, if there are differences in the physical states that satisfy the functional definitions in different (actual or hypothetical) creatures, such theories – like most versions of the identity theory – would violate a key motivation for functionalism, namely, that creatures with states that play the same role in the production of other mental states and behavior possess, literally, the same mental states.

SEP on Functionalism, sec. 3.5

I suggest however that we embrace “chauvinism” instead. If hypothetical Martians lack the relevant neurology that explains our itches, they don’t itch. That doesn’t mean that we don’t respect the Martians, or that we think we are superior because we can itch and they can’t. Calling this view “chauvinism” misses most of what actual chauvinism (e.g., male chauvinism) is actually about.

Now, does this kind of identity theory make qualia (sensations) ineffable, and inaccessible to third party observation? Not at all. They’re as effable as any movie star. You might have to put a creature inside an fMRI scanner (or do even more invasive research), but its mental processes are in principle knowable. Of course, you might not be able to undergo those sensations yourself. And it might not be able to undergo yours.

But then, a Martian might digest its food in a very different way than humans do. It cannot undergo enzymatic digestion. But it can understand your enzymatic digestion just fine.

Suppose you “upload” your mind into a system that lacks the neurology underlying itches. And for the sake of argument, waive any difficulties about “uploaded-you” being you. When uploaded-you remembers your itches – here, define “remembers” as “accesses reliable information laid down at an earlier time and transformed by reliable rules” – “you” will not be in the same state you would be had you not uploaded. But again, this is no more remarkable than the fact that uploaded-you doesn’t digest food the same way.

No cheer for functionalism regarding individual sensations.

15 thoughts on “Only Two Cheers for Functionalism”

  1. It seems to me that itches are functional, probably an evolutionary adaptation useful for getting potentially harmful things off your outer surface. It’s probably more relevant to something that has a skin rather than just a hard shell, so I could see arthropods not necessarily having it. It sounds like fish itch, since they’re sometimes seen rubbing their body against things. I wonder if cephalopods itch.

    On individual sensations, a lot depends on what actually makes it into consciousness. And it’s worth noting that brains remember a sensation with much the same neural circuits that experienced it, or at least a subset of them. In other words, the memory of a previous sensation isn’t stored somewhere independent of where the sensation happens. Remembering a previous sensation is done with the same machinery used to experience new ones. So uploaded-you remembers itches from his previous life in the same way he’d experience an itch in the new one.

    I suppose we could imagine providing something like the original circuitry for remembering previous sensations and different circuitry for new ones. Maybe the new circuitry is more efficient or something but we just allow the older version for nostalgia. But that means we could probably choose to use the old circuitry if we wanted for the new sensations, just knowing it might be more costly. A lot depends on what from those circuits makes it to the rest of the network and so into consciousness. Maybe upload-you wouldn’t be able to tell the difference regardless.

    1. A question.

      Is there one “itch” circuit for the back, legs, arms, and head? Or does each one have its own “itch” circuit? Is the memory of an itch on the leg stored in the same place as, or a different place than, an itch on the head?

      1. Similar to pain, I don’t think all itches activate the same circuits, at least before things converge on the categorization of it being an itch. And I’m being careful to use the word “circuit” here, because we’re not talking about one localized spot, but a distributed firing pattern throughout the brain, with some spots being crucial.

        On how we know, the old evidence is that lesions that take out the ability to recognize a particular type of object also tend to take out any ability to remember or imagine it. (Although if it’s small enough or on the fringes, it can lead to unusual combinations.) I think the newer evidence comes from brain scans, but I’d have to dig out my neuroscience books to be sure.

      2. “not talking about one localized spot, but a distributed firing pattern throughout the brain”

        Wouldn’t the firing pattern for an itch on the knee be in the cortical homunculus tied to the leg? And possibly the same pattern in the cortical homunculus tied to the arm for a similar itch on the arm? And possibly the more specific location would be a relative position in a grid representing the arm or leg.

        Regarding memory: can you generalize types from objects to itches? An object is going to involve other parts of the brain that assign names to objects, isn’t it, unless some sort of no-report protocol was used? Or was it those parts of the brain that assign names where the lesions were in the studies?

        Also, there is a difference between remembering a particular instance of an object and an entire category of objects. For example, my ability to remember tables might be generally intact but my ability to remember my grandmother’s table could be missing.

        Honestly, I’m not sure, but memory still seems to be something of a mystery in itself.

      3. The homunculus area would light up, but it’s not the only place that lights up when those regions are stimulated. And I remember reading somewhere that the homunculus regions are actually not just simple detectors but tied to particular motor responses.

        There have been lesions all over the place. Prior to brain scanning technology, the only way neuroscientists could figure out what brain regions were crucial for particular capabilities was correlating the disabilities of brain injured patients with a later post-mortem examination of their brain.

        On remembering particular instances vs categories of objects, have you ever tried to remember the details of a particular instance days or weeks later without the general category bleeding into it? I’m not sure but I think we start with the general pattern and then add variations for the particular instances we remember. Although there are some things, like faces, we devote a lot more substrate to remembering the instances. The rest probably depends on our experiences. When I’m in a big city, everything looks the same to me because I haven’t spent much time in big cities. But I’m sure a NY native has a much easier time keeping track of locations.

        There’s still a lot to learn about memory, but what is known constrains the possibilities.

      4. “The homunculus area would light up, but it’s not the only place that lights up when those regions are stimulated. And I remember reading somewhere that the homunculus regions are actually not just simple detectors but tied to particular motor responses”.

        Yes, and not surprising. There would likely be consciousness in places other than the homunculus, because information is being routed from the homunculus to other nodes.

    2. Well of course itches are functional in *that* sense. Most features and processes of organisms have biological functions, as opposed to being pure spandrels. This leaves open the question of whether itches are *a* way to achieve the biological advantage, or the *only* way (and the only way *by definition* because a definition equates itch with harmful-skin-substance-removal).

      Uploaded-you remembers itches from your biological life in the functionalistically-same way he’d experience a new itch. But he wouldn’t do it in the same detailed way, with the same neural circuits. By hypothesis, he doesn’t even have neurons! (Except in the metaphorical way that current AI programs have “neurons” in their computational layers.) I gave a relevant definition of memory that shows that he can remember pre-upload events, while the definition remains silent on whether the memory involves similar experience.

      If android-you contains both original and new circuits, and if the modifications are modeled on the way our sensory neurology works, then he probably would be able to tell the difference. That is, the difference between activating his bio skin-irritant-detection system and his silicon-based irritant-detection system. The reason to think so is precisely that we can tell the difference between our hearing-based memory of observing our lover saying “I love you” and our lip-reading based memory of the same. And so on for other sensory modalities.

      1. We know invertebrates don’t have c-fibers, so assuming they feel persistent pain, their mechanisms for feeling it aren’t the same as vertebrates. Pain within biology seems multi-realizable. I imagine it would be the same for itching. But I haven’t investigated itching, so I’m extrapolating.

        Even if we gave upload-you a recording of the original sensory signals, I think he’d still only be able to process it with his current framework. He just wouldn’t have what is needed to tell any difference.

        With the two sets of circuits, assuming there are actual functional differences in their interaction with the rest of the system, upload-you might be able to tell. As you note, it might be like comparing different modalities. Although we’d have to alter his mental structure so he could do the comparison, which might have its own interesting effects.

      2. What do you mean *alter* android-you’s mental structure? We have to *build* it in the first place, and I’m suggesting building on similar principles to organisms – allowing multiple channels to target the same world-data.

      3. The organism has a certain number of modalities that can be integrated in various ways. You’re adding at least one more by having an itch-old and an itch-new. Depending how far into affective processing that split goes, the new system needs to be able to do comparisons the organism never could or had to do, requiring functionality it never had.

  2. I’ve never seen functionalism in philosophy of mind as any more useful than functionalism in the social sciences. In most cases, it is simply a restating of what something does (or appears to do) but provides no explanation for how it does it. If a car exists, it is because it serves the function of transporting people. A definition of “car” is that it is a machine for transporting humans.

    1. I think *most* phil-mind functionalists go further out on a limb than asserting the existence of that kind of function. But you’re right to point out the triviality of such a minimal “functionalism”.
