TLDR: Intentionality? Yay! Consciousness? Hooray! Particular sensations? What makes you think functionalism captures them?
Functionalism in the philosophy of mind is probably best understood first in relation to behaviorism. The Stanford Encyclopedia of Philosophy entry says:
It seemed to the critics of behaviorism, therefore, that theories that explicitly appeal to an organism’s beliefs, desires, and other mental states, as well as to stimulations and behavior, would provide a fuller and more accurate account of why organisms behave as they do.
Functionalism (SEP)
But another point of contrast, in addition to behaviorism, is identity theory.
The identity theory of mind holds that states and processes of the mind are identical to states and processes of the brain.
The Mind/Brain Identity Theory (SEP)
We’ll see below why prototypical functionalists didn’t want to embrace identity theory as an account of what “beliefs, desires, and other mental states” amount to.
In this post I want to talk about three major issues in philosophy of mind. From professional philosophers, we have disputes about “intentionality” (plain English: reference; what thoughts or desires are about) and “qualia” (plain English: subjective experience, especially sensation). From the grass roots up as well as academia, we have the issue of which beings are or could be conscious.
Intentionality/Reference
The word intentionality comes into philosophy of mind from the Latin word intentio, meaning concept, whose root tendere means directed toward something (SEP again). The question, then, is what words/concepts/thoughts point at, and how they do it. Ideally, this “how” explanation should also make clear why a thought with the cited features would therefore point at the thing(s) it does.
An important dividing point is whether one thinks that words/sentences take priority, and thoughts and desires borrow their reference from there (Noam Chomsky seems to hold such a view), or whether organisms’ thoughts and desires take priority and bestow reference on linguistic items. I take the latter view, and won’t argue for it here. This makes a functionalist account of reference significantly harder than would a language-first approach. But I still think it is extremely promising. Here are a few thoughts on why.
A concept typically has important relationships both to other concepts and to things in the world outside the mind. Take for example the concept named by “whale” – and try to project yourself into the situation of an 18th-century thinker. A whale is supposed to be a very large fish, which often surfaces and “blows”, and parts of its body can be processed into whale oil for lamps. These are conceptual-role relations for “whale”. Moreover, there are experts (fishers, whalers, and naturalists) who have observed whales and readily agree in recognizing new ones. These are world–mind interactions that do much to fix the reference of “whale”. Note that some of the conceptual-role assumptions can contain errors – on the best precisification of “fish”, a whale does not count as one – and yet reference can succeed anyway. Also, perhaps a few large sharks were misidentified as whales, yet that need not alter the reference of “whale”.
For an explanation of how “whale” could mean whale despite the errors just mentioned, see Roche and Sober’s paper “Hypotheses that Attribute False Beliefs – a Two-Part Epistemology (Darwin + Akaike)”. It rebuts certain criticisms of functionalist-friendly accounts of reference. But that’s a bit of a digression; here we want positive reasons for thinking functionalism can explain the reference of mental states.
Look again at the concept-to-concept and world-to-concept relationships illustrated above. These are a perfect fit to some major themes in functionalist philosophy of mind. David Lewis used Ramsey sentences to capture the idea:
To construct the Ramsey-sentence of this “theory”, the first step is to conjoin these generalizations, then to replace all names of different types of mental states with different variables, and then to existentially quantify those variables, as follows:
∃x∃y∃z∃w(x tends to be caused by [O] & x tends to produce states y, z, and w & x tends to produce [B]).
SEP on Functionalism, sec. 3.2
Thus, states y, z, and w would be other mental states, such as other concepts, O would be a particular impingement of the world on the organism (for example an observation), and B would be behavior(s). Additional logical formulae would have to be added, of course, for the other concepts y, z, and w, listing their characteristic world-to-organism and organism-to-world regularities. (Confession: I changed the example; it was originally about pain. That’s fair, though, since Lewis would give the same analysis for belief states like “there’s a whale”. And we should be willing to entertain the thought that such an analysis might work better for some mental states than for others.)
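As a toy illustration – my own sketch, not Lewis’s formalism – the role-based idea can be encoded in a few lines of Python: a mental state is identified entirely by what tends to cause it and what it tends to cause, while the realizer (neurons, silicon, whatever satisfies the existentially quantified variable) is left unspecified. All the names below are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    """A functional role: typical causes and typical effects of a state."""
    caused_by: frozenset          # observations O that tend to produce the state
    produces_states: frozenset    # other mental states y, z, w it tends to produce
    produces_behavior: frozenset  # behaviors B it tends to produce

# The 18th-century "there's a whale" belief, defined purely by its role.
whale_belief = Role(
    caused_by=frozenset({"sees large blowing sea creature"}),
    produces_states=frozenset({"expects whale oil", "classifies as fish"}),
    produces_behavior=frozenset({"says 'there is a whale'"}),
)

def same_mental_state(a: Role, b: Role) -> bool:
    # On the functionalist proposal, sameness of mental state just is
    # sameness of functional role, regardless of physical realization.
    return a == b
```

On this picture, two creatures with utterly different innards share a mental state whenever some internal state of each plays the same role – which is exactly the work the existential quantifiers do in the Ramsey sentence.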
Thus, functionalist approaches to the reference of concepts and words seem to be barking up exactly the right trees. One cheer for functionalism!
Consciousness
To call a being conscious presumably implies both that it can perceive and/or desire things in the world (reference) AND that it has internal states that mean something to it: subjective experiences. Philosophers often use the phrase “something it is like to be that creature”, but that doesn’t seem very helpful. I think we can do better at extracting a relatively philosophy-neutral characterization of subjective experience, by focusing on certain sensations. Here’s an experiment.
Put your left hand in a bucket of hot water, and let it acclimate for a few minutes. Meanwhile let your right hand acclimate to a bucket of ice water. Then plunge both hands into a bucket of lukewarm water. The lukewarm water feels very different to your two hands. Asked to tell the temperature of the lukewarm water without looking at a readout, you probably can’t say; asked to guess, you’re likely off by a considerable margin.
Next, practice, practice, practice. I haven’t done the experiment, but human performance on similar perceptual learning tasks suggests that you will get very good at estimating the temperature of a bucket of water. After you hone your skill, we bring in a bucket of 20 °C water (without telling you its temperature), and you dip your cold hand first. “Feels like 20 Celsius.” Your hot hand follows. “Yup, feels like 20,” you say.
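The dissociation between the two hands’ sensations and the trained judgement can be put in a toy model – entirely my own assumption, not anything from the experiment – in which each hand delivers a contrast signal relative to its adapted baseline, and practice teaches a correction that recovers the objective temperature:

```python
# Toy model: sensation as a contrast signal, judgement as a learned correction.

def felt(water_c: float, adapted_c: float) -> float:
    """Signed contrast signal: positive feels warm, negative feels cold."""
    return water_c - adapted_c

def trained_estimate(water_c: float, adapted_c: float) -> float:
    """After practice, the judgement compensates for the hand's adaptation."""
    return felt(water_c, adapted_c) + adapted_c

lukewarm = 20.0
cold_hand, hot_hand = 5.0, 40.0

print(felt(lukewarm, cold_hand))              # 15.0  -> feels warm to the cold hand
print(felt(lukewarm, hot_hand))               # -20.0 -> feels cold to the hot hand
print(trained_estimate(lukewarm, cold_hand))  # 20.0
print(trained_estimate(lukewarm, hot_hand))   # 20.0
```

Both hands converge on the same trained estimate of 20 while their raw contrast signals remain very different – which is the point of the thought experiment: the judgement about the water does not exhaust the sensation.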
“Wait,” we ask. “You said feels-like-20 for both hands. Does this mean the bucket no longer feels different to your two different hands, like it did when you started?” The answer, of course, is no. Evidently, there is a feeling-of-cold and a feeling-of-hot that go beyond (though they may inform) the judgement about the water outside your hands. These, and other sensations that bear imperfect correlations to external world conditions, will be our paradigm examples of subjective sensations. The taste of banana, the smells of roses, the mellowness of a tone, the pain of a scratch and the even worse feeling of a certain itch – there are many and diverse examples. To focus on these sensations may be a little narrower than what most philosophers use the word “qualia” for. But that’s OK in this context, because (A) these are the go-to examples for philosophers who attack functionalism for utterly leaving out experience, and (B) I want to suggest that functionalism gives us a good idea for telling which creatures have subjective sensations (or when it might be indeterminate).
So what’s good about functionalism here? It’s the very diversity of the sensations that makes it doubtful that a single type of brain process accounts for all and only them. Functionalism can handle this diversity because all these sensations have a certain role in common: they give us a second angle on our experiences. We not only know (or have a best guess on) what the external world was doing at the time, but we have information about how we were affected. (The additional survival value of the latter should not be too hard to imagine, I think.)
Of course if Global Workspace Theories are right(ish), then global network activation marks all conscious mental activity. But that’s broader than sensation; it includes thoughts and concepts which go beyond any sensation that may be associated with them. The content of a thought depends on its reference. In claiming that reference goes beyond sensation, I’m denying the phenomenal intentionality theory, and I’m confident in doing so. (Those theorists need to wrestle with the later works of Wittgenstein, I’d say, and I predict they’ll lose.)
So functionalism looks promising not only for telling us which creatures are conscious, but even for suggesting a definition of which mental processes count as qualia (narrowly conceived). Another cheer!
Itches
So how about a particular sensation – say, itches? The standard functionalist strategy would appeal to the stimuli and behavior, as well as to the other mental states, associated with itching. The trouble is that an awful lot of stimulus conditions lead to itching. Dry skin, acne, and insect bites, of course. But also healing, certain drugs, perfectly normal hair follicle development … the list seems almost endless and absurdly diverse. Surely the better explanation of why all these count as itches is not that they appear on this list, but that we recognize a similarity in our internal reaction to them. Much as we can truly say that “this 30 Celsius water feels cold to my left hand!” without thereby implying that all 30 Celsius water counts as cold.
Itches causally promote other mental states, like grumpiness. But these effects aren’t very large and don’t distinguish itches from other phenomena like pains.
Perhaps behavior is a better route. People scratch itches, while they usually avoid all contact with a painful area. Except when they apply heating pads or ice packs to painful areas. Come to think of it, heating pads or ice can calm some itches. And I’ve had pains that ease up with a gentle scratch. And scratching is supposed to worsen some itches, like poison ivy; a person with much repeated experience of poison ivy might lose even the desire to scratch the area. Scratching usually relieves itches and usually doesn’t relieve pains, and that loose tendency suffices to explain the scratching–itching correlation. Drawing up a list of behaviors (with or without a list of stimuli) is likely to get the boundaries of itching wrong.
Besides the behaviors and stimuli – and with a perfect rather than loose correlation – all the paradigm examples of itching I can think of involve mammals. The hypothesis that itching is a process in the mammalian neural architecture (and perhaps extending beyond mammals to, say, vertebrates) jumps out as a strong contender. In other words, perhaps it’s time to move on from functionalism, when it comes to particular sensations, and embrace mind-brain identity theory.
Couldn’t a theory be both functionalist and an identity theory at the same time? In a very expansive sense of “functionalist”, yes. But:
However, if there are differences in the physical states that satisfy the functional definitions in different (actual or hypothetical) creatures, such theories – like most versions of the identity theory – would violate a key motivation for functionalism, namely, that creatures with states that play the same role in the production of other mental states and behavior possess, literally, the same mental states.
SEP on Functionalism, sec. 3.5
I suggest however that we embrace “chauvinism” instead. If hypothetical Martians lack the relevant neurology that explains our itches, they don’t itch. That doesn’t mean that we don’t respect the Martians, or that we think we are superior because we can itch and they can’t. Calling this view “chauvinism” misses most of what actual chauvinism (e.g., male chauvinism) is actually about.
Now, does this kind of identity theory make qualia (sensations) ineffable, and inaccessible to third party observation? Not at all. They’re as effable as any movie star. You might have to put a creature inside an fMRI scanner (or do even more invasive research), but its mental processes are in principle knowable. Of course, you might not be able to undergo those sensations yourself. And it might not be able to undergo yours.
But then, a Martian might digest its food in a very different way than humans do. It cannot undergo enzymatic digestion. But it can understand your enzymatic digestion just fine.
Suppose you “upload” your mind into a system that lacks the neurology underlying itches. And for the sake of argument, waive any difficulties about “uploaded-you” being you. When uploaded-you remembers your itches – here, define “remembers” as “accesses reliable information laid down at an earlier time and transformed by reliable rules” – “you” will not be in the same state you would be had you not uploaded. But again, this is no more remarkable than the fact that uploaded-you doesn’t digest food the same way.
No cheer for functionalism regarding individual sensations.