Why oh why computationalism?

A mammal does a million things at once. She sniffs the air. She moves her head and eyes. (She has and maintains a head and eyes.) She absorbs photons of different wavelengths and registers them differently. She metabolizes glucose. She increases entropy. She gets curious about sounds. She forms beliefs and desires. She also computes.

Why seize on the last fact and say “aha, here is where all mental aspects lie!”? Why that fact to the exclusion of all others? It just looks arbitrary.

It’s worth distinguishing between computationalism and functionalism. As David Lewis’s classic version of functionalism maintained, mental states are identified by their relations to each other and to observations by, and behaviors of, the organism in question. Or as Gualtiero Piccinini (PDF) puts the functionalist thesis, “the mind is (an aspect of) the functional organization of the organ of cognition.” Meanwhile, computing mechanisms “are those mechanisms whose teleological function is manipulating medium-independent vehicles in accordance with a rule.” On Piccinini’s definitions, computationalism is a specific version of functionalism – and Piccinini advocates non-computational functionalism.

As I noted in my post Only Two Cheers for Functionalism, functionalism makes good sense for cognition and (Searle to the contrary notwithstanding) for intentionality. But I don’t see computationalism as adequate even in those domains. And in my view not even functionalism looks promising for distinguishing various aspects of phenomenal consciousness.

Philosophers have a common definition of “computation” which Piccinini rightly criticizes in another article:

If there are two descriptions of a system, a physical description and a computational description, and if the computational description maps onto the physical description, then the system is a physical implementation of the computational description and the computational description is the system’s software. The problem with this view is that it turns everything into a computer. (p. 14)

Let’s not accept panpsychism on the basis that everything is a computer and computation is mindfulness. Piccinini suggests a different definition of computation:

program execution is understood informally as a special kind of activity pertaining to special mechanisms (cf. Fodor 1968b, 1975; Pylyshyn 1984). Computers are among the few systems whose behavior we normally explain by invoking the programs they execute. (p. 19)

The definition of “function” is also potentially problematic for the functionalism / computationalism distinction. Piccinini has a plausible approach to defining functions (pp. 23ff) that I won’t recap here.

One more worry about computationalism. This thought is inspired by Scott Aaronson in conversation. Computation is normally understood as a causal process spread over time. But computer science is a branch of mathematics, and it’s easy to see that the same mathematical relationships could be realized over a spatially extended structure. Who needs time? Time is a prerequisite for causality, but who needs causality?

But then, who needs space, or matter, or energy? Arguably, mathematical structures like the proof of Fermat’s last theorem exist regardless of whether anyone discovers them or writes them down. (Are there numbers between 6 and 9? Then there are numbers.) If my conscious life is a complex computation, I as a physical being may be redundant – depending on one’s philosophy of mathematics.

4 thoughts on “Why oh why computationalism?”

  1. I’m open to alternatives to computational functionalism, non-computational functionalism in particular. But it needs to be an actual alternative, with a discussion of the alternate ways to interpret neural activity. Vague gestures toward “causal powers” or implications of strong (magical) emergence, a tradition followed by Piccinini’s paper, seem to me to do little more than emote, “Boo computation!”


  2. I didn’t see strong emergence there, just emergence. I’ll give it a reread later.

    I do agree about causal powers. Depending on what “causal” means in the phrase, causal powers are either just all the properties of a system, or a subset with an asymmetrical relationship to later events. Either way, the notion is too unspecified to help the discussion.


  3. The problem I’ve had with functionalism is that it simply doesn’t explain anything about consciousness. It amounts to the position that the mind is what the mind does, but that doesn’t explain anything about how it does what it does. Observable behavior doesn’t help in this regard, because we are talking about something internal. An EV and a gasoline vehicle might be observed to do the same things, but how they do them is completely different.

    Computationalism, however, can’t distinguish between the “computations” the brain does that are conscious and those that are unconscious. It’s entirely possible that “computations” can do 100% of what the brain does, both consciously and unconsciously, so the theory has no ability to explain what consciousness in particular does. Hence, many computationalists are going to lean toward, if not accept, the view that consciousness does almost nothing. But if that is so, why does it exist and seem pervasive in organisms with even modestly complex brains?

    “Who needs time?”

    Minds do. If time didn’t exist, then the mind would need to invent it.

    A succession of instantaneous events provides zero information without a recording of past events. Without time, you can’t have past events. Zero information leads to no ability to predict. No knowledge. No mind.


  4. Coincidentally, EV (or fuel-cell) vs gasoline is my go-to example for the point that conscious feelings are *a* way to generate behavior, not the only way. Great minds think alike, I guess. And sometimes, so do I.

    Functionalism itself doesn’t explain much of anything, but one could have a specific neuroscience-inspired version of it that would explain a lot.

    I totally agree that time – and more specifically, increasing entropy – is required for memory and knowledge. I am just pointing out another arbitrary aspect of computationalism: it discards many physical traits on the grounds of supposed irrelevance, but retains others like time and causality. Were computationalists to give principled reasons for retaining time and causality, I suspect those reasons would also bring some non-computational properties back into the picture.

