Saturday, January 7, 2006

The "explanatory gap" series: a summary and a Q-and-A

(The end of the series, but likely not of the topic.)

In this series of posts, I've tried to look more closely at the widespread thesis that any attempt to explain conscious experience in purely physical terms -- i.e., any attempt to "reduce" experience to a physical level -- is fraught with a very fundamental problem, termed, by Chalmers (after Levine), an "explanatory gap" or "the hard problem of consciousness". Neither Chalmers nor Nagel, the two authors of seminal papers in support of this thesis that I've quoted from here, want to say that this means such a project is necessarily doomed -- in fact, towards the end of their papers, both make useful suggestions for making some headway with it -- but the thrust of both their suggestions and their texts as wholes is that, for any purely physical explanation, the "hard problem" isn't going to go away.

In this, as I've indicated, both philosophers are tapping into a very deep and very long-standing intuition -- which is that experience is an inherently non-physical phenomenon, or, in other words, that there is an absolute and fundamental gulf between the mental, whatever that may be, and the physical (whatever that may be). Such a gulf, of course, creates all kinds of problems concerning the interaction of such radically disparate realms, which is no doubt among the reasons that neither of the two wants to assert that the gap is unbridgeable in principle. But in general when we try to get down to details about mental-physical interaction, we're often met with mystification or ad-hocery or both -- e.g., the mental (aka "experience") is a new fundamental entity in the world; the mental operates in mysterious synch with the physical; the mental is just some mysterious by-product of the physical; the mental is some quantum thingy, etc. What I've wanted to propose is that, instead of such evasions, we question that deep-seated intuition of a gulf between mental and physical in the first place -- it wouldn't be the first time that our intuitions concerning ourselves and our place in things have led us astray.

The preceding posts in this series have all just been suggestions toward this end. The last one (before this) in particular puts forth an alternative to the ontological divide, and an explanation of the intuition itself, in the form of a difference in orientation toward, or perspective on, experience -- a difference between viewing it as a phenomenon, on the one hand, and being a part of the phenomenon, on the other hand. But, after all, even if we're willing to accept that the intuition of a fundamental gulf between "mental" and "physical" realms may be mistaken (with the scare quotes indicating that these terms themselves may be part of the problem), we're still left with the considerable difficulty of coming up with an actual physical explanation of the phenomenon of conscious experience. That's properly a scientific, not philosophic or speculative, task, but I thought it would be worthwhile, after a series of often critical posts, to try to answer a few of the more obvious questions about this approach in a more positive vein:

If experience is, as you say, just another phenomenon among phenomena, then why is it, after all, that we can't observe it as we can any other phenomenon?
The key point about conscious experience as we experience it is that it's actually a function of two things: a particular kind of behavior-control mechanism, on the one hand, and our situation as a component of such a mechanism, on the other hand. So we can't "observe" this experience directly in any other entity simply because the "we" component isn't in those other entities (see this post). If we could unplug the "we" from our own brains and plug it into the analogous slot in another entity, then we could indeed know what it's like to be a bat, for example. Short of that, all we can do instead is observe the effects of conscious experience in the behavior of other organisms, and infer that the underlying mode of behavior control is consciousness.

One can understand that experience, as a form of information, might have causal effects -- that is, that it makes a difference -- but why does experience have content (like red or ringing or sour)? What does such qualitative content add?
The short answer is because information, unlike the smile of the Cheshire Cat, has to have a carrier. The content itself is largely arbitrary and unimportant (in principle -- there may be technical advantages to certain content in practice) -- what carries the information of experience, and therefore does its work, is simply the difference between one token-like quale and another. Which is the reason that "inverted qualia" arguments -- sometimes used as arguments demonstrating the ineffectiveness of qualia in general -- are irrelevant here.
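The "difference, not content" point can be put in more concrete terms with a toy sketch. This is purely illustrative and not from the post itself -- the controllers, token names, and actions here are all invented -- but it shows the sense in which a consistent "inversion" of token content leaves behavior untouched, so long as the differences between tokens are preserved:

```python
def make_controller(mapping):
    """Return a behavior function driven by a token -> action mapping.

    The controller never inspects what a token 'is'; it only responds
    to which token arrived, relative to its own mapping. The token's
    intrinsic content does no work -- only its distinctness does.
    """
    def controller(token):
        return mapping[token]
    return controller

# Two controllers whose token content is systematically "inverted":
original = make_controller({"red": "stop", "green": "go"})
inverted = make_controller({"green": "stop", "red": "go"})

# A consistent relabeling of inputs makes them behaviorally identical --
# the "inversion" is undetectable from behavior alone.
swap = {"red": "green", "green": "red"}
print(all(original(t) == inverted(swap[t]) for t in swap))
```

On this sketch's terms, swapping which token plays which role changes nothing about what the system does, which is why (on the post's view) inverted-qualia scenarios don't count against qualia doing causal-informational work.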

But why is the content not just voltages or magnetized regions or some such, rather than red, sour, etc.?
Because if voltages, magnetized regions, etc. are interpreted simply as causal factors (which is what we mean by invoking them), then this violates the principle of "loose connectivity" that defines consciousness as a behavior control mechanism. Regardless of how qualia are caused or instantiated themselves, they must be detected solely as distinct tokens of information -- at this stage, there is no notion of quantity as opposed to quality, and certainly no notion of "voltage", etc.; there is only arbitrary content and difference.

But then notice the difference between this desiccated language of "tokens" and what David Chalmers rightly calls our "rich inner life" -- doesn't that alone suggest that there's more to experience than mere information tokens?
Well, there's certainly a linguistic or conceptual difference, but that has to do with the different aims or objectives of phenomenological description, on the one hand, and mechanistic explanation, on the other -- and those different objectives derive, in turn, from the distinct orientations toward experience spoken of already. But it's interesting that Nagel, toward the end of his paper "What Is It Like to Be a Bat?", suggested the possibility of developing what he called an "objective phenomenology": "... its goal would be to describe, at least in part, the subjective character of experiences in a form comprehensible to beings incapable of having those experiences". Examples of concepts for such a phenomenology of qualia, abstracted across different sensory channels, might be the notion of a spectrum (linear or circular?), intensity, locality, definition, motivation, duration, etc. With concepts such as these, though we could never match the richness of experience itself in objective terms, we might begin to approach it, from the outside as it were, by deliberately trying to avoid terms dependent upon a particular kind of experience for their meaning.

In summary and conclusion:
  • The "mental" and the "physical" (in reference to consciousness) are not two different realms, nor two different aspects of things, but are just two different ways of speaking about the same thing, their difference a consequence of the simple fact that we the speakers are at the center of the thing we're speaking about.
  • In physical terms, "we" are not agents lurking in the machine, but are complex components of the machine -- a component specialized to receive standardized (tokenized) signals, and integrate such tokens into the processes of behavioral decision-making. (Among other things, this implies that the notion of "feeling" does indeed apply to certain mechanical structures.)
  • Far from being a pointless and/or mysterious after-effect of physical processes, then, phenomenal experience or feeling is just information in the form of distinct tokens, and is the linchpin in the physical explanation of consciousness -- it provides the essential, loose (i.e., informational as opposed to directly causal) connection between the two key components of consciousness as a uniquely flexible behavior-control system.

And to anyone who makes it through the whole of this post, my apologies for its length.


2 comments:

  1. Blogger Peter said...
    I think this is a pretty good, coherent account, but I could only go about half-way with you. I still don't really see why the tokens couldn't be 'colourless'. Most of the mental symbols we deal with are phenomenally neutral (excluding synaesthesia, if that's allowable), so why not these too?

    1:43 PM, January 15, 2006
    Blogger Ellis Seagh said...
    Thanks for the comment, Peter, and the question.

    I think the tokens could indeed be colorless -- as they are, in a sense, for example, for the blind or organisms insensible to light -- but they'd need to be something. If not color, what?

    I guess, in other words, I'm not clear what you're thinking of when you speak of "mental symbols". What I mean by such are linguistic constructs built out of phenomenal experience, which complicate the issue, admittedly (the reason I often suggest that we focus on non-linguistic consciousness), but wouldn't be able to replace the foundational content provided by experience itself. There can't, finally, be difference without there being something to be different.

    8:57 AM, January 16, 2006

  2. This comment has been removed by the author.
