Here I'd like to step back from cultural or language-based consciousness, and look again at just the phenomenon of general consciousness, or what's often called simply "awareness" or "sentience". General consciousness, in other words, is consciousness without language. That such awareness is possible without words should be evident, in the first place, from our common experience of pre-linguistic infants, who I don't think anyone would doubt are aware of colors, tastes, touches, sounds, etc. -- that is, who experience so-called "raw feels". And this common observation extends easily to the animals, like dogs or cats, that often share our lives. What about budgies? Goldfish? Again, it seems apparent just from the fact that they react to stimuli that they must also experience those stimuli in some manner. But as we move toward progressively simpler organisms, this becomes less apparent -- it's not so hard to imagine systems with relatively few behavioral options reacting to their environment in some hard-wired manner, as in reflex arcs, without necessarily having any sort of internal experience. And when we go below the level of nervous systems altogether, it becomes hard to believe that "responses" to stimuli are anything other than merely chemical or physical.
Of course, ultimately everything is merely chemical or physical. But what these admittedly simple observations imply is that at some point in the sequence from, say, amoeba to dog, consciousness in the sense of experiential awareness appears -- at some point it becomes meaningful to assume that it is indeed "like" something at least to be that organism. If we accept that (if only for the sake of the argument) and if we reject dualism in all its forms, then this is saying that the chemical/physical behavior of neurons has produced a peculiar kind of mechanism, a mechanism that, however astonishing it may seem, is responsible for the appearance of the phenomenon of awareness. The hypothesis of this blog, as I've said, is that such a phenomenon is functional -- that is, that awareness, as such, has an adaptational purpose, in that it introduces a gap or distance between stimulus and response, which makes the stimulus available but not determinate. And this in turn allows for an exceptionally flexible form of behavioral control, needed for systems that have a wide range of behavioral options and operate within complex environments.
Now, there is an enormous body of literature -- putting it mildly -- that deals with the issue and issues of consciousness (see this collection for at least a start). But in what follows, I'd like to just set it to one side, temporarily (and perhaps foolhardily). Because I want, first, to be able to quickly sketch out some of the implications and elaborations of the hypothesis above -- in particular, that the mechanism of consciousness must have two main components -- two sides of the gap, so to speak -- one of which "presents" the environmental stimuli in some structured manner, while the other "apprehends" such presentations in some "loosely coupled" manner (where "loosely coupled" is intended to mean that the presentation is but one input among others -- others such as memory, expectation, internal state, etc. -- to the apprehender, whose job is to determine behavior or response).
Consciousness as World:
This is to make the claim that consciousness really does create some version of an "inner world" -- "inner" in the sense that it's created by, and only exists within, the mental apparatus of the organism or system; and "world" in the sense that it is a whole that unifies the various sensory sources of environmental stimuli in a presentation that is centered on the body and environmental location of the organism. I deliberately used the word "presentation" here, and avoided "representation", because I wanted to emphasize the fabricated nature of this manifold, as well as its practical or functional aspect. (Later I want to come back to some of the philosophical issues and implications surrounding this usage.)
Such a world (I hope later, as I say, to make it more apparent why I think the adjective "inner" is actually redundant here) is made up of a number of distinct "channels" of sensory input, corresponding to the different kinds of sense organ. And each of those channels provides for an indefinite number of qualitatively distinct, but otherwise irreducible, "tokens" of information (usually termed "qualia"), identifiable by channel (e.g., as a color or a sound), and corresponding to at least some of the differences in impinging stimuli. Note that different channels are not only identifiably distinct from one another but also display different characteristics as wholes -- sounds, for example, can be organized into a linear spectrum (among other characteristics) from high to low, whereas color seems to display a circular one, even though both channels render environmental stimuli that vary linearly in wavelength; smell seems to display simply a large collection of distinct qualia without any question of a spectrum; touch seems to involve a number of distinct "spectra", such as smooth to rough, soft to hard, warm to cool, etc. (perhaps implying that there are really a number of distinct sensory channels commonly associated as "touch"?). In any case, it is the job of this component of consciousness to take the input signals from various sensory sources and render them simply as distinct, irreducible tokens on the various channels of the manifold that is the world.
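Just to make this a bit more concrete, here's one way such a "World" component might be sketched in code. The names, the Python rendering, and the crude wavelength-binning renderer are purely illustrative placeholders, not a claim about how any nervous system actually does it:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A "quale" here is just an irreducible, channel-tagged token: beyond its channel
# identity and a label within that channel, it carries no further structure.
@dataclass(frozen=True)
class Quale:
    channel: str   # e.g. "color", "sound", "smell"
    token: str     # e.g. "red", "high-pitch" -- distinct but otherwise opaque

class World:
    """The 'presentation' side of the gap: renders raw sensory signals as
    tokens on a unified, persistent manifold."""

    def __init__(self, renderers: Dict[str, Callable[[float], str]]):
        # One renderer per sensory channel, mapping a raw signal to a token label.
        self.renderers = renderers
        self.manifold: Dict[str, List[Quale]] = {ch: [] for ch in renderers}

    def present(self, raw_signals: Dict[str, float]) -> Dict[str, List[Quale]]:
        # Re-render each stimulated channel as tokens; the manifold as a whole
        # persists even when a given channel happens to be silent.
        for channel, signal in raw_signals.items():
            self.manifold[channel] = [Quale(channel, self.renderers[channel](signal))]
        return self.manifold

# Example: a crude color "channel" that bins wavelength (nm) into two tokens.
world = World({"color": lambda nm: "red" if nm > 600 else "blue"})
print(world.present({"color": 650.0}))   # {'color': [Quale(channel='color', token='red')]}
```

The only point of the sketch is that whatever reaches the other side of the gap is already tokenized: the Actor never sees the raw signal, only channel-tagged tokens on the manifold.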
Consciousness as Actor:
If the fabricated World of consciousness is a presentation, there needs to be something that it's presented to or for. And this is the other major component of consciousness -- the other side of the gap that is its defining feature -- which we might as well call the Actor. Doing so, of course, immediately invites comparison with the oft-ridiculed "homunculus" theory of consciousness, which posits that there's a little person inside our heads who's monitoring sensory input on screens and dials -- which, apart from its inherent silliness, is clearly avoiding or begging the question of the nature of consciousness by merely locating it one step removed. Now, I don't think anyone has ever actually proposed such a theory, but it gets used often enough as a straw man in the course of proposing other, contrasting, theories. And that's the use that I'll make of it here too -- as a way of distinguishing this notion of an "Actor" component of consciousness from efforts at merely putting off the problem. The main idea is that the Actor component is itself just a mechanism, or a mechanical component of a larger mechanism, not a ghost in the machine, and not an agent at all. That is, "Actor" is just a label for a mechanism that outputs behavior based upon inputs from the "World" component of consciousness, but also, importantly, from other sources, such as a memory store, an anticipation generator, and various internal "drives". The algorithms, so to speak, by which this output is generated might vary considerably depending on the general complexity of both the conscious system and its environment, but a common theme might well be the ability to formulate a "goal" or intention based upon the general state of the system, and then the ability to prioritize or focus upon certain portions of the various input sources on the basis of that intention -- which would give rise to the phenomenon of conscious "attention". (Note that even though words like "goal" or "intention", or even "prioritize", commonly imply will or agency, here they're just used metaphorically, as is often done in describing the structure and operations of, say, software systems.)
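In the same illustrative spirit, here's how the "Actor" side might be sketched, building on the Quale/World structures above. The particular drives, the channel-relevance table, and the trivial selection rule are all placeholders for whatever the real algorithms might be -- the only point is that the presentation is one input among several, and that "goal" and "attention" are labels for mechanical steps:

```python
from typing import Dict, List

class Actor:
    """The 'apprehending' side of the gap: a mechanism (not an agent, not a
    homunculus) that outputs behavior given the World plus other inputs."""

    def __init__(self, drives: Dict[str, float]):
        self.drives = drives          # internal states, e.g. {"hunger": 0.8}
        self.memory: List[dict] = []  # a record of past moments: another input source

    def act(self, presentation: Dict[str, list]) -> str:
        # 1. Form a "goal" (purely in the metaphorical sense) from the strongest drive.
        goal = max(self.drives, key=self.drives.get)

        # 2. "Attend": prioritize the channels relevant to that goal, and simply
        #    ignore the rest of the manifold for now.
        relevant = {"hunger": ["smell", "color"], "fatigue": ["touch"]}
        attended = {ch: presentation.get(ch, []) for ch in relevant.get(goal, [])}

        # 3. Determine a response from the attended tokens, memory, and drives.
        #    The presentation informs the output but does not by itself fix it.
        self.memory.append({"goal": goal, "attended": attended})
        salient = [q.token for qualia in attended.values() for q in qualia]
        return f"approach({goal})" if "food-like" in salient else "explore()"
```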
Questions:
To wind up this already too-long posting, let me address a couple of the more obvious questions that might arise:
Why does this "Actor" component need that fabricated "World"? Why doesn't the Actor simply receive, and respond to, sensory input directly?
Because the functional advantage of consciousness as a control system is precisely that it's not tied directly into its environment -- that is, the point of the created "world" of consciousness, in a very real sense, is that it acts as a buffer between Actor and environment. It does this by rendering the barrage of environmental stimuli as standardized, "tokenized" bits of information -- "qualia" -- on a pre-structured, unified, and persistent manifold -- which is all that's meant by "World".
But if both components of consciousness are "mechanical" or deterministic, then how is this proposal fundamentally any different from any other general assertion of causal, neurological, or algorithmic bases for consciousness?
Well, since this proposal is certainly a mechanistic one, it isn't fundamentally different from any other of the sort. But, first, this proposes an actual structure for such a mechanism -- two parts, loosely coupled (like a limited-slip differential). Second, such a structure provides a reason for the logical necessity of something like qualia -- irreducible and qualitatively distinct tokens -- as being the only way that the two parts can be connected in a loose or non-determinate manner. Third, the two part structure offers an explanation for the functionality, or evolutionary effectiveness, of consciousness, since the loose connection provides a control system of unusual flexibility and adaptability. And fourth (though this is a little more vague), such a structure provides some basis for, and explanation of, the often-noted "freedom" of consciousness and of will, in its escape from causal determination by the world or by any single source of input (even though, like all mechanisms, it is determined by the sum of its inputs).
Despite the hand-waving about "metaphors" and so on, isn't this "Actor" still just a way of ducking the "hard problem of consciousness", since you never really say how such a mechanism would actually work?
No, I don't. Skepticism here is entirely reasonable, and at this point I'm really just putting forth an hypothesis or suggestion. But I'd make two general points in response: first, I think the suggestion is not implausible, and specific enough to be interesting. Second, since the mechanism being proposed is a general one, it should be possible to instantiate such a structure in an artificial system such as a robot -- in other words, the real test of this proposal would be the production of even a simple version of a synthetic consciousness.
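For what it's worth, here's what "wiring up" the two toy components above into a perceive-act loop might look like. This is obviously nothing like a synthetic consciousness -- it's only meant to indicate that the two-part, loosely coupled structure is the sort of thing one could, in principle, instantiate:

```python
# A toy perceive-act loop using the World and Actor sketches above.
world = World({"smell": lambda s: "food-like" if s > 0.5 else "neutral",
               "color": lambda nm: "red" if nm > 600 else "blue"})
actor = Actor(drives={"hunger": 0.8, "fatigue": 0.2})

for raw in [{"smell": 0.7, "color": 650.0}, {"smell": 0.1, "color": 450.0}]:
    presentation = world.present(raw)    # environment rendered onto the manifold
    behavior = actor.act(presentation)   # manifold + drives + memory -> behavior
    print(behavior)                      # -> "approach(hunger)", then "explore()"
```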
Blogger Steve said...
This is thought-provoking stuff.
Let me first say that the distinction between general consciousness (some call it core consciousness) and the more distinctively human language-oriented extended self-consciousness seems right on.
Your idea of the functional role of general consciousness seems plausible. I'll have to think about this and try to read more of what you say in your other posts.
12:04 PM, October 25, 2005