Vicissitudes of Consciousness, Varieties of Correlates

The Neural Correlates of Consciousness: Empirical and Conceptual Questions. Edited by Thomas Metzinger. Cambridge, Massachusetts: MIT Press, 2000. 350 pp. ISBN 0-262-13370-9. Clothbound, $50.

Austen Clark
Department of Philosophy
University of Connecticut
103 Manchester Hall
Storrs, CT 06269-2054

American Journal of Psychology, Spring 2003, 128-140. Dominic Massaro (ed.)



If, as Ned Block has argued, consciousness is a mongrel concept, then this collection resembles nothing so much as a visit to a dog pound, where one can hear all the varieties baying, at full volume. The experience is one of immersion in a voluminous excited cacophony, with much yipping and barking, some deep-throated growling, and other voices that can only be characterized as howling at the moon. What a time to be conscious! What a time to be conscious of being conscious!

What Block meant by calling consciousness a "mongrel" is as follows:

The word "consciousness" connotes a number of different concepts and denotes a number of different phenomena. We reason about "consciousness" using some premises that apply to one of the phenomena that fall under "consciousness" other premises that apply to other "consciousnesses", and we end up with trouble. (Block 1995, 227)

In this Block is surely correct. As many authors have noted over many years, the words "conscious" and "consciousness" are applied to various distinct subject matters on various and distinct grounds. For example, sometimes the word is applied to an animal or creature as a whole--the animal is or is not conscious--and sometimes it is applied to a particular mental state of an animal--this particular mental state is or is not a conscious mental state. Since the subject matters differ, the truth conditions of these ascriptions cannot be identical. David Rosenthal (1986, 1997) usefully dubbed this the distinction between "creature consciousness" and "state consciousness". But then there are many different senses in which a particular mental state is, or is not, a "conscious" mental state. William Lycan (1996) distinguished eight different senses of the word "conscious", and then within one of them picked out a dozen distinct candidates for what one might consider to be "the" problem of consciousness. Notice that with twenty distinct concepts to play with, we have not just twenty "problems of consciousness", but instead at least 210. After all, one must consider their interrelations. With 20 distinct mongrels in the pound, there are 190 potential dual-tone chords that pairs of them might contribute to the cacophony. (We have n*(n-1)/2 distinct combinations, taking two at a time.) Even this is an oversimplification, as it ignores the possibilities of interaction. Perhaps the relation between phenomenal character and being conscious of something is different in episodes that are self-conscious as opposed to those that lack that variety of consciousness. Interactions may go higher yet: relations in that triad might change in creatures that achieve the kinds of self-awareness found in social cognition, as opposed to those who live out solitary lives.
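To spell out the arithmetic behind those figures (my tally, using the formula just cited):

\[
\binom{20}{2} \;=\; \frac{20 \times 19}{2} \;=\; 190, \qquad 20 + 190 \;=\; 210.
\]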

When one theorist says state S is conscious, and another says state S is not conscious, they may not be disagreeing at all, even though they speak of the very same mental state. Fred Dretske (1995, 101) is willing to say a mental state is "conscious" if, in virtue of having it, the creature who has it is conscious of something. A conscious state, he says, is a state with which one is conscious of something, and not necessarily a state of which one is conscious. So an echo-locating bat that is aware of a moth in virtue of having an auditory state that enables it to locate the moth and perceive some of its features (flutter, size, and so on) is thereby in a "conscious mental state". Other theorists insist a mental state is "conscious" only if it is a state of which one is conscious. The bat may be conscious of the moth but not conscious of its own auditory state; and so we get a purely verbal disagreement over whether that auditory state is, or is not, a conscious state. There is an even weaker sense in which a state can be called "conscious": it is a state the possession of which indicates that the creature who has it is a conscious creature. An animal is "conscious" if it is awake and has the capacity to sense something; so if the animal is sensing something, we might call that sensory state a "conscious" state, or a "state of consciousness", on the grounds that it betokens the presence of creature consciousness. It suffices to show the creature is a conscious creature. This is even weaker than Dretske’s use of the word. We can clearly establish that the bat senses the moth, and that the bat is a conscious creature, even in cases in which it may be difficult or impossible to establish that the bat is conscious of the moth. In fact it is difficult to understand what the latter phrase means if it is meant to imply anything more or less than: the bat senses the moth.

This weakest sense elevates any sensory state into the ranks of conscious mental states. Biophysicists might be surprised to discover that their discipline has been annexed, and is now part of "the field of consciousness research". But the label is merely an honorific: it is simply another way of saying that in virtue of having a sensory state, the creature is sensing something. In other words: P, therefore P. Indisputable, yet uninformative. Nevertheless I will argue below that this is the only sense in which all sensory states must be conscious states. A creature can be in a sensory state even though it is neither "conscious of" what it is sensing nor conscious of the sensory state itself.

With muck like this at the foundations, what are the prospects for "the field of consciousness research" or for erecting a "science of consciousness"? It is easy to see that the prospects are nil. There will never be a science of consciousness, since there must be many sciences of consciousness: many empirical disciplines engaged with the various states which in varying ways are "conscious" mental states. These disciplines run from biophysics to perception to cognitive psychology and, perhaps, all the way out to sociology. "The" field of consciousness research does not exist, and cannot exist, because there are many such fields. The only thing that could hold them together is their name. The word "conscious", in one sense or another, somehow applies. It might be useful to pretend that this merely nominal identity reflects some deep underlying theoretical unity, but I suggest that a parade held together by allegiance to a banner on which the word "consciousness" is written is not held together by much at all.

No need to take my word for this. These assertions can be proven by an examination of The Neural Correlates of Consciousness, edited by Thomas Metzinger.

Five Sections, Five Explananda

The book is a collection of twenty-two papers that were among those presented at a 1998 conference in Bremen, Germany, on the topic "Neural Correlates of Consciousness: Empirical and Conceptual Questions". The papers are organized in five sections: Conceptual problems; Representational dynamics; Vision; Anesthesia and the NMDA receptor complex; and Agency and social cognition. The contributors include philosophers, psychologists, and neuroscientists. For example, the section on "Representational dynamics" includes papers by Francis Crick and Christof Koch; Antonio Damasio; Wolf Singer; and Gerald Edelman and Giulio Tononi. The Anesthesia section starts with a paper by Hans Flohr, but then includes critiques by a philosopher (Valerie Hardcastle), two biophysicists writing on anesthetics (Nicholas Franks and William Lieb), and a psychologist (Jackie Andrade). The mix of disciplines is the great strength of the book. The editor adds a useful introductory paper and prefaces to each section, as well as contributing a paper on the subjectivity of subjective experience.

A simple way for the reader to verify that the concept of consciousness is a mongrel is to ask of each author: what feature of consciousness (or of conscious mental states) does this author find to be problematic? For which feature is an explanation requested? The exercise quickly reveals that one can put together a collection of articles, each of which is "about consciousness" (indeed, about the "neural correlates" of consciousness) even though they address at least five distinct explananda. The philosophers in this collection most often take phenomenal character as the defining problematic feature, and point to something called "qualia" as the paradigmatic imponderable. A close second in their affections is the subjective character of consciousness: the idea, hailing from Thomas Nagel, that the character of a conscious mental state can be understood only if one understands it from the "point of view" of the creature who has it. This subjective (or perspectival) character in turn gets analyzed in many different ways. But when one turns to the sections that mention some data, the explananda shift. For example, the papers on vision (and two of those in "representational dynamics") are almost exclusively concerned with the problem of visual awareness. This is the problem of clarifying the neural difference between seeing something while being conscious of what one sees (or, sometimes, being conscious of seeing it) and visual processing that proceeds without such awareness. The very interesting section on Hans Flohr’s NMDA hypothesis addresses the difference between being conscious and being anesthetized. Is there some common locus for the anesthetic effect of general anesthetics? This is a question about creature consciousness, not state consciousness, and a solution could, for all we know, be entirely independent of the solution of (for example) the problem of visual awareness. Likewise, the last section, on selfhood, agency, and social cognition, adds awareness of agency to the list: the problem of how agency is perceived in self and others. It includes a very interesting paper by Vittorio Gallese on the "motor vocabulary". To the extent that this is a problem of consciousness it concerns some aspect of socially derived self-consciousness, of awareness of oneself as an agent. Lycan (1996, 5), in his canvass of the various senses of "consciousness", commented that he "would not touch this sense for a free week in Maui with champagne thrown in".

Phenomenal Properties

A "phenomenal property" is a property of appearance: a characterization of how something looks, feels, tastes, smells, and so on. While the paradigm examples are sensory or perceptual, any episode that in some way presents an appearance could be so characterized, and so there may be distinctive phenomenal properties associated as well with moods and emotions. The association in the minds of philosophers between phenomenal properties and consciousness can be traced at least back to Descartes, who (following Aquinas) pointed out that if one restricts one’s judgments to statements of how things appear, one can largely, perhaps entirely, avoid the possibility of error. Suppose the tower on the horizon looks round to me. I might go wrong saying "the tower is round", but I can’t go wrong if I confine myself to saying "the tower looks round to me now." The latter merely characterizes how the tower appears; "looks round" serves as a phenomenal property. Descartes thought that this sort of judgment pertains solely to something "objectively contained" in the mind--something existing purely as an object of thought--and that one could know such facts about the mind with greater certainty than one could know any facts about stuff outside the mind. Thus was born the unfortunate assumption that phenomenal properties and consciousness are coeval.

The assumption that a mental state is conscious if and only if it has phenomenal character is still alive and well. Here, for example, is David Chalmers ("What is a neural correlate of consciousness?"), reflecting on the prerequisites for an account of the correlates of all the various "states of consciousness":

One would need a general way of thinking about arbitrary states of consciousness. Perhaps the best way is to think in terms of arbitrary phenomenal properties. For any distinctive kind of conscious experience, there will be a corresponding phenomenal property: in essence, the property of having a conscious experience of that kind. For example, being in a hypnotic state of consciousness is a phenomenal property; having a visual experience of a horizontal line is a phenomenal property; feeling intense happiness is a phenomenal property; feeling a throbbing pain is a phenomenal property; being conscious is a phenomenal property. (22)

Possessing phenomenal character and being conscious are, on this line, coextensive. A state has one if and only if it has the other. The habit of treating appearance as the touchstone for consciousness is deeply entrenched among those addressing the "conceptual" questions in this collection. Thomas Metzinger, for example ("The subjectivity of subjective experience"), gives a table of seven properties in answer to the question "what makes a neural representation a phenomenal representation?" (286) and describes these as "the seven most important theoretical problems connected with conscious experience" (285). He says that the potential for being a conscious subject "consists of three phenomenological target properties that in their conceptual interpretation constitute three different aspects of one and the same problem" (288). So "phenomenal representation" and "conscious experience" seem to be two labels for the same thing. Metzinger also claims that subjectivity is the problematic aspect of conscious experience (1); but there is no outright contradiction, since he thinks that subjectivity and phenomenal properties are also invariably, and necessarily, linked.

Antti Revonsuo aims to characterize what he calls the "phenomenal level of organization", and he says it is

fully realized in the dreaming brain. The visual appearance of dreams is practically identical with that of the waking world... The dreaming brain shows us that sensory input and motor output are not necessary for producing a fully realized phenomenal level of organization. The dreaming brain creates the phenomenal level in an isolated form... (64)

Although he here speaks of "visual appearance", in the end Revonsuo points to a rather different feature of dreaming as characteristic of the "phenomenal level", namely "the sense of presence in or immersion in a multi-modal experiential reality" (65). He uses a virtual reality metaphor, saying "when the brain realizes the phenomenal level, it is actually creating the experience that I am directly present in a world outside my brain" (65); and he calls this an "out-of-the-brain-experience" (65). The fundamental job of the phenomenal level is to create this illusion, this "telepresence", as he calls it. The expression is charming, though notice that once again it is an expression of the idea that the central problem of consciousness is understanding phenomenal properties.

Unfortunately this assumption is not confined to the minds of philosophers; it has also crept into the reasoning of some of the neuroscientists and psychologists who contributed to this volume. For example, Dominic ffytche says that the term "visual consciousness" is "used as a synonym for ‘seeing’--for perceptual, phenomenological, or qualia descriptions of visual experience" (221). "Qualia" has become a term fraught with controversy, though all sides understand it to connote roughly the same thing as phenomenal properties. Examples are the particular appearances of something which looks amber, feels sticky, or tastes like honey. Some neuroscientists have made the unfortunate assumption that philosophers know what they’re talking about when they talk about qualia. Francis Crick and Christof Koch for example say "It is qualia which are at the root of the hard problem" (103). Crick and Koch are primarily concerned with the possibility that the activities of mind are never conscious, but only some of their products, and so this is something of a sidebar to their main point. But they conclude:

What remains is the sobering realization that our subjective world of qualia--what distinguishes us from zombies and fills our life with color, music, smells, and other vivid sensations--is possibly caused by the activity of a small fraction of all the neurons in the brain, located strategically between the outer and the inner worlds. How these act to produce the subjective world that is so dear to us is still a complete mystery. (109)

Likewise, Antonio Damasio identifies the first problem of consciousness as figuring out how the "movie in the brain" is generated, and notes "To arrive at the solution of this first problem of consciousness it is necessary to address the philosophical issue of qualia because the fundamental components of the images in the movie metaphor are made of qualia" (111).

The phraseology is provocative, but some of the assumptions in it are much at issue among philosophers. Are qualia properties of the celluloid, so to speak, or are they properties that appear to characterize what appears on the screen? Are they properties behind the scenes, inside the projector, which explain what the projector projects; or do they appear as properties of the objects that the movie is about, visible to the audience on the screen? On one option they are properties of the "vehicle" of representation, of the mental state that does the representing; on the other they are properties of the "content" of the representation, of the ‘object’ it purports to represent. Some philosophers treat qualia as properties of mental states, while others treat them as properties of intentional objects: as appearing to characterize portions of the space surrounding the sentient organism. And some, of course, deny that the terminology can usefully be applied to anything at all (see Dennett 1988).

The writers on "conceptual" questions in this volume tend to favor phenomenal properties or qualia as the central problem, but it should be noted that some put other candidates forward. For example, Damasio proceeds to set aside the first "problem of consciousness" to focus on the second, which he identities as the "sense of self":

the self sense is so central to our experience that its mental appearance is the reality that requires explanation. There is nothing satisfactory about attempts to explain consciousness that begin by excluding self on the grounds that the experience of self is illusory. Illusion or no illusion, a satisfactory account of consciousness should explain how the sense of self comes to mind. (115)

And, as noted, Metzinger says that subjectivity, not phenomenal character, is the central puzzle of consciousness. In the second paragraph of the introduction we read:

Obviously, the fundamental methodological problem faced by any rigorous research program on consciousness is the subjectivity of the target phenomenon. It consists in the simple fact that conscious experience, under standard conditions, is always tied to an individual, first-person perspective. The subjective qualities inherent in a phenomenal color experience are a paradigm example of something that is accessible from a first-person perspective only. Color consciousness--regardless of whether in gray or in Technicolor--is a subjective phenomenon. (1)

With Metzinger at any rate it is clear that this problem is not meant to supplant the "phenomenal properties" problem; instead "subjective" is another way of characterizing "phenomenal". One is inherent in the other.

Visual Awareness

The six papers in the section on vision (as well as two in the section on representational dynamics, one by Wolf Singer and the other by Gerald Edelman and Giulio Tononi) all have as a target a different, and somewhat better defined, explanandum: the difference between visual processing with, and without, awareness. "Visual processing with awareness" is in these papers typically assessed by a subject’s capacity to report on what he or she sees. To get at processing without awareness, one technique is to use visual priming, described by Thomas Schmidt ("Visual perception without awareness"):

Two properties of the critical stimulus (referred to here as the "prime") must be demonstrated. First, the prime must influence the response to another stimulus (e.g. by slowing or speeding performance). Such a priming effect is an indirect measure of visual processing of the prime because it implies that the prime must have been sufficiently analyzed. Second, one must show that the prime cannot be consciously perceived, usually by establishing near-chance performance in a forced-choice discrimination or detection task. This is taken as a direct measure of visual awareness of the prime. (157)

The forced-choice discrimination task was to identify the color of the stimulus, which was either red or green. Primes of the same color as the target decrease the error rate (and the response time) for identifying the color of the target; primes with the inconsistent color, presented long enough to be identifiable, increase that error rate. Schmidt demonstrated that this priming effect persists even when the prime is masked so quickly (in 17 msec) that its color is identified by the subject at a rate no better than chance. This effect was particularly marked in one subject, in whom, according to Schmidt, "the priming function can be perfectly dissociated from visual awareness" (164); in the other five subjects the main priming effect at that briefest of stimulus onset asynchronies was a significantly higher response time to inconsistent, as opposed to consistent, pairs. In other words, none of the subjects could identify the color of the prime at better than chance when it was masked after only 17 msec, yet they all showed significantly higher response times when that color was inconsistent with the color of the target. In those inconsistent pairs, subjects also showed a small but significant increase in errors in identifying the color of the target. Schmidt concludes "For a demonstration of response priming by color, it is not critical whether or not the primes are consciously perceived. ... it is fascinating that color processing can be dissociated from visual awareness." (167)
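To make the logic of the two measures concrete, here is a minimal sketch in Python, with invented numbers rather than Schmidt's data, of how one might compute the "indirect" measure (the response-time difference between inconsistently and consistently primed trials) and the "direct" measure (forced-choice identification of the masked prime, tested against chance). All figures and variable names are hypothetical.

```python
# Minimal sketch (invented data, not Schmidt's) of the two measures described
# above: an indirect measure of prime processing (a difference in response
# times) and a direct measure of prime visibility (forced-choice
# identification tested against chance).

from statistics import mean
from math import sqrt

# Hypothetical response times (msec) for one subject at the 17-msec SOA
rt_consistent = [412, 398, 405, 420, 401, 415]
rt_inconsistent = [455, 442, 460, 448, 451, 446]

priming_effect = mean(rt_inconsistent) - mean(rt_consistent)
print(f"Indirect measure (priming effect): {priming_effect:.1f} msec")

# Hypothetical forced-choice identification of the masked prime's color
n_trials = 200
n_correct = 104  # near the chance level of 100

p_hat = n_correct / n_trials
z = (p_hat - 0.5) / sqrt(0.25 / n_trials)  # simple binomial z-test against chance
print(f"Direct measure: {p_hat:.1%} correct, z = {z:.2f}")

# A sizable priming effect together with near-chance identification (z close
# to zero) is the pattern read as "color processing without visual awareness".
```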

Other experimenters use other but equally ingenious techniques. Beena Khurana ("Face representation without conscious processing") used a "negative priming" paradigm. Subjects are presented with a tableau of five faces and asked whether the second and fourth faces are the same or different. The first, third, and fifth are "distractors". On "related" trials, a distractor from the preceding trial reappears as a target; on "unrelated" trials, the targets are not drawn from distractors in the previous trial. Response times on related trials are significantly higher than on unrelated trials. So even though subjects presumably do not focus attention on the distractors, they must process some of their facial characteristics; otherwise the difference in response times would be inexplicable. As Khurana puts it, "The visual system is capable of representing unfamiliar faces without focused attention." (175)

Romi Nijhawan and Beena Khurana ("Conscious registration of continuous and discrete visual events") investigated the visual perception of a moving stimulus, part of which is illuminated continuously and part of which is illuminated by a stroboscopic flash. They find that the "flashed" portion appears to lag behind the part continuously illuminated, and call this the "flash-lag" effect. The apparent lag is a function of the angular velocity of the motion. They find that the flash-lag effect can produce illusory contours and can defeat the "binocular fusion" of red and green into yellow. That is, even though a flashed red bar is superimposed on a moving green bar, and both affect the same area of the retina, the two do not appear to occupy the same place, and so do not fuse to yellow. The authors use this evidence as a springboard for a discussion of how the visual system could be constructed so as to yield correct perceptions of the position of moving stimuli even though signals require from 100 to 250 msec to travel from the retina to the visual cortex. Their hypothesis: a "predictive remapping" mechanism compensates for the lag, so that the instantaneous locations of moving objects are veridically perceived. The stroboscopically illuminated stimuli cannot engage this mechanism, and so they seem to lag behind.
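For a rough sense of the scale of the problem the remapping hypothesis is meant to solve (my illustration, not the authors' calculation): if the neural latency went uncompensated, a stimulus moving at angular velocity ω would be mislocalized by roughly ω Δt by the time its signal reached visual cortex. For example, with ω = 20°/s and a latency of 150 msec,

\[
\Delta\theta \;\approx\; \omega\,\Delta t \;=\; 20^{\circ}\!/\mathrm{s} \times 0.15\,\mathrm{s} \;=\; 3^{\circ}.
\]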

Binocular rivalry provides another way to study visual processing with and without awareness. Erik Lumer ("Binocular rivalry and human visual awareness") used fMRI to study subjects exposed to rivalrous stimuli: one a drifting horizontal grating, and the other a face. He asks "Is activity in the ventral pathway sufficient for the conscious perception of a visual stimulus, or does visual awareness require processing in other brain regions, particularly the parietal and frontal lobes?" (232) "Conscious perception" here means that the subject is conscious of what is perceived. In binocular rivalry one might be aware first of a face, then of a horizontal grating, then of a face, and so on. fMRI reveals multiple and distributed areas of the brain active in the two phases of the rivalry.

Finally, Melvyn Goodale and Kelly Murphy ("Space in the brain") propose a significant amendment to Goodale and Milner's (1992) suggestion that the ventral and dorsal channels correspond to (conscious) perception and (typically unconscious) action systems, respectively. Goodale and Murphy point out that patients with damage to ventral stream mechanisms do suffer various deficits in spatial discrimination, so it is clear that not all forms of "visuo-spatial" processing are found in the dorsal stream (196). They use two tasks to assess the visuo-spatial abilities of a patient (DF) who has ventral stream damage. In one, the patient must point or reach accurately toward a particular token. In the other, the patient is presented with an array of tokens in a particular spatial arrangement and must duplicate the arrangement by placing new tokens in the same pattern. DF can do the former but not the latter, while patients with Balint’s syndrome, or other kinds of damage to the dorsal channel, show the reverse pattern. Goodale and Murphy propose that the distinction between the channels is not one of perception versus action, but rather of egocentric versus allocentric visual coordinate schemes. Processing of egocentrically derived coordinates is done in the dorsal stream, and that is what is needed for the pointing task. Duplicating the spatial relations of an array, as in the second task, requires allocentric coordinates, processed in the ventral stream. They conclude "Allocentric spatial encoding is not a dorsal stream function, and appears to depend more on perceptual mechanisms in the ventral stream." (196)
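The difference between the two coordinate schemes can be made concrete with a small sketch (my illustration, not the authors' model, using made-up two-dimensional positions): an egocentric code gives each token's direction and distance from the observer, which is what accurate pointing requires, while an allocentric code gives only the relations among the tokens themselves, which is what copying the arrangement requires and which survives any movement of the observer.

```python
# Minimal sketch (invented positions) of egocentric vs. allocentric
# spatial encoding of an array of tokens in a 2-D scene frame.

import numpy as np

observer = np.array([0.0, 0.0])            # hypothetical observer location
tokens = np.array([[0.3, 0.6],             # hypothetical token positions
                   [0.5, 0.4],
                   [0.7, 0.7]])

# Egocentric code: each token's offset, distance, and bearing from the observer.
egocentric = tokens - observer
distances = np.linalg.norm(egocentric, axis=1)
bearings = np.degrees(np.arctan2(egocentric[:, 1], egocentric[:, 0]))
print("distances:", distances)
print("bearings (deg):", bearings)

# Allocentric code: relations among the tokens themselves (here, offsets from
# the first token), independent of where the observer stands.
allocentric = tokens - tokens[0]

# Moving the observer changes the egocentric code but leaves the allocentric
# code untouched; one way the pointing and copying tasks could dissociate.
new_observer = np.array([1.0, -0.5])
assert not np.allclose(tokens - new_observer, egocentric)
assert np.allclose(tokens - tokens[0], allocentric)
```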

The section on vision and the section that follows (section IV) are the strongest parts of the collection. They report some very interesting contemporary research. If one happens to be dissatisfied with the set of concepts we currently use to talk about consciousness and conscious experience, one longs for them to be replaced by something better; and the only place where we can possibly find something better will be in the empirical disciplines. They are the only ones tapped into energy sources sufficient to produce a radical overhaul. It won’t come from philosophy. We need to be forced to confront a mass of unassimilable facts, and for that we need, first and foremost, a mass of facts. So it is the neuroscientists and psychologists in this collection who present the most interesting food for conceptual discussion. Philosophers need to catch up with them.

Anesthesia and Agency

Section IV of the book presents a mini-colloquium on Hans Flohr’s hypothesis about the role of the N-methyl-D-aspartate (NMDA) cortical receptor complex in mediating the effects of general anesthetics. In general anesthesia, Flohr argues, "a loss of consciousness ... is not necessarily due to an unspecific, global depression of all neural activity, but to the disruption of a specific subset of processes that depend on the normal functioning of the NMDA receptor" (246). He presents evidence that the same receptor complex is activated by psychedelic drugs and is disrupted in schizophrenia, arguing for the possibility that the similar effects of pharmacological interventions into these different projection systems result from their convergence on a common cortical target, the cortical NMDA receptor:

Thus, the cortical NMDA receptors could be the final common target not only of anesthetics but also of psychedelic drugs. A similar mechanism has been suggested for the psychotic symptoms in schizophrenia... The ultimate common cause of the altered states of consciousness would have to be located at the level of cortical information processing. (254)

And, he says, "A minimum activation of the cortical NMDA system is a necessary condition for the mechanisms underlying consciousness." (250) The three papers that follow criticize this hypothesis on philosophical, pharmacological, and psychological grounds. For purposes of this review the only essential point to recognize is that the explanandum has shifted yet again. Now we are considering variations along the range that starts with a creature that is awake and alert and ends with loss of consciousness. The subjects in section III who were engaged in "visual processing without awareness" did not thereby "lose consciousness" in the sense operative in section IV. They were conscious, but not conscious of what they were seeing. So answering the question of section IV will do nothing to answer the question of section III.

Similarly the questions raised in section V, on social cognition and agency, float free from all the rest. If you evolved as a social animal, how might you organize your perception of the actions of the other members of your herd, pack, gang, community, or nation? How might you organize and compute the contingencies of your own actions, in such a milieu? The notions of social cognition, selfhood, and agency are all rather murky, however, and it is difficult to see how one might make progress with them. Metzinger gives an interesting but largely incomprehensible theory of the subjectivity of subjective experience, complete with "new conceptual tools" (282) plus--wait, there’s more--"two new theoretical entities" (301). Some glimmer of how one might progress is provided by the last paper in the collection by Vittorio Gallese ("The Acting subject: toward the neural basis of social cognition"). Gallese discusses work by G. Rizzolatti, M. Jeannerod, and S. Baron-Cohen on the "words" in the "motor vocabulary" of the nervous system. The interesting suggestion is that the semantics of these words corresponds not to force or movement, but rather to the relationship between the agent and the object of the action (326):

The hierarchical value of these "words" can be different: some of them indicate the general goal of an action (eg grasp, hold, tear). Some others concern the way in which a particular action has to be executed, such as to grasp with the index finger and the thumb. Another group of "words" deals with the temporal phases into which the action to be performed can be segmented. (326)

And the vocabulary used to organize efferents might also play a profound role in perception of the actions of others. What makes this promising is that it is tied down to some empirical results, rather than upward to new conceptual tools and new theoretical entities. The entire discussion is framed in terms of properties of "motor act" and "mirror" neurons found in area F5 of the monkey's premotor cortex, the "hand field" of the motor system.
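A toy rendering of the idea (my illustration, not Gallese's formalism) is to think of each motor "word" as an entry specified at one or more hierarchical levels: the goal of the act, the manner of execution, and the temporal phase. The particular entries below are invented for illustration.

```python
# Toy rendering (invented entries) of a hierarchically organized "motor
# vocabulary": each "word" specifies a goal, and optionally a manner of
# execution and a temporal phase of the act.

motor_vocabulary = [
    {"goal": "grasp", "manner": "precision grip (index finger and thumb)", "phase": "hand shaping"},
    {"goal": "grasp", "manner": "precision grip (index finger and thumb)", "phase": "closure on the object"},
    {"goal": "hold",  "manner": None, "phase": "maintenance"},
    {"goal": "tear",  "manner": None, "phase": None},
]

# The semantics is relational: each entry describes the relation between the
# agent and the object of the action, not a particular force or movement.
for word in motor_vocabulary:
    print(word["goal"], "|", word["manner"], "|", word["phase"])
```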

From the Cognitive to the Sensory Unconscious

Thus far we have the relatively benign conclusions that the term "conscious" is multiply ambiguous, and that researchers pursuing "the" problem of consciousness are in fact pursuing various and distinct explananda. In this final section I want to show that this situation is not entirely benign: the conceptual turmoil causes considerable mischief. In particular, logical space itself is distorted whenever a theorist treats one of the phenomena of consciousness as the exemplar for all the others. I will focus on the philosophical proposal that the hard problem of consciousness is the problem of qualia; or, put another way, that possession of phenomenal properties is the touchstone for presence or absence of consciousness. I noted that among the philosophers represented in the collection this was the favored way of explicating the problem of consciousness; and that it has acquired some influence among neuroscientists and psychologists as well. So it is worth showing that it is false. Oddly enough, in other parts of this very collection we have at hand materials sufficient to do this.

Consider specifically Chalmers’ claim (quoted above) that "for any distinctive kind of conscious experience, there will be a corresponding phenomenal property" and that "being conscious is a phenomenal property". (22) A phenomenal property characterizes how something appears, and the paradigm examples are sensory or perceptual: how something looks or feels or tastes or smells. There is one immediate, but relatively minor, problem with the claim that a state is conscious if and only if it has some phenomenal character. Some conscious mental states do not seem to have any particular or distinctive phenomenal character. Perhaps there is some phenomenal character essential to smelling ammonia, but there doesn’t seem to be one essential to thinking about ammonia. Good old NH3. Images and memories with associated phenomenology might be called up by the thought, but they can vary; and so none are essential. Similarly for mental states such as expecting someone for dinner, or intending to buy potatoes on the way home. Wittgenstein had great fun demolishing the idea that there was a characteristic phenomenology that defines these mental states. What makes my expectation an expectation of him? No phenomenal property could do it.

Chalmers cleverly left himself a way of coping with these examples: perhaps they are conscious experiences but not distinctive kinds of conscious experience. One wonders how one could then apply the concepts of thinking, intending, or expecting to oneself; but leave this aside. There is a deeper problem. A phenomenal property is a property of appearance; a way of being appeared-to. There is nothing in the concept of being appeared-to that implies that one is necessarily conscious of being appeared-to in that fashion. "Being appeared to" and "being conscious of" are distinct concepts; they have, in particular, distinct contraries. Nothing in the predicate "being appeared-to ammonia-wise" implies that one must be conscious of being appeared-to ammonia-wise.

The experiments on visual priming presented in section III of this collection suggested this problem, and they provide all the munitions we need. Schmidt showed that color primes can have distinctive effects even though the subjects are (in some sense) not aware of the color of the prime. In discussion he considers the possibility of what he calls "invisible colors" (157, 167). "Invisible color" is indeed an oxymoron, but "non-conscious color" is not, and is in fact what Schmidt means by his phrase: a chromatic sensory state of which one is not conscious. "Being appeared-to" and "being conscious of" are distinct concepts, and they need not be coextensive. Odd as it might seem, particularly from the first-person perspective, there is no conceptual barrier preventing us from admitting the possibility of sensory states--sensations--which are not conscious. Black holes likewise seemed outrageous when they were first proposed--a reductio ad absurdum, an offense against common sense--but it gradually became clear that the theory of relativity allowed for their existence. Later astronomical data provided evidence that they actually exist. Likewise, I suggest there is no contradiction in supposing that phenomenal properties populate some mental states of which one is unconscious. It takes a while to get used to the idea, but unfamiliarity is distinct from conceptual impossibility.

In fact the case for non-conscious phenomenal properties is much stronger than this. Not only are they possible, but section III of this collection provides evidence that they are real. We do not need rare or esoteric phenomena such as blindsight or the "hidden observer" in order to make this case (although they help: see Weiskrantz 1997; Stoerig & Cowey 1992; Hilgard 1986). All we need is careful thought about visual priming. Consider the implications of showing that a prime of which a subject is not conscious nevertheless has effects that indicate its color has been processed perceptually. One can quibble about whether Schmidt has actually shown this, and the arguments about what would show a subject is not conscious of the prime get rather interesting; but let us leave these issues aside as well. An "inconsistent" color--a prime whose color is the complement of the color of the target--increases response time to the target, while the matching color decreases response time. Something about the color of the prime must then have been processed and registered, even though the subject is not conscious of that color.

There are two possibilities. One is to suppose that the sensory state mediating the processing of the prime has whatever properties normally mediate perception of color, but that the subjects are simply not conscious of those sensory states. Discriminations continue, so the states mediating them must be present, yet the business goes on without being graced by consciousness. This makes it relatively easy to explain why the "inconsistent" primes increase response time: the color of the prime is complementary to the color of the target, and even though they are not conscious, the states registering and processing the color of the prime are in every other way just like those coding colors of which one is conscious. So complements can interfere, even though the subject is not conscious of it at all as a complementary color. On this hypothesis, the states mediating priming do have phenomenal properties, and do stand in relations of qualitative similarity or dissimilarity to the ones mediating normal conscious vision, but the qualia they possess are simply qualia of which one is unconscious. Like a black hole, non-conscious sensation is weird, but it can explain the data. Priming effects of inconsistent colors are explained in the same way one explains the interference effects of consciously perceived colors: by an inconsistency in their qualitative character.

The second possibility is to suppose that the state mediating the prime is not only unconscious but also lacks whatever properties would normally mediate the perception of color. So it differs in two ways from the normal sensation of color: it is not conscious, and it lacks the qualitative properties it would need to have to be a sensation of color. This model would have us agree with the philosophers: without consciousness there can be no qualia in the neighborhood, so these episodes are episodes in which we have discrimination without phenomenal properties. There must be information-bearing states involved, but they are not qualitatively similar to any state involved in conscious vision. They are not qualitatively similar to any such state, because they have no qualia at all, and so have no qualitative similarity to anything. Subjects processing in that way are processing like zombies.

The latter hypothesis is the more complicated. With it one needs some alternative explanation for why primes of inconsistent colors interfere with perception of the color of the target. Since the states mediating perception of the prime are non-conscious, our philosopher insists that they therefore have no phenomenal properties. They are no longer sensations of color, per se, but have some other set of properties, call them Schmullers, which are non-conscious and hence non-phenomenal. To explain why the prime interferes with subsequent identification of colors, one must suppose that perceptions of Schmullers interfere with perceptions of colors. When stimulus onset asynchrony increases sufficiently so that the subject can identify the color of the prime at better than chance, the states mediating perception of the prime undergo two changes: they become conscious, and their Schmuller properties become color properties. This transformation itself is something else our philosopher must explain. Why not drop all the extra baggage, and simply admit that Schmuller qualia are color qualia? They are, simply, color qualia of which one is not conscious. They have all the other properties mediating conscious perception of color, but one is not conscious of them.

Notice that if this is so much as possible, it refutes those philosophers who insist that "phenomenal character" is what is distinctive about consciousness, or that being conscious is a phenomenal property. Phenomenal properties can occur in states of which one is not conscious. The problems are distinct.

Psychologists have long since recognized the conceptual possibility and empirical reality of a "cognitive unconscious". I suggest there is no conceptual impossibility to their likewise recognizing a "sensory unconscious". This is filled with sensory states of which one is unconscious: states that have all the qualitative character and phenomenal properties of those which normally, when conscious, mediate one’s perception of colors, shapes, smells, tastes, pains, and so on; but which in this case happen to lack consciousness. To put it another way, states in the sensory unconscious resemble normal sensations in every respect but one: they are not conscious. If this is conceivable, then phenomenal states and conscious states are not coextensive; phenomenal properties do not require or guarantee awareness; and it is false to say that qualia are the hard problem of consciousness. Qualia are a hard problem, but they are not specifically a problem of consciousness, or of consciousness per se. Sentience and consciousness are distinct domains. They share a border, but they are not coextensive.

References

Block, Ned (1995). On a confusion about a function of consciousness. Behavioral and Brain Sciences, 18: 227-88. Reprinted with some revisions in Block, Flanagan, and Güzeldere 1997, 375-415.

Block, Ned, Flanagan, Owen, and Güzeldere, Güven (eds.) (1997). The Nature of Consciousness: Philosophical Debates. Cambridge, Massachusetts: MIT Press.

Dennett, Daniel C. (1988). Quining qualia. In A. Marcel & E. Bisiach, (eds) Consciousness in Contemporary Science. Oxford: Oxford University Press, 42-77.

Dretske, Fred. (1995). Naturalizing the Mind. Cambridge, Massachusetts: MIT Press.

Goodale, M. A. and Milner, A. D. (1992). Separate visual pathways for perception and action. Trends in Neurosciences 15: 20-25.

Hilgard, E. R. (1986). Divided Consciousness: Multiple Controls in Human Thought and Action. New York: Wiley.

Lycan, William (1996). Consciousness and Experience. Cambridge, Massachusetts: MIT Press.

Rosenthal, David (1986). Two concepts of consciousness. Philosophical Studies, 49: 329-59. Reprinted in David Rosenthal (ed.) (1991), The Nature of Mind. New York: Oxford University Press, 462-477.

Rosenthal, David (1997). A theory of consciousness. In Block, Flanagan, and Güzeldere 1997, 729-54.

Stoerig, P. and Cowey, A. (1992). Wavelength sensitivity in blindsight. Brain 115: 425-44.

Weiskrantz, Lawrence (1997). Consciousness Lost and Found. Oxford: Oxford University Press.



