Nagel’s Question-Begging Argument on the Badness of Death

Epicurus argued that death could not be bad for the person whose death it was, because the person ceased to be in death, hence there was no subject to whom it could be ascribed as an evil. That is, Epicurus famously observed, in his “Letter to Menoeceus,” that “when we are, death is not …, and when death is…, we are not.” Thomas Nagel argues that the Epicureans were wrong, that death is precisely bad “for the person who is its subject” (Mortal Questions, p. 2). I will argue that Nagel’s position on the subjective badness of death is question begging.

“I shall not discuss the value that one person’s life or death may have for others,” explains Nagel, “or its objective value, but only the value it has for the person who is its subject” (p. 2). He acknowledges that it is unclear “how the supposed misfortune [of death] can be assigned to a subject at all” (p. 4). Hence part of his objective is to show how death can be assigned to a subject in a way that makes it clearly an evil for that subject.

Things can be bad for a person, argues Nagel, completely independently of that person’s awareness of them. He claims, for example, that proponents of the view that “what you don’t know can’t hurt you” are wrong. He uses the case of a betrayal of which one is unaware to support this claim.  Many would argue that so long as one is unaware of the betrayal, and it has no negative repercussions, then one is unhurt by it. “[T]he natural view,” argues Nagel however, “is that the discovery of betrayal makes us unhappy because it is bad to be betrayed – not that betrayal is bad because its discovery makes us unhappy” (p. 5). 

In fact, however, the badness of the betrayal here has to be understood in an objective rather than a subjective sense. That is, betrayal is generally considered to be bad in an objective sense, or in a sense that is independent of any specific subject. In that sense it is bad to be betrayed independently of whether the person who has been betrayed is aware of it. What is at issue is whether it is bad in a subjective sense independently of the relevant subject’s awareness of it. Nagel simply assumes it is, based on what he asserts is “the natural view.” Unfortunately, as most philosophers and psychologists are aware, the fact that a way of thinking is “natural” to human beings does not alone prove it is correct. A betrayal arguably does become bad, in a subjective sense, only as a result of its discovery.

“A man’s life,” continues Nagel, “includes much that does not take place within the boundaries of his body and his mind, and what happens to him can include much that does not take place within the boundaries of his life” (p. 6). But, again, that is precisely what is at issue. That is, people who subscribe to the view that death is the end of the subject would dispute that anything could “happen” to a person once that person was dead. Things may indeed happen to a person’s reputation after their death. Someone presumed to be a benefactor of humanity while alive may be exposed as a corrupt exploiter of humanity after their death. The claim, however, that damage to a person’s reputation after their death is something that happens to them, rather than merely to their reputation, is question begging. Neither Epicurus nor any contemporary thinker would dispute that people live on, in a metaphorical sense, after their death. The issue is whether this metaphorical sense of the continuance of the subject would support the claim that death is an evil for it.

“It is true,” observes Nagel, “that both the time before a man’s birth and the time after his death are times when he does not exist. But the time after his death is time of which his death deprives him” (p. 7). But again, that is precisely what is at issue. It is certainly true that had the man in question not died when he did, he would have lived longer, but that’s nothing more than a tautology. To infer from this tautology, as Nagel does, that “the time after his death is time of which death deprives him” (p. 7, emphasis added) is question begging. That is, proponents of the view that death is the end of the subject clearly understand the subject as a concrete sentient being, and death cannot deprive such a being of life because once death is, this being is not. Death, on this view of the subject, cannot deprive the dead of life because they no longer exist to be deprived of anything.

In Nagel’s defense, what would appear to be his somewhat idiosyncratic understanding of the nature of the subject is not actually so idiosyncratic. Nagel’s view that while the subject “can be exactly located in a sequence of places and times, the same is not necessarily true of the goods and ills that befall him” (p. 5) and that hence such goods and ills can continue to befall him after his death has been popular throughout history. Many individuals from the past were concerned about possible posthumous damage to their reputations, etc. It is arguably only in the last century that the Epicurean equation of the subject with an organic, sentient being became popular. And some people appear to continue incoherently—i.e., despite their professed adherence to the Epicurean view of the subject—to be concerned with such things. 

My point here is not that the conclusion of Nagel’s argument that death is a subjective evil is mistaken, but that the argument itself is question begging. Nagel mentions Lucretius in the course of the essay, hence it is clear he is arguing against the Epicurean position that death is not an evil. The problem with Nagel’s argument is that it involves a tacit rejection of the Epicurean view of the nature of the subject. Nagel simply assumes without argument that the subject is an abstract, or immaterial, entity, an entity to which it makes perfect sense to ascribe evils not only of which it is contingently unaware but of which it is necessarily unaware because it no longer possesses sentience. There is no question that it is possible to interpret human beings both in the concrete, materialistic manner of the Epicureans and in the abstract, immaterial manner of Nagel. The question is which subject is the relevant subject so far as the badness of death is concerned.

Identity Pleas and Excuses

J.L. Austin famously recommended the study of excuses to moral philosophy. Austin distinguished two main types of excuses: justifications, which draw on explicitly normative standards to defend one’s action as right after all, and qualified admissions, which seek to mitigate due punishment by citing extenuating factors. But when put on the spot in real-life situations, the accused, rather than citing extenuating factors, sometimes asks for an exemption from an applicable moral rule. When an exemption is requested, the validity of the rule is granted, but the accused asks that the rule not be enforced.

Sometimes the request to be exempted from a particular moral rule is made with reference to the kind of person the accused is. I shall call such requests identity pleas. Identity pleas seek a dispensation to be exempted from some applicable rule henceforth. They seek continued toleration – a pass to continue living in the breach. As such, identity pleas are more difficult to place within moral theory than excuses. A Kantian moral theory, with its emphasis on exceptionless universal rules, cannot admit their existence at all, based as they are on the assertion of idiosyncrasy. But they do exist in the context of real relationships, the preservation of which sometimes involves overlooking admitted incorrigibility.  

Country music provides ample examples of identity pleas, as in the following song lyrics from three generations of misbehaving Hank Williams: 

Sometimes it’s hard, but you’ve gotta understand, when the Lord made me, he made a ramblin’ man. — Hank Williams, Sr. 

If I get stoned and sing all night long, it’s a family tradition.  — Hank Williams, Jr.

I can’t help the way that I am, ‘cause the whiskey, weed, and women had the upper hand. — Hank Williams III

Regrettably for Hank Sr., his wife did not in fact understand, and kicked him to the curb. But the failure of his plea does not detract from the pertinence of the example: Hank acknowledged that his behavior fell short of settled standards for marital relations – standards he did not seek to challenge – but instead begged his wife to tolerate his rambling ways because of the kind of person he is and cannot help being. 

While Hank’s wife lost patience, others make a different choice upon receiving similar pleas. Hillary Clinton claimed not to be the sort of woman Tammy Wynette sings about in “Stand By Your Man,” but she famously chose to overlook her husband’s pattern of infidelity, presumably with open eyes. Real people make these kinds of choices – the choice whether the flaws of the people in their lives are worth putting up with. Moral rules do not dictate these choices. 

Once you notice that there is such a thing as an identity plea, you start to see them frequently. Dispensations are often tacitly given without being asked for. For example, family ties bind people together despite significant differences in values. Many workplaces have a person whose quirks become well known and tolerated, such as the curmudgeon who is called gruff rather than rude, though rude is what his behavior merits by normal standards. “Don’t mind him,” the new employee may be told.

So do identity pleas represent a category wholly divorced from excuses and moral evaluation? Not entirely. Identity pleas function as excuses when their proper reception can be justified on moral grounds. In these cases, they are more than bare requests for toleration. For example, some people with disabilities are unable to achieve perfect social propriety, such as a person with Tourette’s syndrome who interjects profanity at inopportune moments. In cases such as these, there is happily now a stable consensus that one should tolerate such infelicities. Failure to offer dispensation in cases such as these is considered a moral failure on the part of any individual intolerant enough to withhold it, and is indeed illegal in the context of employment.

Here we have a case of moral standards settling that a dispensation is in order, precisely because of the type of individual who has violated the rule. In situations such as these, the person making an identity plea has a legitimate excuse for their behavior, and it would be wrong to deny the request. 

At this point, it may be suspected that the example of disability provides the key to understanding identity pleas across the board, showing them to be a subclass of excuse rather than a disjoint category. In particular, it may be thought that the issue is always ultimately whether the individual making the plea is factually unable to abide by established norms, whether due to a documented disability or a more idiosyncratic quirk in their personality. In that case, it might be argued, Hank Sr.’s plea was rightly rejected because he was merely unwilling, not literally unable, to change his ways.

I believe, however, that such a suggestion is misplaced. Pleas like Hank’s are often enough accepted despite the absence of evidence that the behavior in question is based in genuine incapacity as opposed to stubbornness or weakness of will. 

How, then, shall we understand the relationship between identity pleas and moral theory? I suggest the key is to acknowledge, to a degree that Kantian morality does not, that ongoing relationships of various kinds form the backdrop of evaluation where identity pleas (and exceptions more generally) are made and evaluated. I don’t know to what extent Hank’s mother put up with his antics. Different parents take different approaches to wayward children, some cutting them off for minor offenses and others sticking with them despite appalling failures to reciprocate (as Hank III documents in his song “I’m the Only Hell My Mama Ever Raised”).

There are standards with respect to identity pleas, but also wide latitude for discretion. This discretion exists in the space of intimacy and attachment, and differs by type of relationship (child, spouse, friend, employer, teacher, etc.). Abstract moral rules and appeals to rights do not form the basis of our more intimate relationships and do not dictate whether they are worth continuing despite breaches of normally applicable standards, whether in the past or anticipated in the future. Here, the moralism inherent in the consideration of excuses loses its centrality, as each person confronts the decision of what kind of person to be, and to be with.

Scott Kimbrough

Anticipating Gettier

Few philosophical articles have generated as much scholarship as Edmund Gettier’s 1963 piece in Analysis, “Is Justified True Belief Knowledge?” In the article, Gettier provides two examples of beliefs that are both justified and true, but which he correctly assumes most philosophers would not accept as knowledge. What appears to have been overlooked in the flurry of responses to Gettier is the fact that the insight he presents into our understanding of what it means to know something was not new. Bertrand Russell had articulated the very same insight fifty years earlier in his The Problems of Philosophy.

Gettier’s article includes two counterexamples to the definition of knowledge as justified true belief. Both counterexamples work in the same way. That is, both involve an individual holding a belief whose truth is logically entailed by another belief they hold, one that is both justified and assumed, wrongly as it turns out, to be true; and both count on justification being conferred via the relation of entailment. Since both examples work in the same way, for the sake of brevity I’ll consider only the first.
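
For reference, the definition under attack can be stated schematically (this is essentially the formulation Gettier himself gives at the start of the paper):

\[
S \text{ knows that } P \iff (i)\ P \text{ is true},\ (ii)\ S \text{ believes that } P,\ \text{and } (iii)\ S \text{ is justified in believing that } P.
\]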

In this example, two men, Smith and Jones, have applied for the same job. Smith is described as having “strong evidence” that Jones will get the job, and that Jones has ten coins in his pocket. Smith’s evidence is described by Gettier as being sufficiently strong to justify his beliefs that Jones will get the job and that Jones has ten coins in his pocket. The conjunctive proposition “Jones is the man who will get the job, and Jones has ten coins in his pocket,” Gettier explains, entails a third proposition, “The man who will get the job has ten coins in his pocket.” Since Smith can see that this third proposition is entailed by the proposition whose truth he is justified in believing, he believes it as well.
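
The structure of the example can be rendered schematically; the letters p and q below are my shorthand, not Gettier’s:

\[
\begin{aligned}
p &: \text{Jones will get the job and Jones has ten coins in his pocket}\\
q &: \text{the man who will get the job has ten coins in his pocket}\\
&J(p) \quad \text{and} \quad p \vDash q, \quad \text{so} \quad J(q)
\end{aligned}
\]

Justification is assumed to transmit across the entailment, which, as noted above, is the feature both of Gettier’s examples rely on.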

Included in the justification of Smith’s belief that Jones would get the job was that “the president of the company assured him that Jones would in the end be selected.” But, of course, the president could change his mind. And that is precisely what happens in Gettier’s example. Smith, not Jones, gets the job. It doesn’t take a great deal of imagination to envision this scenario. No amount of justification for beliefs relating to matters of empirical fact, at least no amount of the sort of justification Gettier presents (i.e., the kind available to introspection) can guarantee the truth of such beliefs.

What is more surprising, but certainly not implausible, is that Smith just happens to have ten coins in his pocket. So Smith’s belief that “The man who will get the job has ten coins in his pocket” is, as it turns out, both justified and true. That is, the belief is justified because it was entailed by Smith’s belief that Jones would get the job and that Jones had ten coins in his pocket. And it is true because he, Smith, happens to have ten coins in his pocket as well.

Gettier correctly assumed, however, that despite the fact that Smith’s belief that “The man who will get the job has ten coins in his pocket” turned out to be both justified and true, virtually no one would be willing to say that it constituted knowledge.

Gettier’s point wasn’t really all that original, though. The problem is that Smith inferred a true belief from a false one. That is, he inferred the truth of the proposition “The man who will get the job has ten coins in his pocket” from his belief in the truth of the proposition “Jones is the man who will get the job, and Jones has ten coins in his pocket.” Jones didn’t get the job, however; Smith did. And “a true belief,” Russell explained back in 1912, “is not knowledge when it is deduced from a false belief.”

Russell’s example of a true belief that is inferred from a false belief is much simpler than Gettier’s in that it makes no reference to justification. His example involves someone believing truly that “the late Prime Minister’s last name began with a B” based on his false belief that Balfour was the late Prime Minister. In fact, the belief that the late Prime Minister’s last name began with a B was true at the time Russell wrote The Problems of Philosophy because the late Prime Minister, at that time, was Henry Campbell Bannerman.

The entailment condition is implicit in Russell’s example because the proposition that “Balfour was the late Prime Minister” entails the proposition “The late Prime Minister’s last name began with a B.” Again, Russell’s example makes no reference to justification, but this omission in no way weakens the strength of his insight that true beliefs inferred from false ones can’t constitute knowledge. This is precisely the insight put forward by Gettier, though it’s dressed up in his paper with the trappings of justification and even more baroquely with the contrivance of conveying justification via the relation of entailment. 
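
Russell’s case fits the same schema, with the justification clause simply omitted (again, the lettering is mine):

\[
\begin{aligned}
p &: \text{Balfour was the late Prime Minister} \quad (\text{false})\\
q &: \text{the late Prime Minister's last name began with a B} \quad (\text{true})\\
p &\vDash q
\end{aligned}
\]

In both cases a true belief, q, is reached only by way of a false belief, p, and in both cases the result intuitively falls short of knowledge.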

So Gettier’s paper, despite the splash it made, and indeed, to some extent, continues to make, conveyed no philosophical insight into our intuitions about what it means to know something that had not already been conveyed by Russell fifty years earlier.

The Paradox of Personal Identity

What makes us who we are? Our bodies? Our minds? Our moral character? “The most natural theory of personal identity,” writes the philosopher Richard Swinburne, is that it is “constituted by bodily identity” (Personal Identity, Oxford, 1984). There are obvious problems with this theory, though. The brain seems more important than the rest of the body. A person can lose an arm or a leg and still obviously be the same person. Damage to the brain is different. A person who suffers severe memory loss, or a dramatic personality change, as a result of a brain injury or degenerative brain disease is not so obviously the same person they were before.

“The traditional alternative to a bodily theory of personal identity,” continues Swinburne, “is the memory-character theory.” According to this theory, it’s the continuity of our memories and our character that constitutes our selves. But which is more important, memory or character?

Our sense of self is closely connected with our memories, or more specifically, with the personal narratives we construct based on those memories. If the memories go, then so do the narratives. If we reach a point where we can no longer remember who we were, then the person we were is gone. There’s still a person there, of course, but it is, in an important sense, a different person, a new person, a person with no past. 

Recent work in “personal identity theory” has shifted its focus from memory to character, or more specifically, to moral character. Research done by Nina Strohminger, a psychologist at the Wharton School of the University of Pennsylvania, and Shaun Nichols, a philosopher at the Sage School of Philosophy at Cornell, showed that the extent to which someone suffering from Alzheimer’s seemed different to those close to them was determined almost entirely by changes in their moral character rather than by memory loss (“Neurodegeneration and Identity,” Psychological Science, Vol. 26, No. 9, 2015, pp. 1469–1479).

The primary aim of Strohminger and Nichols’s study, they explain, “was to determine the influence of neurodegenerative symptomatology on third-person judgments of personal identity” (p. 1470). But a theory of personal identity that locates that identity outside a person’s own sense of self is deeply unsatisfying.

Here’s the puzzle. It makes sense to suppose that character is constitutive of who we are. On closer examination, however, it seems neither necessary nor sufficient for personal identity. Or more correctly, whether it’s either necessary or sufficient would appear to depend on one’s perspective. So long as people retain a critical mass of the memories that make up their personal narratives, their characters can change and even change dramatically, without threatening their sense of self. That is, character can change without any corresponding change of personal identity from the first-person perspective. If people’s characters change enough, however, they will seem like different people to others, which is to say they will seem like different people, from the third-person perspective.

Conversely, people can experience such profound memory loss that they no longer have a coherent personal narrative and hence completely lose their sense of self. Memory loss does not necessarily affect character, however. So these same people who are no longer the person they were from the first-person perspective, may remain recognizable as the same person to others, which is to say, from the third-person perspective.

So it appears the judgment of whether a given person is “the same person” after significant memory loss as they were before depends on the perspective from which one makes the judgment. Someone whose personality remains recognizably the same will appear to be the same person from the third-person perspective, which is to say from the perspective of people who knew them before, even if they have entirely lost their own sense of who they used to be. On the other hand, a person can retain their subjective sense of self even after personality changes that make them effectively unrecognizable to others.

So it seems that just as electrons will appear to be either waves or particles depending on how one observes them, so we can appear to be either the same person we have always felt ourselves to be or an entirely different person, depending on who’s doing the observing: ourselves or someone else.

But which perspective are we to take as definitive? The first person or the third person? Philosophers, as Thomas Nagel observed so compellingly in The View From Nowhere (Oxford, 1989), are biased in favor of objectivity, which is to say, they are biased in favor of the third-person perspective.

Do other people really have the final say, though, on who we are?

Getting Gettier Wrong

Edmund Gettier showed in “Is justified true belief knowledge?” that it’s possible to have what most people would consider a justified belief that is true by accident, or a belief where the “justification” is not related to the truth in the way we intuitively feel it ought to be in order for the belief to amount to knowledge. This insight gave rise to a variety of attempts to develop theories of belief justification that would avoid what came to be known as “the Gettier problem.” Alvin Plantinga argues that he has developed such a theory, though he refers to his theory as one of “warrant” rather than “justification.” Plantinga argues that Gettier problems relate to anomalies in the cognitive environments in which beliefs are formed and that his theory of warrant avoids such problems by taking the cognitive environment into account in a way that standard theories of justification do not. It’s clear, however, that Plantinga’s “warrant” does not avoid the Gettier problem. I argue that Plantinga’s purported solution is based on a misunderstanding of the problem, which does not, in fact, concern the cognitive environment in which beliefs are formed in the way Plantinga argues it does.

Plantinga argues that knowledge is possible only if one assumes some kind of pre-established fit between the way our minds work and the nature of objective reality. This fit is part of what Plantinga calls God’s “design plan” (2011, iii), a plan that involves “a match between our cognitive powers and the world” (2011, xiv). A belief has warrant, according to Plantinga, “if it is produced by cognitive faculties functioning properly (subject to no malfunctioning) in a cognitive environment congenial for those faculties, according to a design plan successfully aimed at truth” (1993, iii).
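
Plantinga’s definition can be compressed into a conjunction of three conditions; the abbreviations below are my paraphrase, not Plantinga’s own notation:

\[
W(b) \iff PF(b) \land CE(b) \land DP(b)
\]

where \(PF(b)\) says the belief \(b\) is produced by properly functioning faculties, \(CE(b)\) says the cognitive environment is congenial to those faculties, and \(DP(b)\) says the governing design plan is successfully aimed at truth.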

“Gettier problems,” observes Plantinga,

come in several forms. There is Gettier’s original Smith owns a Ford or Brown is in Barcelona version: Smith comes into your office, bragging about his new Ford, shows you the bill of sale and title, takes you for a ride in it, and in general supplies you with a great deal of evidence for the proposition that he owns a Ford. Naturally enough you believe the proposition Smith owns a Ford. Acting on the maxim that it never hurts to believe an extra truth or two, you infer from that proposition its disjunction with Brown is in Barcelona (Brown is an acquaintance of yours about whose whereabouts you have no information). As luck would have it, Smith is lying (he does not own a Ford) but Brown, by happy coincidence, is indeed in Barcelona. So your belief Smith owns a Ford or Brown is in Barcelona is indeed both true and justified; but surely you can’t properly be said to know it. (1993, 32).
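
The inference Plantinga describes here is ordinary disjunction introduction; the letters are mine, not Plantinga’s:

\[
F : \text{Smith owns a Ford}, \qquad B : \text{Brown is in Barcelona}, \qquad F \vdash F \lor B
\]

F is justified but false; B happens to be true; so the disjunction is justified and true, yet, as Plantinga agrees, not known.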

Plantinga examines several similar examples in which a person forms a true belief based on what is, in fact, some kind of deceptive practice. These include one, which he credits to Carl Ginet, involving a belief that one is looking at a comparatively fine example of a barn, where the comparison is based not on real barns but on barn facades that the inhabitants of a rural Wisconsin town have erected in an effort to make themselves look prosperous. Another involves a belief that horses have recently been on a particular bridle trail in Yellowstone National Park, based on the sight of horse manure placed there by “a wag with a perverse sense of humor” (1993, 32). Each example functions like the first in that its crucial element is an attempt to deceive.

The problem, according to Plantinga is that “credulity is part of our design plan. But it does not work well when our fellows lie to us or deceive us in some other manner, as in the case of Smith, who lies about the Ford, or the Wisconsinites, who set out to deceive the city-slicker tourists” (1993, 33). That is, there’s a problem, according to Plantinga, with “the cognitive environment.” That environment is supposed to be free of deliberately deceptive elements.

If we turn to what Gettier actually says, however, we see that neither of the counterexamples he presents to the definition of knowledge as justified true belief involves any reference to deliberate deception. Plantinga describes Gettier’s “original Smith owns a Ford or Brown is in Barcelona version” as one in which Smith lies about owning a Ford.

But Smith doesn’t lie in Gettier’s example. No one lies in that example. In fact, it is not Smith but Jones whose ownership of a Ford is at issue in Gettier’s original example. Smith, in this, the second of Gettier’s two examples, has reason to believe that Jones owns a Ford. Why does Smith believe this? “Smith’s evidence,” writes Gettier, “might be that Jones has at all times in the past within Smith’s memory owned a car, and always a Ford, and that Jones has just offered Smith a ride while driving a Ford” (1963, 122). There is no reference to Smith lying in Gettier’s original example, despite Plantinga’s claim that there is. It just happens that Jones no longer owns a Ford, but has been driving a rented Ford.

Each of the purportedly Gettier-type counterexamples to knowledge as justified true belief that Plantinga presents involves either lying or some other form of deliberate deception.
“[A] true belief is formed in these cases,” writes Plantinga, “but not as a result of the proper function of all the cognitive modules governed by the relevant parts of the design plan. The faculties involved are functioning properly, but there is still no warrant; and the reason has to do with the local cognitive environment in which the belief is formed” (1993, 33). That is, the local cognitive environment involves deliberate deception.

Plantinga acknowledges that “credulity is modified by experience,” that “we learn to believe some people under some circumstances and disbelieve others under others” (1993, 33), but he fails to specify what sorts of circumstances provide the proper cognitive environment for credulous belief formation. In fact, as Richard Greene and N.A. Balmert point out, Plantinga fails, in general, “to provide us with a criterion by which we can identify the proper cognitive environment for a particular cognitive module” (1997, 137).

It isn’t possible to avoid the Gettier problem in the way Plantinga claims his theory of warrant does, that is, by noting that we are designed generally to believe what other people say and do only in environments free of deception, because no one lies, or indeed engages in any other form of deception, in either of Gettier’s examples. The “local cognitive environment” in Gettier’s “Jones owns a Ford or Brown is in Barcelona” example is free of any element of deception. The problem Gettier identifies with the traditional definition of knowledge as justified true belief does not appear to have anything to do with the “local cognitive environment,” unless contingency is considered inappropriate to that environment, since it just happens that Jones no longer owns a Ford and that Brown is in Barcelona. But contingency is a feature of almost every cognitive environment, and certainly of every cognitive environment in which the beliefs formed concern the nature of objective physical reality.

References

Gettier, E. 1963. Is justified true belief knowledge? Analysis 23, 121-23.
Greene, R. and Balmert, N.A. 1997. Two notions of warrant and Plantinga’s solution to the Gettier problem. Analysis 57, 132-39.
Plantinga, A. 1993. Warrant and Proper Function. Oxford University Press.
Plantinga, A. 2011. Where the Conflict Really Lies. Oxford University Press.

Sight and Light

We normally regard seeing as intimately connected with light. But must seeing involve light? Suppose you could step into a pitch-dark room and have precisely the experiences you would have if it were fully lighted. The room would thus look to you just as it would if fully lighted, and you could find any unobscured object by looking around for it. Would this not show that you can see in the dark? If so, then the presence of light is not essential to seeing.

However, the case does not establish quite this much. For seeing is a causal relation, and for all I have said you are just vividly hallucinating precisely the right things rather than seeing them. But suppose you are not hallucinating and that if someone covered a coin you see with lead or covered your eyes, you would no longer have a visual experience of a coin. In this case, it could be that somehow the coin affects your eyes through a mechanism other than light transmission, yet requiring an unobstructed path between the object seen and your eyes. Now it begins to seem that you are seeing. You are responding visually to stimuli that causally affect your eyes. Yet their doing so does not depend on the presence of light.

—Robert Audi

(This post was excerpted, with Audi’s permission, from his Epistemology: A Contemporary Introduction to the Theory of Knowledge. It’s an excellent example of how it is possible to make an interesting and even important philosophical point in very few words.)

More “Constancies” in Visual Perception

Mortar and Pestle, oil on canvas mounted on board, M.G. Piety, 2020

Philosophers, psychologists, and neurologists interested in the nature of visual perception long ago noticed a phenomenon that has come to be known as “size constancy.” Size constancy, according to the American Psychological Association, refers to the perception of an object “as being the same size despite the fact that the size of its retinal image varies depending on its distance from the observer.” That is, as objects approach us or recede from us, their image on our retina increases or decreases in size, respectively.
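
The geometry behind this is simple to state. For an object of height h viewed head-on from distance d, the visual angle θ it subtends, and hence the size of its retinal image, is given by the standard formula below (the symbols h and d are mine, not the APA’s):

\[
\theta = 2\arctan\!\left(\frac{h}{2d}\right) \approx \frac{h}{d} \quad \text{for } d \gg h
\]

Doubling the viewing distance thus roughly halves the retinal image, even though we continue to perceive the object as unchanged in size.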

Size constancy is only one aspect of what the APA refers to as “perceptual constancy.” Other aspects include what they refer to as “brightness constancy,” “color constancy,” and “shape constancy.” Each of these aspects of perceptual constancy has presented a challenge to visual artists who wish to work in the realist tradition. Shape constancy is addressed in teachings on “perspective.” A building may appear square to an observer, but artists need to learn that, as it recedes in space, the lines that comprise the outline of the receding portion move closer together.
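
A minimal pinhole-projection model shows why those lines converge; the focal length f and the coordinates (X, Y, Z) are illustrative assumptions, not part of the original discussion:

\[
(x, y) = \left(\frac{fX}{Z},\ \frac{fY}{Z}\right)
\]

Points along the receding wall lie at ever greater depths Z, so equal real-world spacings project to image spacings that shrink in proportion to 1/Z, drawing the wall’s parallel edges together toward a vanishing point.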

“Brightness constancy” and “color constancy” are less well understood. In fact, these designations are arguably misnomers. “Brightness” can be correctly predicated only of objects that actually emit light, and most objects don’t do that. “Value” refers to where on a scale of darkness or lightness a color, including black, falls; hence “brightness constancy” is more properly identified as “value constancy.” “Color constancy,” by the same token, is more properly identified as “hue constancy,” in that “colors,” as generally understood, can be broken down into multitudes of subcategories artists refer to as “hues.” Red, for example, can be broken down into a variety of hues, some falling under the category artists refer to as “warm” because they are closer to the yellow part of the spectrum (e.g., cadmium red and vermilion) and others under the category artists refer to as “cool” because they are closer to the blue part of the spectrum (e.g., alizarin crimson).

In fact, value and hue constancy overlap. The artist Frank Arcuri explains to his students that natural light is cool and that this means it “washes out” color, so the portion of an object on which light falls most directly will have a lower hue saturation than the surrounding area. Areas of an object that are lit directly are thus both lighter in value and different in hue. As the object turns away from the light, the color saturation increases, the color becoming first warmer and then, eventually, also darker.

Objects are also always illuminated by both direct light and the light reflected off other objects. This reflected light is influenced by the color of the objects off which it bounces, so even what appears to an untrained eye as a purely white object will, in fact, have portions where it is not purely white. The difficulty is that such objects will appear purely white to most observers because the brain “corrects” for these tiny variations in hue to produce a visual impression of a uniformly colored object.

Artists have been told for generations to “paint what you see.” It should be clear now, however, that such advice is counterproductive. What the untrained eye “sees” is something like an interpretation by the brain of the information provided to the visual cortex. Visual perception is a far more complex process than is generally appreciated. The process by which the brain produces visual experience needs to be deconstructed by those who wish to learn how to paint realistically, and this is not an easy task because the brain does its “interpretations” automatically.

Determinacy and the Self

That we create ourselves over time through the decisions we make is a widely accepted view of what has come to be understood as selfhood. We become increasingly concrete as we age. A natural inference from this is that there is more to the selves of adults than there is to the selves of children.

And yet, there is a sense in which this increasing concretion represents a diminution of the self. Possibilities fall away like bits of marble giving way to the sculptor’s chisel. In an important sense, we become smaller as we take on a determinate shape.

That’s part of the reason, I believe, that nearly everyone is nostalgic for childhood, independently of whether their childhood was particularly happy. The self of the child is an enormous, almost limitless collection of possibilities, a vast expanse in which the imagination of the child luxuriates in a way that the imagination of the adult cannot. Adults fantasize, of course, about becoming rich or famous, or about career changes, about becoming an artist, or musician, or dancer, but these possibilities, if they are genuine, are heavy with a weight of improbability that does not weigh down the imaginings of the child.

There is something godlike in the vastness of the self of the child. We become human, all too human as our selves take shape over time.