Chapter 2: The Neural Basis for Cognition
Explaining Capgras Syndrome
We began this chapter with a description of Capgras syndrome, and we’ve offered an account of the mental processes that characterize this disorder. Specifically, we’ve suggested that someone with this syndrome is able to recognize a loved one’s face, but with no feeling of familiarity. Is this the right way to think about Capgras syndrome?
One line of evidence comes from neuroimaging techniques that enable researchers to take high-quality, three-dimensional “pictures” of living brains without in any way disturbing the brain’s owners. We’ll have more to say about neuroimaging later; but first, what do these techniques tell us about Capgras syndrome?

The Neural Basis for Capgras Syndrome
Some types of neuroimaging provide portraits of the physical makeup of the brain: What’s where? How are structures shaped or connected to each other? Are there structures present (such as tumors) that shouldn’t be there, or structures that are missing (because of disease or birth defects)? This information about structure was gained in older studies from positron emission tomography (more commonly referred to as a PET scan). More recent studies usually rely on magnetic resonance imaging (MRI; see Figure 2.1). These scans suggest a link between Capgras syndrome and abnormalities in several brain areas, indicating that our account of the syndrome will need to consider several elements (Edelstyn & Oyebode, 1999; also see O’Connor, Walbridge, Sandson, & Alexander, 1996).

One site of damage in Capgras patients is in the temporal lobe, particularly on the right side of the head (see Figure 2.2). This damage probably disrupts circuits involving the amygdala, an almond-shaped structure that, in the intact brain, seems to serve as an “emotional evaluator,” helping an organism detect stimuli associated with threat or danger (see Figure 2.3). The amygdala is also important for detecting positive stimuli: indicators of safety or of available rewards. With damaged amygdalae, therefore, people with Capgras syndrome won’t experience the warm sense of feeling good (and safe and secure) when looking at a loved one’s familiar face. This lack of an emotional response is probably why these faces don’t feel familiar to them, and it is fully in line with the two-systems hypothesis we’ve already sketched.

Patients with Capgras syndrome also have brain abnormalities in the frontal lobe, specifically in the right prefrontal cortex. What is this area’s normal function? To find out, we turn to a different neuroimaging technique, functional magnetic resonance imaging (fMRI), which enables us to track moment-by-moment activity levels in different sites in a living brain.
(We’ll say more about fMRI in a later section.) This technique allows us to answer such questions as: When a person is reading, which brain regions are particularly active? How about when a person is listening to music? With data like these, we can ask which tasks make heavy use of a brain area, and from that base we can draw conclusions about that brain area’s function.
Studies make it clear that the prefrontal cortex is especially active when a person is doing tasks that require planning or careful analysis. Conversely, this area is less active when someone is dreaming. Plausibly, this latter pattern reflects the absence of careful analysis of the dream material, which helps explain why dreams are often illogical or bizarre.
Relatedly, consider fMRI scans of patients suffering from schizophrenia (e.g., Silbersweig et al., 1995). Neuroimaging reveals diminished activity in the frontal lobes whenever these patients are experiencing hallucinations. One interpretation is that the diminished activity reflects a decreased ability to distinguish internal events (thoughts) from external ones (voices), or to distinguish imagined events from real ones (cf. Glisky, Polster, & Routhieaux, 1995).

How is all of this relevant to Capgras syndrome? With damage to the frontal lobe, Capgras patients may be less able to keep track of what is real and what is not, what is sensible and what is not. As a result, weird beliefs can emerge unchecked, including delusions (about robots and the like) that you or I would find totally bizarre.

What Do We Learn from Capgras Syndrome?

Other lines of evidence add to our understanding of Capgras syndrome (e.g., Ellis & Lewis, 2001; Ramachandran & Blakeslee, 1998). Some of the evidence comes from the psychology laboratory and confirms the suggestion that recognition of all stimuli (not just faces) involves two separate mechanisms: one that hinges on factual knowledge, and one that’s more “emotional” and tied to the warm sense of familiarity (see Chapter 7).
Note, then, that our understanding of Capgras syndrome depends on evidence drawn from cognitive psychology and from cognitive neuroscience. We use both perspectives to test (and, ultimately, to confirm) the hypothesis we’ve offered. In addition, just as both perspectives illuminate Capgras syndrome, both can be illuminated by the syndrome. That is, we can use Capgras syndrome (and other biological evidence) to illuminate broader issues about the nature of the brain and of the mind.
For example, Capgras syndrome suggests that the amygdala plays a crucial role in supporting the feeling of familiarity. Other evidence suggests that the amygdala also helps people remember the emotional events of their lives (e.g., Buchanan & Adolphs, 2004). Still other evidence indicates that the amygdala plays a role in decision making (e.g., Bechara, Damasio, & Damasio, 2003), especially for decisions that rest on emotional evaluations of one’s options. Facts like these tell us a lot about the various functions that make cognition possible and, more specifically, tell us that our theorizing needs to include a broadly useful “emotional evaluator,” involved in many cognitive processes. Moreover, Capgras syndrome tells us that this emotional evaluator works in a fashion separate from the evaluation of factual information, and this observation gives us a way to think about occasions in which your evaluation of the facts points toward one conclusion, while an emotional evaluation points toward a different conclusion. These are valuable clues as we try to understand the processes that support ordinary remembering or decision making. (For more on the role of emotion in decision making, see Chapter 12.)

What does Capgras syndrome teach us about the brain itself? One lesson involves the fact that many different parts of the brain are needed for even the simplest achievement. In order to recognize your father, for example, one part of your brain needs to store the factual memory of what he looks like. Another part of the brain is responsible for analyzing the visual input you receive when looking at a face. Yet another brain area has the job of comparing this now-analyzed input to the factual information provided from memory, to determine whether there’s a match. Another site provides the emotional evaluation of the input.
A different site presumably assembles the data from all these other sites-and registers the fact that the face being inspected does match the factual recollection of your father’s face, and also produces a warm sense of familiarity.
Usually, all these brain areas work together, allowing the recognition of your father’s face to go smoothly forward. If they don’t work together-that is, if coordination among these areas is disrupted-yet another area works to make sure you offer reasonable hypotheses about this disconnect, and not zany ones. (In other words, if your father looks less familiar to you on some occasion, you’re likely to explain this by saying, “I guess he must have gotten new glasses” rather than “I bet he’s been replaced by a robot.”) Unmistakably, this apparently easy task-seeing your father and recognizing who he is-requires multiple brain areas. The same is true of most tasks, and in this way Capgras syndrome illustrates this crucial aspect of brain function.

The Study of the Brain

In order to discuss Capgras syndrome, we needed to refer to different brain areas and had to rely on several different research techniques. In this way, the syndrome also illustrates another point: this is a domain in which we need some technical foundations before we can develop our theories. Let’s start building those foundations.
The human brain weighs (on average) a bit more than 3 pounds (roughly 1.4 kg), with male brains weighing about 10% more than female brains (Hartmann, Ramseier, Gudat, Mihatsch, & Polasek, 1994). The brain is roughly the size of a small melon, yet this compact structure has been estimated to contain 86 billion nerve cells (Azevedo et al., 2009). Each of these cells is connected to 10,000 or so others-for a total of roughly 860 trillion connections. The brain also contains a huge number of glial cells, and we’ll have more to say about all of these individual cells later on in the chapter. For now, though, how should we begin our study of this densely packed, incredibly complex organ?
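The connection count cited above follows from simple multiplication. Here is an illustrative back-of-the-envelope check (the figures are the rough estimates quoted in the text, not exact measurements):

```python
# Back-of-the-envelope check of the connection estimate cited in the text.
# Both figures are approximations, not exact counts.
neurons = 86_000_000_000          # ~86 billion nerve cells (Azevedo et al., 2009)
connections_per_neuron = 10_000   # each cell connects to roughly 10,000 others

total_connections = neurons * connections_per_neuron
print(f"{total_connections:.2e}")  # prints 8.60e+14, i.e., roughly 860 trillion
```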
One place to start is with a simple fact we’ve already met: that different parts of the brain perform different jobs. Scientists have known this fact about the brain for many years, thanks to clinical evidence showing that the symptoms produced by brain damage depend heavily on the location of the damage. In 1848, for example, a horrible construction accident caused Phineas Gage to suffer damage in the frontmost part of his brain (see Figure 2.4), and this damage led to severe personality and emotional problems. In 1861, physician Paul Broca noted that damage in a different location, on the left side of the brain, led to a disruption of language skills. In 1911, Édouard Claparède (1911/1951) reported his observations with patients who suffered from profound memory loss produced by damage in still another part of the brain. Clearly, therefore, we need to understand brain functioning with reference to brain anatomy. Where was the damage that Gage suffered? Where was the damage in Broca’s patients or Claparède’s? In this section, we fill in some basics of brain anatomy.

Hindbrain, Midbrain, Forebrain

The human brain is divided into three main structures: the hindbrain, the midbrain, and the forebrain. The hindbrain is located at the very top of the spinal cord and includes structures crucial for controlling key life functions. It’s here, for example, that the rhythm of heartbeats and the rhythm of breathing are regulated. The hindbrain also plays an essential role in maintaining the body’s overall tone. Specifically, the hindbrain helps maintain the body’s posture and balance; it also helps control the brain’s level of alertness. The largest area of the hindbrain is the cerebellum. For many years, investigators believed this structure’s main role was in the coordination of bodily movements and balance.
Research indicates, however, that the cerebellum plays various other roles and that damage to this organ can cause problems in spatial reasoning, in discriminating sounds, and in integrating the input received from various sensory systems (Bower & Parsons, 2003).
The midbrain has several functions. It plays an important part in coordinating movements, including the precise movements of the eyes as they explore the visual world. Also in the midbrain are circuits that relay auditory information from the ears to the areas in the forebrain where this information is processed and interpreted. Still other structures in the midbrain help to regulate the experience of pain.
For our purposes, though, the most interesting brain region (and, in humans, the largest region) is the forebrain. Drawings of the brain (like the one shown in Figure 2.2) show little other than the forebrain, because this structure surrounds (and so hides from view) the entire midbrain and most of the hindbrain. Of course, only the outer surface of the forebrain-the cortex-is visible in such pictures. In general, the word “cortex” (from the Latin word for “tree bark”) refers to an organ’s outer surface, and many organs each have their own cortex; what’s visible in the drawing, then, is the cerebral cortex. The cortex is just a thin covering on the outer surface of the forebrain; on average, it’s a mere 3 mm thick. Nonetheless, there’s a great deal of cortical tissue; by some estimates, the cortex makes up 80% of the human brain. This considerable volume is made possible by the fact that the cerebral cortex, thin as it is, consists of a large sheet of tissue. If stretched out flat, it would cover more than 300 square inches, or roughly 2,000 cm². (For comparison, this is an area roughly 20% greater than the area covered by an extra-large-18 inch, or 46 cm-pizza.) But the cortex isn’t stretched flat; instead, it’s crumpled up and jammed into the limited space inside the skull. It’s this crumpling that produces the brain’s most obvious visual feature-the wrinkles, or convolutions, that cover the brain’s outer surface.
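The pizza comparison above is easy to verify with a few lines of arithmetic. This is only an illustrative calculation using the round figures quoted in the text:

```python
import math

# Figures quoted in the text (both are rough estimates).
cortex_area_sq_in = 300        # flattened cortex: more than 300 square inches
pizza_diameter_in = 18         # an extra-large (18-inch) pizza

# Area of a circular pizza: pi * r^2.
pizza_area_sq_in = math.pi * (pizza_diameter_in / 2) ** 2

ratio = cortex_area_sq_in / pizza_area_sq_in
print(f"pizza: {pizza_area_sq_in:.0f} sq in; cortex/pizza ratio: {ratio:.2f}")
# prints: pizza: 254 sq in; cortex/pizza ratio: 1.18  (i.e., roughly 20% greater)
```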
Some of the “valleys” between the wrinkles are actually deep grooves that divide the brain into different sections. The deepest groove is the longitudinal fissure, running from the front of the brain to the back, which separates the left cerebral hemisphere from the right. Other fissures divide the cortex in each hemisphere into four lobes (again, look back at Figure 2.2), and these are named after the bones that cover them-bones that, as a group, make up the skull. The frontal lobes form the front of the brain, right behind the forehead. The central fissure divides the frontal lobes on each side of the brain from the parietal lobes, the brain’s topmost part. The bottom edge of the frontal lobes is marked by the lateral fissure, and below it are the temporal lobes. Finally, at the very back of the brain, connected to the parietal and temporal lobes, are the occipital lobes.

Subcortical Structures

Hidden from view, underneath the cortex, are several subcortical structures. One of these structures, the thalamus, acts as a relay station for nearly all the sensory information going to the cortex. Directly underneath the thalamus is the hypothalamus, a structure that plays a crucial role in controlling behaviors that serve specific biological needs-behaviors that include eating, drinking, and sexual activity.
Surrounding the thalamus and hypothalamus is another set of structures that form the limbic system. Included here is the amygdala, and close by is the hippocampus, both located underneath the cortex in the temporal lobe (plurals: amygdalae and hippocampi; see Figure 2.5). These structures are essential for learning and memory, and the patient H.M., discussed in Chapter 1, developed his profound amnesia after surgeons removed large portions of these structures-strong confirmation of their role in the formation of new memories.

We mentioned earlier that the amygdala plays a key role in emotional processing, and this role is reflected in many findings. For example, presentation of frightful faces causes high levels of activity in the amygdala (Williams et al., 2006). Likewise, people ordinarily show more complete, longer-lasting memories for emotional events, compared to similar but emotionally flat events. This memory advantage for emotional events is especially pronounced in people who showed greater activation in the amygdala while they were witnessing the event in the first place. Conversely, the memory advantage for emotional events is diminished (and may not be observed at all) in people who (through sickness or injury) have suffered damage to the amygdalae.

Lateralization

Virtually all parts of the brain come in pairs, and so there is a hippocampus on the left side of the brain and another on the right, a left-side amygdala and a right-side one. The same is true for the cerebral cortex itself: There is a temporal cortex (i.e., a cortex of the temporal lobe) in the left hemisphere and another in the right, a left occipital cortex and a right one, and so on. In all cases, cortical and subcortical, the left and right structures in each pair have roughly the same shape and the same pattern of connections to other brain areas.
Even so, there are differences in function between the left-side and right-side structures, with each left-hemisphere structure playing a somewhat different role from the corresponding right-hemisphere structure.
Let’s remember, though, that the two halves of the brain work together-the functioning of one side is closely integrated with that of the other side. This integration is made possible by the commissures, thick bundles of fibers that carry information back and forth between the two hemispheres. The largest commissure is the corpus callosum, but several other structures also make sure that the two brain halves work as partners in almost all mental tasks.
In certain cases, though, there are medical reasons to sever the corpus callosum and some of the other commissures. (For many years, this surgery was a last resort for extreme cases of epilepsy.) The person is then said to be a “split-brain patient”-still having both brain halves, but with communication between the halves severely limited. Research with these patients has taught us a great deal about the specialized functions of the brain’s two hemispheres. It has provided evidence, for example, that many aspects of language processing are lodged in the left hemisphere, while the right hemisphere seems crucial for a number of tasks involving spatial judgment (see Figure 2.6).

However, it’s important not to overstate the contrast between the two brain halves, and it’s misleading to claim (as some people do) that we need to silence our “left-brain thinking” in order to be more creative, or that intuitions grow out of “right-brain thinking.” These claims do begin with a kernel of truth, because some elements of creativity depend on specialized processing in the right hemisphere (see, e.g., Kounios & Beeman, 2015). Even so, whether we’re examining creativity or any other capacity, the two halves of the brain have to work together, with each hemisphere making its own contribution to the overall performance. Therefore, “shutting down” or “silencing” one hemisphere, even if that were biologically possible, wouldn’t allow you new achievements, because the many complex, sophisticated skills we each display (including creativity, intuition, and more) depend on the whole brain. In other words, our mental lives don’t involve one hemisphere trying to impose its style of thinking on the other. Instead, the hemispheres pool their specialized capacities to produce a seamlessly integrated, single mental self.
Demonstration 2.1: Brain Anatomy

Throughout this text, you’ll find demonstrations of research phenomena, so you can experience these effects firsthand. Here, though, is a different sort of interactive exercise: For readers trying to understand the brain’s anatomy, textbook drawings are useful, but limited. It’s helpful, therefore, to look at a three-dimensional model. For this purpose, an application called 3D Brain can be downloaded free from the iPhone or Android app stores.
An online version of the app is also available (http://www.g2conline.org/2022), and it features a pull-down menu on the left where you can choose to look either at the whole brain or at specific brain regions. On the right side, there’s a diamond-shaped set of controls that you can use to spin the image so you can examine the brain from whatever angle you choose. Many students find these varying views helpful as they try to understand the brain’s overall anatomy.

Sources of Evidence about the Brain

How can we learn about these various structures-and many others that we haven’t named? Cognitive neuroscience relies on many types of evidence to study the brain and nervous system. Let’s look at some of the options.

Data from Neuropsychology

We’ve already encountered one form of evidence-the study of individuals who have suffered brain damage through accident, disease, or birth defect. The study of these cases generally falls within the domain of neuropsychology: the study of the brain’s structures and how they relate to brain function. Within neuropsychology, the specialty of clinical neuropsychology seeks (among other goals) to understand the functioning of intact, undamaged brains by means of careful scrutiny of cases involving brain damage. Data drawn from clinical neuropsychology will be important throughout this text. For now, though, we’ll emphasize that the symptoms resulting from brain damage depend on the site of the damage. A lesion (a specific area of damage) in the hippocampus produces memory problems but not language disorders; a lesion in the occipital cortex produces problems in vision but spares the other sensory modalities. Likewise, the consequences of brain lesions depend on which hemisphere is damaged. Damage to the left side of the frontal lobe, for example, is likely to produce a disruption of language use; damage to the right side of the frontal lobe generally doesn’t have this effect. In obvious ways, then, these patterns confirm the claim that different brain areas perform different functions. In addition, these patterns provide a rich source of data that help us develop and test hypotheses about those functions.

Data from Neuroimaging

Further insights come from neuroimaging techniques. There are several types of neuroimaging, but they all produce precise, three-dimensional pictures of a living brain. Some neuroimaging procedures provide structural imaging, generating a detailed portrait of the shapes, sizes, and positions of the brain’s components. Other procedures provide functional imaging, which tells us about activity levels throughout the brain.
For many years, computerized axial tomography (CT scans) was the primary tool for structural imaging, and positron emission tomography (PET scans) was used to study the brain’s activity. CT scans rely on X-rays and so-in essence-provide a three-dimensional X-ray picture of the brain. PET scans, in contrast, start by introducing a tracer substance such as glucose into the patient’s body; the molecules of this tracer have been tagged with a low dose of radioactivity, and the scan keeps track of this radioactivity, allowing us to tell which tissues are using more of the glucose (the body’s main fuel) and which ones are using less.
For each type of scan, the primary data (X-rays or radioactive emissions) are collected by a bank of detectors placed around the person’s head. A computer then compares the signals received by each of the detectors and uses this information to construct a three-dimensional map of the brain-a map of structures from a CT scan, and a map showing activity levels from a PET scan.

More recent studies have turned to two newer techniques, introduced earlier in the chapter. Magnetic resonance imaging (MRI scans) relies on the magnetic properties of the atoms that make up the brain tissue, and it yields fabulously detailed pictures of the brain. MRI scans provide structural images, but a closely related technique, functional magnetic resonance imaging (fMRI scans), provides functional imaging. The fMRI scans measure the oxygen content in the blood flowing through each region of the brain; this turns out to be an accurate index of the level of neural activity in that region. In this way, fMRI scans offer an incredibly precise picture of the brain’s moment-by-moment activities.
The results of structural imaging (CT or MRI scans) are relatively stable, changing only if the person’s brain structure changes (because of an injury, perhaps, or the growth of a tumor). The results of PET or fMRI scans, in contrast, are highly variable, because the results depend on what task the person is performing. We can therefore use these latter techniques to explore brain function-using fMRI scans, for example, to determine which brain sites are especially activated when someone is making a moral judgment or trying to solve a logic problem. In this way, the neuroimaging data can provide crucial information about how these activities are made possible by specific patterns of functioning within the brain.

Data from Electrical Recording

Neuroscientists have another technique in their toolkit: electrical recording of the brain’s activity. To explain this point, though, we need to say a bit about how the brain functions. As mentioned earlier, the brain contains billions of nerve cells-called “neurons”-and it is the neurons that do the brain’s main work. (We’ll say more about these cells later in the chapter.) Neurons vary in their functioning, but for the most part they communicate with one another via chemical signals called “neurotransmitters.” Once a neuron is “activated,” it releases the transmitter, and this chemical can then activate (or, in some cases, de-activate) other, adjacent neurons. The adjacent neurons “receive” this chemical signal and, in turn, send their own signal onward to other neurons.
Let’s be clear, though, that the process we just described is communication between neurons: One neuron releases the transmitter substance, and this activates (or de-activates) another neuron. But there’s also communication within each neuron. The reason, basically, is that neurons have an “input” end and an “output” end. The “input” end is the portion of the neuron that’s most sensitive to neurotransmitters; this is where the signal from other neurons is received. The “output” end is the portion that releases neurotransmitters, sending the signal on to other neurons. These two ends can sometimes be far apart. (For example, some neurons in the body run from the base of the spine down to the toes; for these cells, the input and output ends might be a full meter apart.) The question, then, is how neurons get a signal from one end of the cell to the other. The answer involves an electrical pulse, made possible by a flow of charged atoms (ions) in and out of the neuron (again, we’ll say more about this process later in the chapter). The amount of electrical current involved in this ion flow is tiny; but many millions of neurons are active at the same time, and the current generated by all of them together is strong enough to be detected by sensitive electrodes placed on the surface of the scalp. This is the basis for electroencephalography-a recording of voltage changes occurring at the scalp that reflect activity in the brain underneath. This procedure generates an electroencephalogram (EEG)-a recording of the brain’s electrical activity.
Often, EEGs are used to study broad rhythms in the brain’s activity. For example, an alpha rhythm (with the activity level rising and falling seven to ten times per second) can usually be detected in the brain of someone who is awake but calm and relaxed; a delta rhythm (with the activity rising and falling roughly one to four times per second) is observed when someone is deeply asleep. A much faster gamma rhythm (between 30 and 80 cycles per second) has received a lot of research attention, with a suggestion that this rhythm plays a key role in creating conscious awareness (e.g., Crick & Koch, 1990; Dehaene, 2014).
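These rhythms are defined by how often the activity rises and falls each second (cycles per second, or hertz). As a small illustrative sketch, not part of the original text, the band boundaries quoted above can be written out directly (published definitions of the bands vary somewhat from source to source):

```python
# Illustrative mapping from frequency (in cycles per second, i.e., Hz) to the
# rhythm names used in the text; exact band boundaries vary across sources.
def classify_rhythm(frequency_hz: float) -> str:
    if 1 <= frequency_hz <= 4:
        return "delta"   # deep sleep
    if 7 <= frequency_hz <= 10:
        return "alpha"   # awake but calm and relaxed
    if 30 <= frequency_hz <= 80:
        return "gamma"   # proposed link to conscious awareness
    return "other"

print(classify_rhythm(9))    # prints: alpha
print(classify_rhythm(2))    # prints: delta
print(classify_rhythm(40))   # prints: gamma
```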
Sometimes, though, we want to know about the electrical activity in the brain over a shorter period-for example, when the brain is responding to a specific input or a particular stimulus. In this case, we measure changes in the EEG in the brief periods just before, during, and after the event. These changes are referred to as event-related potentials (see Figure 2.7).

The Power of Combining Techniques

Each of the research tools we’ve described has strengths and weaknesses. CT scans and MRI data tell us about the shape and size of brain structures, but they tell nothing about the activity levels within these structures. PET scans and fMRI studies do tell us about brain activity, and they can locate the activity rather precisely (within a millimeter or two). But these techniques are less precise about when the activity took place. For example, fMRI data summarize the brain’s activity over a period of several seconds and cannot indicate when exactly, within this time window, the activity took place. EEG data give more precise information about timing but are much weaker in indicating where the activity took place.
Researchers deal with these limitations by means of a strategy commonly used in science: We seek data from multiple sources, so that the strengths of one technique can make up for the shortcomings of another. As a result, some studies combine EEG recordings with fMRI scans, with the EEGs telling us when certain events took place in the brain, and the scans telling us where the activity took place. Likewise, some studies combine fMRI scans with CT data, so that findings about brain activation can be linked to a detailed portrait of the person’s brain anatomy.

Researchers also face another complication: the fact that many of the techniques described so far provide correlational data. To understand the concern here, let’s look at an example. A brain area called the fusiform face area (FFA) is especially active whenever a face is being perceived (see Figure 2.8)-and so there is a correlation between a mental activity (perceiving a face) and a pattern of brain activity. Does this mean the FFA is needed for face perception? A different possibility is that the FFA activation may just be a by-product of face perception and doesn’t play a crucial role. As an analogy, think about the fact that a car’s speedometer becomes “more activated” (i.e., shows a higher value) whenever the car goes faster. That doesn’t mean that the speedometer causes the speed or is necessary for the speed. The car would go just as fast and would, for many purposes, perform just as well if the speedometer were removed. The speedometer’s state, in other words, is correlated with the car’s speed but in no sense causes (or promotes, or is needed for) the car’s speed. In the same way, neuroimaging data can tell us that a brain area’s activity is correlated with a particular function, but we need other data to determine whether the brain site plays a role in causing (or supporting, or allowing) that function.
In many cases, those other data come from the study of brain lesions. If damage to a brain site disrupts a particular function, it’s an indication that the site does play some role in supporting that function. (And, in fact, the FFA does play an important role in face recognition.)
Also helpful here is a technique called transcranial magnetic stimulation (TMS). This technique creates a series of strong magnetic pulses at a specific location on the scalp, and these pulses activate the neurons directly underneath this scalp area (Helmuth, 2001). TMS can thus be used as a means of asking what happens if we stimulate certain neurons. In addition, because this stimulation disrupts the ordinary function of these neurons, it produces a (temporary) lesion-allowing us to identify, in essence, what functions are compromised when a particular bit of brain tissue is briefly “turned off.” In these ways, the results of a TMS procedure can provide crucial information about the functional role of that brain area.

Localization of Function

Drawing on the techniques we have described, neuroscientists have learned a great deal about the function of specific brain structures. This type of research effort is referred to as the localization of function, an effort (to put it crudely) aimed at figuring out what’s happening where within the brain.
Localization data are useful in many ways. For example, think back to the discussion of Capgras syndrome earlier in this chapter. Brain scans told us that people with this syndrome have damaged amygdalae, but how is this damage related to the symptoms of the syndrome? More broadly, what problems does a damaged amygdala create? To tackle these questions, we rely on localization of function-in particular, on data showing that the amygdala is involved in many tasks involving emotional appraisal. This combination of points helped us to build (and test) our claims about this syndrome and, in general, claims about the role of emotion within the ordinary experience of “familiarity.”

As a different illustration, consider the experience of calling up a “mental picture” before the “mind’s eye.” We’ll have more to say about this experience in Chapter 11, but we can already ask: How much does this experience have in common with ordinary seeing-that is, the processes that unfold when we place a real picture before someone’s eyes? As it turns out, localization data reveal enormous overlap between the brain structures needed for these two activities (visualizing and actual vision), telling us immediately that these activities do have a great deal in common (see Figure 2.9). So, again, we build on localization-this time to identify how exactly two mental activities are related to each other.

Demonstration 2.2: Temporary Fatigue-Created Brain Dysfunction

Here’s an exercise that you may be able to do immediately, or you may need to wait for a suitable opportunity. The chapter discusses the diverse consequences of brain damage-with the particular result depending on where in the brain the damage is located. It turns out, however, that other conditions can also lead to problems in brain functioning. When you’re extremely tired, for example, or ill, or intoxicated, you obviously don’t perform at your best. Of course, these conditions don’t involve permanent brain damage, but we can still ask: What aspects of your brain functioning are disrupted by these temporary states?
As an initial step, ask yourself: What kinds of things can I still do well when I’m extremely tired or ill? What things do I do badly when I’m tired or ill? Then, based on this catalogue of “symptoms,” try to describe the cognitive capacities that are disrupted by tiredness or illness. Next, based on this description, can you figure out what brain areas aren’t working at their best in these situations?

The Cerebral Cortex

As we’ve noted, the largest portion of the human brain is the cerebral cortex-the thin layer of tissue covering the cerebrum. This is the region in which an enormous amount of information processing takes place, and so, for many topics, it is the brain region of greatest interest for cognitive psychologists.
The cortex includes many distinct regions, each with its own function, but these regions are traditionally divided into three categories. Motor areas contain brain tissue crucial for organizing and controlling bodily movements. Sensory areas contain tissue essential for organizing and analyzing the information received from the senses. Association areas support many functions, including the essential (but not well-defined) human activity we call “thinking.”

Motor Areas

Certain regions of the cerebral cortex serve as the “departure points” for signals leaving the cortex and controlling muscle movement. Other areas are the “arrival points” for information coming from the eyes, ears, and other sense organs. In both cases, these areas are called “primary projection areas,” with the departure points known as the primary motor projection areas and the arrival points contained in regions known as the primary sensory projection areas. Evidence for the motor projection area comes from studies in which investigators apply mild electrical current to this area in anesthetized animals. This stimulation often produces specific movements, so that current applied to one site causes a movement of the left front leg, while current applied to a different site causes the ears to prick up. These movements show a pattern of contralateral control, with stimulation to the left hemisphere leading to movements on the right side of the body, and vice versa.
Why are these areas called “projection areas”? The term is borrowed from mathematics and from the discipline of map making, because these areas seem to form “maps” of the external world, with particular positions on the cortex corresponding to particular parts of the body or particular locations in space. In the human brain, the map that constitutes the motor projection area is located on a strip of tissue toward the rear of the frontal lobe, and the pattern of mapping is illustrated in Figure 2.10. In this illustration, a drawing of a person has been overlaid on a depiction of the brain, with each part of the little person positioned on top of the brain area that controls its movement. The figure shows that areas of the body that we can move with great precision (e.g., fingers and lips) have a lot of cortical area devoted to them; areas of the body over which we have less control (e.g., the shoulder and the back) receive less cortical coverage.

Sensory Areas

Information arriving from the skin senses (your sense of touch or your sense of temperature) is projected to a region in the parietal lobe, just behind the motor projection area. This is labeled the “somatosensory” area in Figure 2.10. If a patient’s brain is stimulated in this region (with electrical current or touch), the patient will typically report a tingling sensation in a specific part of the body. Figure 2.10 also shows the region (in the temporal lobes) that functions as the primary projection area for hearing (the “auditory” area). If the brain is directly stimulated here, the patient will hear clicks, buzzes, and hums. An area in the occipital lobes is the primary projection area for vision; stimulation here causes the patient to see flashes of light or visual patterns.
The sensory projection areas differ from each other in important ways, but they also have features in common-and they’re features that parallel the attributes of the motor projection area. First, each of these areas provides a “map” of the sensory environment. In the somatosensory area, each part of the body’s surface is represented by its own region on the cortex; areas of the body that are near to each other are typically represented by similarly nearby areas in the brain. In the visual area, each region of visual space has its own cortical representation, and adjacent areas of visual space are usually represented by adjacent brain sites. In the auditory projection area, different frequencies of sound have their own cortical sites, and adjacent brain sites are responsive to adjacent frequencies. Second, in each of these sensory maps, the assignment of cortical space is governed by function, not by anatomical proportions. In the parietal lobes, parts of the body that aren’t very discriminating with regard to touch-even if they’re physically large-get relatively little cortical area. Other, more sensitive areas of the body (the lips, tongue, and fingers) get much more space. In the occipital lobes, more cortical surface is devoted to the fovea, the part of the eyeball that is most sensitive to detail. (For more on the fovea, see Chapter 3.) And in the auditory areas, some frequencies of sound get more cerebral coverage than others. It’s surely no coincidence that these “advantaged” frequencies are those essential for the perception of speech.
Finally, we also find evidence here of contralateral connections. The somatosensory area in the left hemisphere, for example, receives its main input from the right side of the body; the corresponding area in the right hemisphere receives its input from the left side of the body. Likewise for the visual projection areas, although here the projection is not contralateral with regard to body parts. Instead, it’s contralateral with regard to physical space. Specifically, the visual projection area in the right hemisphere receives information from both the left eye and the right, but the information it receives corresponds to the left half of visual space (i.e., all of the things visible to your left when you’re looking straight ahead). The reverse is true for the visual area in the left hemisphere. It receives information from both eyes, but from only the right half of visual space. The pattern of contralateral organization is also evident-although not as clear-cut-for the auditory cortex, with roughly 60% of the nerve fibers from each ear sending their information to the opposite side of the brain.

Association Areas

The areas described so far, both motor and sensory, make up only a small part of the human cerebral cortex-roughly 25%. The remaining cortical areas are traditionally referred to as the association cortex. This terminology is falling out of use, however, partly because this large volume of brain tissue can be subdivided further on both functional and anatomical grounds. These subdivisions are perhaps best revealed by the diversity of symptoms that result if the cortex is damaged in one or another specific location. For example, some lesions in the frontal lobe produce apraxias, disturbances in the initiation or organization of voluntary action. Other lesions (generally in the occipital cortex, or in the rearmost part of the parietal lobe) lead to agnosias, disruptions in the ability to identify familiar objects.
Agnosias usually affect one modality only-so a patient with visual agnosia, for example, can recognize a fork by touching it but not by looking at it. A patient with auditory agnosia, by contrast, might be unable to identify familiar voices but might still recognize the face of the person speaking.
Still other lesions (usually in the parietal lobe) produce neglect syndrome, in which the individual seems to ignore half of the visual world. A patient afflicted with this syndrome will shave only half of his face and eat food from only half of his plate. If asked to read the word “parties,” he will read “ties,” and so on. Damage in other areas causes still other symptoms. We mentioned earlier that lesions in areas near the lateral fissure (the deep groove that separates the frontal and temporal lobes) can result in disruption to language capacities, a problem referred to as aphasia.
Finally, damage to the frontmost part of the frontal lobe, the prefrontal area, causes problems in planning and implementing strategies. In some cases, patients with damage here show problems in inhibiting their own behaviors, relying on habit even in situations for which habit is inappropriate. Frontal lobe damage can also (as we mentioned in our discussion of Capgras syndrome) lead to a variety of confusions, such as whether a remembered episode actually happened or was simply imagined. We’ll say more about these diagnostic categories-aphasia, agnosia, neglect, and more-in upcoming chapters, where we’ll consider these disorders in the context of other things that are known about object recognition, attention, and so on. Our point for the moment, though, is simple: These clinical patterns make it clear that the so-called association cortex contains many subregions, each specialized for a particular function, but with all of the subregions working together in virtually all aspects of our daily lives.

Demonstration 2.3: “Acuity” in the Somatosensory System

The text indicates that the amount of cortical tissue devoted to different body areas depends on the sensitivity of those body areas, rather than their size. For example, a lot of cortical tissue is devoted to your lips and fingertips, even though these body parts are relatively small. Less cortical tissue is devoted to your back, even though your back is quite large.
However, this raises a question: Just how sensitive (or insensitive) is your back? One way to tackle this issue is with a measure of two-point acuity. This measure asks the question: How far apart do two points have to be in order for you to perceive them as two separate points? In other words, how far apart do the points have to be for you to tell whether you’ve been touched in one location or in two?
For this demonstration, you’ll need a friend and a ruler. Pull up your shirt just enough so that your friend has access to the bare skin of your back. Have your friend (without warning) touch you with either one fingertip or with two. Can you tell the difference? Ask your friend to vary the touch from one trial to the next-sometimes one fingertip, sometimes two, and, when two, sometimes close together and sometimes farther apart. For the trials in which you do realize that you’re being touched with two fingertips, have your friend measure (with the ruler) how far apart the touches were. This distance gives you an assessment of two-point acuity.
Does your acuity depend on where on your back your friend touches you?
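If you want to keep a record of your trials, a sketch like the following shows one way to tally the results. All of the data here are invented for illustration; the simple rule used (the smallest separation you reported as “two”) is just one rough way to estimate a threshold:

```python
# Each trial records the separation between the two touches (in cm)
# and what you reported feeling ("one" point or "two").
trials = [
    (1.0, "one"), (2.0, "one"), (3.0, "one"),
    (4.0, "two"), (5.0, "two"), (6.0, "two"),
]

# Collect the separations that you correctly felt as two points...
felt_as_two = [sep for sep, report in trials if report == "two"]

# ...and take the smallest of them as a rough two-point acuity estimate.
threshold = min(felt_as_two)
print(threshold)   # -> 4.0 (for this invented data set)
```

With real data you would want many trials at each separation, but even this crude tally makes the back-versus-forearm comparison concrete.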
Now have your friend do the same procedure, but touching the inner surface of your forearm. (For this part of the procedure, close your eyes, so that you have to rely on your sense of touch rather than your view of what your friend is doing!) What is your two-point acuity on the forearm?

Brain Cells

Our brief tour so far has described some of the large-scale structures in the brain. For many purposes, though, we need to zoom in for a closer look, in order to see how the brain’s functions are actually carried out.

Neurons and Glia

We’ve already mentioned that the human brain contains many billions of neurons and a comparable number of glia. The glia perform many functions. They help to guide the development of the nervous system in the fetus and young infant; they support repairs if the nervous system is damaged; they also control the flow of nutrients to the neurons. Specialized glial cells also provide a layer of electrical insulation surrounding parts of some neurons; this insulation dramatically increases the speed with which neurons can send their signals. (We’ll return to this point in a moment.) Finally, some research suggests that the glia may also constitute their own signaling system within the brain, separate from the information flow provided by the neurons (e.g., Bullock et al., 2005; Gallo & Chittajallu, 2001). There is no question, though, that the main flow of information through the brain-from the sense organs inward, from one part of the brain to the others, and then from the brain outward-is made possible by the neurons. Neurons come in many shapes and sizes (see Figure 2.11), but in general, neurons have three major parts. The cell body is the portion of the cell that contains the neuron’s nucleus and all the elements needed for the normal metabolic activities of the cell. The dendrites are usually the “input” side of the neuron, receiving signals from many other neurons. In most neurons, the dendrites are heavily branched, like a thick and tangled bush.

The axon is the “output” side of the neuron; it sends neural impulses to other neurons (see Figure 2.12). Axons can vary enormously in length-the giraffe, for example, has neurons with axons that run the full length of its neck.

The Synapse

We’ve mentioned that communication from one neuron to the next is generally made possible by a chemical signal: When a neuron has been sufficiently stimulated, it releases a minute quantity of a neurotransmitter. The molecules of this substance drift across the tiny gap between neurons and latch on to the dendrites of the adjacent cell. If the dendrites receive enough of this substance, the next neuron will “fire,” and so the signal will be sent along to other neurons.
Notice, then, that neurons usually don’t touch each other directly. Instead, at the end of the axon there is a gap separating each neuron from the next. This entire site-the end of the axon, plus the gap, plus the receiving membrane of the next neuron-is called a synapse. The space between the neurons is the synaptic gap. The bit of the neuron that releases the transmitter into this gap is the presynaptic membrane, and the bit of the neuron on the other side of the gap, affected by the transmitters, is the postsynaptic membrane.
When the neurotransmitters arrive at the postsynaptic membrane, they cause changes in this membrane that enable certain ions to flow into and out of the postsynaptic cell (see Figure 2.13). If these ionic flows are relatively small, then the postsynaptic cell quickly recovers and the ions are transported back to where they were initially. But if the ionic flows are large enough, they trigger a response in the postsynaptic cell. In formal terms, if the incoming signal reaches the postsynaptic cell’s threshold, then the cell fires. That is, it produces an action potential-a signal that moves down its axon, which in turn causes the release of neurotransmitters at the next synapse, potentially causing the next cell to fire. In some neurons, the action potential moves down the axon at a relatively slow speed. For other neurons, specialized glial cells are wrapped around the axon, creating a layer of insulation called the myelin sheath (see Figure 2.12). Because of the myelin, ions can flow in or out of the axon only at the gaps between the myelin cells. As a result, the signal traveling down the axon has to “jump” from gap to gap, and this greatly increases the speed at which the signal is transmitted. For neurons without myelin, the signal travels at speeds below 10 m/s; for “myelinated” neurons, the speed can be ten times faster.
Overall, let’s emphasize four points about this sequence of events. First, let’s note once again that neurons depend on two different forms of information flow. Communication from one neuron to the next is (for most neurons) mediated by a chemical signal. In contrast, communication from one end of the neuron to the other (usually from the dendrites down the length of the axon) is made possible by an electrical signal, created by the flow of ions in and out of the cell.
Second, the postsynaptic neuron’s initial response can vary in size; the incoming signal can cause a small ionic flow or a large one. Crucially, though, once these inputs reach the postsynaptic neuron’s firing threshold, there’s no variability in the response-either a signal is sent down the axon or it is not. If the signal is sent, it is always of the same magnitude, a fact referred to as the all-or-none law. Just as pounding harder on a car horn won’t make the horn any louder, a stronger stimulus won’t produce a stronger action potential. A neuron either fires or it doesn’t; there’s no in-between. This does not mean, however, that neurons always send exactly the same information. A neuron can fire many times per second or only occasionally. A neuron can fire just once and then stop, or it can keep firing for an extended span. But, even so, each individual response by the neuron is always the same size.
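The logic of the all-or-none law can be sketched in a few lines of Python. This is only an illustration of the logic, not a biophysical model; the threshold and spike values are arbitrary numbers, not physiological quantities:

```python
def neuron_response(inputs, threshold=1.0, spike_size=1.0):
    """Sum the graded synaptic inputs; fire a fixed-size spike only if
    the total reaches threshold (the all-or-none law)."""
    if sum(inputs) >= threshold:
        return spike_size   # an action potential is always the same size
    return 0.0              # below threshold: no spike at all

print(neuron_response([0.3, 0.4]))   # -> 0.0 (total 0.7 is below threshold)
print(neuron_response([0.6, 0.6]))   # -> 1.0 (threshold reached)
print(neuron_response([2.0, 3.0]))   # -> 1.0 (much stronger input, same-size spike)
```

As the last two calls show, a stronger stimulus does not produce a bigger spike; what varies instead is how often the neuron fires.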
Third, we should also note that the brain relies on many different neurotransmitters. By some counts, a hundred transmitters have been catalogued so far, and this diversity enables the brain to send a variety of different messages. Some transmitters have the effect of stimulating subsequent neurons; some do the opposite and inhibit other neurons. Some transmitters play an essential role in learning and memory; others play a key role in regulating the level of arousal in the brain; still others influence motivation and emotion.
Fourth, let’s be clear about the central role of the synapse. The synaptic gap is actually quite small-roughly 20 to 30 nanometers across. (For contrast’s sake, the diameter of a human hair is roughly 80,000 nanometers.) Even so, transmission across this gap slows down the neuronal signal, but this is a tiny price to pay for the advantages created by this mode of signaling: Each neuron receives information from (i.e., has synapses with) many other neurons, and this allows the “receiving” neuron to integrate information from many sources. This pattern of many neurons feeding into one also makes it possible for a neuron to “compare” signals and to adjust its response to one input according to the signal arriving from a different input. In addition, communication at the synapse is adjustable. This means that the strength of a synaptic connection can be altered by experience, and this adjustment is crucial for the process of learning-the storage of new knowledge and new skills within the nervous system.

Coding

This discussion of individual neurons leads to a further question: How do these microscopic nerve cells manage to represent a specific idea or a specific content? Let’s say that right now you’re thinking about your favorite song. How is this information represented by neurons? The issue here is referred to as coding, and there are many options for what the neurons’ “code” might be (Gallistel, 2017). As one option, we might imagine that a specific group of neurons somehow represents “favorite song,” so that whenever you’re thinking about the song, it’s precisely these neurons that are activated. Or, as a different option, the song might be represented by a broad pattern of neuronal activity. If so, “favorite song” might be represented in the brain by something like “Neuron X firing strongly while Neuron Y is firing weakly and Neuron Z is not firing at all” (and so on for thousands of other neurons).

Note that within this scheme the same neurons might be involved in the representation of other sounds, but with different patterns. So-to continue our example-Neuron X might also be involved in the representation of the sound of a car engine, but for this sound it might be part of a pattern that includes Neurons Q, R, and S also firing strongly, and Neuron Y not firing at all. As it turns out, the brain uses both forms of coding. For example, in Chapter 4 we’ll see that some neurons really are associated with a particular content. In fact, researchers documented a cell in one of the people they tested that fired whenever a picture of Jennifer Aniston was in view, and didn’t fire in response to pictures of other faces. Another cell (in a different person’s brain) fired whenever a picture of the Sydney Opera House was shown, but didn’t fire when other buildings were in view (Quiroga, Reddy, Kreiman, Koch, & Fried, 2005)! These do seem to be cases in which an idea (in particular, a certain visual image) is represented by specific neurons in the brain.
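The pattern-coding idea can be made concrete with a toy example. The neuron names follow the text’s X, Y, Z (plus Q, R, S), but all of the firing-strength numbers are invented for illustration:

```python
# Each representation is a pattern of firing strengths over the SAME
# set of neurons; the content lives in the overall pattern, not in
# any single cell.
favorite_song = {"X": 0.9, "Y": 0.2, "Z": 0.0, "Q": 0.0, "R": 0.0, "S": 0.0}
car_engine    = {"X": 0.9, "Y": 0.0, "Z": 0.0, "Q": 0.8, "R": 0.7, "S": 0.9}

# Neuron X fires strongly in BOTH patterns...
assert favorite_song["X"] == car_engine["X"]

# ...yet the two overall patterns differ, so they represent different contents.
assert favorite_song != car_engine
```

The point of the two assertions is exactly the point in the text: knowing that one neuron is active tells you little by itself; the identity of the represented content is carried by the whole pattern.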
In other cases, evidence suggests that ideas and memories are represented in the brain through widespread patterns of activity. This sort of “pattern coding” is, for example, certainly involved in the neural mechanisms through which you plan, and then carry out, particular motions-like reaching for an object, or lifting your foot to step over an obstacle (Georgopoulos, 1990, 1995). We’ll return to pattern coding in Chapter 9, when we discuss the notion of a distributed representation.

Demonstration 2.4: The Speed of Neural Transmission

Neural transmission is fast, but it takes a measurable amount of time. In fact, you may be surprised by just how slow this transmission is. Here’s a way to assess this speed by examining your response times: Have a friend pinch a ruler at its top end, with the ruler hanging down from your friend’s grasp. Position your thumb and index finger, ready to pinch, right at the bottom edge of the ruler. Your fingers should be close to the ruler, but not touching it.
Keep your eyes on the ruler. Tell your friend to let go of the ruler whenever he’s ready, but to give you no warning. Your job is to pinch your thumb and index finger together as rapidly as you can to catch the ruler as it drops.
How far does the ruler drop before you grab it? Where on the ruler are your fingers when you catch it? Try to predict this outcome before you run the procedure. It turns out that most people need 150 to 200 milliseconds (msec) to see that the ruler is dropping and then catch it. In this time span, the ruler is likely to drop at least 4 inches, and probably a bit more. Use this formula to translate the drop-distance into time: time (in seconds) = √(2 × distance ÷ 386), with the distance measured in inches. (The value 386 is the rate of gravitational acceleration-that is, 9.8 meters per second per second, or roughly 386 inches per second per second.)
The time you calculate is the time needed for the neural signal to move from your eyes (when you see the ruler dropping) to the circuits in your brain that launch the finger movement, plus the time needed for you to launch the response (i.e., to initiate the movement in your fingers), plus the time needed for the neural command to travel from your brain to your fingers.
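The drop-distance-to-time conversion follows from the constant-acceleration relation distance = ½ × g × t², solved for t. A minimal Python version (the function name is ours):

```python
import math

def drop_time_ms(distance_inches, g=386.0):
    """Time (in milliseconds) for an object to free-fall the given
    distance, using g ~ 386 inches per second per second."""
    # distance = (1/2) * g * t**2  ->  t = sqrt(2 * distance / g)
    return 1000.0 * math.sqrt(2.0 * distance_inches / g)

# A 7-inch drop corresponds to roughly 190 msec of response time.
print(round(drop_time_ms(7)))   # -> 190
```

Note that a 4-inch drop works out to about 144 msec, which matches the text’s claim that a 150-to-200-msec response lets the ruler fall at least 4 inches.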
By the way, if you caught the ruler when it had fallen only 3 inches, you are extremely fast. (And perhaps you should try to repeat this fast performance, to rule out the possibility that you simply made a good guess about when the ruler would drop, and actually started your response before the ruler began to move!) Average scores are between 6 and 8 inches-a distance that suggests a response time of roughly a fifth of a second.

Moving On

We have now described the brain’s basic anatomy and have also taken a brief look at the brain’s microscopic parts-the individual neurons. But how do all of these elements, large and small, function in ways that enable us to think, remember, learn, speak, or feel? As a step toward tackling this issue, the next chapter takes a closer look at the portions of the nervous system that allow us to see. We’ll use the visual system as our example for two important reasons. First, vision is the modality through which humans acquire a huge amount of information, whether by reading or simply by viewing the world around us. If we understand vision, therefore, we understand the processes that bring us much of our knowledge. Second, investigators have made enormous progress in mapping out the neural “wiring” of the visual system, offering a sophisticated portrait of how this system operates. As a result, an examination of vision provides an excellent illustration of how the study of the brain can proceed and what it can teach us.

COGNITIVE PSYCHOLOGY AND EDUCATION: Food Supplements and Cognition

Various businesses try to sell you training programs or food supplements that (they claim) will improve your memory, help you think more clearly, and so on. Evidence suggests, though, that the currently offered training programs may provide little benefit. These programs do improve performance on the specific exercises contained within the training itself, but they have no impact on any tasks beyond these exercises.
In other words, the programs don’t seem to help with the sorts of mental challenges you encounter in day-to-day functioning (Simons et al., 2016).
What about food supplements? Most of these supplements have not been tested in any systematic way, and so there’s little (and often no) solid evidence to support the claims sometimes made for these products. One supplement, though, has been rigorously tested: Ginkgo biloba, an extract derived from a tree of the same name and advertised as capable of enhancing memory. Is Ginkgo biloba effective? To answer that question, let’s begin with the fact that for its normal functioning, the brain requires an excellent blood flow and, with that, a lot of oxygen and a lot of nutrients. Indeed, it’s estimated that the brain, constituting roughly 2% of your body weight, consumes 15% of your body’s energy supply. It’s not surprising, therefore, that the brain’s operations are impaired if some change in your health interferes with the flow of oxygen or nutrients. If (for example) you’re ill, or not eating enough, or not getting enough sleep, these conditions affect virtually all aspects of your biological functioning. However, since the brain is so demanding of nutrients and oxygen, it’s one of the first organs to suffer if the supply of these necessities is compromised. This is why poor nutrition or poor health almost inevitably undermines your ability to think, to remember, or to pay attention.
Within this context, it’s important that Ginkgo biloba can improve blood circulation and reduce some sorts of bodily inflammation. Because of these effects, Ginkgo can be helpful for people who have circulatory problems or who are at risk for nerve damage, and one group that may benefit is patients with Alzheimer’s disease. Evidence suggests that Ginkgo helps these patients remember more and think more clearly, but this isn’t because Ginkgo is making these patients “smarter” in any direct way. Instead, the Ginkgo is broadly improving the patients’ blood circulation and the health status of their nerve cells, allowing these cells to do their work.
What about healthy people-those not suffering from bodily inflammations or damage to their brain cells? Here, the evidence is mixed, but most studies have observed no benefit from this food supplement. Apparently, Ginkgo’s effects, if they exist at all in healthy adults, are so small that they’re difficult to detect.
Are there other steps that will improve the mental functioning of healthy young adults? Answers here have to be tentative, because new “smart pills” and “smart foods” are being proposed all the time, and each one has to be tested before we can know its effects. For now, though, we’ve already indicated part of a positive answer: Good nutrition, plenty of sleep, and adequate exercise will keep your blood supply in good condition, and this will help your brain to do its job. In addition, there may be something else you can do. The brain needs “fuel” to do its work, and the body’s fuel comes from the sugar glucose. You can protect yourself, therefore, by making sure that your brain has all the glucose it needs. This isn’t a recommendation to jettison all other aspects of your diet and eat nothing but chocolate bars. In fact, most of the glucose your body needs doesn’t come from sugary foods; instead, most comes from the breakdown of carbohydrates-from the grains, dairy products, fruits, and vegetables you eat. For this reason, it might be a good idea to have a slice of bread and a glass of milk just before taking an exam or walking into a particularly challenging class. These steps will help make sure that you’re not caught by a glucose shortfall that could interfere with your brain’s functioning. Also, be careful not to ingest too much sugar. If you eat a big candy bar just before an exam, you might get an upward spike in your blood glucose followed by a sudden drop, and these abrupt changes can produce problems of their own.

Overall, then, it seems that the food supplements tested so far offer no “fast track” toward better cognition. Ginkgo biloba is helpful, but mostly for special populations. A high-carb snack may help, but it will be of little value if you’re already adequately nourished. Therefore, on all these grounds, the best path toward better cognition seems to be the one that common sense would already recommend-eating a balanced diet, getting a good night’s sleep, and paying careful attention during your studies.

COGNITIVE PSYCHOLOGY AND THE LAW: Detecting Lies

It’s obvious that people sometimes lie, and sometimes they lie to the police. The police do all they can, of course, to detect this deception, but actually most people (including the police) aren’t very skilled in making this determination. This is one of the reasons that police investigations often make use of a device called the “polygraph” or, as it’s commonly called, the “lie detector.” This device (in combination with a procedure known as the Control Question Test, or CQT) relies on the idea that someone who’s lying is likely to become anxious about the lie and tense up. These emotional changes, even if carefully suppressed by the test subject, are associated with changes in the person’s breathing pattern, heart rate, blood pressure, and perspiration. The polygraph measures these changes, and in that way it tries to detect the lie.
Polygraph results are correct most of the time. The exact accuracy level, however, is difficult to calculate because (among other considerations) much depends on the skill level of the polygrapher (and, specifically, how the polygrapher conducts the “pre-polygraph interview”). One overall estimate, though, from the National Research Council, suggests that the CQT detects roughly 77% of the liars and falsely accuses only 16% of the truth tellers. Why does the CQT sometimes give inaccurate results-failing to detect lies, or indicating that people are lying when they’re not? One reason is straightforward: Sometimes liars are perfectly calm, so the polygraph will miss their lies; sometimes truth tellers are highly anxious, and the polygraph will pick up this tension. In addition, it’s often possible to “beat the test” by using certain strategies. One strategy involves the test subject engaging in fast-paced mental arithmetic during key parts of the test. The idea here is that the CQT examines the subject’s state when he’s asked crucial questions (e.g., “Did you rob the bank?”) in comparison to his state when he’s asked neutral questions (e.g., “Are you sitting down?”). If the test subject uses a strategy that increases his arousal during the neutral questions, this will blur any difference between his state during these questions and during the crucial questions-making his lies much harder to detect!
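The two figures quoted above (77% of liars detected, 16% of truth tellers falsely accused) can be combined with Bayes’ rule to ask a practical question: if the test flags someone as deceptive, how likely is it that the person really lied? The answer depends heavily on the base rate of lying among people tested, which the text does not give, so the base rates below are assumptions for illustration:

```python
def p_lying_given_flag(base_rate, hit_rate=0.77, false_alarm_rate=0.16):
    """Bayes' rule: probability the person is actually lying,
    given that the test flagged them as deceptive."""
    flagged = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)
    return hit_rate * base_rate / flagged

# If half the people tested are lying, a "deceptive" result is right ~83% of the time...
print(round(p_lying_given_flag(0.5), 2))   # -> 0.83
# ...but if only 1 in 10 is lying, it is right only about a third of the time.
print(round(p_lying_given_flag(0.1), 2))   # -> 0.35
```

This is why the same accuracy figures can look reassuring in one context and alarming in another: the meaning of a “deceptive” result depends on who is being tested.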