Chapter 1: The Science of the Mind
The Scope of Cognitive Psychology
When the field of cognitive psychology was first launched, it was broadly focused on the scientific study of knowledge, and this focus led immediately to a series of questions: How is knowledge acquired? How is knowledge retained so that it’s available when needed? How is knowledge used—whether as a basis for making decisions or as a means of solving problems?
These are great questions, and it’s easy to see that answering them might be quite useful. For example, imagine that you’re studying for next Wednesday’s exam, but for some reason the material just won’t “stick” in your memory. You find yourself wishing, therefore, for a better strategy to use in studying and memorizing. What would that strategy be? Is it possible to have a “better memory”? As a different case, let’s say that while you’re studying, your friend is moving around in the room, and you find this quite distracting. Why can’t you just shut out your friend’s motion? Why don’t you have better control over your attention and your ability to concentrate?
Here’s one more example: You’re looking to learn how many people have decided to vote for candidate X. How do people decide whom to vote for? For that matter, how do people decide what college to attend, or which car to buy, or even what to have for dinner? And how can we help people make better decisions-so that, for example, they choose healthier foods, or vote for the candidate who (in your view) is preferable?
Before we’re through, we’ll consider evidence pertinent to all of these questions. Let’s note, though, that in these examples, things aren’t going as you might have wished: You remember less than you want to; you can’t ignore a distraction; the voters make a choice you don’t like. What about the other side of the picture? What about the remarkable intellectual feats that humans achieve-brilliant deductions or creative solutions to complex problems? In this text, we’ll also discuss these cases and explore how people manage to accomplish the great things they do.
The Broad Role for Memory:
The questions we’ve mentioned so far might make it sound like cognitive psychology is concerned just with your functioning as an intellectual-your ability to remember, or to pay attention, or to think through options when making a choice. As we’ve said, though, the relevance of cognitive psychology is much broader-thanks to the fact that a huge range of your actions, thoughts, and feelings depend on your cognition. As one way to convey this point, let’s ask: When we investigate how memory functions, what’s at stake? Or, to turn this around, what aspects of your life depend on memory?
You obviously rely on memory when you’re taking an exam-memory for what you learned during the term. Likewise, you rely on memory when you’re at the supermarket and trying to remember the cheesecake recipe so that you can buy the right ingredients. You also rely on memory when you’re reminiscing about childhood. But what else draws on memory? Consider this simple story (adapted from Charniak, 1972):
Betsy wanted to bring Jacob a present. She shook her piggy bank. It made no sound. She went to look for her mother.
This four-sentence tale is easy to understand, but only because you provided important bits of background. For example, you weren’t at all puzzled about why Betsy was interested in her piggy bank; you weren’t puzzled, specifically, about why the story’s first sentence led to the second. This is because you naturally already knew (a) that the things one gives as presents are often things bought for the occasion (rather than things already owned), (b) that buying things requires money, and (c) that money is sometimes stored in piggy banks. Without these facts, you would have wondered why a desire to give a gift would lead someone to her piggy bank. (Surely you didn’t think Betsy intended to give the piggy bank itself as the present!) Likewise, you immediately understood why Betsy shook her piggy bank. You didn’t suppose that she was shaking it in frustration or to find out if it would make a good percussion instrument. Instead, you understood that she was trying to determine its contents. But you knew this fact only because you already knew (d) that Betsy was a child (because few adults keep their money in piggy banks), (e) that children don’t keep track of how much money is in their banks, and (f) that piggy banks are made out of opaque material (and so a child can’t simply look into the bank to see what’s inside). Without these facts, Betsy’s shaking of the bank would make no sense. Similarly, you understood what it meant that the bank made no sound. That’s because you know (g) that it’s usually coins (not bills) that are kept in piggy banks, and (h) that coins make noise when they’re shaken. If you didn’t know these facts, you might have interpreted the bank’s silence, when it was shaken, as good news, indicating perhaps that the bank was jammed full of $20 bills-an inference that would have led you to a very different expectation for how the story would unfold from there.
Of course, there’s nothing special about the “Betsy and Jacob” story, and we’d uncover a similar reliance on background knowledge if we explored how you understand some other narrative, follow a conversation, or comprehend a TV show. Our suggestion, in other words, is that many (perhaps all) of your encounters with the world depend on your supplementing your experience with knowledge that you bring to the situation. And perhaps this has to be true. After all, if you didn’t supply the relevant bits of background, then anyone telling the “Betsy and Jacob” story would need to spell out all the connections and all the assumptions. That is, the story would have to include all the facts that, with memory, are supplied by you. As a result, the story would have to be much longer, and the telling of it much slower. The same would be true for every story you hear, every conversation you participate in. Memory is thus crucial for each of these activities.
Amnesia and Memory Loss:
Here is a different sort of example: In Chapter 7, we will consider cases of clinical amnesia-cases in which someone, because of brain damage, has lost the ability to remember certain materials. These cases are fascinating at many levels and provide key insights into what memory is for. Without memory, what is disrupted?
H.M. was in his mid-20s when he had brain surgery intended to control his severe epilepsy. The surgery was, in a narrow sense, a success, and H.M.’s epilepsy was brought under control. But this gain came at an enormous cost, because H.M. essentially lost the ability to form new memories. He survived for more than 50 years after the operation, and for all those years he had little trouble remembering events prior to the surgery. But H.M. seemed completely unable to recall any event that occurred after his operation. If asked who the president is, or about recent events, he reported facts and events that were current at the time of the surgery. If asked questions about last week, or even an hour ago, he recalled nothing.
This memory loss had massive consequences for H.M.’s life, and some of the consequences are surprising. For example, he had an uncle he was very fond of, and he occasionally asked his hospital visitors how his uncle was doing. Unfortunately, the uncle died sometime after H.M.’s surgery, and H.M. was told this sad news. The information came as a horrible shock, but because of his amnesia, H.M. soon forgot about it.
Sometime later, because he’d forgotten about his uncle’s death, H.M. again asked how his uncle was doing and was again told of the death. But with no memory of having heard this news before, he was once more hearing it “for the first time,” with the shock and grief every bit as strong as it was initially. Indeed, each time he heard this news, he was hearing it “for the first time.” With no memory, he had no opportunity to live with the news, to adjust to it. As a result, his grief could not subside. Without memory, H.M. had no way to come to terms with his uncle’s death.
A different glimpse of memory function comes from some of H.M.’s comments about what it felt like to be in his situation. Let’s start here with the notion that for those of us without amnesia, numerous memories support our conception of who we are: We know whether we deserve praise for our good deeds or blame for our transgressions because we remember those good deeds and transgressions. We know whether we’ve kept our promises or achieved our goals because, again, we have the relevant memories. None of this is true for people who suffer from amnesia, and H.M. sometimes commented that in important ways, he didn’t know who he was. He didn’t know if he should be proud of his accomplishments or ashamed of his crimes; he didn’t know if he’d been clever or stupid, honorable or dishonest, industrious or lazy. In a sense, then, without a memory, there is no self. (For broader discussion, see Conway & Pleydell-Pearce, 2000; Hilts, 1995.)
What, then, is the scope of cognitive psychology? As we mentioned earlier, this field is sometimes defined as the scientific study of the acquisition, retention, and use of knowledge. We’ve now seen, though, that “knowledge” (and hence the study of how we gain and use knowledge) is relevant to a huge range of concerns. Our self-concept, it seems, depends on our knowledge (and, in particular, on our memory for various episodes in our past). Our emotional adjustments to the world rely on our memories. Even our ability to understand a simple story-or, presumably, our ability to understand any other experience-rests on supplementing that experience with some knowledge.
The suggestion, then, is that cognitive psychology can help us understand capacities relevant to virtually every moment of our lives. Activities that don’t appear to be intellectual would collapse without the support of our cognitive functioning. The same is true whether we’re considering our physical movements through the world, our social lives, our emotions, or any other domain. This is the scope of cognitive psychology and, in a real sense, the scope of this book.
Demonstration 1.1: The Broad Impact of Background Knowledge
The chapter emphasizes the important role of cognition for a range of activities that don’t seem, on the surface, to be deeply “intellectual.” The chapter uses a simple children’s story to make this point-highlighting how much knowledge you have to contribute in order to understand the story. Here is a different example, making a similar point.
Imagine the following bit of dialogue:
Person 1: Where were you on the night of December 4?
Person 2: Come on. You know I had nothing to do with this.
Person 1: Where were you on the night of December 4?
Person 2: I was home watching TV.
Person 1: Alone?
Person 2: Of course, I was alone.
Person 1: Want to have dinner with me next week?
Think about how you understood these seven lines of dialogue. By the end of the first line, were you already making inferences about who Person 1 might be (e.g., a police detective) and who Person 2 might be (e.g., a suspect)?
Why did Person 1 simply repeat the initial question? Do you believe Person 1 has a hearing problem?
Can you catalogue some of the other inferences and assumptions you made in understanding this dialogue?
Does the last line of dialogue (the dinner invitation) make sense? Why or why not? What knowledge are you using in deciding whether the invitation makes sense?
Demonstration 1.2: Understanding Depends on Background Knowledge
Consider the following simple story:
Fred went to his favorite restaurant.
After the waiter took his order, Fred heaved a great sigh. Another dinner alone.
Once again, though, he slogged through the meal and then slowly returned to his lonely apartment.
Take a moment and list the things that you now believe to be true about Fred and his life circumstances.
As a way of thinking this through, ask yourself questions like these:
Would it make sense if I learned that Fred was 5 years old?
Would it make sense if I learned that Fred was in a stable, contented relationship?
You might also think through what else you know about Fred’s time in the restaurant. Did he ever see a menu? Did he eat? Did he eat standing up or sitting down? Did he “slog” through the meal in the same way that biology students might slog through a marsh as part of a field study? (In other words, did he stomp on the meal, wearing waterproof boots?) Did Fred pay the check before he left?
All of these questions are likely to be enormously easy for you; but, once more, what knowledge are you using in answering them and, more broadly, in understanding this simple story?
The Cognitive Revolution:
The enterprise that we now call “cognitive psychology” is a bit more than 50 years old, and the emergence of this field was in some ways dramatic. Indeed, the science of psychology went through a succession of changes in the 1950s and 1960s that are often referred to as psychology’s “cognitive revolution.” This “revolution” involved a new style of research, aimed initially at questions we’ve already met: questions about memory, decision making, and so on. But this new type of research, and its new approach to theorizing, soon influenced other domains, with the result that the cognitive revolution dramatically changed the intellectual map of our field.
The cognitive revolution centered on two key ideas. One idea is that the science of psychology cannot study the mental world directly. A second idea is that the science of psychology must study the mental world if we’re going to understand behavior. As a path toward understanding these ideas, let’s look at two earlier traditions in psychology that offered a rather different perspective. Let’s emphasize, though, that our purpose here is not to describe the full history of modern cognitive psychology. That history is rich and interesting, but our goal is a narrow one-to explain why the cognitive revolution’s themes were as they were. (For readers interested in the history, see Bartlett, 1932; Benjamin, 2008; Broadbent, 1958; Malone, 2009; Mandler, 2011.)
The Limits of Introspection:
In the late 19th century, Wilhelm Wundt (1832-1920) and his student Edward Bradford Titchener (1867-1927) launched a new research enterprise, and according to many scholars it was their work that eventually led to the modern field of experimental psychology. In Wundt’s and Titchener’s view, psychology needed to focus largely on the study of conscious mental events-feelings, thoughts, perceptions, and recollections. But how should these events be studied? These early researchers started with the fact that there is no way for you to experience my thoughts, or I yours. The only person who can experience or observe your thoughts is you. Wundt, Titchener, and their colleagues concluded, therefore, that the only way to study thoughts is through introspection, or “looking within,” to observe and record the content of our own mental lives and the sequence of our own experiences.
Wundt and Titchener insisted, though, that this introspection could not be casual. Instead, introspectors had to be meticulously trained: They were given a vocabulary to describe what they observed; they were taught to be as careful and as complete as possible; and above all, they were trained simply to report on their experiences, with a minimum of interpretation.
This style of research was enormously influential for several years, but psychologists gradually became disenchanted with it, and it’s easy to see why. As one concern, these investigators soon had to acknowledge that some thoughts are unconscious, which meant that introspection was limited as a research tool. After all, by its very nature introspection is the study of conscious experiences, so of course it can tell us nothing about unconscious events.
Indeed, we now know that unconscious thought plays a huge part in our mental lives. For example, what is your middle name? Most likely, the moment you read this question, the name “popped” into your thoughts without any effort. But, in fact, there’s good reason to think that this simple bit of remembering requires a complex series of steps. These steps take place outside of awareness; and so, if we rely on introspection as our means of studying mental events, we have no way of examining these processes.
But there’s another, deeper problem with introspection. In order for any science to proceed there must be some way to test its claims; otherwise, we have no means of separating correct assertions from false ones, accurate descriptions of the world from fictions. Along with this requirement, science needs some way of resolving disagreements. If you claim that Earth has one moon and I insist that it has two, we need some way of determining who is right. Otherwise, our “science” will become a matter of opinion, not fact.
With introspection, this testability of claims is often unattainable. To see why, imagine that I insist my headaches are worse than yours. How could we ever test my claim? It might be true that I describe my headaches in extreme terms: I talk about them being “agonizing” and “excruciating.” But that might indicate only that I like to use extravagant descriptions; those words might reveal my tendency to exaggerate (or to complain), not the actual severity of my headaches. Similarly, it might be true that I need bed rest whenever one of my headaches strikes. Does that mean my headaches are truly intolerable? It might mean instead that I’m self-indulgent and rest even when I feel mild pain. Perhaps our headaches are identical, but you’re stoic about yours and I’m not.
How, therefore, should we test my claim about my headaches? What we need is some way of directly comparing my headaches to yours, and that would require transplanting one of my headaches into your experience, or vice versa. Then one of us could make the appropriate comparison. But (setting aside science fiction or fantasy) there’s no way to do this, leaving us, in the end, unable to determine whether my headache reports are distorted or accurate. We’re left, in other words, with the brute fact that our only information about my headaches is what comes through the filter of my description, and we have no way to know how (or whether) that filter is coloring the evidence.
For purposes of science, this is unacceptable. Ultimately, we do want to understand conscious experience, and so, in later chapters, we will consider introspective reports. For example, we’ll talk about the subjective feeling of “familiarity” and the conscious experience of mental imagery; in Chapter 14, we’ll talk about consciousness itself. In these settings, though, we’ll rely on introspective data as a source of observations that need to be explained. We won’t rely on introspection as a means of evaluating hypotheses-because, usually, we can’t. If we want to test hypotheses, we need data we can rely on, and, among other requirements, this means data that aren’t dependent on a particular descriptive style. Scientists generally achieve this objectivity by making sure the raw data are out in plain view, so that you can inspect my evidence, and I can inspect yours. In that way, we can be certain that neither of us is distorting facts. And that is precisely what we cannot do with introspection.
The Years of Behaviorism:
Historically, the concerns just described led many psychologists to abandon introspection as a research tool. Psychology couldn’t be a science, they argued, if it relied on this method. Instead, psychology needed objective data, and that meant data out in the open for all to observe.
What sorts of data does this allow? First, an organism’s behaviors are observable in the right way: You can watch my actions, and so can anyone else who is appropriately positioned. Therefore, data concerned with behavior are objective data and thus grist for the scientific mill. Likewise, stimuli in the world are in the same “objective” category: These are measurable, recordable, physical events.
In addition, you can arrange to record the stimuli I experience day after day after day and also the behaviors I produce each day. This means that you can record how the pattern of my behavior changes over time and with the accumulation of experience. In other words, my learning history can be objectively recorded and scientifically studied.
In contrast, my beliefs, wishes, goals, preferences, hopes, and expectations cannot be directly observed, cannot be objectively recorded. These “mentalistic” notions can be observed only via introspection; and introspection, we’ve suggested, has little value as a scientific tool. Therefore, a scientific psychology needs to avoid these invisible internal entities.
This perspective led to the behaviorist movement, a movement that dominated psychology in America for the first half of the 20th century. The movement was in many ways successful and uncovered a range of principles concerned with how behavior changes in response to various stimuli (including the stimuli we call “rewards” and “punishments”). By the late 1950s, however, psychologists were convinced that a lot of our behavior could not be explained in these terms. The reason, basically, is that the ways people act, and the ways they feel, are guided by how they understand or interpret the situation, and not by the objective situation itself. Therefore, if we follow the behaviorists’ instruction and focus only on the objective situation, we will often misunderstand why people are doing what they’re doing and make the wrong predictions about how they’ll behave in the future. To put this point another way, the behaviorist perspective demands that we not talk about mental entities such as beliefs, memories, and so on, because these things cannot be studied directly and so cannot be studied scientifically. Yet it seems that these subjective entities play a pivotal role in guiding behavior, and so we must consider them if we want to understand behavior.
Evidence pertinent to these assertions is threaded throughout the chapters of this book. Over and over, we’ll find it necessary to mention people’s perceptions and strategies and understanding, as we explain why (and how) they perform various tasks and accomplish various goals. Indeed, we’ve already seen an example of this pattern. Imagine that we present the “Betsy and Jacob” story to people and then ask various questions: Why did Betsy shake her piggy bank? Why did she go to look for her mother? People’s responses will surely reflect their understanding of the story, which in turn depends on far more than the physical stimulus-that is, the 29 syllables of the story itself. If we want to predict someone’s responses to these questions, therefore, we’ll need to refer to the stimulus (the story itself) and also to the person’s knowledge and understanding of this stimulus.
Here’s a different example that makes the same general point. Imagine you’re sitting in the dining hall. A friend produces this physical stimulus: “Pass the salt, please,” and you immediately produce a bit of salt-passing behavior. In this exchange, there is a physical stimulus (the words your friend uttered) and an easily defined response (your passing of the salt), and so this simple event seems fine from the behaviorists’ perspective-the elements are out in the open, for all to observe, and can be objectively recorded. But note that the event would have unfolded in the same way if your friend had offered a different stimulus. “Could I have the salt?” would have done the trick. Ditto for “Salt, please!” or “Hmm, this sure needs salt!” If your friend is both loquacious and obnoxious, the utterance might have been: “Excuse me, but after briefly contemplating the gustatory qualities of these comestibles, I have discerned that their sensory qualities would be enhanced by the addition of a number of sodium and chloride ions, delivered in roughly equal proportions and in crystalline form; could you aid me in this endeavor?” You might giggle (or snarl) at your friend, but you would still pass the salt.
Now let’s work on the science of salt-passing behavior. When is this behavior produced? We’ve just seen that the behavior is evoked by a number of different stimuli, and so we would surely want to ask: What do these stimuli have in common? If we can answer that question, we’re on our way to understanding why these stimuli all have the same effect.
The problem, though, is that if we focus on the observable, objective aspects of these stimuli, they actually have little in common. After all, the sounds being produced in that long statement about sodium and chloride ions are rather different from the sounds in the utterance “Salt, please!” And in many circumstances, similar sounds would not lead to salt-passing behavior. Imagine that your friend says, “Salt the pass” or “Sass the palt.” These are acoustically similar to “Pass the salt” but wouldn’t have the same impact. Or imagine that your friend says, “She has only a small part in the play. All she gets to say is ‘Pass the salt, please.” In this case, the right syllables were uttered, but you wouldn’t pass the salt in response.
It seems, then, that our science of salt passing won’t get very far if we insist on talking only about the physical stimulus. Stimuli that are physically different from each other (“Salt, please” and the bit about ions) have similar effects. Stimuli that are physically similar to each other (“Pass the salt” and “Sass the palt”) have different effects. Physical similarity, therefore, is not what unites the various stimuli that evoke salt passing.
It’s clear, though, that the various stimuli that evoke salt passing do have something in common: They all mean the same thing. Sometimes this meaning derives from the words themselves (“Please pass the salt”). In other cases, the meaning depends on certain pragmatic rules. (For example, you understand that the question “Could you pass the salt?” isn’t a question about arm strength, although, interpreted literally, it might be understood that way.) In all cases, though, it seems plain that to predict your behavior in the dining hall, we need to ask what these stimuli mean to you. This seems an extraordinarily simple point, but it is a point, echoed by countless other examples, that indicates the impossibility of a complete behaviorist psychology.
The Intellectual Foundation of the Cognitive Revolution:
One might think, then, that we’re caught in a trap. On one side, it seems that the way people act is shaped by how they perceive the situation, how they understand the stimuli, and so on. If we want to explain behavior, then, we have no choice. We need to talk about the mental world. But, on the other side, the only direct means of studying the mental world is introspection, and introspection is scientifically unworkable. Therefore: We need to study the mental world, but we can’t.
There is, however, a solution to this impasse, and it was suggested years ago by the philosopher Immanuel Kant (1724-1804). To use Kant’s transcendental method, you begin with the observable facts and then work backward from these observations. In essence, you ask: How could these observations have come about? What must be the underlying causes that led to these effects?
This method, sometimes called “inference to the best explanation,” is at the heart of most modern science. Physicists, for example, routinely use this method to study objects or events that cannot be observed directly. To take just one case, no physicist has ever observed an electron, but this hasn’t stopped physicists from learning a great deal about electrons. How do the physicists proceed? Even though electrons themselves aren’t observable, their presence often leads to observable results-in essence, visible effects from an invisible cause. For example, electrons leave observable tracks in cloud chambers, and they can produce momentary fluctuations in a magnetic field. Physicists can then use these observations in the same way a police detective uses clues-asking what the “crime” must have been like if it left this and that clue. (A size 11 footprint? That probably tells us what size feet the criminal has, even though no one saw his feet. A smell of tobacco smoke? That suggests the criminal was a smoker. And so on.) In the same way, physicists observe the clues that electrons leave behind, and from this information they form hypotheses about what electrons must be like in order to have produced those effects.
Of course, physicists (and other scientists) have a huge advantage over a police detective. If the detective has insufficient evidence, she can’t arrange for the crime to happen again in order to produce more evidence. (She can’t say to the robber, “Please visit the bank again, but this time don’t wear a mask.”) Scientists, in contrast, can arrange for a repeat of the “crime” they’re seeking to explain-they can arrange for new experiments, with new measures. Better still, they can set the stage in advance, to maximize the likelihood that the “culprit” (in our example, the electron) will leave useful clues behind. They can, for example, add new recording devices to the situation, or they can place various obstacles in the electron’s path. In this way, scientists can gather more and more data, including data crucial for testing the predictions of a particular theory. This prospect-of reproducing experiments and varying the experiments to test hypotheses-is what gives science its power. It’s what enables scientists to assert that their hypotheses have been rigorously tested, and it’s what gives scientists assurance that their theories are correct.
Psychologists work in the same fashion-and the notion that we could work in this fashion was one of the great contributions of the cognitive revolution. The idea is this: We know that we need to study mental processes; that’s what we learned from the limitations of classical behaviorism. But we also know that mental processes cannot be observed directly; we learned that from the downfall of introspection. Our path forward, therefore, is to study mental processes indirectly, relying on the fact that these processes, themselves invisible, have visible consequences: measurable delays in producing a response, performances that can be assessed for accuracy, errors that can be scrutinized and categorized. By examining these (and other) effects produced by mental processes, we can develop-and test-hypotheses about what the mental processes must have been. In this way, we use Kant’s method, just as physicists (or biologists or chemists or astronomers) do, to develop a science that does not rest on direct observation.
The Path from Behaviorism to the Cognitive Revolution:
In setting after setting, cognitive psychologists have applied the Kantian logic to explain how people remember, make decisions, pay attention, or solve problems. In each case, we begin with a particular performance-say, a problem that someone solved-and then hypothesize a series of unseen mental events that made the performance possible. But we don’t stop there. We also ask whether some other, perhaps simpler, sequence of events might explain the data. In other words, we do more than ask how the data came about; we seek the best way to think about the data.
This pattern of theorizing has become the norm in psychology-a powerful indication that the cognitive revolution did indeed change the entire field. But what triggered the revolution? What happened in the 1950s and 1960s that propelled psychology forward in this way? It turns out that multiple forces were in play.
One contribution came from within the behaviorist movement itself. We’ve discussed concerns about classical behaviorism, and some of those concerns were voiced early on by Edward Tolman (1886-1959)-a researcher who can be counted both as a behaviorist and as one of the forerunners of cognitive psychology. Prior to Tolman, most behaviorists argued that learning could be understood simply as a change in behavior. Tolman argued, however, that learning involved something more abstract: the acquisition of new knowledge.
In one of Tolman’s studies, rats were placed in a maze day after day. For the initial 10 days no food was available anywhere in the maze, and the rats wandered around with no pattern to their behavior. Across these days, therefore, there was no change in behavior-and so, according to the conventional view, no learning. But, in fact, there was learning, because the rats were learning the layout of the maze. That became clear on the 11th day of testing, when food was introduced into the maze in a particular location. The next day, the rats, placed back in the maze, ran immediately to that location. Indeed, their behavior was essentially identical to the behavior of rats who had had many days of training with food in the maze (Tolman, 1948; Gleitman, 1963).
What happened here? Across the initial 10 days, rats were acquiring what Tolman called a “cognitive map” of the maze. In the early days of the procedure, however, the rats had no motivation to use this knowledge. On Days 11 and 12, though, the rats gained a reason to use what they knew, and at that point they revealed their knowledge. The key point, though, is that-even for rats-we need to talk about (invisible) mental processes (e.g., the formation of cognitive maps) if we want to explain behavior.
A different spur to the cognitive revolution also arose out of behaviorism-but this time from a strong critique of behaviorism. B.F. Skinner (1904-1990) was an influential American behaviorist, and in 1957 he applied his style of analysis to humans’ ability to learn and use language, arguing that language use could be understood in terms of behaviors and rewards (Skinner, 1957). Two years later, the linguist Noam Chomsky (1928-) published a ferocious rebuttal to Skinner’s proposal, and convinced many psychologists that an entirely different approach was needed for explaining language learning and language use, and perhaps for other achievements as well.
European Roots of the Cognitive Revolution
Research psychology in the United States was, as we’ve said, dominated by the behaviorist movement for many years. The influence of behaviorism was not as strong, however, in Europe, and several strands of European research fed into and strengthened the cognitive revolution. In Chapter 3, we will describe some of the theorizing that grew out of the Gestalt psychology movement, an important movement based in Berlin in the early decades of the 20th century. (Many of the Gestaltists fled to the United States in the years leading up to World War II and became influential figures in their new home.) Overall, the Gestalt psychologists argued that behaviors, ideas, and perceptions are organized in a way that could not be understood through a part-by-part, element-by-element analysis of the world. Instead, they claimed, the elements take on meaning only as part of the whole-and therefore psychology needed to understand the nature of the “whole.” This position had many implications, including an emphasis on the role of the perceiver in organizing his or her experience. As we will see, this notion-that the perceiver shapes his or her own experience-is a central theme for modern cognitive psychology.
Another crucial figure was the British psychologist Frederic Bartlett (1886-1969). Although he was working in a very different tradition from the Gestalt psychologists, Bartlett also emphasized the ways in which each of us shapes and organizes our experience. Bartlett claimed that people spontaneously fit their experiences into a mental framework, or “schema,” and rely on this schema both to interpret the experience as it happens and to aid memory later on. We’ll say more about Bartlett’s work (found primarily in his book Remembering, published in 1932) in Chapter 8.
Computers and the Cognitive Revolution
Tolman, Chomsky, the Gestaltists, and Bartlett disagreed on many points. Even so, a common theme ran through their theorizing: These scholars all agreed that we could not explain humans’ (or even rats’) behavior unless we explain what is going on within the mind-whether our emphasis is on cognitive maps, schemata, or some other form of knowledge. But, in explaining this knowledge and how the knowledge is put to use, where should we begin? What sorts of processes or mechanisms might we propose?
Here we meet another crucial stream that fed into the cognitive revolution, because in the 1950s a new approach to psychological explanation became available and turned out to be immensely fruitful. This new approach was suggested by the rapid developments in electronic information processing, including developments in computer technology. It soon became clear that computers were capable of immensely efficient information storage and retrieval (“memory”), as well as performance that seemed to involve decision making and problem solving. Indeed, some computer scientists proposed that computers would soon be genuinely intelligent-and the field of “artificial intelligence” was launched and made rapid progress (e.g., Newell & Simon, 1959).
Psychologists were intrigued by these proposals and began to explore the possibility that the human mind followed processes and procedures similar to those used in computers. As a result, psychological data were soon being explained in terms of “buffers” and “gates” and “central processors,” terms borrowed from computer technology (e.g., Miller, 1956; Miller, Galanter, & Pribram, 1960). This approach was evident, for example, in the work of another British psychologist, Donald Broadbent (1926-1993). He was one of the earliest researchers to use the language of computer science in explaining human cognition. His work emphasized a succession of practical issues, including the mechanisms through which people focus their attention when working in complex environments, and his book Perception and Communication (1958) framed discussions of attention for many years.
This computer-based vocabulary allowed a new style of theorizing. Given a particular performance, say, in paying attention or on some memory task, one could hypothesize a series of information-processing events that made the performance possible. As we will see, hypotheses cast in these terms led psychologists to predict a broad range of new observations, and in this way both organized the available information and led to many new discoveries.
Research in Cognitive Psychology: The Diversity of Methods
Over the last half-century, cognitive psychologists have continued to frame many hypotheses in these computer-based terms. But we’ve also developed other options for theorizing. For example, before we’re done in this book, we’ll also discuss hypotheses framed in terms of the strategies a person is relying on, or the inferences she is making. No matter what the form of the hypothesis, though, the next steps are crucial. First, we derive new predictions from the hypothesis, along the lines of “If this is the mechanism behind the original findings, then things should work differently in this circumstance or that one.” Then, we gather new data to test those predictions. If the data fit with the predictions, this outcome confirms the hypothesis. If the data don’t line up with the predictions, a new hypothesis is needed.
But what methods do we use, and what sorts of data do we collect? The answer, in brief, is that we use diverse methods and collect many types of data. In other words, what unites cognitive psychology is not an allegiance to any particular procedure in the laboratory. Instead, what unites the field is the logic that underlies our research, no matter what method we use. (We discuss this logic more fully in the appendix for this textbook. The appendix contains a series of modules, with each module exploring an aspect of research methodology directly related to one of the book’s chapters.)
What sorts of data do we use? In some settings, we ask how well people perform a particular task. For example, in tests of memory we might ask how complete someone’s memory is (does the person remember all of the objects in view in a picture?) and also how accurate the memory is (does the person perhaps remember seeing a banana when, in truth, no banana was in view?). We can also ask how performance changes if we change the “input” (how well does the person remember a story rather than a picture?), and we can change the person’s circumstances (how is memory changed if the person is happy, or afraid, when hearing the story?). We can also manipulate the person’s plans or strategies (what happens if we teach the person some sort of memorization technique?), and we can compare different people (children vs. adults; novices at a task vs. experts; people with normal vision vs. people who have been blind since birth).
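The completeness and accuracy measures just described can be put in concrete terms with a short sketch. The item sets below are hypothetical, invented for illustration, not data from any actual study:

```python
# A minimal sketch of scoring a picture-memory test.
# "Completeness" asks how much of what was shown the person recalled;
# "false memories" are reported items that were never in view,
# like the banana in the example above.
shown = {"apple", "book", "lamp", "chair"}   # objects in the picture
reported = {"apple", "lamp", "banana"}       # what the person recalls

completeness = len(shown & reported) / len(shown)   # proportion recalled
false_memories = reported - shown                   # intrusions

print(completeness)     # 0.5
print(false_memories)   # {'banana'}
```

Separating the two scores matters because a person can be highly complete yet inaccurate (recalling everything shown, plus items that were never there), or accurate yet incomplete.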
A different approach relies on measurements of speed. The idea here is that mental operations are fast but do take a measurable amount of time, and by examining the response time (RT)-that is, how long someone needs to make a particular response-we can often gain important insights into what’s going on in the mind. For example, imagine that we ask you: “Yes or no: Do cats have whiskers?” And then: “Yes or no: Do cats have heads?” Both questions are absurdly easy, so there’s no point in asking whether you’re accurate in your responses-it’s a sure bet that you will be. We can, however, measure your response times to questions like these, often with intriguing results. For example, if you’re forming a mental picture of a cat when you’re asked these questions, you’ll be faster for the “heads” question than the “whiskers” question. If you think about cats without forming a mental picture, the pattern reverses-you’ll be faster for the “whiskers” question. In Chapter 11, we’ll use results like these to test hypotheses about how information-and mental pictures in particular-are represented and analyzed in your mind.
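The logic of an RT measurement can be sketched in a few lines. Here `answer_fn` is a hypothetical stand-in for the participant; in a real experiment, the clock would start when the question appears and stop at the participant’s keypress:

```python
import time

def timed_response(answer_fn, question):
    """Run one trial: return the answer given and the response time in seconds."""
    start = time.perf_counter()       # clock starts at question onset
    answer = answer_fn(question)      # stand-in for the participant responding
    rt = time.perf_counter() - start  # RT = time from onset to response
    return answer, rt

# Hypothetical trial: the "participant" answers one of the cat questions.
answer, rt = timed_response(lambda q: "yes", "Do cats have whiskers?")
```

Comparing mean RTs across conditions (e.g., “heads” vs. “whiskers” questions, with or without imagery instructions) is what lets researchers infer which mental representation was in use, even though the representation itself is never observed.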