Remembering Panna


Prof. Panna Lal Pradhan with his late wife, Durga Devi Pradhan.
Receiving the Gorkha Dakshin Bahu Award from the late King Birendra for his service to the nation.
Receiving felicitation and blessings from his mother after getting the award from the king.
With the Nepali delegation during his visit to China, meeting Chairman Mao, the Chinese communist leader and founding father of the People’s Republic of China.
In conversation with Jawaharlal Nehru, the first Prime Minister of India, during his visit to India.

Panna Lal Pradhan (1932-2006) was the first Nepali psychologist, and he made significant contributions to the development of the educational system and higher education in Nepal.

This tribute blog post will be shared online on 25 February, on the occasion of his 11th death anniversary.

Thanks to Pratibha Pradhan Shrestha for sharing these valuable and historical pictures of an unsung hero!


Psych Bio: Daniel Kahneman


Daniel Kahneman is a Senior Scholar at the Woodrow Wilson School of Public and International Affairs. He is also Professor of Psychology and Public Affairs Emeritus at the Woodrow Wilson School, the Eugene Higgins Professor of Psychology Emeritus at Princeton University, and a fellow of the Center for Rationality at the Hebrew University in Jerusalem. Dr. Kahneman has held positions as professor of psychology at the Hebrew University in Jerusalem (1970-1978), the University of British Columbia (1978-1986), and the University of California, Berkeley (1986-1994). Dr. Kahneman is a member of the National Academy of Sciences, the American Philosophical Society, and the American Academy of Arts and Sciences, and a fellow of the American Psychological Association, the American Psychological Society, the Society of Experimental Psychologists, and the Econometric Society. He has received many awards, among them the Distinguished Scientific Contribution Award of the American Psychological Association (1982) and the Grawemeyer Prize (2002), both jointly with Amos Tversky, the Warren Medal of the Society of Experimental Psychologists (1995), the Hilgard Award for Career Contributions to General Psychology (1995), the Nobel Prize in Economic Sciences (2002), and the Lifetime Contribution Award of the American Psychological Association (2007). Dr. Kahneman holds honorary degrees from numerous universities.

Autobiography

Early years
I was born in Tel Aviv, in what is now Israel, in 1934, while my mother was visiting her extended family there; our regular domicile was in Paris. My parents were Lithuanian Jews, who had immigrated to France in the early 1920s and had done quite well. My father was the chief of research in a large chemical factory. But although my parents loved most things French and had some French friends, their roots in France were shallow, and they never felt completely secure. Of course, whatever vestiges of security they’d had were lost when the Germans swept into France in 1940. What was probably the first graph I ever drew, in 1941, showed my family’s fortunes as a function of time – and around 1940 the curve crossed into the negative domain.

I will never know if my vocation as a psychologist was a result of my early exposure to interesting gossip, or whether my interest in gossip was an indication of a budding vocation. Like many other Jews, I suppose, I grew up in a world that consisted exclusively of people and words, and most of the words were about people. Nature barely existed, and I never learned to identify flowers or to appreciate animals. But the people my mother liked to talk about with her friends and with my father were fascinating in their complexity. Some people were better than others, but the best were far from perfect and no one was simply bad. Most of her stories were touched by irony, and they all had two sides or more.

In one experience I remember vividly, there was a rich range of shades. It must have been late 1941 or early 1942. Jews were required to wear the Star of David and to obey a 6 p.m. curfew. I had gone to play with a Christian friend and had stayed too late. I turned my brown sweater inside out to walk the few blocks home. As I was walking down an empty street, I saw a German soldier approaching. He was wearing the black uniform that I had been told to fear more than others – the one worn by specially recruited SS soldiers. As I came closer to him, trying to walk fast, I noticed that he was looking at me intently. Then he beckoned me over, picked me up, and hugged me. I was terrified that he would notice the star inside my sweater. He was speaking to me with great emotion, in German. When he put me down, he opened his wallet, showed me a picture of a boy, and gave me some money. I went home more certain than ever that my mother was right: people were endlessly complicated and interesting.

My father was picked up in the first large-scale sweep for Jews, and was interned for six weeks in Drancy, which had been set up as a way station to the extermination camps. He was released through the intervention of his firm, which was directed (a fact I learned only from an article I read a few years ago) by the financial mainstay of the Fascist anti-Semitic movement in France in the 1930s. The story of my father’s release, which I never fully understood, also involved a beautiful woman and a German general who loved her. Soon afterward, we escaped to Vichy France, and stayed on the Riviera in relative safety, until the Germans arrived and we escaped again, to the center of France. My father died of inadequately treated diabetes, in 1944, just six weeks before the D-day he had been waiting for so desperately. Soon my mother, my sister, and I were free, and beginning to hope for the permits that would allow us to join the rest of our family in Palestine.

I had grown up intellectually precocious and physically inept. The ineptitude must have been quite remarkable, because during my last term in a French lycée, in 1946, my eighth-grade physical-education teacher blocked my inclusion in the Tableau d’Honneur – the Honor Roll – on the grounds that even his extreme tolerance had limits. I must also have been quite a pompous child. I had a notebook of essays, with a title that still makes me blush: “What I write of what I think.” The first essay, written before I turned eleven, was a discussion of faith. It approvingly quoted Pascal’s saying “Faith is God made perceptible to the heart” (“How right this is!”), then went on to point out that this genuine spiritual experience was probably rare and unreliable, and that cathedrals and organ music had been created to generate a more reliable, ersatz version of the thrills of faith. The child who wrote this had some aptitude for psychology, and a great need for a normal life.

Adolescence
The move to Palestine completely altered my experience of life, partly because I was held back a year and enrolled in the eighth grade for a second time – which meant that I was no longer the youngest or the weakest boy in the class. And I had friends. Within a few months of my arrival, I had found happier ways of passing time than by writing essays to myself. I had much intellectual excitement in high school, but it was induced by great teachers and shared with like-minded peers. It was good for me not to be exceptional anymore.

At age seventeen, I had some decisions to make about my military service. I applied to a unit that would allow me to defer my service until I had completed my first degree; this entailed spending the summers in officer-training school, and part of my military service using my professional skills. By that time I had decided, with some difficulty, that I would be a psychologist. The questions that interested me in my teens were philosophical – the meaning of life, the existence of God, and the reasons not to misbehave. But I was discovering that I was more interested in what made people believe in God than I was in whether God existed, and I was more curious about the origins of people’s peculiar convictions about right and wrong than I was about ethics. When I went for vocational guidance, psychology emerged as the top recommendation, with economics not too far behind.

I got my first degree from the Hebrew University in Jerusalem, in two years, with a major in psychology and a minor in mathematics. I was mediocre in math, especially in comparison with some of the people I was studying with – several of whom went on to become world-class mathematicians. But psychology was wonderful. As a first-year student, I encountered the writings of the social psychologist Kurt Lewin and was deeply influenced by his maps of the life space, in which motivation was represented as a force field acting on the individual from the outside, pushing and pulling in various directions. Fifty years later, I still draw on Lewin’s analysis of how to induce changes in behavior for my introductory lecture to graduate students at the Woodrow Wilson School of Public Affairs at Princeton. I was also fascinated by my early exposures to neuropsychology. There were the weekly lectures of our revered teacher Yeshayahu Leibowitz – I once went to one of his lectures with a fever of 41 degrees Celsius; they were simply not to be missed. And there was a visit by the German neurosurgeon Kurt Goldstein, who claimed that large wounds to the brain eliminated the capacity for abstraction and turned people into concrete thinkers. Furthermore, and most exciting, as Goldstein described them, the boundaries that separated abstract from concrete were not the ones that philosophers would have set. We now know that there was little substance to Goldstein’s assertions, but at the time the idea of basing conceptual distinctions on neurological observations was so thrilling that I seriously considered switching to medicine in order to study neurology. The Chief of Neurosurgery at the Hadassah Hospital, who was a neighbor, wisely talked me out of that plan by pointing out that the study of medicine was too demanding to be undertaken as a means to any goal other than practice.

The military experience
In 1954, I was drafted as a second lieutenant, and after an eventful year as a platoon leader I was transferred to the Psychology branch of the Israel Defense Forces. There, one of my occasional duties was to participate in the assessment of candidates for officer training. We used methods that had been developed by the British Army in the Second World War. One test involved a leaderless group challenge, in which eight candidates, with all insignia of rank removed and only numbers to identify them, were asked to lift a telephone pole from the ground and were then led to an obstacle, such as a 2.5-meter wall, where they were told to get to the other side of the wall without the pole touching either the ground or the wall, and without any of them touching the wall. If one of these things happened, they had to declare it and start again. Two of us would watch the exercise, which often took half an hour or more. We were looking for manifestations of the candidates’ characters, and we saw plenty: true leaders, loyal followers, empty boasters, wimps – there were all kinds. Under the stress of the event, we felt, the soldiers’ true nature would reveal itself, and we would be able to tell who would be a good leader and who would not. But the trouble was that, in fact, we could not tell. Every month or so we had a “statistics day,” during which we would get feedback from the officer-training school, indicating the accuracy of our ratings of candidates’ potential. The story was always the same: our ability to predict performance at the school was negligible. But the next day, there would be another batch of candidates to be taken to the obstacle field, where we would face them with the wall and see their true natures revealed. I was so impressed by the complete lack of connection between the statistical information and the compelling experience of insight that I coined a term for it: “the illusion of validity.” Almost twenty years later, this term made it into the technical literature (Kahneman and Tversky, 1973). It was the first cognitive illusion I discovered.

Closely related to the illusion of validity was another feature of our discussions about the candidates we observed: our willingness to make extreme predictions about their future performance on the basis of a small sample of behavior. In fact, the issue of willingness did not arise, because we did not really distinguish predictions from observations. The soldier who took over when the group was in trouble and led the team over the wall was a leader at that moment, and if we asked ourselves how he would perform in officer-training, or on the battlefield, the best bet was simply that he would be as good a leader then as he was now. Any other prediction seemed inconsistent with the evidence. As I understood clearly only when I taught statistics some years later, the idea that predictions should be less extreme than the information on which they are based is deeply counterintuitive.
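For readers who want the statistical step spelled out, here is a minimal sketch in Python, assuming standardized scores and an illustrative predictor-criterion correlation of r = .20 (a number chosen for the example, not one from the text):

```python
# Under a simple linear-regression model, the best (least-squares)
# prediction of a standardized criterion score is the observed
# predictor score shrunk toward the mean by the correlation r.

def regressive_prediction(observed_z: float, r: float) -> float:
    """Best linear prediction of a standardized criterion score."""
    return r * observed_z

# A candidate who is 2 SD above the mean at the obstacle wall should,
# with r = .20, be predicted only 0.4 SD above the mean at officer
# school: far less extreme than the evidence itself.
print(regressive_prediction(2.0, 0.20))  # 0.4
```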

The theme of intuitive prediction came up again, when I was given the major assignment for my service in the Unit: to develop a method for interviewing all combat-unit recruits, in order to screen the unfit and help allocate soldiers to specific duties. An interviewing system was already in place, administered by a small cadre of interviewers, mostly young women, themselves recent graduates from good high schools, who had been selected for their outstanding performance in psychometric tests and for their interest in psychology. The interviewers were instructed to form a general impression of a recruit and then to provide some global ratings of how well the recruit was expected to perform in a combat unit. Here again, the statistics of validity were dismal. The interviewers’ ratings did not predict with substantial accuracy any of the criteria in which we were interested.

My assignment involved two tasks: first, to figure out whether there were personality dimensions that mattered more in some combat jobs than in others, and then to develop interviewing guidelines that would identify those dimensions. To perform the first task, I visited units of infantry, artillery, armor, and others, and collected global evaluations of the performance of the soldiers in each unit, as well as ratings on several personality dimensions. It was a hopeless task, but I didn’t realize that then. Instead, spending weeks and months on complex analyses using a manual Monroe calculator with a rather iffy handle, I invented a statistical technique for the analysis of multi-attribute heteroscedastic data, which I used to produce a complex description of the psychological requirements of the various units. I was capitalizing on chance, but the technique had enough charm for one of my graduate-school teachers, the eminent personnel psychologist Edwin Ghiselli, to write it up in what became my first published article. This was the beginning of a lifelong interest in the statistics of prediction and description.

I had devised personality profiles for a criterion measure, and now I needed to propose a predictive interview. The year was 1955, just after the publication of “Clinical versus statistical prediction” (Meehl, 1954), Paul Meehl’s classic book in which he showed that clinical prediction was consistently inferior to actuarial prediction. Someone must have given me the book to read, and it certainly had a big effect on me. I developed a structured interview schedule with a set of questions about various aspects of civilian life, which the interviewers were to use to generate ratings about six different aspects of personality (including, I remember, such things as “masculine pride” and “sense of obligation”). Soon I had a near-mutiny on my hands. The cadre of interviewers, who had taken pride in the exercise of their clinical skills, felt that they were being reduced to unthinking robots, and my confident declarations – “Just make sure that you are reliable, and leave validity to me” – did not satisfy them. So I gave in. I told them that after completing “my” six ratings as instructed, they were free to exercise their clinical judgment by generating a global evaluation of the recruit’s potential in any way they pleased. A few months later, we obtained our first validity data, using ratings of the recruits’ performance as a criterion. Validity was much higher than it had been. My recollection is that we achieved correlations of close to .30, in contrast to about .10 with the previous methods. The most instructive finding was that the interviewers’ global evaluation, produced at the end of a structured interview, was by far the most predictive of all the ratings they made. Trying to be reliable had made them valid. The puzzles with which I struggled at that time were the seed of the paper on the psychology of intuitive prediction that Amos Tversky and I published much later.

The interview system has remained in use, with little modification, for many decades. And if it appears odd that a twenty-one-year-old lieutenant would be asked to set up an interviewing system for an army, one should remember that the state of Israel and its institutions were only seven years old at the time, that improvisation was the norm, and that professionalism did not exist. My immediate supervisor was a man with brilliant analytical skills, who had trained in chemistry but was entirely self-taught in statistics and psychology. And with a B.A. in the appropriate field, I was the best-trained professional psychologist in the military.

Graduate school years
I came out of the Army in 1956. The academic planners at the Hebrew University had decided to grant me a fellowship to obtain a PhD abroad, so that I would be able to return and teach in the psychology department. But they wanted me to acquire some additional polish before facing the bigger world. Because the psychology department had temporarily closed, I took some courses in philosophy, did some research, and read psychology on my own for a year. In January of 1958, my wife, Irah, and I landed at the San Francisco airport, where the now famous sociologist Amitai Etzioni was waiting to take us to Berkeley, to the Flamingo Motel on University Avenue, and to the beginning of our graduate careers.

My experience of graduate school was quite different from that of students today. The main landmarks were examinations, including an enormous multiple-choice test that covered all of psychology. (A long list of classic studies preceded by the question “Which of the following is not a study of latent learning?” comes to mind.) There was less emphasis on formal apprenticeship, and virtually no pressure to publish while in school. We took quite a few courses and read broadly. I remember a comment of Professor Rosenzweig’s on the occasion of my oral exam. I should enjoy my current state, he advised, because I would never again know as much psychology. He was right.

I was an eclectic student. I took a course on subliminal perception from Richard Lazarus, and wrote with him a speculative article on the temporal development of percepts, which was soundly and correctly rejected. From that subject I came to an interest in the more technical aspects of vision and I spent some time learning about optical benches from Tom Cornsweet. I audited the clinical sequence, and learned about personality tests from Jack Block and from Harrison Gough. I took classes on Wittgenstein in the philosophy department. I dabbled in the philosophy of science. There was no particular rhyme or reason to what I was doing, but I was having fun.

My most significant intellectual experience during those years did not occur in graduate school. In the summer of 1958, my wife and I drove across the United States to spend a few months at the Austen Riggs Clinic in Stockbridge, Massachusetts, where I studied with the well-known psychoanalytic theorist David Rapaport, who had befriended me on a visit to Jerusalem a few years earlier. Rapaport believed that psychoanalysis contained the elements of a valid theory of memory and thought. The core ideas of that theory, he argued, were laid out in the seventh chapter of Freud’s “Interpretation of Dreams,” which sketches a model of mental energy (cathexis). With the other young people in Rapaport’s circle, I studied that chapter like a Talmudic text, and tried to derive from it experimental predictions about short-term memory. This was a wonderful experience, and I would have gone back if Rapaport had not died suddenly later that year. I had enormous respect for his fierce mind. Fifteen years after that summer, I published a book entitled “Attention and Effort,” which contained a theory of attention as a limited resource. I realized only while writing the acknowledgments for the book that I had revisited the terrain to which Rapaport had first led me.

Austen Riggs was a major intellectual center for psychoanalysis, dedicated primarily to the treatment of dysfunctional descendants of wealthy families. I was allowed into the case conferences, which were normally scheduled on Fridays, usually to evaluate a patient who had spent a month of live-in observation at the clinic. Those attending would have received and read, the night before, a folder with detailed notes from every department about the person in question. There would be a lively exchange of impressions among the staff, which included the fabled Erik Erikson. Then the patient would come in for a group interview, which was followed by a brilliant discussion. On one of those Fridays, the meeting took place and was conducted as usual, despite the fact that the patient had committed suicide during the night. It was a remarkably honest and open discussion, marked by the contradiction between the powerful retrospective sense of the inevitability of the event and the obvious fact that the event had not been foreseen. This was another cognitive illusion to be understood. Many years later, Baruch Fischhoff wrote, under my and Amos Tversky’s supervision, a beautiful PhD thesis that illuminated the hindsight effect.

In the spring of 1961, I wrote my dissertation on a statistical and experimental analysis of the relations between adjectives in the semantic differential. This allowed me to engage in two of my favorite pursuits: the analysis of complex correlational structures and FORTRAN programming. One of the programs I wrote would take twenty minutes to run on the university mainframe, and I could tell whether it was working properly by the sequence of movement on the seven tape units that it used. I wrote the thesis in eight days, typing directly on the purple “ditto” sheets that we used for duplication at the time. That was probably the last time I wrote anything without pain. The paper itself, by sharp contrast, was so convoluted and dreary that my teacher, Susan Ervin, memorably described the experience of reading it as “wading through wet mush.” I spent the summer of 1961 in the ophthalmology department, doing research on contour interference. And then it was time to go home to Jerusalem, and start teaching in the psychology department at the Hebrew University.

Training to become a professional
I loved teaching undergraduates and I was good at it. The experience was consistently gratifying because the students were so good: they were selected on the basis of a highly competitive entrance exam, and most were easily PhD material. I took charge of the basic first-year statistics class and, for some years, taught both that course and the second-year course in research methods, which also included a large dose of statistics. To teach effectively I did a lot of serious thinking about valid intuitions on which I could draw and erroneous intuitions that I should teach students to overcome. I had no idea, of course, that I was laying the foundation for a program of research on judgment under uncertainty. Another course I taught, on the psychology of perception, also contributed quite directly to the same program.

I had learned a lot in Berkeley, but I felt that I had not been adequately trained to do research. I therefore decided that in order to acquire the basic skills I would need to have a proper laboratory and do regular science – I needed to be a solid short-order cook before I could aspire to become a chef. So I set up a vision lab, and over the next few years I turned out competent work on energy integration in visual acuity. At the same time, I was trying to develop a research program to study affiliative motivation in children, using an approach that I called a “psychology of single questions.” My model for this kind of psychology was research reported by Walter Mischel (1961a, 1961b) in which he devised two questions that he posed to samples of children in Caribbean islands: “You can have this (small) lollipop today, or this (large) lollipop tomorrow,” and “Now let’s pretend that there is a magic man … who could change you into anything that you would want to be, what would you want to be?” The answer to the latter question was scored 1 if it referred to a profession or to an achievement-related trait, and 0 otherwise. The responses to these lovely questions turned out to be plausibly correlated with numerous characteristics of the child and the child’s background. I found this inspiring: Mischel had succeeded in creating a link between an important psychological concept and a simple operation to measure it. There was (and still is) almost nothing like it in psychology, where concepts are commonly associated with procedures that can be described only by long lists or by convoluted paragraphs of prose.

I got quite nice results in my one-question studies, but never wrote up any of the work, because I had set myself impossible standards: in order not to pollute the literature, I wanted to report only findings that I had replicated in detail at least once, and the replications were never quite perfect. I realized only gradually that my aspirations demanded more statistical power and therefore much larger samples than I was intuitively inclined to run. This observation also came in handy some time later.
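To make the point about power concrete, here is a hedged simulation sketch in Python; the effect size (d = 0.5) and sample size (n = 20 per group) are illustrative assumptions, not numbers from the text:

```python
# Even when an effect is real, small samples make exact replications
# "fail" (p >= .05) most of the time.
import math
import random
import statistics

def replication_succeeds(n=20, d=0.5):
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(d, 1.0) for _ in range(n)]
    se = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
    t = (statistics.mean(b) - statistics.mean(a)) / se
    return abs(t) > 2.02  # two-sided .05 critical value, df ~ 38

random.seed(1)
runs = 2000
print(sum(replication_succeeds() for _ in range(runs)) / runs)  # ~0.33
```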

My achievements in research in these early years were quite humdrum, but I was excited by several opportunities to bring psychology to bear on the real world. For these tasks, I teamed up with a colleague and friend, Ozer Schild. Together, we designed a training program for functionaries who were to introduce new immigrants from underdeveloped countries, such as Yemen, to modern farming practices (Kahneman and Schild, 1966). We also developed a training course for instructors in the flight school of the Air Force. Our faith in the usefulness of psychology was great, but we were also well aware of the difficulties of changing behavior without changing institutions and incentives. We may have done some good, and we certainly learned a lot.

I had the most satisfying Eureka experience of my career while attempting to teach flight instructors that praise is more effective than punishment for promoting skill-learning. When I had finished my enthusiastic speech, one of the most seasoned instructors in the audience raised his hand and made his own short speech, which began by conceding that positive reinforcement might be good for the birds, but went on to deny that it was optimal for flight cadets. He said, “On many occasions I have praised flight cadets for clean execution of some aerobatic maneuver, and in general when they try it again, they do worse. On the other hand, I have often screamed at cadets for bad execution, and in general they do better the next time. So please don’t tell us that reinforcement works and punishment does not, because the opposite is the case.” This was a joyous moment, in which I understood an important truth about the world: because we tend to reward others when they do well and punish them when they do badly, and because there is regression to the mean, it is part of the human condition that we are statistically punished for rewarding others and rewarded for punishing them. I immediately arranged a demonstration in which each participant tossed two coins at a target behind his back, without any feedback. We measured the distances from the target and could see that those who had done best the first time had mostly deteriorated on their second try, and vice versa. But I knew that this demonstration would not undo the effects of lifelong exposure to a perverse contingency.
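A small simulation makes the logic of the coin-toss demonstration visible. This is a sketch of the setup described above, not code from any study:

```python
# Performance on both rounds is pure chance, yet the best performers
# of round 1 look worse on round 2 and the worst look better, with no
# praise or punishment involved.
import random

random.seed(0)
n = 1000
round1 = [abs(random.gauss(0, 1)) for _ in range(n)]  # distance from target
round2 = [abs(random.gauss(0, 1)) for _ in range(n)]

def avg(xs):
    return sum(xs) / len(xs)

order = sorted(range(n), key=lambda i: round1[i])
best, worst = order[: n // 4], order[-(n // 4):]

# Best quartile: tiny round-1 distances, merely average round-2 distances.
print(avg([round1[i] for i in best]), avg([round2[i] for i in best]))
# Worst quartile: large round-1 distances, merely average round-2 distances.
print(avg([round1[i] for i in worst]), avg([round2[i] for i in worst]))
```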

My first experience of truly successful research came in 1965, when I was on sabbatical leave at the University of Michigan, where I had been invited by Jerry Blum, who had a lab in which volunteer participants performed various cognitive tasks while in the grip of powerful emotional states induced by hypnosis. Dilation of the pupil is one of the manifestations of emotional arousal, and I therefore became interested in the causes and consequences of changes of pupil size. Blum had a graduate student called Jackson Beatty. Using primitive equipment, Beatty and I made a real discovery: when people were exposed to a series of digits they had to remember, their pupils dilated steadily as they listened to the digits, and contracted steadily when they recited the series. A more difficult transformation task (adding 1 to each of a series of four digits) caused a much larger dilation of the pupil. We quickly published these results, and within a year had completed four articles, two of which appeared in Science. Mental effort remained the focus of my research during the subsequent year, which I spent at Harvard. During that year, I also heard a brilliant talk on experimental studies of attention by a star English psychologist named Anne Treisman, who would become my wife twelve years later. I was so impressed that I committed myself to write a chapter on attention for a Handbook in Cognitive Psychology. The Handbook was never published, and my chapter eventually became a rather ambitious book. The work on vision that I did that year was also more interesting than the work I had been doing in Jerusalem. When I returned home in 1967, I was, finally, a well-trained research psychologist.

The collaboration with Amos Tversky
From 1968 to 1969, I taught a graduate seminar on the applications of psychology to real-world problems. In what turned out to be a life-changing event, I asked my younger colleague Amos Tversky to tell the class about what was going on in his field of judgment and decision-making. Amos told us about the work of his former mentor, Ward Edwards, whose lab was using a research paradigm in which the subject is shown two bookbags filled with poker chips. The bags are said to differ in their composition (e.g., 70:30 or 30:70 white/red). One of them is randomly chosen, and the participant is given an opportunity to sample successively from it, and required to indicate after each trial the probability that it came from the predominantly red bag. Edwards had concluded from the results that people are “conservative Bayesians”: they almost always adjust their probability estimates in the proper direction, but rarely far enough. A lively discussion developed around Amos’s talk. The idea that people were conservative Bayesians did not seem to fit with the everyday observation of people commonly jumping to conclusions. It also appeared unlikely that the results obtained in the sequential sampling paradigm would extend to the situation, arguably more typical, in which sample evidence is delivered all at once. Finally, the label of ‘conservative Bayesian’ suggested the implausible image of a process that gets the correct answer, then adulterates it with a bias. I learned recently that one of Amos’s friends met him that day and heard about our conversation, which Amos described as having severely shaken his faith in the neo-Bayesian idea. I do remember that Amos and I decided to meet for lunch to discuss our hunches about the manner in which probabilities are “really” judged. There we exchanged personal accounts of our own recurrent errors of judgment in this domain, and decided to study the statistical intuitions of experts.
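For readers unfamiliar with the bookbag paradigm, here is a minimal sketch of the Bayesian arithmetic, assuming the 70:30 and 30:70 compositions mentioned above:

```python
# Posterior probability that the chosen bag is the predominantly red
# one, updated chip by chip with Bayes' rule. Edwards's "conservative"
# subjects moved their estimates in this direction, but not this far.

def posterior_red_bag(draws: str, prior: float = 0.5) -> float:
    """P(bag is 70% red | draws), where 'r' = red chip, 'w' = white."""
    odds = prior / (1 - prior)
    for chip in draws:
        odds *= 0.7 / 0.3 if chip == "r" else 0.3 / 0.7
    return odds / (1 + odds)

# Eight reds and four whites: the normative answer is ~.97, far more
# extreme than typical subjects were willing to report.
print(round(posterior_red_bag("r" * 8 + "w" * 4), 3))  # 0.967
```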

I spent the summer of 1969 doing research at the Applied Psychological Research Unit in Cambridge, England. Amos stopped there for a few days on his way to the United States. I had drafted a questionnaire on intuitions about sampling variability and statistical power, which was based largely on my personal experiences of incorrect research planning and unsuccessful replications. The questionnaire consisted of a set of questions, each of which could stand on its own – this was to be another attempt to do psychology with single questions. Amos went off and administered the questionnaire to participants at a meeting of the Mathematical Psychology Association, and a few weeks later we met in Jerusalem to look at the results and write a paper.

The experience was magical. I had enjoyed collaborative work before, but this was something different. Amos was often described by people who knew him as the smartest person they knew. He was also very funny, with an endless supply of jokes appropriate to every nuance of a situation. In his presence, I became funny as well, and the result was that we could spend hours of solid work in continuous mirth. The paper we wrote was deliberately humorous – we described a prevalent belief in the “law of small numbers,” according to which the law of large numbers extends to small numbers as well. Although we never wrote another humorous paper, we continued to find amusement in our work – I have probably shared more than half of the laughs of my life with Amos.

And we were not just having fun. I quickly discovered that Amos had a remedy for everything I found difficult about writing. No wet-mush problem for him: he had an uncanny sense of direction. With him, movement was always forward. Progress might be slow, but each of the myriad of successive drafts that we produced was an improvement – this was not something I could take for granted when working on my own. Amos’s work was always characterized by confidence and by a crisp elegance, and it was a joy to find those characteristics now attached to my ideas as well. As we were writing our first paper, I was conscious of how much better it was than the more hesitant piece I would have written by myself. I don’t know exactly what it was that Amos found to like in our collaboration – we were not in the habit of trading compliments – but clearly he was also having a good time. We were a team, and we remained in that mode for well over a decade. The Nobel Prize was awarded for work that we produced during that period of intense collaboration.

At the beginning of our collaboration, we quickly established a rhythm that we maintained during all our years together. Amos was a night person, and I was a morning person. This made it natural for us to meet for lunch and a long afternoon together, and still have time to do our separate things. We spent hours each day, just talking. When Amos’s first son Oren, then fifteen months old, was told that his father was at work, he volunteered the comment “Aba talk Danny.” We were not only working, of course – we talked of everything under the sun, and got to know each other’s mind almost as well as our own. We could (and often did) finish each other’s sentences and complete the joke that the other had wanted to tell, but somehow we also kept surprising each other.

We did almost all the work on our joint projects while physically together, including the drafting of questionnaires and papers. And we avoided any explicit division of labor. Our principle was to discuss every disagreement until it had been resolved to mutual satisfaction, and we had tie-breaking rules for only two topics: whether or not an item should be included in the list of references (Amos had the casting vote), and who should resolve any issue of English grammar (my dominion). We did not initially have a concept of a senior author. We tossed a coin to determine the order of authorship of our first paper, and alternated from then on until the pattern of our collaboration changed in the 1980s.

One consequence of this mode of work was that all our ideas were jointly owned. Our interactions were so frequent and so intense that there was never much point in distinguishing between the discussions that primed an idea, the act of uttering it, and the subsequent elaboration of it. I believe that many scholars have had the experience of discovering that they had expressed (sometimes even published) an idea long before they really understood its significance. It takes time to appreciate and develop a new thought. Some of the greatest joys of our collaboration – and probably much of its success – came from our ability to elaborate each other’s nascent thoughts: if I expressed a half-formed idea, I knew that Amos would be there to understand it, probably more clearly than I did, and that if it had merit he would see it. Like most people, I am somewhat cautious about exposing tentative thoughts to others – I must first make sure that they are not idiotic. In the best years of the collaboration, this caution was completely absent. The mutual trust and the complete lack of defensiveness that we achieved were particularly remarkable because both of us – Amos even more than I – were known to be severe critics. Our magic worked only when we were by ourselves. We soon learned that joint collaboration with any third party should be avoided, because we became competitive in a threesome.

Amos and I shared the wonder of together owning a goose that could lay golden eggs – a joint mind that was better than our separate minds. The statistical record confirms that our joint work was superior to, or at least more influential than, the work we did individually (Laibson and Zeckhauser, 1998). Amos and I published eight journal articles during our peak years (1971-1981), of which five had been cited more than a thousand times by the end of 2002. Of our separate works, which in total number about 200, only Amos’s theory of similarity (Tversky, 1977) and my book on attention (Kahneman, 1973) exceeded that threshold. The special style of our collaborative work was recognized early by a referee of our first theoretical paper (on representativeness), who caused it to be rejected by Psychological Review. The eminent psychologist who wrote that review – his anonymity was betrayed years later – pointed out that he was familiar with the separate lines of work that Amos and I had been pursuing, and considered both quite respectable. However, he added the unusual remark that we seemed to bring out the worst in each other, and certainly should not collaborate. He found most objectionable our method of using multiple single questions as evidence – and he was quite wrong there as well.

The Science ’74 article and the rationality debate
From 1971 to 1972, Amos and I were at the Oregon Research Institute (ORI) in Eugene, a year that was by far the most productive of my life. We did a considerable amount of research and writing on the availability heuristic, on the psychology of prediction, and on the phenomena of anchoring and overconfidence – thereby fully earning the label “dynamic duo” that our colleagues attached to us. Working evenings and nights, I also completely rewrote my book on Attention and Effort, which went to the publisher that year, and remains my most significant independent contribution to psychology.

At ORI, I came into contact for the first time with an exciting community of researchers that Amos had known since his student days at Michigan: Paul Slovic, Sarah Lichtenstein, and Robyn Dawes. Lewis Goldberg was also there, and I learned much from his work on clinical and actuarial judgment, and from Paul Hoffman’s ideas about paramorphic modeling. ORI was one of the major centers of judgment research, and I had the occasion to meet quite a few of the significant figures of the field when they came visiting, Ken Hammond among them.

Some time after our return from Eugene, Amos and I settled down to review what we had learned about three heuristics of judgment (representativeness, availability, and anchoring) and about a list of a dozen biases associated with these heuristics. We spent a delightful year in which we did little but work on a single article. On our usual schedule of spending afternoons together, a day in which we advanced by a sentence or two was considered quite productive. Our enjoyment of the process gave us unlimited patience, and we wrote as if the precise choice of every word were a matter of great moment.

We published the article in Science because we thought that the prevalence of systematic biases in intuitive assessments and predictions could possibly be of interest to scholars outside psychology. This interest, however, could not be taken for granted, as I learned in an encounter with a well-known American philosopher at a party in Jerusalem. Mutual friends had encouraged us to talk about the research that Amos and I were doing, but almost as soon as I began my story he turned away, saying, “I am not really interested in the psychology of stupidity.”

The Science article turned out to be a rarity: an empirical psychological article that (some) philosophers and (a few) economists could and did take seriously. What was it that made readers of the article more willing to listen than the philosopher at the party? I attribute the unusual attention at least as much to the medium as to the message. Amos and I had continued to practice the psychology of single questions, and the Science article – like others we wrote – incorporated questions that were cited verbatim in the text. These questions, I believe, personally engaged the readers and convinced them that we were concerned not with the stupidity of Joe Public but with a much more interesting issue: the susceptibility to erroneous intuitions of intelligent, sophisticated, and perceptive individuals such as themselves. Whatever the reason, the article soon became a standard reference as an attack on the rational-agent model, and it spawned a large literature in cognitive science, philosophy, and psychology. We had not anticipated that outcome.

I realized only recently how fortunate we were not to have aimed deliberately at the large target we happened to hit. If we had intended the article as a challenge to the rational model, we would have written it differently, and the challenge would have been less effective. An essay on rationality would have required a definition of that concept, a treatment of boundary conditions for the occurrence of biases, and a discussion of many other topics about which we had nothing of interest to say. The result would have been less crisp, less provocative, and ultimately less defensible. As it was, we offered a progress report on our study of judgment under uncertainty, which included much solid evidence. All inferences about human rationality were drawn by the readers themselves.

The conclusions that readers drew were often too strong, mostly because existential quantifiers, as they are prone to do, disappeared in the transmission. Whereas we had shown that (some, not all) judgments about uncertain events are mediated by heuristics, which (sometimes, not always) produce predictable biases, we were often read as having claimed that people cannot think straight. The fact that men had walked on the moon was used more than once as an argument against our position. Because our treatment was mistakenly taken to be inclusive, our silences became significant. For example, the fact that we had written nothing about the role of social factors in judgment was taken as an indication that we thought these factors were unimportant. I suppose that we could have prevented at least some of these misunderstandings, but the cost of doing so would have been too high.

The interpretation of our work as a broad attack on human rationality – rather than as a critique of the rational-agent model – attracted much opposition, some quite harsh and dismissive. Some of the critiques were normative, arguing that we compared judgments to inappropriate normative standards (Cohen, 1981; Gigerenzer, 1991, 1996). We were also accused of spreading a tendentious and misleading message that exaggerated the flaws of human cognition (Lopes, 1991, and many others). The idea of systematic bias was rejected as unsound on evolutionary grounds (Cosmides & Tooby, 1996). Some authors dismissed the research as a collection of artificial puzzles designed to fool undergraduates. Numerous experiments were conducted over the years to show that cognitive illusions could “be made to disappear” and that heuristics had been invented to explain “biases that do not exist” (Gigerenzer, 1991). After participating in a few published skirmishes in the early 1980s, Amos and I adopted a policy of not criticizing the critiques of our work, although we eventually felt compelled to make an exception (Kahneman and Tversky, 1996).

A young colleague and I recently reviewed the experimental literature, and concluded that the empirical controversy about the reality of cognitive illusions dissolves when viewed in the perspective of a dual-process model (Kahneman and Frederick, 2002). The essence of such a model is that judgments can be produced in two ways (and in various mixtures of the two): a rapid, associative, automatic, and effortless intuitive process (sometimes called System 1), and a slower, rule-governed, deliberate and effortful process (System 2) (Sloman, 1996; Stanovich and West, 1999). System 2 “knows” some of the rules that intuitive reasoning is prone to violate, and sometimes intervenes to correct or replace erroneous intuitive judgments. Thus, errors of intuition occur when two conditions are satisfied: System 1 generates the error and System 2 fails to correct it. In this view, the experiments in which cognitive illusions were “made to disappear” did so by facilitating the corrective operations of System 2. They tell us little about the intuitive judgments that are suppressed.

If the controversy is so simply resolved, why was it not resolved in 1971, or in 1974? The answer that Frederick and I proposed refers to the conversational context in which the early work was done:

A comprehensive psychology of intuitive judgment cannot ignore such controlled thinking, because intuition can be overridden or corrected by self-critical operations, and because intuitive answers are not always available. But this sensible position seemed irrelevant in the early days of research on judgment heuristics. The authors of the “law of small numbers” saw no need to examine correct statistical reasoning. They believed that including easy questions in the design would insult the participants and bore the readers. More generally, the early studies of heuristics and biases displayed little interest in the conditions under which intuitive reasoning is preempted or overridden – controlled reasoning leading to correct answers was seen as a default case that needed no explaining. A lack of concern for boundary conditions is typical of “young” research programs, which naturally focus on demonstrating new and unexpected effects, not on making them disappear. (Kahneman and Frederick, 2002, p. 50).

What happened, I suppose, is that because the 1974 paper was influential it altered the context in which it was read in subsequent years. Its being misunderstood was a direct consequence of its being taken seriously. I wonder how often this occurs.

Amos and I always dismissed the criticism that our focus on biases reflected a generally pessimistic view of the human mind. We argued that this criticism confuses the medium of bias research with a message about rationality. This confusion was indeed common. In one of our demonstrations of the availability heuristic, for example, we asked respondents to compare the frequency with which some letters appeared in the first and in the third position in words. We selected letters that in fact appeared more frequently in the third position, and showed that even for these letters the first position was judged more frequent, as would be predicted by the idea that it is easier to search through a mental dictionary by the first letter. The experiment was used by some critics as an example of our own confirmation bias, because we had demonstrated availability only in cases in which this heuristic led to bias. But this criticism assumes that our aim was to demonstrate biases, and misses the point of what we were trying to do. Our aim was to show that the availability heuristic controls frequency estimates even when that heuristic leads to error – an argument that cannot be made when the heuristic leads to correct responses, as it often does.
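Out of curiosity, one can check the letter-position facts directly. The sketch below assumes a standard Unix word list at /usr/share/dict/words; note that a dictionary counts distinct words rather than the text frequencies the study relied on, so the output is illustrative only:

```python
# Count how often a letter occupies the first vs. the third position
# in a word list.

def position_counts(letter, path="/usr/share/dict/words"):
    first = third = 0
    with open(path) as f:
        for line in f:
            word = line.strip().lower()
            if word.startswith(letter):
                first += 1
            if len(word) >= 3 and word[2] == letter:
                third += 1
    return first, third

print(position_counts("k"))  # (first-position count, third-position count)
```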

There is no denying, however, that the name of our method and approach created a strong association between heuristics and biases, and thereby contributed to giving heuristics a bad name, which we did not intend. I recently came to realize that the association of heuristics and biases has affected me as well. In the course of an exchange of messages with Ralph Hertwig (no fan of heuristics and biases), I noticed that the phrase “judging by representativeness” was in my mind a label for a cluster of errors in intuitive statistical judgment. Judging probability by representativeness is indeed associated with systematic errors. But a large component of the process is the judgment of representativeness, and that judgment is often subtle and highly skilled. The feat of the master chess player who instantly recognizes a position as “white mates in three” is an instance of judgment of representativeness. The undergraduate who instantly recognizes that enjoyment of puns is more representative of a computer scientist than of an accountant is also exhibiting high skill in a social and cultural judgment. My long-standing failure to associate specific benefits with the concept of representativeness was a revealing mistake.

What did I learn from the controversy about heuristics and biases? Like most protagonists in debates, I have few memories of having changed my mind under adversarial pressure, but I have certainly learned more than I know. For example, I am now quick to reject any description of our work as demonstrating human irrationality. When the occasion arises, I carefully explain that research on heuristics and biases only refutes an unrealistic conception of rationality, which identifies it as comprehensive coherence. Was I always so careful? Probably not. In my current view, the study of judgment biases requires attention to the interplay between intuitive and reflective thinking, which sometimes allows biased judgments and sometimes overrides or corrects them. Was this always as clear to me as it is now? Probably not. Finally, I am now very impressed by the observation I mentioned earlier, that the most highly skilled cognitive performances are intuitive, and that many complex judgments share the speed, confidence and accuracy of routine perception. This observation is not new to me, but did it always loom as large in my views as it now does? Almost certainly not.

As my obvious struggle with this topic reveals, I thoroughly dislike controversies where it is clear that no minds will be changed. I feel diminished by losing my objectivity when in point-scoring mode, and downright humiliated when I get angry. Indeed, my phobia of professional anger is such that I have allowed myself for many years the luxury of refusing to referee papers that might arouse that emotion: if the tone is snide, or the review of the facts more tendentious than normal, I return the paper to the editor without commenting on it. I consider myself fortunate not to have had too many of the nasty experiences of professional quarrels, and am grateful for the occasional encounters with open minds across lines of sharp debate (Ayton, 1998; Klein, 2000).

Prospect theory
After the publication of our paper on judgment in Science in 1974, Amos suggested that we study decision-making together. This was a field in which he was already an established star, and about which I knew very little. For an introduction, he suggested that I read the relevant chapters of the text “Mathematical Psychology,” of which he was a co-author (Coombs, Dawes and Tversky, 1970). Utility theory and the paradoxes of Allais and Ellsberg were discussed in the book, along with some of the classic experiments in which major figures in the field had joined in an effort to measure the utility function for money by eliciting choices between simple gambles.

I learned from the book that the name of the game was the construction of a theory that would explain Allais’s paradox parsimoniously. As psychological questions go, this was not a difficult one, because Allais’s famous problems are, in effect, an elegant way to demonstrate that the subjective response to probability is not linear. The subjective non-linearity is obvious: the difference between probabilities of .10 and .11 is clearly less impressive than the difference between 0 and .01, or between .99 and 1.00. The difficulty and the paradox exist only for decision theorists, because the non-linear response to probability produces preferences that violate compelling axioms of rational choice and are therefore incompatible with standard expected utility theory. The natural response of a decision theorist to the Allais paradox, certainly in 1975 and probably even today, would be to search for a new set of axioms that have normative appeal and yet permit the non-linearity. The natural response of psychologists was to set aside the issue of rationality and to develop a descriptive theory of the preferences that people actually have, regardless of whether or not these preferences can be justified.
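One way to see the size of the non-linearity is to plug the probabilities into a weighting function. The sketch below uses the parametric form and estimate (gamma = 0.61) later published with cumulative prospect theory (Tversky and Kahneman, 1992); nothing here is from the 1979 paper itself:

```python
# w(p) = p^g / (p^g + (1-p)^g)^(1/g), an inverse-S-shaped weight.

def w(p: float, g: float = 0.61) -> float:
    return p ** g / (p ** g + (1 - p) ** g) ** (1 / g)

# Moving from 0 to .01 feels much bigger than moving from .10 to .11:
print(round(w(0.01) - w(0.0), 3))   # ~0.055
print(round(w(0.11) - w(0.10), 3))  # ~0.009
```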

The task we set for ourselves was to account for observed preferences in the quaintly restricted universe within which the debate about the theory of choice has traditionally been conducted: monetary gambles with few outcomes (all positive), and definite probabilities. This was an empirical question, and data were needed. Amos and I solved the data collection problem with a method that was both efficient and pleasant. We spent our hours together inventing interesting choices and examining our preferences. If we agreed on the same choice we provisionally assumed that other people would also accept it, and we went on to explore its theoretical implications. This unusual method enabled us to move quickly, and we constructed and discarded models at a dizzying rate. I have a distinct memory of a model that was numbered 37, but cannot vouch for the accuracy of our count.

As was the case in our work on judgment, our central insights were acquired early and, as was the case in our work on judgment, we spent a vast amount of time and effort before publishing a paper that summarized those insights (Kahneman and Tversky, 1979). The first insight came as a result of my naïveté. When reading the mathematical psychology textbook, I was puzzled by the fact that all the choice problems were described in terms of gains and losses (actually, almost always gains), whereas the utility functions that were supposed to explain the choices were drawn with wealth as the abscissa. This seemed unnatural, and psychologically unlikely. We immediately decided to adopt changes and/or differences as carriers of utility. We had no inkling that this obvious move was truly fundamental, or that it would open the path to behavioral economics. Harry Markowitz, who won the Nobel Prize in economics in 1990, had proposed changes of wealth as carriers of utility in 1952, but he did not take this idea very far.

The shift from wealth to changes of wealth as the carrier of utility is significant because of a property of preferences that we later labeled loss aversion: the response to losses is consistently much more intense than the response to corresponding gains, with a sharp kink in the value function at the reference point. Loss aversion is manifest in the extraordinary reluctance to accept risk that is observed when people are offered a gamble on the toss of a coin: most will reject a gamble in which they might lose $20, unless they are offered more than $40 if they win. The concept of loss aversion was, I believe, our most useful contribution to the study of decision making. The asymmetry between gains and losses solves quite a few puzzles, including the widely noted and economically irrational distinction that people draw between opportunity costs and ‘real’ losses. Loss aversion also helps explain why real-estate markets dry up for long periods when prices are down, and it contributes to the explanation of a widespread bias favoring the status quo in decision making. Finally, the asymmetric consideration of gains and losses extends to the domain of moral intuitions, in which imposing losses and failing to share gains are evaluated quite differently. But of course, none of that was visible to Amos and me when we first decided to assume a kinked value function – we needed that kink to account for choices between gambles.
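Here is a minimal sketch of such a kinked value function, using the parametric form and median estimates later reported for cumulative prospect theory (alpha = 0.88, lambda = 2.25); the specific numbers are assumptions for illustration, not part of the 1979 paper:

```python
ALPHA, LAMBDA = 0.88, 2.25

def value(x: float) -> float:
    """Value of a change x relative to the reference point."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# The coin-toss example from the text: a 50-50 gamble losing $20 or
# winning $G. With these parameters the gamble becomes attractive only
# when G exceeds about $50, more than twice the potential loss.
for gain in (30, 40, 50, 55):
    ev = 0.5 * value(gain) + 0.5 * value(-20)
    print(gain, round(ev, 2), "accept" if ev > 0 else "reject")
```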

Another set of early insights came when Amos suggested that we flip the signs of outcomes in the problems we had been considering. The result was exciting. We immediately detected a remarkable pattern, which we called “reflection”: changing the signs of all outcomes in a pair of gambles almost always caused the preference to change from risk averse to risk seeking, or vice versa. For example, we both preferred a sure gain of $900 over a .9 probability of gaining $1,000 (or nothing), but we preferred a gamble with a .9 probability of losing $1,000 over a sure loss of $900. We were not the first to observe this pattern. Raiffa (1968) and Williams (1966) knew about the prevalence of risk-seeking in the negative domain. But ours was apparently the first serious attempt to make something of it.
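
The reflection pattern follows from a value function that is concave for gains and convex (and steeper) for losses. The sketch below reproduces the $900/$1,000 example; the curvature and loss-aversion parameters are the illustrative estimates from the later Tversky and Kahneman (1992) paper, and probabilities are left unweighted to keep the demonstration minimal:

```python
# A minimal sketch of "reflection" with a curved value function.
# alpha = 0.88 and lam = 2.25 are the illustrative Tversky & Kahneman
# (1992) estimates; probabilities are deliberately left unweighted.

def value(x, alpha=0.88, lam=2.25):
    """Concave for gains, convex and steeper for losses."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

print(value(900) > 0.9 * value(1000))    # True: risk averse for gains
print(0.9 * value(-1000) > value(-900))  # True: risk seeking for losses
```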

We soon had a draft of a theory of risky choice, which we called “value theory” and presented at a conference in the spring of 1975. We then spent about three years polishing it, until we were ready to submit the article for publication. Our effort during those years was divided between the tasks of exploring interesting implications of our theoretical formulation and developing answers to all plausible objections. To amuse ourselves, we invented the specter of an ambitious graduate student looking for flaws, and we labored to make that student’s task as thankless as possible. The most novel idea of prospect theory occurred to us in that defensive context. It came quite late, as we were preparing the final version of the paper. We were concerned with the fact that a straightforward application of our model implied that the value of the prospect ($100, .01; $100, .01) is larger than the value of ($100, .02). The prediction is wrong, of course, because most decision makers will spontaneously transform the former prospect into the latter and treat them as equivalent in subsequent operations of evaluation and choice. To eliminate the problem we proposed that decision-makers, prior to evaluating the prospects, perform an editing operation that collects similar outcomes and adds their probabilities. We went on to propose several other editing operations that provided an explicit and psychologically plausible defense against a variety of superficial counter-examples to the core of the theory. We had succeeded in making life quite difficult for that pedantic graduate student. But we had also made a truly significant advance, by making it explicit that the objects of choice are mental representations, not objective states of the world. This was a large step toward the development of a concept of framing, and eventually toward a new critique of the model of the rational agent.
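
The combination operation itself is easy to state in code. Representing a prospect as a list of (outcome, probability) pairs is my own shorthand, not the paper’s notation:

```python
# A minimal sketch of the "combination" editing operation: identical
# outcomes are collected and their probabilities added before the
# prospect is evaluated.

from collections import defaultdict

def edit_combine(prospect):
    """Merge equal outcomes by summing their probabilities."""
    merged = defaultdict(float)
    for outcome, prob in prospect:
        merged[outcome] += prob
    return sorted(merged.items())

print(edit_combine([(100, 0.01), (100, 0.01)]))  # [(100, 0.02)]
# After editing, ($100, .01; $100, .01) and ($100, .02) are the same
# object of choice, so the theory no longer distinguishes them.
```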

When we were ready to submit the work for publication, we deliberately chose a meaningless name for our theory: “prospect theory.” We reasoned that if the theory ever became well known, having a distinctive label would be an advantage. This was probably wise.

I looked at the 1975 draft recently, and was struck by how similar it is to the paper that was eventually published, and also by how different the two papers are. Most of the key ideas, most of the key examples, and much of the wording were there in the early draft. But that draft lacks the authority that was gained during the years that we spent anticipating objections. “Value theory” would not have survived the close scrutiny that a significant article ultimately gets from generations of scholars and students, who are only obnoxious if you give them a chance.

We published the paper in Econometrica. The choice of venue turned out to be important; the identical paper, published in Psychological Review, would likely have had little impact on economics. But our decision was not guided by a wish to influence economics. Econometrica just happened to be the journal where the best papers on decision-making to date had been published, and we were aspiring to be in that company.

And there was another way in which the impact of prospect theory depended crucially on the medium, as well as the message. Prospect theory was a formal theory, and its formal nature was the key to the impact it had in economics. Every discipline of social science, I believe, has some ritual tests of competence, which must be passed before a piece of work is considered worthy of attention. Such tests are necessary to prevent information overload, and they are also important aspects of the tribal life of the disciplines. In particular, they allow insiders to ignore just about anything that is done by members of other tribes, and to feel no scholarly guilt about doing so. To serve this screening function efficiently, the competence tests usually focus on some aspect of form or method, and have little or nothing to do with substance. Prospect theory passed such a test in economics, and its observations became a legitimate (though optional) part of the scholarly discourse in that discipline. It is a strange and rather arbitrary process that selects some pieces of scientific writing for relatively enduring fame while committing most of what is published to almost immediate oblivion.

Framing and mental accounting
Amos and I completed prospect theory during the academic year of 1977 to 1978, which I spent at the Center for Advanced Study in the Behavioral Sciences at Stanford, while he was visiting the psychology department there. Around that time, we began work on our next project, which became the study of framing. This was also the year in which the second most important professional friendship in my life – with Richard Thaler – had its start.

A framing effect is demonstrated by constructing two transparently equivalent versions of a given problem, which nevertheless yield predictably different choices. The standard example of a framing problem, which was developed quite early, is the ‘lives saved, lives lost’ question, which offers a choice between two public-health programs proposed to deal with an epidemic that is threatening 600 lives: one program will save 200 lives, the other has a 1/3 chance of saving all 600 lives and a 2/3 chance of saving none. In this version, people prefer the program that will save 200 lives for sure. In the second version, one program will result in 400 deaths, the other has a 2/3 chance of 600 deaths and a 1/3 chance of no deaths. In this formulation most people prefer the gamble. If the same respondents are given the two problems on separate occasions, many give incompatible responses. When confronted with their inconsistency, people are quite embarrassed. They are also quite helpless to resolve the inconsistency, because there are no moral intuitions to guide a choice between different sizes of a surviving population.
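
A quick check makes the equivalence of the two versions explicit. The encoding of each program as final outcomes (survivors out of 600) with their probabilities is mine:

```python
# The 'lives saved' and 'lives lost' frames, re-expressed as final
# outcomes (number of survivors out of 600) with their probabilities.

def saved_frame():
    program_a = [(200, 1/1)]                 # "will save 200 lives"
    program_b = [(600, 1/3), (0, 2/3)]       # "1/3 chance of saving all 600"
    return program_a, program_b

def deaths_frame():
    program_c = [(600 - 400, 1/1)]                  # "400 deaths"
    program_d = [(600 - 0, 1/3), (600 - 600, 2/3)]  # "1/3 chance of no deaths"
    return program_c, program_d

print(saved_frame() == deaths_frame())  # True: identical final states
# Yet the "saved" frame elicits the sure option and the "deaths" frame
# elicits the gamble.
```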

Amos and I began creating pairs of problems that revealed framing effects while working on prospect theory. We used them to show sensitivity to gains and losses (as in the lives example), and to illustrate the inadequacy of a formulation in which the only relevant outcomes are final states. In that article, we also showed that a single-stage gamble could be rearranged as a two-stage gamble in a manner that left the bottom-line probabilities and outcomes unchanged but reversed preferences. Later, we developed examples in which respondents are asked to make simultaneous choices in two problems, A and B. One of the problems involves gains and elicits a risk-averse choice; the other problem involves losses and elicits risk-seeking. A majority of respondents made both these choices. However, the problems were constructed so that the combination of choices that people made was actually dominated by the combination of the options they had rejected.

These are not parlor-game demonstrations of human stupidity. The ease with which framing effects can be demonstrated reveals a fundamental limitation of the human mind. In a rational-agent model, the agent’s mind functions just as she would like it to function. Framing effects violate that basic requirement: the respondents who exhibit susceptibility to framing effects wish their minds were able to avoid them. We were able to conceive of only two kinds of mind that would avoid framing effects: (1) if responses to all outcomes and probabilities were strictly linear, the procedures that we used to produce framing effects would fail; (2) if individuals maintained a single canonical and all-inclusive view of their outcomes, truly equivalent problems would be treated equivalently. Both conditions are obviously impossible. Framing effects violate a basic requirement of rationality, which we called invariance (Kahneman and Tversky, 1984) and which Arrow (1982) called extensionality. It took us a long time and several iterations to develop a forceful statement of this contribution to the rationality debate, which we presented several years after our framing paper (Tversky and Kahneman, 1986).

Another advance that we made in our first framing article was the inclusion of riskless choice problems among our demonstrations of framing. In making that move, we had help from a new friend. Richard Thaler was a young economist, blessed with a sharp and irreverent mind. While still in graduate school, he had trained his ironic eye on his own discipline and had collected a set of pithy anecdotes demonstrating obvious failures of basic tenets of economic theory in the behavior of people in general – and of his very conservative professors in Rochester in particular. One key observation was the endowment effect, which Dick illustrated with the example of the owner of a bottle of old wine, who would refuse to sell it for $200 but would not pay as much as $100 to replace it if it broke. Sometime in 1976, a copy of the 1975 draft of prospect theory got into Dick’s hands, and that event made a significant difference to our lives. Dick realized that the endowment effect, which is a genuine puzzle in the context of standard economic theory, is readily explained by two assumptions derived from prospect theory. First, the carriers of utility are not states (owning or not owning the wine), but changes – getting the wine or giving it up. Second, giving up is weighted more heavily than getting, by loss aversion. When Dick learned that Amos and I would be at Stanford in 1977/8, he secured a visiting appointment at the Stanford branch of the National Bureau of Economic Research, which is located on the same hill as the Center for Advanced Study. We soon became friends, and have ever since had a considerable influence on each other’s thinking.

The endowment effect was not the only thing we learned from Dick. He had also developed a list of the phenomena we now call “mental accounting.” Mental accounting describes how people violate rationality by failing to maintain a comprehensive view of outcomes, and by failing to treat money as fungible. Dick showed how people segregate their decisions into separate accounts, then struggle to keep each of these accounts in the black. One of his compelling examples was the couple who drove through a blizzard to a basketball game because they had already paid for the tickets, though they would have stayed at home if the tickets had been free. As this example illustrates, Dick had independently developed the skill of doing “one-question economics.” He inspired me to invent another story, in which a person who comes to the theater realizes that he has lost his ticket (in one version), or an amount of cash equal to the ticket value (in another version). People report that they would be very likely still to buy a ticket if they had lost the cash, presumably because the loss has been charged to general revenue. On the other hand, they describe themselves as quite likely to go home if they have lost an already purchased ticket, presumably because they do not want to pay twice to see the same show.
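
The accounting logic of the theater story can be caricatured in a few lines. The account structure and the tolerance threshold below are my own illustration, not a published model:

```python
# A toy rendering of the mental-accounting reading of the theater story:
# a lost ticket is posted to the "theater" account, a lost banknote to
# general revenue. TICKET and the pain threshold are invented numbers.

TICKET = 50

def will_buy_again(loss_type, pain_threshold=1.5 * TICKET):
    """Decide whether to buy a ticket after a $50 loss of some kind."""
    theater_account = TICKET          # the show itself costs $50
    if loss_type == "ticket":
        theater_account += TICKET     # the loss lands in the same account
    return theater_account <= pain_threshold

print(will_buy_again("cash"))    # True: the show still "costs" only $50
print(will_buy_again("ticket"))  # False: the show now "costs" $100
```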

Behavioral economics
Our interaction with Thaler eventually proved to be more fruitful than we could have imagined at the time, and it was a major factor in my receiving the Nobel Prize. The committee cited me “for having integrated insights from psychological research into economic science …”. Although I do not wish to renounce any credit for my contribution, I should say that in my view the work of integration was actually done mostly by Thaler and the group of young economists that quickly began to form around him, starting with Colin Camerer and George Loewenstein, and followed by the likes of Matthew Rabin, David Laibson, Terry Odean, and Sendhil Mullainathan. Amos and I provided quite a few of the initial ideas that were eventually integrated into the thinking of some economists, and prospect theory undoubtedly afforded some legitimacy to the enterprise of drawing on psychology as a source of realistic assumptions about economic agents. But the founding text of behavioral economics was the first article in which Thaler (1980) presented a series of vignettes that challenged fundamental tenets of consumer theory. And the respectability that behavioral economics now enjoys within the discipline was secured, I believe, by some important discoveries Dick made in what is now called behavioral finance, and by the series of “Anomalies” columns that he published in every issue of the Journal of Economic Perspectives from 1987 to 1990, and has continued to write occasionally since that time.

In 1982, Amos and I attended a meeting of the Cognitive Science Society in Rochester, where we had a drink with Eric Wanner, a psychologist who was then vice-president of the Sloan Foundation. Eric told us that he was interested in promoting the integration of psychology and economics, and asked for our advice on ways to go about it. I have a clear memory of the answer we gave him. We thought that there was no way to “spend a lot of money honestly” on such a project, because interest in interdisciplinary work could not be coerced. We also thought that it was pointless to encourage psychologists to make themselves heard by economists, but that it could be useful to encourage and support the few economists who were interested in listening. Thaler’s name surely came up. Soon after that conversation, Wanner became the president of the Russell Sage Foundation, and he brought the psychology/economics project with him. The first grant that he made in that program was for Dick Thaler to spend an academic year (1984-85) visiting me at the University of British Columbia, in Vancouver.

That year was one of the best in my career. We worked as a trio that also included the economist Jack Knetsch, with whom I had already started constructing surveys on a variety of issues, including valuation of the environment and public views about fairness in the marketplace. Jack had done experimental studies of the endowment effect and had seen the implications of that effect for the Coase theorem and for issues of environmental policy. We made a very good team: Jack’s wisdom and imperturbable calm withstood the stress of Dick’s boisterous temperament and of my perfectionist anxieties and intellectual restlessness.

We did a lot together that year. We conducted a series of market experiments involving real goods (the “mugs” studies), which eventually became a standard in that literature (Kahneman, Knetsch and Thaler, 1990). We also conducted multiple surveys in which we used experimentally varied vignettes to identify the rules of fairness that the public would apply to merchants, landlords, and employers (Kahneman, Knetsch and Thaler, 1986a). Our central observation was that in many contexts the existing situation (e.g., price, rent, or wage) defines a “reference transaction,” to which the transactor (consumer, tenant, or employee) has an entitlement – the violation of such entitlements is considered unfair and may evoke retaliation. For example, cutting the wages of an employee merely because he could be replaced by someone who would accept a lower wage is unfair, although paying a lower wage to the replacement of an employee who quit is entirely acceptable. We submitted the paper to the American Economic Review and were utterly surprised by the outcome: the paper was accepted without revision. Luckily for us, the editor had asked two economists quite open to our approach to review the paper. We later learned that one of the referees was George Akerlof and the other was Alan Olmstead, who had studied the failures of markets to clear during an acute gas shortage.

One question that arose during this research was whether people would be willing to pay something to punish another agent who had treated them “unfairly,” and whether in some circumstances they would share a windfall with a stranger in an effort to be “fair.” We decided to investigate these ideas in experiments with real stakes. The games that we invented for this purpose have become known as the ultimatum game and the dictator game. Alas, while writing up our second paper on fairness (Kahneman, Knetsch and Thaler, 1986b) we learned that we had been scooped on the ultimatum game by Werner Guth and his colleagues, who had published experiments using the same design a few years earlier. I remember being quite crestfallen when I learned this. I would have been even more depressed if I had known how important the ultimatum game would eventually become.

Most of the economics I know I learned that year, from Jack and Dick, my two willing teachers, and from what was in fact my first experience of communicating across tribal boundaries. I was also much impressed by an experimental game that Dick Thaler, James Brander, and I invented and called the N* game. The game is played by a group of, say, fifteen people. On each trial, a number N* (0 < N* < 15) is announced, and the participants then simultaneously choose whether or not to “enter.” The payoff to the N entrants depends on their number, according to the following formula: $.25(N* – N). We played the game a few times, once with the faculty of the psychology department at U.B.C. The results, although not surprising to an economist, struck me as magical. Within very few trials, a pattern emerged in which the number of entrants, N, was within 1 or 2 of N*, with no obvious systematic tendency to be higher or lower than N*. The group was doing the right thing collectively, although conversations with the participants and the obvious statistical analyses did not reveal any consistent strategies that made sense. It took me some time to realize that the magic we were observing was an equilibrium: the pattern we saw existed because no other pattern could be sustained. This idea had not been in my intellectual bag of tools. We never formally published the N* game – I described it informally in Kahneman (1987) – but it has been taken up by others (Erev & Rapoport, 1998).
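
A toy simulation in the spirit of the reinforcement-learning account in Erev and Rapoport (1998) shows how entry can come to track the announced capacity without any player following an articulable strategy. The update rule below is a deliberately crude sketch of my own, not their model:

```python
# A toy simulation of the N* entry game: 15 players each keep a separate
# entry propensity for every announced N*, nudged up after a positive
# entrant payoff and down after a negative one. The learning rule is a
# crude invention for illustration, not the Erev & Rapoport model.

import random

PLAYERS, ROUNDS, STEP = 15, 5000, 0.05
prob = [[0.5] * 15 for _ in range(PLAYERS)]   # prob[i][n_star]

for _ in range(ROUNDS):
    n_star = random.randint(1, 14)            # announced before choices
    entered = [random.random() < prob[i][n_star] for i in range(PLAYERS)]
    n = sum(entered)
    payoff = 0.25 * (n_star - n)              # payoff to each entrant
    for i in range(PLAYERS):
        if entered[i]:
            delta = STEP if payoff > 0 else -STEP
            prob[i][n_star] = min(1.0, max(0.0, prob[i][n_star] + delta))

for n_star in (3, 7, 12):                     # entry roughly tracks N*
    expected = sum(prob[i][n_star] for i in range(PLAYERS))
    print(f"N* = {n_star:2d}: expected entrants ~ {expected:.1f}")
```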

That was the closest my research ever came to core economics, and since that time I have been mostly cheering Thaler and behavioral economics from the sidelines. There has been much to cheer about. As a mark of the progress that has been made, I recall a seminar in psychology and economics that I co-taught with George Akerlof, after Anne Treisman and I had moved from the University of British Columbia to Berkeley in 1986. I remember being struck by the reverence with which the rationality assumption was treated even by a free thinker such as George, and also by his frequent warnings to the students that they should not let themselves be seduced by the material we were presenting, lest their careers be permanently damaged. His advice to them was to stick to what he called “meat-and-potatoes economics,” at least until their careers were secure. This opinion was quite common at the time. When Matthew Rabin joined the Berkeley economics department as a young assistant professor and chose to immerse himself in psychology, many considered the move professional suicide. Some fifteen years later, Rabin had earned the Clark medal, and George Akerlof had delivered a Nobel lecture entitled “Behavioral Macroeconomics.”

Eric Wanner and the Russell Sage Foundation continued to support behavioral economics over the years. I was instrumental in the idea of using some of that support to set up a summer school for graduate students and young faculty in that field, and I helped Dick Thaler and Colin Camerer organize the first one, in 1994. When the fifth summer school convened in 2002, David Laibson, who had been a participant in 1994, was tenured at Harvard and was one of the three organizers. Terrance Odean and Sendhil Mullainathan, who had also participated as students, came back to lecture as successful researchers with positions in two of the best universities in the world. It was a remarkable experience to hear Matthew Rabin teach a set of guidelines for developing theories in behavioral economics – including the suggestion that the standard economic model should be a special case of the more complex and general models that were to be constructed. We had come a long way.

Although behavioral economics has enjoyed much more rapid progress and gained more respectability in economics than appeared possible fifteen years ago, it is still a minority approach and its influence on most fields of economics is negligible. Many economists believe that it is a passing fad, and some hope that it will be. The future may prove them right. But many bright young economists are now betting their careers on the expectation that the current trend will last. And such expectations have a way of being self-fulfilling.

Later years
Anne Treisman and I married and moved together to U.B.C. in 1978, and Amos and Barbara Tversky settled in Stanford that year. Amos and I were then at the peak of our joint game, and completely committed to our collaboration. For a few years, we managed to maintain it, by spending every second weekend together and by placing multiple phone calls each day, some lasting several hours. We completed the study of framing in that mode, as well as a study of the ‘conjunction fallacy’ in judgment (Tversky and Kahneman, 1983). But eventually the goose that had laid the golden eggs languished, and our collaboration tapered off. Although this outcome now appears inevitable, it came as a painful surprise to us. We had completely failed to appreciate how critically our successful interaction had depended on our being together at the birth of every significant idea, on our rejection of any formal division of labor, and on the infinite patience that became a luxury when we could meet only periodically. We struggled for years to revive the magic we had lost, but in vain.

We were again trying when Amos died. When he learned in the early months of 1996 that he had only a few months to live, we decided to edit a joint book on decision-making that would cover some of the progress that had been made since we had started working together on the topic more than twenty years before (Kahneman and Tversky, 2000). We planned an ambitious preface as a joint project, but I think we both knew from the beginning that we would not be granted enough time to complete it. The preface I wrote alone was probably my most painful writing experience.

During the intervening years, of course, we had continued to work, sometimes together, sometimes with other collaborators. Amos took the lead in our most important joint piece, an extension of prospect theory to the multiple-outcome case in the spirit of rank-dependent models. He also carried out spectacular studies of the role of argument and conflict in decision-making, in collaborations with Eldar Shafir and with Itamar Simonson, as well as influential work on violations of procedural invariance in collaborations with Shmuel Sattath and with Paul Slovic. He engaged in a deep exploration of the mathematical structure of decision theories with Peter Wakker. And, in his last years, Amos was absorbed in the development of support theory, a general approach to thinking under uncertainty that his students have continued to explore. These are only his major programmatic research efforts in the field of decision-making – he did much more.

I, too, kept busy, and also kept moving. Anne Treisman and I moved to UC Berkeley in 1986, and from there to Princeton in 1993, where I happily took a split appointment that located me part-time in the Woodrow Wilson School of Public Affairs. Moving East also made it easier to maintain frequent contacts with friends, children and adored grandchildren in Israel.

Over the years I enjoyed productive collaborations with Dale Miller in the development of a theory of counterfactual thinking (Kahneman and Miller, 1986), and with Anne Treisman, in studies of visual attention and object perception. In addition to the work on fairness and on the endowment effect that we did with Dick Thaler, Jack Knetsch and I carried out studies of the valuation of public goods that became quite controversial and had a great influence on my own thinking. Further studies of that problem with Ilana Ritov eventually led to the idea that the translation of attitudes into dollars involves the almost arbitrary choice of a scale factor, leading people who hold quite similar values to state very different amounts of willingness to pay, for no good reason (Kahneman, Ritov and Schkade, 1999). With David Schkade and the famous jurist Cass Sunstein I extended this idea into a program of research on arbitrariness in punitive damage decisions, which may yet have some influence on policy (Sunstein, Kahneman, Schkade and Ritov, 2002).

The focus of my research for the past fifteen years has been the study of various aspects of experienced utility – the measure of the utility of outcomes as people actually live them. The concept of utility in which I am interested was the one that Bentham and Edgeworth had in mind. However, experienced utility largely disappeared from economic discourse in the twentieth century, in favor of a notion that I call decision utility, which is inferred from choices and used to explain choices. The distinction would be of little relevance for fully rational agents, who presumably maximize experienced utility as well as decision utility. But if rationality cannot be assumed, the quality of consequences becomes worth measuring and the maximization of experienced utility becomes a testable proposition. Indeed, my colleagues and I have carried out experiments in which this proposition was falsified. These experiments exploit a simple rule that governs the assignment of remembered utility to past episodes in which an agent is passively exposed to a pleasant or unpleasant experience, such as watching a horrible film or an amusing one (Fredrickson and Kahneman, 1993), or undergoing a colonoscopy (Redelmeier and Kahneman, 1996). Remembered utility turns out to be determined largely by the peak intensity of the pleasure or discomfort experienced during the episode, and by the intensity of pleasure or discomfort when the episode ended. The duration of the episode has almost no effect on its remembered utility. In accord with this rule, an episode of 60 seconds during which one hand is immersed in painfully cold water will leave a more aversive memory than a longer episode, in which the same 60 seconds are followed by another 30 seconds during which the temperature rises slightly. Although the extra 30 seconds are painful, they provide an improved end. When experimental participants are exposed to the two episodes, then given a choice of which to repeat, most choose the longer one (Kahneman, Fredrickson, Schreiber and Redelmeier, 1993). In these and in other experiments of the same kind (Schreiber and Kahneman, 2000), people make wrong choices between experiences to which they may be exposed, because they are systematically wrong about their affective memories. Our evidence contradicts the standard rational model, which does not distinguish between experienced utility and decision utility. I have presented it as a new type of challenge to the assumption of rationality (Kahneman, 1994).
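
The rule is easy to state computationally. The sketch below approximates remembered disutility by the mean of the worst and final moments of an episode; the discomfort numbers are invented for illustration:

```python
# A minimal sketch of the peak-end rule for an aversive episode:
# remembered discomfort is approximated by the mean of the worst and
# the final moment, and duration plays no role. The per-second
# discomfort ratings below are made-up numbers.

def remembered_disutility(discomfort):
    """Peak-end approximation: average of peak and final discomfort."""
    return (max(discomfort) + discomfort[-1]) / 2

short_trial = [8] * 60             # 60 s of painfully cold water
long_trial = [8] * 60 + [6] * 30   # same 60 s, plus 30 s slightly warmer

print(remembered_disutility(short_trial))  # 8.0
print(remembered_disutility(long_trial))   # 7.0: remembered as less bad
print(sum(long_trial) > sum(short_trial))  # True: total discomfort is larger
# Most participants nevertheless choose to repeat the longer trial.
```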

Most of my empirical work in recent years has been done in collaboration with my friend David Schkade. The current topic of our research is a study of well-being that builds on my previous research on experienced utility. We have assembled a multi-disciplinary team for an attempt to develop tools for measuring welfare, with the design specification that economists should be willing to take the measurements seriously.

Another major effort went into an essay that attempted to update the notion of judgment heuristics. That work was done in close collaboration with a young colleague, Shane Frederick. In the pains we took over the choice of every word, it came close to matching my experiences with Amos (Kahneman and Frederick, 2002). My Nobel lecture is an extension of that essay.

One line of work that I hope may become influential is the development of a procedure of adversarial collaboration, which I have championed as a substitute for the format of critique-reply-rejoinder in which debates are currently conducted in the social sciences. Both as a participant and as a reader I have been appalled by the absurdly adversarial nature of these exchanges, in which hardly anyone ever admits an error or acknowledges learning anything from the other. Adversarial collaboration involves a good-faith effort to conduct debates by carrying out joint research – in some cases there may be a need for an agreed arbiter to lead the project and collect the data. Because there is no expectation of the contestants reaching complete agreement at the end of the exercise, adversarial collaborations will usually lead to an unusual type of joint publication, in which disagreements are laid out as part of a jointly authored paper. I have had three adversarial collaborations, with Tom Gilovich and Victoria Medvec (Gilovich, Medvec and Kahneman, 1998), with Ralph Hertwig (where Barbara Mellers was the agreed arbiter, see Mellers, Hertwig and Kahneman, 2001), and with a group of experimental economists in the UK (Bateman et al., 2003). An appendix in the Mellers et al. article proposes a detailed protocol for the conduct of adversarial collaboration. In another case I did not succeed in convincing two colleagues that we should engage in an adversarial collaboration, but we jointly developed another procedure that is also more constructive than the reply-rejoinder format. They wrote a critique of one of my lines of work, but instead of following up with the usual exchange of unpleasant comments we decided to write a joint piece, which started with a statement of what we did agree on and then went on to a series of short debates about issues on which we disagreed (Ariely, Kahneman, & Loewenstein, 2000). I hope that more efficient procedures for the conduct of controversies will be part of my legacy.

Part 2 – Eulogy for Amos Tversky (June 5, 1996)
People who make a difference do not die alone. Something dies in everyone who was affected by them. Amos made a great deal of difference, and when he died, life was dimmed and diminished for many of us.

There is less intelligence in the world. There is less wit. There are many questions that will never be answered with the same inimitable combination of depth and clarity. There are standards that will not be defended with the same mix of principle and good sense. Life has become poorer.

There is a large Amos-shaped gap in the mosaic, and it will not be filled. It cannot be filled because Amos shaped his own place in the world, he shaped his life, and even his dying. And in shaping his life and his world, he changed the world and the life of many around him.

Amos was the freest person I have known, and he was able to be free because he was also one of the most disciplined.

Some of you may have tried to make Amos do something he did not want to do. I don’t think that there are many with successes to recount. Unlike many of us, Amos could not be coerced or embarrassed into chores or empty rituals. In that sense he was free, and the object of envy for many of us. But the other side of freedom is the ability to find joy in what one does, and the ability to adapt creatively to the inevitable. I will say more about the joy later. The supreme test of Amos’s ability to accept what cannot be changed came in the last few months. Amos loved living. Death, at a cruelly young age, was imposed on him, before his children’s lives had fully taken shape, before his work was done. But he managed to die as he had lived – free. He died as he intended. He wanted to work to the last, and he did. He wanted to keep his privacy, and he did. He wanted to help his family through their ordeal, and he did. He wanted to hear the voices of his friends one last time, and he found a way to do that through the letters that he read with pleasure, sadness and pride, to the end.

There are many forms of courage, and Amos had them all. The indomitable serenity of his last few months is one. The civic courage of adopting principled and unpopular positions is another, and he had that too. And then there is the heroic, almost reckless courage, and he had that too.

My first memory of Amos goes back to 1957, when someone pointed out to me a thin and handsome lieutenant, wearing the red beret of the paratroopers, who had just taken the competitive entrance exam to the undergraduate program in Psychology at Hebrew University. The handsome lieutenant looked very pale, I remember. He had been wounded.

The paratrooper unit to which he belonged had been performing an exercise with live fire in front of the general staff of the Israel Defense Forces and all the military attaches. Amos was a platoon commander. He sent one of his soldiers forward with a long metal tube loaded with an explosive charge, which was to be slid under the barbed wire of the position they were attacking and detonated to create an opening for the attacking troops. The soldier moved forward, placed the explosive charge, and lit the fuse. And then he froze, standing upright in the grip of some unaccountable attack of panic. The fuse was short and the soldier was certainly about to be killed. Amos leapt from behind the rock he was using for cover, ran to the soldier, and managed to jump at him and bring him down just before the charge exploded. This was how he was wounded. Those who have been soldiers will recognize this act as one of almost unbelievable presence of mind and bravery. For it, Amos was awarded the highest citation available in the Israeli army.

Amos almost never mentioned this incident, but some years ago, in the context of one of our frequent conversations about the importance of memory in our lives, he mentioned it and said that it had greatly affected him. We can probably appreciate what it means for a 20-year-old to have passed a supreme test, to have done the impossible. We can understand how one could draw strength from such an event, especially if – as was the case for Amos – achieving the almost impossible was not a one-off thing. Amos achieved the almost impossible many times, in different contexts.

Amos’ almost impossible achievements, as you all know, extended to the academic life. Amos derived some quiet pleasure from one aspect of his record: by a large margin, he published more articles in Psychological Review, the prestigious theory journal of the discipline, than anyone else in the history of that journal, which goes back more than 100 years. He had two pieces in press in Psychological Review when he died.

But other aspects of the record are even more telling than this statistic. The number of gems and enduring classics sets Amos apart even more. His early work on transitivity violations, elimination by aspects, similarity, the work we did together on judgment, prospect theory and framing, the Hot Hand, the beautiful work on the disjunction effect and Argument-Based Choice, and most recently an achievement of which Amos was particularly proud: Support Theory.

How did he do it? There are many stories one could tell. Amos’ lifelong habit of working alone at night while others slept surely helped, but that wouldn’t quite do it. Then there was that mind – the bright beam of light that would clear out an idea from the fog of other people’s words, the inventiveness that could come up with six different ways of doing anything that needed to be done. You might think that having the best mind in the field and the most efficient work style would suffice. But there was more.

Amos had simply perfect taste in choosing problems, and he never wasted much time on anything that was not destined to matter. He also had an unfailing compass that always kept him going forward. I can attest to that from long experience.

It is not uncommon for me to write dozens of drafts of a paper, but I am never quite sure that they are actually improving, and often I wander in circles. Almost everything I wrote with Amos also went through dozens of drafts, but when you worked with Amos you just knew. There would be many drafts, and they would get steadily better.

Amos and I wrote an article in Science in 1974. It took us a year. We would meet at the Van Leer Institute in Jerusalem for 4-6 hours a day. On a good day we would mark a net advance of a sentence or two. It was worth every minute. And I have never had so much fun. When we started work on Prospect Theory it was 1974, and in about 6 months we had been through 30-odd versions of the theory and had a paper ready for a conference. The paper had about 90% of the ideas of Prospect Theory, and quite properly did not impress anyone. We spent the better part of the following four years debugging it, trying to anticipate every objection.

What kept us at it was a phrase that Amos often used: “Let’s do it right”. There was never any hurry, any thought of compromising quality for speed. We could do it because Amos said the work was important, and you could trust him when he said that. We could also do it because the process was so intensely enjoyable.

But even that is not all. To understand Amos’ genius – not a word I use lightly – you have to consider a phrase that he was using increasingly often in the last few years: “Let us take what the terrain gives”. In his growing wisdom Amos believed that Psychology is almost impossible, because there is just not all that much we can say that is both important and demonstrably true. “Let us take what the terrain gives” meant not over-reaching, not believing that setting a problem implies it can be solved.

The unique ability Amos had – no one else I know comes close – was to find the one place where the terrain would yield – for Amos, usually gold – and then to take it all. This skill in taking it all is what made so many of Amos’ papers not only classics, but definitive. What Amos had done did not need redoing.

Whether or not to over-reach was a source of frequent, and frequently productive tension between Amos and me over nearly 30 years. I have always wanted to do more than could be done without risk of error, and have always taken pride in preferring to be approximately right rather than precisely wrong. Amos thought that if you pick the terrain properly you won’t have to choose, because you can be precisely right. And time and time again he managed to be precisely right on things that mattered. Wisdom was part of his genius.

Fun was also part of Amos’ genius. Solving problems was a lifelong source of intense joy for him, and the fact that he was richly rewarded for his problem solving never undermined that joy.

Much of the joy was social. Almost all of Amos’ work was collaborative. He enjoyed working with colleagues and students, and he was supremely good at it. And his joy was infectious. The 12 or 13 years in which most of our work was joint were years of interpersonal and intellectual bliss. Everything was interesting, almost everything was funny, and there was the recurrent joy of seeing an idea take shape. So many times in those years we shared the magical experience of one of us saying something which the other would understand more deeply than the speaker had done. Contrary to the old laws of information theory, it was common for us to find that more information was received than had been sent. I have almost never had that experience with anyone else. If you have not had it, you don’t know how marvelous collaboration can be …

References 


Ariely, D., Kahneman, D. & Loewenstein, G. (2000). Joint comment on “When does duration matter in judgment and decision making”. Journal of Experimental Psychology: General, 129, 524-529.
Arrow, K. J. (1982). Risk perception in psychology and economics. Economic Inquiry, 20, 1-9.
Ayton, P. (1998). How bad is human judgment? In Forecasting with judgment, G. Wright & P. Goodwin (Eds.). West Sussex, England: John Wiley & Sons.
Bateman, I., Kahneman, D., Munro, A., Starmer, C. & Sugden, R. (2003). Is there loss aversion in buying? An adversarial collaboration. (under review).
Cohen, L.J. (1981). Can human irrationality be experimentally demonstrated? The Behavioral and Brain Sciences, 4, 317-331.
Coombs, C.H., Dawes, R.M., Tversky, A. (1970). Mathematical Psychology: An elementary introduction. Oxford, England: Prentice-Hall.
Cosmides, L. & Tooby, J. (1996). Are humans good intuitive statisticians after all? Rethinking some conclusions from the literature on judgment under uncertainty. Cognition, 58, 1-73.
Erev, I. & Rapoport, A. (1998). Coordination, “magic”, and reinforcement learning in a market entry game. Games and Economic Behavior, 23, 146-175.
Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond ‘heuristics and biases’. In W. Stroebe & M. Hewstone (Eds.), European review of social psychology, (Vol. 2, 83-115). Chichester, England: Wiley.
Gigerenzer, G. (1996). On narrow norms and vague heuristics: A rebuttal to Kahneman and Tversky (1996). Psychological Review, 103, 592-596.
Gilovich, T., Medvec, V.H., & Kahneman, D. (1998). Varieties of regret: A debate and partial resolution. Psychological Review, 105, 602-605.
Kahneman, D., & Schild, E.O. (1966). Training agents of social change in Israel: Definitions of objectives and a training approach. Human Organization, 25, 323-327.
Kahneman, D. (1973). Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall.
Kahneman, D., & Tversky, A. (1973). On the psychology of prediction. Psychological Review, 80, 237-251.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263-291.
Kahneman, D., & Tversky, A. (1984). Choices, values and frames. American Psychologist, 39, 341-350.
Kahneman, D., Knetsch, J., & Thaler, R. (1986a). Fairness as a constraint on profit seeking: Entitlements in the market. The American Economic Review, 76, 728-741.
Kahneman, D., Knetsch, J., & Thaler, R. (1986b). Fairness and the assumptions of economics. Journal of Business, 59, S285-S300.
Kahneman, D., Knetsch, J., & Thaler, R. (1990). Experimental tests of the endowment effect and the Coase theorem. Journal of Political Economy, 98(6), 1325-1348.
Kahneman, D., & Miller, D.T. (1986). Norm theory: Comparing reality to its alternatives. Psychological Review, 93, 136-153.
Kahneman, D. (1987). Experimental economics: A psychological perspective. In R. Tietz, W. Albers and R. Selten (Eds.), Modeling Bounded Rationality, 11-20.
Kahneman, D., Fredrickson, B.L., Schreiber, C.A., & Redelmeier, D.A. (1993). When more pain is preferred to less: Adding a better end. Psychological Science, 4, 401-405.
Kahneman, D. (1994). New challenges to the rationality assumption. Journal of Institutional and Theoretical Economics, 150, 18-36. Reprinted as Kahneman, D. New challenges to the rationality assumption. Legal Theory, 3, 1997, 105-124.
Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions: A reply to Gigerenzer’s critique. Psychological Review, 103, 582-591.
Kahneman, D., Ritov, I., and Schkade, D. (1999). Economic preferences or attitude expressions? An analysis of dollar responses to public issues. Journal of Risk and Uncertainty, 19, 220-242. Reprinted as Ch. 36 in Kahneman, D, and Tversky, A. (Eds.), Choices, Values and Frames. New York: Cambridge University Press and the Russell Sage Foundation, 2000.
Kahneman, D., & Tversky, A. (Eds.) (2000). Choices, Values and Frames. New York: Cambridge University Press and the Russell Sage Foundation.
Kahneman, D., & Frederick, S. (2002). Representativeness revisited: Attribute substitution in intuitive judgment. In T. Gilovich, D. Griffin and D. Kahneman (Eds.), Heuristics and Biases: The Psychology of Intuitive Judgment. New York: Cambridge University Press.
Klein, G. (2000). The fiction of optimization. In G. Gigerenzer & R. Selten (Eds.), Bounded rationality: The adaptive toolbox (pp. 103-121). Cambridge, MA: The MIT Press.
Latham, G., Erez, M. & Locke, E. (1988). Resolving scientific disputes by the joint design of crucial experiments by the antagonists: Application to the Erez-Latham dispute regarding participation in goal-setting. Journal of Applied Psychology, 73, 753-772.
Laibson, D. & Zeckhauser, R. (1998). Amos Tversky and the ascent of behavioral economics. Journal of Risk and Uncertainty, 16, 7-47.
Lopes, L.L. (1991). The rhetoric of irrationality. Theory and Psychology, 1, 65-82.
Meehl, P.E. (1954). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Minneapolis: University of Minnesota Press.
Mellers, B., Hertwig, R., and Kahneman, D. (2001). Do frequency representations eliminate conjunction effects? An exercise in adversarial collaboration. Psychological Science, 12, 269-275.
Mischel, W. (1961a). Preference for delayed reinforcement and social-responsibility. Journal of Abnormal and Social Psychology, 62, 1-15.
Mischel, W. (1961b). Delay of gratification, need for achievement, and acquiescence in another culture. Journal of Abnormal and Social Psychology, 62, 543-560.
Raiffa, H. (1968). Decision analysis: Introductory lectures on choices under uncertainty. Reading, MA: Addison-Wesley.
Schreiber, C.A., & Kahneman, D. (2000). Determinants of the remembered utility of aversive sounds. Journal of Experimental Psychology: General, 129, 27-42.
Sloman, S.A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3-22.
Stanovich, K.E. (1999). Who is Rational? Studies of Individual Differences in Reasoning. Mahwah, NJ: Lawrence Erlbaum.
Sunstein, C., Kahneman, D., Schkade, D., & Ritov, I. (2002). Predictably incoherent judgments. Stanford Law Review.
Thaler, R. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior and Organization, 1, 39-60.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124-1131.
Tversky, A. (1977). Features of similarity. Psychological Review, 84, 327-352.
Tversky, A., & Kahneman, D. (1983). Extensional vs. intuitive reasoning: The conjunction fallacy in probability judgment. Psychological Review, 90, 293-315.
Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of decisions. Journal of Business, 59, S251-S278.
Williams, A.C. (1966). Attitudes toward speculative risks as an indicator of attitudes toward pure risks. Journal of Risk and Insurance, 33, 577-586.

Sources:

http://www.princeton.edu/~kahneman/

http://nobelprize.org/nobel_prizes/economics/laureates/2002/kahneman-autobio.html

Psych Bio: Kenneth J Gergen


Kenneth J. Gergen (born 1935) is an American psychologist and professor at Swarthmore College. He obtained his B.A. at Yale University in 1957 and his Ph.D. at Duke University in 1962.

The son of John J. Gergen, the Chair of the Mathematics Department at Duke University, Gergen grew up in Durham, North Carolina. He had three brothers, one of whom is David Gergen, the prominent political analyst. After completing public schooling, he attended Yale University. Graduating in 1957, he subsequently became an officer in the U.S. Navy. He then returned to graduate school at Duke University, where he received his PhD in psychology in 1963. His dissertation advisor was Edward E. Jones. Gergen went on to become an Assistant Professor in the Department of Social Relations at Harvard University, where he also became the Chairman of the Board of Tutors and Advisors for the department and representative to the university’s Council on Educational Policy. During his tenure at Harvard, Gergen served on review panels of the National Science Foundation and the National Institute of Mental Health; he also collaborated with Raymond Bauer at the Harvard Business School, and served as a consultant with … In 1967 Gergen took a position as Chair of the Department of Psychology at Swarthmore College, a position he held for ten years. At various intervals he served as visiting professor at the University of Heidelberg, the University of Marburg, the Sorbonne, the University of Rome, Kyoto University, and Adolfo Ibanez University. At Swarthmore he spearheaded the development of the academic concentration in Interpretation Theory. In an attempt to link his academic work to societal practices he collaborated with colleagues to create the Taos Institute in 1996. He is currently a Senior Research Professor at Swarthmore, the Chairman of the Board of the Taos Institute, and an adjunct professor at Tilburg University.

Gergen is married to Mary M. Gergen, Professor Emerita at Penn State University and a major contributor to feminist psychology and performance inquiry. She is the author of over 50 articles and is the co-author (with Ken Gergen) of “Social Construction.” She often collaborates with her husband, and together they publish the Positive Aging Newsletter, which has a readership of at least 12,000. He has five children: Laura Houston, Stan Gergen, Antonia Gergen, Lisa Bell, and Michael Gebhart.

Gergen is a major figure in the development of social constructionist theory and its applications to practices of social change. He also lectures widely on contemporary issues in cultural life, including the self, technology, postmodernism, the civil society, organizational change, developments in psychotherapy, educational practices, aging, and political conflict. Gergen has published over 300 articles in journals, magazines and books, and his major books include Toward Transformation in Social Knowledge, The Saturated Self, Realities and Relationships, and An Invitation to Social Construction. With Mary Gergen, he publishes an electronic newsletter, Positive Aging (www.positiveaging.net) now distributed to 20,000 recipients.

Gergen has served as the President of two divisions of the American Psychological Association: the Division of Theoretical and Philosophical Psychology and the Division of Psychology and the Arts. He has served on the editorial boards of 35 journals, and as the Associate Editor of The American Psychologist and Theory and Psychology. He has also served as a consultant to Sandoz Pharmaceutical Company, Arthur D. Little, Inc, the National Academy of Science, Trans-World Airlines, Bio-Dynamics, and Knight, Gladieux & Smith, Inc.

Major Contributions

After completing graduate school in experimental social psychology, Gergen had an impact on the field with his 1973 article, “Social Psychology as History”. In the article, he argues that the laws and principles of social interaction are variable over time, and that the scientific knowledge generated by social psychologists actually influences the phenomena it is meant to passively describe. The article proved contentious, receiving both criticism and support from various social psychologists.

Gergen’s work is associated with social constructionism. He has been particularly concerned with fostering a “relational” view of the self, where the “traditional emphasis on the individual mind is replaced by a concern with the relational processes from which rationality and morality emerge.” He is also known for his comment “I am linked therefore I am” as an answer to Descartes’s dictum “I think, therefore I am.” Other major interests in his diverse works include analyzing the effects of technology on social life, examining connections between social construction and theology, and promoting a more optimistic model of aging.

From the earliest point in his academic career, Gergen’s work was characterized by its catalytic potential. As an experimental social psychologist, his earliest studies challenged the presumption of a unified or coherent self. He then raised questions about the value of altruism, by exploring the ways in which helping others can lead to the recipient’s resentment and alienation. However, it was his 1973 paper, “Social psychology as history,” that precipitated a major shift in his career. Here he argued that most of the behavior patterns studied by social psychologists were historically perishable. Further, because of the implicit values embedded in psychological theory and description, the dissemination of knowledge had the potential to alter patterns of social activity. To study obedience to authority, for example, might reduce the likelihood of obedience. In effect, social psychology was not fundamentally a cumulative science, but was effectively engaged in the recording and transformation of cultural life. These arguments created broad controversy, and the article subsequently won an award for the volume of its citations. Also contributing to what was called “the crisis in social psychology” was Gergen’s subsequent publication on generative theory. Here he proposed that because theoretical suppositions were not so much recordings of social life as creators of it, theories should not be judged by their accuracy so much as by their potential to open new spaces of action.

Combining these ideas with developments in literary and critical theory, along with the history of science, Gergen went on to develop a radical view of socially constructed knowledge. This view was proposed as a successor project to what Gergen considered an inherently flawed empiricist conception of knowledge. From Gergen’s perspective, all human intelligibility (including claims to knowledge) is generated within relationships. It is from relationships that humans derive their conceptions of what is real, rational, and good. From this perspective scientific theories, like all other reality posits, should not be assessed in terms of Truth, but in terms of pragmatic outcomes. Such assessments are inevitably wedded to values, and thus all science is morally and politically weighted in implication. As he saw it, this same form of assessment also applies to social constructionist theory. The question is not its accuracy, but its potentials for humankind.

This latter conclusion informed most of Gergen’s subsequent work. In one form or another, this work is concerned with transforming social life. For the most part, the preferred direction of change is toward more collaborative and participatory relationships. Writings in the areas of therapy and counseling, education, organizational change, technology, conflict reduction, civil society, and qualitative inquiry all bear this mark. Dialogues with practitioners have also been facilitated by Gergen’s popular volume for public consumption, The Saturated Self, and his work with the Taos Institute. Most of these developments are summarized in Relational Being: Beyond the Individual and Community. However, this volume opens up new territories both theoretically and practically. It attempts to rewrite psychology by demonstrating that what are considered mental processes are not so much “in the head” as in relationships. It also attempts to answer charges of moral relativism with a non-foundational morality of collaborative practice. A way is also opened for bringing science together with concerns for the sacred.

Kenneth J. Gergen, Ph.D., is a founding member of the Taos Institute, President and Chair of its Board, and the Mustin Professor of Psychology at Swarthmore College. Gergen also serves as an Affiliate Professor at Tilburg University in the Netherlands and an Honorary Professor at the University of Buenos Aires. Gergen received his BA from Yale University and his PhD from Duke University, and has taught at Harvard University and Heidelberg University. He has been the recipient of two Fulbright research fellowships and the Geraldine Mao fellowship in Hong Kong, along with fellowships from the Guggenheim Foundation, the Japanese Society for the Promotion of Science, and the Alexander von Humboldt Stiftung. Gergen has also been the recipient of research grants from the National Science Foundation, the Deutsche Forschungsgemeinschaft, and the Barra Foundation. He has received honorary degrees from Tilburg University and Saybrook Institute, and is a member of the World Academy of Art and Science.

Sources:

http://www.swarthmore.edu/x20604.xml

http://www.swarthmore.edu/x21149.xml

http://www.taosinstitute.net/

http://en.wikipedia.org/wiki/Kenneth_J._Gergen

http://www.museumstuff.com/learn/topics/Kenneth_Gergen::sub::Biography

Psych Bio: Steven J. Heine


Steven J. Heine was born in Edmonton, Alberta, Canada, in 1966, the second of two children, to Dorothy and Jerry Heine (a former pro football player turned watercolor artist). Heine enrolled in the University of Alberta, but being the first in his family to attend university, he was not sure what to do once he got there. His original thought was to study something practical, so he started out aiming for a degree in commerce. However, after a couple of years, he realized that although all things business bored him dreadfully, he found the popular books on psychology that he was reading fascinating. Heine decided to switch his major to psychology, but this new major required him to take a second language in order to graduate. Having just read some books on Zen, he thought he might understand them better if he studied Japanese. The Japanese language courses were tremendously interesting, and Heine decided to make Japanese his minor.

As his graduation loomed, Heine realized he did not know what he wanted to do with his career. He liked his psychology courses and he liked traveling, and he thought it would be nice to find a way to do both. He heard about a field called cross-cultural psychology and figured that was the answer. But before applying to graduate school, Heine thought it would be fun to take an adventurous break, so he applied to the Japan Exchange and Teaching (JET) program to become an English teacher in Japan. The JET application permitted applicants to request a particular kind of placement, and Heine had stated that any placement would be fine as long as it was in a big city. The JET program responded by assigning him to an isolated town in Nagasaki prefecture called Obama, which was so tiny that it did not even have a train station. Heine had the distinction of being the first foreigner ever to live in Obama, and although there were many times that the isolation drove him to his wits' end, he came to appreciate what an excellent opportunity he had to participate in such a traditional enclave of Japanese culture. Although his jobs as junior high school teacher and town mascot were not particularly inspiring, he did start to notice many aspects of his colleagues' and students' behaviors that seemed strikingly at odds with the views of human nature that he had learned from his psychology classes. In particular, he was struck by how the other Japanese teachers kept urging him to stop praising the students so much because it would lead them to stop trying.

After two years in Obama, Heine applied to graduate school. He had always wanted to live in Vancouver and quickly jumped at the opportunity when the University of British Columbia (UBC) accepted his application. His future advisor, Darrin Lehman, mailed him a copy of an in-press Psychological Review article by Markus and Kitayama (1991, Vol. 98, pp. 224–253) on cultural psychology. He read it at the junior high school one day and had an epiphany. All of the loose and contradictory scraps of ideas that Heine had been entertaining about cultural differences between Japanese and Westerners now had a theoretical framework that integrated them all. He moved to UBC in 1991 with a number of ideas that he could not wait to test.

Heine had a terrific time at graduate school. He loved the freedom to develop and test his own ideas, and he really responded to Lehman’s friendship and mentorship, which he continues to benefit from to this day. Lehman has an uncanny knack for recognizing good ideas, and he helped focus Heine’s efforts in the right directions. Together they collaborated on a number of cross-cultural projects. While at UBC, Heine met and fell in love with his future wife, Nariko Takayanagi. Takayanagi was in the graduate program in sociology and was also studying cross-cultural differences between Japan and the West. Heine’s research surely would not have been nearly as successful if he had not had Takayanagi’s insightful and critical feedback at every step along the way. Access to an insider’s perspective is crucial to good cultural research, and Heine learned how fortunate he was to have that insider’s perspective right inside his own house. While at UBC, Heine started to work on a project with Shinobu Kitayama, and he received a generous dissertation fellowship from the Japan Foundation to go to Kyoto to work with him. Heine learned an enormous amount from Kitayama, and he decided to stay at Kyoto University a second year as a postdoctoral student. Heine’s time in Kyoto was especially engaging and productive, and he benefited immensely from the relationships that he developed there with Yumi Endo, Taka Masuda, Hisaya Matsumoto, Beth Morling, Oto Okugawa, Toshitake Takata, and the many excellent undergraduate students with whom he worked.

In 1997, Heine received a job offer from the University of Pennsylvania and excitedly began his career there. The students and faculty at Penn were extremely stimulating, and Heine was greatly impressed with the deep respect that people there had for ideas. Heine felt that his time at Penn made him a much more careful thinker, and he was pleasantly surprised to find that he could learn so much after he had already received his doctoral degree. He especially benefited from the mentorship of Rob DeRubeis, Paul Rozin, and John Sabini there.

Three years later, UBC offered Heine a position in a department that was building a program in cultural psychology. The combination of excellent colleagues and students, a student body that was a living cross-cultural experiment, and Vancouver’s ski hills and beaches made this an offer that he could not refuse. Heine especially appreciates the stimulating discussions with the other faculty in the cultural area at UBC, including Darrin Lehman, Ara Norenzayan, and Mark Schaller. After a year at UBC, Heine’s cross-cultural son Seiji was born, and Heine’s life was very full.

Heine’s research has focused largely on the role that culture plays in people’s motivations to view themselves positively. Although the idea that people have a need for self-esteem is a fundamental assumption of psychology, Heine was struck by how little evidence for this need he saw in his experiences in Japan. He began a program of research that consistently revealed that motivations to focus on positive aspects of the self were significantly attenuated among Japanese and in many cases were not evident at all.

The cultural differences were pronounced and emerged across a host of different experimental designs, and they suggested something potentially profound: one of the most basic motivations, a need for self-esteem, might better be understood as a strategy for achieving successful outcomes in some cultural environments but not in others. Moreover, these cultural differences have been resistant to a number of alternative explanations that Heine has explored: for example, that Japanese are more concerned with their group self-esteem, that Japanese are motivated to view themselves positively in different domains from Westerners, that Japanese are motivated to have self-esteem but that cultural norms for self-presentation prevent them from expressing it in questionnaires, or that such cultural differences are due to inherited dispositions rather than to learned strategies. Some other researchers continue to disagree about these points, and Heine has found himself engaging in a number of debates with them at various conferences and in a number of journals. The Japanese side of Heine’s self appreciates these opportunities for self-improvement.

Heine and his colleagues have proposed that rather than prioritizing motivations to enhance the self, Japanese tend to emphasize motivations to improve themselves. Heine reasoned that Westerners, striving to feel good about themselves, would devote their energies to tasks at which they were especially good, whereas Japanese, trying to correct their shortcomings, would work especially hard on tasks at which they were poor. This theorizing led to a line of research, funded by the National Institute of Mental Health, that revealed this intriguing pattern of results. One consequence of this motivational difference, it seems, is that self-enhancing Westerners should increasingly become specialists, whereas self-improving Japanese should tend to become generalists. Heine continues to explore self-improvement motivations across cultures, particularly in the context of face-maintenance strategies. He is also interested in pursuing the more general question of which aspects of our psychology are universal and which ones vary across cultures.

Sources:

http://www2.psych.ubc.ca/~heine/

http://www2.psych.ubc.ca/~heine/docs/apa-bio.pdf

http://www.vanmag.com/News_and_Features/The_EastWest_Mind_Divide

Psych Bio: Ed Diener


Ed Diener is the Joseph R. Smiley Distinguished Professor of Psychology at the University of Illinois. He received his doctorate at the University of Washington in 1974 and has been a faculty member at the University of Illinois for the past 36 years. Dr. Diener has been president of the International Society for Quality-of-Life Studies, the Society for Personality and Social Psychology, and the International Positive Psychology Association. Diener was the editor of the Journal of Personality and Social Psychology and of the Journal of Happiness Studies, and he is the founding editor of Perspectives on Psychological Science. Diener has over 300 publications, with about 200 in the area of the psychology of well-being.

Dr. Diener is a fellow of five professional societies. Professor Diener is listed as one of the most highly cited psychologists by the Institute for Scientific Information, with over 30,500 citations to his credit. He won the Distinguished Researcher Award from the International Society for Quality-of-Life Studies, the first Gallup Academic Leadership Award, and the Jack Block Award for Personality Psychology. Dr. Diener has also won several teaching awards, including the Oakley-Kundee Award for Undergraduate Teaching at the University of Illinois. With over 50 publications, he is the most published author in the Journal of Personality and Social Psychology.

Professor Diener’s research focuses on the measurement of well-being; temperament and personality influences on well-being; theories of well-being; income and well-being; and cultural influences on well-being. He has edited three recent books on subjective well-being, and a 2005 book on multi-method measurement in psychology. Diener just published a popular book on happiness with his son Robert Biswas-Diener (Happiness: Unlocking the Mysteries of Psychological Wealth) as well as a book on policy uses of accounts of well-being with Richard Lucas, Ulrich Schimmack, and John F. Helliwell (Well-Being for Public Policy). A multivolume collection of his most influential works in the area of subjective well-being will be published this year (The Collected Works of Ed Diener) as well as a book on international differences in well-being, which he edited in conjunction with Daniel Kahneman and John F. Helliwell (International Differences in Well-Being).

Dr. Diener was born in 1946 in Glendale, California. He grew up on a tomato and cotton farm in the San Joaquin Valley of California, near Fresno. He attended San Joaquin Memorial High School in Fresno, where he met his wife, Carol. He received his bachelor’s degree from California State University at Fresno and his Ph.D. from the University of Washington. Ed and Carol met at age 16 and have been married for 40 years; she is a child clinical psychologist and attorney who recently retired from the University of Illinois. The Dieners’ twin daughters, Marissa and Mary Beth, teach psychology at the University of Utah and the University of Kentucky, respectively. Marissa is a developmental psychologist and Mary Beth is a clinical psychologist. The Dieners’ son Robert has collected well-being data in collaboration with Dr. Diener. Because of the exotic groups involved in Robert’s research, including the African Maasai, Greenlandic Inuit, the Amish, and slum dwellers in Calcutta, Robert has been called the Indiana Jones of well-being research. He was branded in a rite of manhood by the Maasai. Two other daughters, Kia and Susan, are not psychologists.

In his own words:

As appeared in Robert Levine, Lynnette Zelezny, and Aroldo Rodrigues (Eds.), Journeys in Social Psychology (pp. 1-18). New York: Psychology Press.

Ed Diener: One Happy Autobiography

Ed Diener

University of Illinois

9/27/06

Abstract

In this autobiography, I discuss three aspects of my life – the stages of my research career, the personality characteristics and resources that made my success possible, and the challenges I faced. Thus, I give a motivational view of my life, not a narrative recounting couched in terms of dates and places. In my career as a scientist, the first stage focused on the study of deindividuation, the second on the study of subjective well-being, and the third is the future, from age 60 to 100. The 25 years I spent exploring subjective well-being have been wonderful ones, but I expect the next 40 years to be just as rewarding. The character traits that I describe are an insatiable curiosity and an inveterate nonconformity. These personality proclivities were given direction by my upbringing, which included a strong and supportive family that emphasized hard work and high achievement. The crucial resources in my success were my family, colleagues, and the graduate students who have worked with me on research. Together, the personality traits and social resources led me to explore unusual topics in new ways, and to analyze topics programmatically with diverse types of studies. Finally, in this chapter I describe the challenges I faced in life, beginning with the intense need to help the world and the personal struggle to discover whether psychological research could do that. The other two challenges were the dilemma of giving priority to my family or to my research, and the need to gain respect from a skeptical scientific community for the research area of subjective well-being. I conclude that although a career in research is not for everyone, finding the right work for one’s personality, in combination with supportive family and colleagues, leads to very high life satisfaction.

Life looked bright in 1946 when I arrived in Glendale, California, the youngest of six children, several weeks overdue and a fat little guy at over 9 pounds in weight. In the beginning, I knew very little about statistics and subjective well-being, but had a loving family that produced subjective well-being in me. At my baptism, two weeks after my arrival, my older brother got his head stuck in the communion railing at the church and stole the show. After that unfortunate incident, I have had the wind at my back through the rest of my life. In this accounting, I will present my life like a social psychology experiment: in a 3 by 3 design – three facets each for three major topics.  The three overarching domains are: 1) The three fun-filled stages of my professional career as a research psychologist, 2) The personality characteristics and resources that helped my success, and 3) The challenges I overcame. At age 60 I am hopeful that my life has another 30 or 40 years left to go, and therefore this report is a periodic update, not an autobiography per se, which will come much later.

Career Stages

My father was a successful farmer, who wanted nothing more than to produce more successful farmers.  So he sent me to Fresno State College to obtain a degree in agriculture.  Unfortunately for my father, the study of seeds and weeds bored me to death. He did not seem to realize that plants do the same thing year after year, whereas I noticed this early on and was not enthusiastic about the repetitive character of Mother Nature.  I was, however, drawn to anthropology and psychology, where the subject matter seemed less predictable.

My father was interested in concrete things such as tractors and tomatoes, not in something as ephemeral as the human mind.  My father loved numbers, as I do, but he loved numbers applied to the physical world, not to human behavior. He thought the world needed more weathermen, not psychologists.  For my dad, predictive validity meant accurately forecasting rain, not human behavior.  He told me that we would not need psychologists if only people worked harder, because then their mental problems would disappear. Nonetheless, my parents allowed me to follow my own interests and were supportive once it was clear that psychology was my passion.

In the standard research methods course required of all psychology majors at Fresno State, each student had to conduct his or her own study, and I proposed to the professor that I assess the happiness of migrant farm workers. After all, I had grown up with farm workers, and most of them appeared to me to be relatively happy, even though relatively poor.  The professor was not pleased with my proposal. He said: “Mister Diener, you are not doing that research project for two reasons. First, I know that farm workers are not happy, and second, there is no way to measure happiness.”  Ironically, I conducted my class project on conformity. Thus, I was temporarily diverted from studying happiness. It wouldn’t be until 1981, when I received tenure at Illinois, that I would finally become free to study what I wanted: happiness.  But in the interim, I needed a topic to fill the intervening 15 years; something to while away my time.

Stage 1: Deindividuation

After working in a psychiatric hospital for several years, I attended graduate school at the University of Washington. My wife, Carol, and I chose the university because Seattle was very green and pretty; we knew nothing about the school itself. When I see the effort students now put into choosing just the right graduate school, I am amazed at how nonchalant we were about this important decision. But it also leads me to wonder whether finding the perfect graduate school matters less than what you make of the experience once you arrive.

I was an eager beaver during those graduate school years; I even wrote a history book while working on my dissertation. I think the secret was that I did not waste time. I worked hard all day and a few evenings without interruption, and therefore had the weekends free for my family. I came to grad school after being a hospital administrator, and so I was organized and efficient. While I was at Washington, the department of psychology moved to a new building, but I remained behind in the deserted Denny Hall because that allowed me to have an entire floor of the building in which to conduct my deindividuation studies. I had a small army of undergraduate assistants, up to 20 per semester, to help conduct studies and code data. We had a ball running those studies.

My major professors at the University of Washington were Irwin Sarason and Ronald E. Smith, who taught me the basics of personality psychology and the importance of multimethod measurement. Years later, I would edit a book on multimethod measurement, and I owe my interest in this area to my mentors in Seattle. An idea that I learned from my mentors at the home of the Huskies is that even when situations exert a powerful influence on behavior, personality can simultaneously produce strong effects.  We published a review study that showed personality, on average, predicted as much variance as did experimentally manipulated situational variables.

Another one of my professors in Seattle was Scott Fraser, with whom I and other graduate students began a series of unusual studies on deindividuation, the loss of self-regulation in groups. Given the riots of the 1960s and the ongoing anti-Vietnam rallies, we were intrigued by crowd behavior. In one series of deindividuation studies, we observed thousands of trick-or-treaters as they came on Halloween to dozens of homes around Seattle. We experimentally manipulated factors such as anonymity, arousal, and responsibility, and observed whether kids “stole” extra candy. In some situations, almost all trick-or-treaters made off with extra sweets, and in other conditions almost no children did so, thus demonstrating the power that situational factors sometimes exert on cute, costumed rule-breaking children. These studies made the national news, and the stories were often rebroadcast each year just before Halloween. These studies were fun because I conducted them with fellow graduate students, Art Beaman and Karen Endresen, with whom I became close friends. We worked hard for a common purpose and did not compete with each other. Notice to graduate students: though you need to advance your own career, cooperation with your fellow graduate students, not competition, is the way to achieve this.

While in graduate school, I employed a method for studying group aggression called the “beat the pacifist” paradigm. Our participants were asked to help us test the training of pacifists, to ensure that they would remain nonaggressive when faced with challenges to their beliefs.  The participants could do so by discussing pacifism with the target, or by harassing him to see how he would react, or even by attacking the victim with various implements.  Again, we manipulated factors such as arousal, anonymity, and responsibility.  The differences in aggression between conditions were dramatic. In some conditions, many participants would use rubber bats to hit the target hundreds of times in a short period.  In some instances, the study had to be halted because the participants were attacking the pacifist (often played bravely by me to spare my assistants from this unpleasant role) in a way that would injure him.

It may surprise some readers that we did not encounter problems in receiving ethics approval for these studies.  However, as I recall, the psychology department in those times was overshadowed by much more scandalous affairs.  One professor was fired for selling cocaine and justified his stash of drugs by claiming it was part of a psychology experiment.  A second young professor turned out not to actually have a Ph.D., because he attended graduate school without being enrolled as a student. Another professor was found to be having sex with the undergraduates in his class and used the defense that he was helping the women by moving them to a higher spiritual level by putting them in moral conflict.  Once, a female professor asked me whether I had an “open marriage,” and I naively responded “yes.” Only later did I realize that her inquiry was an invitation to sex rather than an inquiry about the honesty of my marriage. Once I understood the real question, I had to admit that my marriage was not open. Thus, although not many IRB’s today would approve the “beat the pacifist” studies, in the context of the 1970’s, they seemed unremarkable.

In the 1980s, I traveled to South Africa to serve as an expert witness, based on my deindividuation research, in a murder trial: an angry crowd of over ten thousand had beaten and killed a woman who was believed to be a police informant. The entire incident was captured by a television network, and fourteen of those involved in the murder were apprehended by the police. My role for the defense was to convince the judges that the crowd situation provided mitigating circumstances; without this defense, the defendants would all be hanged, because the death sentence was automatically imposed unless mitigating circumstances could be proven. Most of the defendants were found guilty, but none were hanged. My work with deindividuation ended on a high note.

The deindividuation studies were fun, but I was anxious to move on to new territory. Once I received tenure at Illinois, I was finally free to begin studying happiness.

Stage 2: Subjective Well-Being

In 1980, Carol and I spent our sabbatical year in the Virgin Islands.  While Carol taught nine psychology courses at the College of the Virgin Islands, I spent the year on the beach, reading the 18 books and 220 articles I could find that were related to subjective well-being.  One might think that the island setting was conducive to happiness, but a surprising thing we noticed was that many people who moved to this tropical setting did not find the happiness they sought.  Instead, their alcoholism, bad social skills, and chronic discontent often followed them to paradise. Living in paradise apparently does not guarantee high subjective well-being, and so I wondered, what does?  That year I wrote a basic introduction for psychologists to the field of subjective well-being, which appeared in Psychological Bulletin in 1984, and that early paper has been cited well over 1200 times.

Journalists ask why I decided to study happiness in those days, when it was a topic far off the beaten track. Although the works of the humanistic psychologists, such as Maslow, stimulated my interest in the ingredients of the good life, my parents also had a profound influence on me. They were happy people and believed in looking at the bright side of events. My mother presented me with books such as Norman Vincent Peale’s The Power of Positive Thinking, and this piqued my interest. My mother told me that even criticism could be framed in a positive way. No wonder I was drawn to happiness.

When I began to read the literature on subjective well-being, I realized that this was relatively unstudied terrain.  Yes, there were pioneers – such as Norman Bradburn and Marie Jahoda – but most topics in this area had not been analyzed in depth. Not only did the topic seem very important, but it seemed relatively easy to explore, because so little research had been done. What a happy decision for me.

In the 25 years since I entered this field, my laboratory has concentrated on several topics, including measurement. Although measurement is boring to many, I believe that it is pivotal, forming the foundation of scientific work. Thus, I have worked to create new measures, validate measures, examine the structure of well-being, and analyze the relations between various types of assessment. Measurement issues are still understudied, and questions about defining and measuring well-being are among the most important in this area of study. Besides measurement, research from my laboratory has spanned topics from the influence of personality and culture on happiness to the effects of income and materialism.

Recently, as an extension of my measurement work, I have been exploring the idea of national indicators of well-being to aid policy makers. The idea is that national accounts of subjective well-being can be useful to policy makers by providing them with a metric for societal betterment that includes information beyond that obtained by economic indicators. I argue that we need a “Dow Jones of Happiness” that tells us how our nation is doing in terms of engagement at work, trust in our neighbors, life satisfaction, and positive emotions.  The proposed national accounts of well-being have been greeted by more acceptance than would have been possible a decade ago.  For example, the government of the United Kingdom is considering what well-being measures might be used on a systematic basis to inform policy, and the biennial survey of the European Union already includes a large number of questions about subjective well-being.

Another interest of mine is the outcomes of well-being – how does the experience of happiness and life satisfaction influence people’s behavior and success?  Sonja Lyubomirsky, Laura King, and I argue that happy people are likely to be successful people in all sorts of realms, such as on the job, in relationships, and in longevity and health.  Based on this work, my son, Robert, and I are developing a book for the public, in which we present the case that happiness means more than feeling good – it is one ingredient in the recipe for success.

When I entered the field of subjective well-being, a few facts were already known. Nonetheless, most of the territory was uncharted. Looking at the area, I felt that the first priority after the development of good measures was to discover some basic, replicable facts, to map the topography of who is happy and who is unhappy. My role models were not the great theorists of science such as Newton, Darwin, and Einstein. I felt the field was much too primitive for even rudimentary theories. Instead, I looked to Karl von Frisch and Tycho Brahe as my two models for scientific work on subjective well-being. I read von Frisch’s Dance of the Bees at age 14, and was awestruck by the genius of his simple experiments with bees. I had grown up on a farm where millions of domesticated honeybees were used for pollinating crops, and yet their behavior was inexplicable to me – they were a swarm of dangerous madness with a queen at the middle. But von Frisch discovered so much about the bees’ frenetic behavior from his experiments, demonstrating that powerful observation and experimentation can lead to true advances in human knowledge even without elaborate theories. Tycho Brahe, who wore an artificial silver nose because of a swordfight mishap, carefully mapped the heavens, and his maps provided the basis for the theoretical advances of Kepler. Just as Tycho spent years of nights ensconced on a dark island recording the movements of the stars, I hoped to carefully chart who is happy and who is not, so that some later geniuses could produce Newtonian laws of happiness.

One of my goals for the field of subjective well-being was to develop other measures besides broad self-report scales, which suffer from certain limitations such as self-presentational differences between people. One method we began using in our earliest studies in 1981 was the experience-sampling method, in which we used alarm watches to signal people at random moments through the day. When their alarms sounded, participants rated their moods.  If they were involved in sex or some other absorbing activity where interruption might ruin the mood, they could wait up to 30 minutes to complete the mood scales.  We also developed informant report measures and memory measures of happiness.
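
(A side note for methodologically curious readers: the scheduling at the heart of experience sampling is simple enough to sketch in a few lines of code. The Python snippet below is a minimal illustration, not the procedure or software actually used in those 1981 studies – the alarm watches predate anything like this – and the six-signals-per-day count and the 9 a.m. to 9 p.m. waking window are invented for the example; only the 30-minute grace period comes from the description above.)

```python
import random
from datetime import datetime, time, timedelta

def daily_signal_schedule(n_signals=6, start_hour=9, end_hour=21, seed=None):
    """Draw sorted random signal times across the waking day; at
    each signal the participant rates his or her current mood."""
    rng = random.Random(seed)
    day_start = datetime.combine(datetime.today(), time(start_hour))
    window_minutes = (end_hour - start_hour) * 60
    offsets = sorted(rng.sample(range(window_minutes), n_signals))
    return [day_start + timedelta(minutes=m) for m in offsets]

GRACE = timedelta(minutes=30)  # delay allowed if a signal interrupts an absorbing activity

for signal in daily_signal_schedule(seed=7):
    print(signal.strftime("%H:%M"), "signal -> respond by",
          (signal + GRACE).strftime("%H:%M"))
```

Sampling minutes without replacement guarantees that no two signals collide, and sorting keeps the day's schedule chronological.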

Although I worked in relative obscurity in the early years, recently the topic has become popular. Happiness has become a hot topic among television and documentary artists, as well as newspaper and magazine writers. A problem is that many journalists have a message they wish to convey and are merely looking for experts to confirm their opinion. The media reports are sometimes barely recognizable as what I said to the journalist. Although it is exciting to be featured in prominent outlets such as Time magazine and documentary films, my feeling is that very often now the reporting is outstripping our knowledge. As the field develops, the dance with the media will be a continuing struggle between providing helpful information to the public and not getting caught in the trap of telling more than we know.

One question that is frequently asked by journalists is what I have learned from my studies about happiness that I can use in my own life. Many people think of me as the happiest person they know. My own assessment is that I am extremely high in life satisfaction, but I am only average in levels of positive moods. Studying happiness is not a guarantee of being happy, any more than being a biologist will necessarily make one healthier. One thing that is quite clear to me is that happiness is a process, not a place. No set of good circumstances will guarantee happiness. Although such circumstances (a good job, a good spouse, and so forth) are helpful, happiness requires fresh involvement with new activities and goals – even perfect life circumstances will not create happiness. For me this meant that I should not worry about getting to a sweet spot in my career where everything would be lined up just right. I realized that no amount of eminence, awards, desirable teaching load, a larger office, or whatever other thing I might want, would guarantee happiness, although these things might help. Instead, I discovered that continuing to have goals that I enjoyed working for was a key ingredient for happiness. People often think that once they obtain a lot of good things, they will thereafter be happy, without realizing they are, for the most part, likely to adapt to the circumstances.  On the other hand, fresh involvement with new goals and activities can continue to produce happiness.

Another fact that has been evident in my life is that all people experience some negative life events, and yet many people are nevertheless still happy. I found that tragic events in my own life led to temporary unhappiness, but that I bounced back. People do not necessarily bounce back completely from all negative events, but most humans are pretty resilient.  The major sources of happiness often reside in a person’s activities, social relationships, and attitudes towards life.

Stage 3: The Future

At age 60, some people believe they are entering the last phase of their lives. I consider 60 to be the half-way point of my productive years (from 30 to 90). Thus, I am exploring new avenues for the second half of life. One project is a journal I have founded for the Association for Psychological Science, called Perspectives on Psychological Science. For four years, I was the associate editor of the Journal of Personality and Social Psychology and then served as the editor of the personality section of that journal for six years. Alex Michalos, Ruut Veenhoven, and I founded the Journal of Happiness Studies, for which I was the chief editor for several years. The 12 years of previous editing were my warm-up for editing “Perspectives.” My goal is a lofty one – to make “Perspectives” the most interesting psychology journal in the world.

Another project for the next 30 years is to make Carol’s life as happy as it can be. I must remind myself that the good life is more than being a productive researcher; it includes being a good human as well. Early-career scientists should not forget this point. Although it may seem strange to mention Carol’s happiness in a professional biography, I want to ensure that young, ambitious psychologists do not forget that one should not excel at one’s job at the expense of being a decent human being.

On the whole, except for a few health problems relating to aging, I expect the next 30 years to be as good as the last 30! Andrew Carnegie said that to die rich is to die disgraced. Thus, Carol and I plan to use our money before we die on projects related to helping people and advancing psychology – projects that will also require our time and energy. This is yet another lesson for young readers – life is not over at 50. Or 60. Or 70. Although I may slow down a bit after 60, scientists often continue productive careers into their 80’s.

Resources and Strengths

I believe that to understand people, we must consider their strengths and resources, not simply the problems they face. In my case, I have certain personality characteristics that have helped me succeed in the career path I chose, as well as abundant resources for which I am very grateful. I was fortunate to come from an affluent family, which spared me many pressures when it came to money. I did not have to take added summer work if it interfered with my research, and I was able to fund much of my own research, so that I did not have to spend time applying for grants. However, other resources were much more helpful than money.

Resource 1: Personality Characteristics

From an early age, I wondered about phenomena I observed. As a child, my curiosity sometimes got me in trouble. I once threw a rock at a swarm of bees to determine how they would react, and found out the painful answer. I also recall frustrating my 7th grade teacher with questions about math, such as how to compute cube roots.  My head still hurts, at times, from wondering about so many things.

I was a sickly child, and so I spent a lot of time at home. I would roll dice for hours and record the outcomes, and eventually I figured out how to compute probabilities. I then turned to calculating the probabilities of poker and blackjack hands, a more challenging task for a sixth-grader. I feel that curiosity is one of my biggest assets as a researcher; I always seemed to be fascinated more by what I did not know than by what I already knew. Engineering is probably a good field for those who like certainty; psychology intrigues those who are drawn to uncertainties.
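
(To give a flavor of such calculations, here is one small worked example – purely illustrative, not drawn from Diener’s account. It enumerates all 2,598,960 five-card hands and counts those containing exactly one pair.)

```python
from itertools import combinations
from collections import Counter
from math import comb

# A deck of 13 ranks x 4 suits; only ranks matter for detecting a pair.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]

def is_one_pair(hand):
    """True if the hand holds exactly one pair and nothing better
    (two pair, three of a kind, etc. are excluded)."""
    rank_counts = sorted(Counter(rank for rank, _ in hand).values())
    return rank_counts == [1, 1, 1, 2]

total = comb(52, 5)  # 2,598,960 possible five-card hands
pairs = sum(is_one_pair(h) for h in combinations(deck, 5))  # brute force; takes a moment
print(f"P(one pair) = {pairs}/{total} = {pairs / total:.4f}")  # ~0.4226
```

The brute-force count agrees with the closed form 13 × C(4,2) × C(12,3) × 4^3 = 1,098,240, so the chance of being dealt exactly one pair is about 0.42.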

My intense curiosity about things has served me well. For example, I not only constantly wonder about what makes people happy (and it sometimes keeps me awake at nights) but I wonder how measures can be improved and what shortcomings there are in our current research. Many people think the core of a good scholar is intelligence; I think it is an intense sense of curiosity.

Although I was a high-achieving child, I was also always a sensation seeker and nonconformist. This sometimes resulted in danger-seeking, for example climbing the Golden Gate Bridge on several occasions.  As a teenager I experimented quite a bit with gasoline, gunpowder, and fire.  My parents gave me a car at age 12, for driving on dirt roads only, and I made good use of it with my friends – hunting birds from the windows as we drove.  I did quite a few nonconformist things, perhaps even some illegal ones (which I will leave to your imagination).  As an adult, I was known for parties at our house that featured events such as walking on broken glass, carving Spam into “art,” and seeing whose method worked best for removing red wine stains from our carpeting. Although I am embarrassed to provide more examples of my behaviors, I believe this playful attitude to life had positive effects on my scholarship. I was willing to take on new topics, even if they were not popular, and I was not much affected by what others thought, if I believed the topic was an interesting one. This nonconforming tendency led me to be attracted to topics that were not heavily worked by others, and continues to lead me to challenge conventional wisdom.

Resource 2: My Upbringing and Family

I possess personality characteristics that have aided my career, but by far the biggest resource in my life has been the help I have received from others, starting with my parents. My parents gave me a sense of security and of meaning in life. They were optimists, but they also transmitted the idea that we must all work to improve the world. My four older sisters lavished attention on me and made me think I was special. Because my parents almost never argued and never moved from their farm, the universe was a secure and benevolent place for me. Although I was no more special than anyone else, feeling secure and valued gave me a self-confidence that helped me take on new and big projects later in life.

I was the youngest of six children, but my siblings were much older and went away to high school, so I grew up much like an only-child. Because I was often sick in my early years, my mother read to me for hours. As I grew older, my mom was intent on me being a high achiever. I won dozens of merit badges in Boy Scouts, and many awards in 4-H. I also competed in many public speaking events even before I got to high school. While my mother focused on my accomplishments, my dad was a disciple of hard work. My 4-H projects were raising cattle, cotton, and sugar beets. I also did electrical and carpentry projects, and did welding in the farm shop. In the summer my dad directed me to drive a mammoth tractor, but I would do anything to escape that boring task. On the farm, I learned a high degree of self-reliance; I was expected to figure out how to do things and get them done.  No molly-coddling from my dad.  If I could have a car at age 12, I could figure out how to get things done too.  Thus, I grew up in a world of hard work, self-reliance, and achievement.  The things I learned growing up shaped the rest of my life, and many of the meta-skills readily transferred to the research arena.

I attended Westside Elementary School, which was a farm school with many students who had recently emigrated from Mexico and had trouble with English. Because of the difficulty of attracting teachers to such a remote area, many of our instructors possessed only provisional teaching certificates. I had a teacher in 4th grade who showed a huge number of movies and then showed them again in reverse. I was never assigned even one minute of homework in my first nine years in school. Dissatisfied with this state of affairs, my parents sent me to a high-powered Jesuit boarding school for high school. The curriculum was tough, but having never done homework, the three hours per day of study hall was traumatic for me.  In addition, we were given library assignments, and I had never used a library.  So I boarded a Greyhound bus and ran away. My parents told me I had to return to the school, but I refused. And so I went to live with an older sister closer to home, and I attended a Catholic school that did not have a study hall.

This was a fortunate turn of events for me, because it was in that high school that I met the love of my life, Carol.  Although we encountered the police and a lady with a shotgun on our first two dates, our relationship flourished from the outset.  We dated through two years of high school and two years of college, and finally got married at the advanced age of 20 in our junior year at Fresno State.  Carol was pregnant by our senior year in college, and we had our first children the fall after graduation.  I still recall Carol throwing up from morning sickness before each of her final exams during that last year in college.

Carol and I have had a wonderful family life. Rather than interfering with my research, it has provided the security and positive moods that have allowed me to be more successful in my research. Carol gave birth to our twins, Marissa and Mary Beth, when we were 22. In those days before sonograms, our twins came totally unexpectedly. We had Robert while I was a graduate student. Thus, when we moved to my first job, at the University of Illinois, we had three children.  As I began my tenure-track job, and Carol began her Ph.D. program in clinical psychology, the twins began first grade, and Robert was expelled from Montessori school for being too nonconforming.  My life proves that it is possible to combine an academic career with a family, although it is a lot of work.

Carol returned to school to obtain a law degree in 1994. She had mastered her job as a professor of clinical psychology and sought a new challenge. What made her first year in law school more difficult than usual was that she continued to teach in the psychology department part-time, and four of our children were wed in that overly full year. Most law students find the first year of law school to be quite challenging, but they usually don’t also have to contend with working and organizing weddings. Carol went to law school essentially for fun – an unusual motivation among law students, most of whom find the experience stressful. And she did have fun. However, law school also helped her in forensic psychology work. Carol has been teaching service-learning courses in the community with the police and the juvenile detention center, in which her law background is helpful.

Our experiences of parenting our three children were so rewarding that we decided to take in hard-to-place children when our biological children were in high school. We took in five foster children, all when they were about 10 years old, and ultimately adopted Kia and Susan.

In 1985 my father died, and this resulted in me becoming president of our large family farm. We grow processing tomatoes, cotton, lettuce, and other crops, and have over 70 employees. We grow over 100,000 tons of processing tomatoes each year, and so if you have ever eaten Mexican food, Italian food, tomato soup, or ketchup, you likely have partaken of some of our tomatoes.  This is why I founded the Psychology of Tomatoes Club of America, but so far only Paul Rozin of the University of Pennsylvania has joined.  Being president of the farm was a big job, requiring about two days a week of my time. Thus, I had to work very hard in those days, and there was little time for hobbies, television, or socializing with friends. The farm management was a nice break from academic work, and the farm provided income that meant we did not have any financial worries. At the same time, I was working days, nights, and weekends to keep up.

Resource 3: Colleagues and Students

On my curriculum vitae, I have over 200 publications, but what I like about my publication record is that I have had over 100 different co-authors. My C2 index for “collaboration” is 10, meaning that there are ten scientists with whom I have each produced ten or more publications. In other words, it has been my good fortune to work intensively with a large number of very talented individuals. I have been blessed with some of the best graduate students in all of psychology, and to them I am so grateful. The students who have worked with me have gone on to win many awards and accolades, but these do not fully capture their enthusiasm, hard work, and creativity! They have made my career successful.
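
(The “C2 index” is easy to state precisely: by analogy with the familiar h-index, it is the largest number c such that at least c co-authors have each produced at least c publications with the author. The sketch below is a hypothetical illustration – the function and the example counts are invented, not Diener’s actual data.)

```python
def c2_index(papers_per_coauthor):
    """Collaboration analogue of the h-index: the largest c such
    that at least c co-authors have each co-written at least c
    publications with the author."""
    counts = sorted(papers_per_coauthor, reverse=True)
    c = 0
    while c < len(counts) and counts[c] >= c + 1:
        c += 1
    return c

# Invented example: publication counts with each of twelve collaborators.
print(c2_index([25, 22, 20, 15, 14, 12, 11, 10, 10, 10, 4, 2]))  # -> 10
```

With the counts sorted in descending order, the computation is a single scan, exactly as with the h-index.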

My first Ph.D. student was Randy Larsen, who went on to win the early scientific career award from the American Psychological Association. Robert Emmons came to my laboratory a few years later, and he was one of the most productive graduate students I have ever seen. In our first years in the field of subjective well-being, we published 15 studies in 1984 and 1985 alone. Because of these outstanding students, I was off to a strong start. Over the years, my research has often moved in new directions because of the people working with me. Eunkook Suh and Shigehiro Oishi moved my work toward questions of culture and well-being, while Richard Lucas prompted greater exploration of the role of adaptation in well-being. Similarly, Ulrich Schimmack, Frank Fujita, and Bill Pavot explored the structure of well-being in my laboratory and then later on their own. In recent years, I have had a new round of very talented students – Will Tov, Weiting Ng, Christie Scollon, Chu Kim-Prieto, Maya Tamir, and Derrick Wirtz. I have published over 130 papers and books with 55 students and former students, and I have three students with whom I have published more than 20 papers each. I once sat in an auditorium with this very talented group of former graduate students, and someone walking by said “genius row.” They were not referring to me, but to the enormously gifted students with whom I have been so fortunate to work. As I will describe, I have also been fortunate to have my wife and three psychologist children work with me, and I continue to collaborate with them on a number of projects. This, too, is a very talented group. My wife Carol has more insight into people than any psychologist I have ever met.

In my work on happiness, I also have been blessed with impressive co-authors such as David Myers, Martin Seligman, Laura King, Sonja Lyubomirsky, and Daniel Kahneman. As mentioned above, I have had many outstanding graduate students and post-docs working with me. There are so many that I can’t name them all here, but I should say that this group is responsible for most of the specific topics of my research. Excellent graduate students move their mentor’s research in new directions, and they influence their mentor as much as he or she influences them. I have been the president of several scientific societies and have received a distinguished scientist award. However, the award of which I am proudest was a teaching award that was bestowed on me for involving students in my research.

My advice to young people who are entering the field: Work with excellent mentors and fellow graduate students, and your career will be enormously enhanced. When you become a professor, do everything you can to attract the most outstanding students. Don’t compete with your colleagues and students; collaborate with them instead.

The Psychology Department at the University of Illinois

When I earned my doctoral degree, my first job offer came from Harvard University, and it was difficult to turn down this position. My parents taught me not to care about prestige, but I failed to completely learn the lesson. However, Carol was admitted to the clinical psychology graduate program at Illinois, making the decision easier, and so we headed to the Midwest. Although I really wanted to go to Harvard – everyone recognizes that prestigious institution – I did not realize at the time that the department of psychology at Illinois is truly outstanding. After 32 years at the University of Illinois, I realize that my parents were again correct – do not worry about prestige, but choose the place where you can do your best work, which for me has been at Illinois. Although not as high-profile as Harvard, our department has the most productive colleagues and students, and I have learned more psychology as a faculty member there than I did as a student. We never thought we would stay on the prairie in Illinois, but we are still there because of the excellence of the department and because it is such a wonderful scholarly environment. In every phase of my life I have been blessed with so many resources; I wonder what I did in a previous life to have deserved my good fortune.

Life Challenges

I have faced several challenges in my life, and wrestling with them has energized me.

Challenge 1: Making Time for Family and Research

My family was very close when I was a child, and I wanted a similar family when I became an adult. Carol and I met in high school and fell in love. We decided to have eight children, because we both enjoyed kids. To be truthful, I wanted 8 and Carol wanted only 6. However, when I became a researcher, and Carol became a clinical researcher with a university appointment, the issue was how to be good parents and also good psychologists. I recognized that being outstanding at research required long hours – one is unlikely to make major contributions working a normal 40-hour week. Eighty hours is required. How could I resolve the 24-hours-in-a-day limitation, wanting to be outstanding both as a husband and father and as a researcher?

One part of the solution was to drop superfluous activities from my life. I decided I would have to watch television and read novels after retirement. When friends mentioned popular television programs such as Seinfeld or Cheers, I had to admit I had never seen them. I did regret not being able to read novels, but knew there was simply no time for hobbies. Of course, one can be a good researcher without working night and day, but for those who hope to work at the forefront of science, sacrifices are usually needed. For me, these sacrifices were always worth it, because I can’t imagine an episode of Seinfeld that is as good as analyzing data or spending time with my kids. In a recent study, sex was the most rewarding activity for a group of Texas women. I believe that is because they have never analyzed data.

Another part of my solution to the family-research dilemma was to frequently involve my family in my work. I often took our kids to work and discussed psychology with them. This had the unintended benefit of leading our three biological children into careers in psychology. Because our two adopted daughters did not go into psychology, we often joke that it must be genetic. But an alternative explanation is that we adopted our two daughters at age 10, and so they missed some of that early exposure to the discipline. Marissa became a developmental psychologist, and teaches at the University of Utah, and Mary Beth became a clinical psychologist and teaches at the University of Kentucky. I joke that genes are not destiny because although our twin daughters have virtually identical genes, their careers took different paths.

On weekends and evenings, we sometimes carried out psychology projects with our children. For example, Robert did his science fair project on the relation of mood and weather. When Robert was a baby, I trained him to “magically” turn the television on and off by waving his arms – just for fun (waving his arms actually completed a circuit for the electric eye above him). We all tried receiving shocks from the shock machine in my laboratory, and the kids helped me collect beer bottles to throw in deindividuation experiments. At the dinner table, we often discussed the activities of the day like any other family, but we also discussed issues related to human behavior. There was never any attempt to influence our children’s career choices; psychology was just something they learned was very interesting.

We traveled with our children every summer. Some of our trips were to visit standard destinations such as the Grand Canyon, while other travel was to more exotic locations. When we traveled in a dugout canoe to visit the Yagua Indians deep in the Amazon rainforest, they gave our son, Robert, a blow-gun with curare-tipped poison darts. Knowing that curare can induce respiratory failure and be fatal, Carol was a strict mom and would not let Robert keep the curare. But he did bring the blowgun and darts back from our travels; hopefully he did not use them on his friends.

To this day, Robert loves traveling to exotic places, and he has been a wonderful resource for me in collecting data from difficult sites. Few of my graduate students would be willing to live with the Maasai and be branded by them in a rite of manhood. Similarly, few of my graduate students could travel to Northern Greenland and live among the icebergs with Inuit in order to collect data. I am certain that none of my other assistants would want to collect data in the worst slums of Calcutta or among the homeless.  Thus, as the “Indiana Jones” of psychology, Robert has been a tremendous asset to me.

Challenge 2: How to Help the World?

My parents were very religious and built a Catholic church on their farm for their employees. They contributed their time and energy extensively to charities and were generous philanthropists. My mom and dad inculcated in me the idea that the most important goal in life is to improve the world. My mother once told me that some people believe they will get to heaven by faith, but she believed you have to earn heaven through good works. Although my parents were wealthy, making a lot of money was never their goal in life.

Despite my evolving views on religion, the motive to improve the world has stuck with me. But the question was always how best to help the world. I thought of becoming a priest, but meeting Carol interfered with that idea. I thought of becoming a doctor and going to Africa like Albert Schweitzer, but my squeamishness seemed to be an impediment to a career in medicine. Finally, I settled on clinical psychology because it combined a topic I found fascinating with helping people in trouble. When a psychology professor asked me why I entered psychology, I replied, “To help the world.” He was crestfallen because he had hoped I would say my motivation was an interest in psychology for its own sake. My major motivation in those days, however, was to find a vocation that would improve the world. Only later did I come to terms with the idea that helping the world might come from doing what I did best and enjoyed most.

After college graduation, I was called for the draft to go to Vietnam. I registered as a Conscientious Objector. My family was disappointed in this choice, but I persisted and was fortunate to be granted C.O. status by my draft board. When I told people I was a C.O., they assumed I meant Commissioned Officer, and they were shocked when I told them the real meaning of the letters. The draft board assigned me to two years of alternative service working in a psychiatric hospital to take the place of military service. This was wonderful for me because I thought I would get the needed experience to be accepted to a top program in clinical psychology. Little did I know that the experience would be very educational in another way – revealing to me that I hated working with patients.  I was perceptive enough to realize that if one hates working with clients, one is probably not cut out for being a therapist.

Through a number of promotions I became the administrator of a new, small psychiatric hospital in the system. This heavy responsibility at age 24 was a huge lesson in many aspects of life; what does not kill you strengthens you. Because of this intense experience of shouldering the responsibilities of running a hospital, I went to graduate school with a maturity beyond my years. The hospital also settled a future choice for me: I would never enter university administration, because it turned out that I loved research far more than being an administrator.

Upon entering graduate school, I was still troubled by the question of how I was going to help the world. I thought I might accomplish this by teaching psychology in a small liberal arts college, but graduate school taught me that my first love is research, and that has been my life story since. What I came to realize is that most researchers do not change the world in a direct and concrete way, but the fruits of science have the potential to change history in profound ways. The “hard” sciences, including chemistry and biology, have dramatically changed our world. The difficulty, it seemed to me, is that the behavioral sciences have lagged behind, so that most of the major problems now facing humanity are in fact problems of human behavior. The disproportionate advances of the physical sciences relative to the behavioral sciences have produced some of these problems. Yet if the behavioral sciences were equally successful, we could potentially solve the most important problems facing humanity.

I also came to realize that people usually contribute most to the world in areas where they are talented and in activities that they love. When talent and passion are combined, we are most effective. My hope is that my research will in some way benefit humanity, so that my parents will smile when looking down from heaven. I am certain that research is not for everyone, but for me it is a vocation and a passion. So, readers, help the world by doing what you do best and love most.

Challenge 3: Overcoming Opposition to Subjective Well-Being Research

When I began conducting research on well-being, many scientists were skeptical, including a few older professors in my department. For one thing, they thought that it would be impossible to define and measure “happiness.” It always puzzled me that psychologists believed that depression and anxiety were measurable, but that positive states were not. Because several high-status professors in my department thought that studying happiness was a flaky endeavor, they blocked my promotion to full professor for a year or two.

The skepticism within my own department was a microcosm of the skepticism in the wider world of scientific psychology. When researchers presented studies showing the difficulties with measures of well-being, the findings were greeted with enthusiasm, whereas my studies showing the relative validity of the measures were often ignored. Many economists were actively hostile to the field, while many psychologists simply ignored it. Thus, for many years I worked with very capable graduate students and we published frequently, but subjective well-being remained a research area well off the beaten path. In those early years it received little attention in any of the core subdisciplines, such as personality or social psychology, and classes on it were virtually nonexistent.

Finally, in the late 1990s, interest in subjective well-being exploded. Part of this change was due to the attention the field received from Daniel Kahneman, the renowned experimental psychologist who went on to win the Nobel Prize in economics. When Kahneman began to publish in the field, that alone helped the area gain respect. Similarly, when Martin Seligman raised the banner of “positive psychology,” his stature in clinical psychology and the attention he brought to the study of happiness helped greatly. David Myers, one of the best writers in psychology, wrote a book on the science of happiness that further helped legitimize the field. In addition, some economists became increasingly disenchanted with the reigning behavioristic and materialistic paradigm in their discipline and conducted interesting studies using measures of well-being. Hopefully, the research that we and others carried out on subjective well-being also helped bring respect to the field; our aim was to use rigorous methods so that the field would gain credibility and become more than another “pop” self-help area.

It appears that in this first decade of the 21st century, subjective well-being has become firmly established as a science. My citation count has grown to over 11,000, and I have an h-index of 42, meaning that 42 of my articles have each been cited 42 or more times; in short, many researchers are citing many of our articles. The total number of publications in the area has grown rapidly as well. Figure 1 presents the number of publications on well-being (including topics such as life satisfaction, happiness, and positive emotions) over the last several decades, with the figure for the current decade projected from its first five years. As can be seen, there are now more than 2,000 publications per year in the area, and the number is climbing quickly. I have contributed almost 200 articles and books to the scholarly literature on subjective well-being. In the references I list 10 broad theory and conceptual articles that I believe have made important contributions to the field, along with 10 empirical articles that I believe represent significant advances in knowledge. Because I have published so many empirical articles, it was difficult to select the most important ones.
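For readers unfamiliar with the h-index, the rule is simple: it is the largest number h such that h of an author’s publications have each been cited at least h times. A minimal sketch of the computation in Python (the citation counts below are invented purely for illustration):

    def h_index(citations):
        # Sort citation counts from highest to lowest.
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank  # at least `rank` papers have `rank` or more citations
            else:
                break
        return h

    # Five hypothetical papers: the top 3 each have 3 or more citations,
    # but the top 4 do not all have 4 or more, so the h-index is 3.
    print(h_index([10, 8, 5, 2, 1]))  # prints 3

By this rule, an h-index of 42 requires 42 separate papers with at least 42 citations each; a large citation count concentrated in a handful of famous papers would not produce it.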

Recently, my former students and research associates, headed by Randy Larsen and Michael Eid, wanted to plan a “Festschrift” for me, a celebration of my career at age 60. My response was, “No Festschrift; those events are for old people.” So they hosted a celebration with wonderful talks and a book, and they called it a non-Festschrift. The non-Festschrift was one of the high points of my life, because it made clear to me that important work is going on in the field and that excellent scholarship will continue after I retire from it.

Conclusion

I am one of the luckiest individuals in the world, because I discovered work I love, and found wonderful people with whom to share this work. On Fridays, I can say TGIF, because I look forward to spending a bit more time with my family, but on Mondays, I can say TGIM, “Thank goodness it is Monday,” because I love to conduct research and analyze data. In truth, there is no difference for me between weekdays and weekends because both include time with family and students, and both include research activities. Research is not a career for everyone, and not everyone need be a “maniac” researcher like I am.  There is ample room in the field for scientists who work at a much less intense level. However, I am positive that the most fulfilling life, whatever the particulars may be, is one in which a person can use his or her skills in activities he or she enjoys, and with supportive people with similar values and goals. May all of you find such a life!

Readers need one caveat in evaluating my autobiography. I know the results of the nun study showing that Catholic sisters who wrote more positive autobiographies lived longer than less happy nuns. Sarah Pressman has now replicated this finding with the autobiographies of psychologists, and found that the mention of activated positive feelings predicted a 6-year longer life. Therefore, I have written the most positive of autobiographies in hopes that I will live a very long life. However, writing such a positive autobiography has itself made me happy, and I hope others enjoy reading it, so they, too, can have a long and happy life.

References

10 Broad Theory and Review Articles on Well-Being

Diener, E. (1984). Subjective well-being. Psychological Bulletin, 95, 542-575.

Diener, E., Lucas, R., & Scollon, C. N. (2006). Beyond the hedonic treadmill: Revising the adaptation theory of well-being. American Psychologist, 61, 305-314.

Diener, E., Sandvik, E., & Pavot, W. (1991). Happiness is the frequency, not the intensity, of positive versus negative affect. In F. Strack, M. Argyle, & N. Schwarz (Eds.), Subjective well-being: An interdisciplinary perspective (pp. 119-139). New York: Pergamon.

Diener, E., & Seligman, M. E. P. (2004). Beyond money: Toward an economy of well-being. Psychological Science in the Public Interest, 5, 1-31.

Diener, E., Suh, E. M., Lucas, R. E., & Smith, H. L. (1999). Subjective well-being: Three decades of progress. Psychological Bulletin, 125, 276-302.

Diener, E., & Tov, W. (in press). Culture and subjective well-being. In S. Kitayama & D. Cohen (Eds.), Handbook of cultural psychology. New York: Guilford.

Kahneman, D., Diener, E., & Schwarz, N. (Eds.). (1999). Well-being: The foundations of hedonic psychology. New York: Russell Sage Foundation.

Larsen, R. J., & Diener, E. (1987). Affect intensity as an individual difference characteristic: A review. Journal of Research in Personality, 21, 1-39.

Lyubomirsky, S., King, L., & Diener, E. (2005). The benefits of frequent positive affect: Does happiness lead to success? Psychological Bulletin, 131, 803-855.

Pavot, W., & Diener, E. (1993). Review of the Satisfaction with Life Scale. Psychological Assessment, 5, 164-172.

10 Significant Empirical Articles on Well-Being

Biswas-Diener, R., & Diener, E. (2006). The subjective well-being of the homeless, and lessons for happiness. Social Indicators Research, 76, 185-205.

Diener, E., & Diener, C. (1996). Most people are happy. Psychological Science, 7, 181-185.

Diener, E., & Diener, M. (1995). Cross-cultural correlates of life satisfaction and self-esteem. Journal of Personality and Social Psychology, 68, 653-663.

Diener, E., & Emmons, R. A. (1984). The independence of positive and negative affect. Journal of Personality and Social Psychology, 47, 1105-1117.

Eid, M., & Diener, E. (2001). Norms for experiencing emotions in different cultures: Inter- and intranational differences. Journal of Personality and Social Psychology, 81, 869-885.

Lucas, R. E., Clark, A. E., Georgellis, Y., & Diener, E. (2003). Reexamining adaptation and the set point model of happiness: Reactions to changes in marital status. Journal of Personality and Social Psychology, 84, 527-539.

Oishi, S., & Diener, E. (2001). Re-examining the general positivity model of subjective well-being: The discrepancy between specific and global domain satisfaction. Journal of Personality, 69, 641-666.

Sandvik, E., Diener, E., & Seidlitz, L. (1993). Subjective well-being: The convergence and stability of self-report and non-self-report measures. Journal of Personality, 61, 317-342.

Schimmack, U., Diener, E., & Oishi, S. (2002). Life-satisfaction is a momentary judgment and a stable personality characteristic: The use of chronically accessible and stable sources. Journal of Personality, 70, 345-384.

Wirtz, D., Kruger, J., Scollon, C. N., & Diener, E. (2003). What to do on spring break? The role of predicted, on-line, and remembered experience in future choice. Psychological Science, 14, 520-524.

Ed Diener
Email address: ediener@s.psych.uiuc.edu
Faculty webpage: http://www.psych.uiuc.edu/~ediener/index.html
Interests: Subjective Well-Being

Sources:

http://internal.psychology.illinois.edu/~ediener/bio.html

http://internal.psychology.illinois.edu/~ediener/Documents/autobiography%2009-27-06%20One%20happy%20autobiography.doc