Saturday, November 14, 2009

Steven Pinker Reviews Malcolm Gladwell

One "all knowing" social scientist reviews another - partial truths all around. Pinker is not kind in his assessment - I smell a feud brewing.

WHAT THE DOG SAW
And Other Adventures
By Malcolm Gladwell
410 pp. Little, Brown & Company. $27.99

Malcolm Gladwell, Eclectic Detective

Malcolm Gladwell (Photo: Todd Heisler/The New York Times)

Published: November 7, 2009

Have you ever wondered why there are so many kinds of mustard but only one kind of ketchup? Or what Cézanne did before painting his first significant works in his 50s? Have you hungered for the story behind the Veg-O-Matic, star of the frenetic late-night TV ads? Or wanted to know where Led Zeppelin got the riff in “Whole Lotta Love”?

Neither had I, until I began this collection by the indefatigably curious journalist Malcolm Gladwell. The familiar jacket design, with its tiny graphic on a spare background, reminds us that Gladwell has become a brand. He is the author of the mega-best sellers “The Tipping Point,” “Blink” and “Outliers”; a popular speaker on the Dilbert circuit; and a prolific contributor to The New Yorker, where the 19 articles in “What the Dog Saw” were originally published. This volume includes prequels to those books and other examples of Gladwell’s stock in trade: counterintuitive findings from little-known experts.

A third of the essays are portraits of “minor geniuses” — impassioned oddballs loosely connected to cultural trends. We meet the feuding clan of speed-talking pitchmen who gave us the Pocket Fisherman, Hair in a Can, and other it-slices!-it-dices! contraptions. There is the woman who came up with the slogan “Does she or doesn’t she?” and made hair coloring (and, Gladwell suggests, self-invention) respectable to millions of American women. The investor Nassim Taleb explains how markets can be blindsided by improbable but consequential events. A gourmet ketchup entrepreneur provides Gladwell the opportunity to explain the psychology of taste and to recount the history of condiments.

Another third are on the hazards of statistical prediction, especially when it comes to spectacular failures like Enron, 9/11, the fatal flight of John F. Kennedy Jr., the explosion of the space shuttle Challenger, the persistence of homelessness and the unsuccessful targeting of Scud missile launchers during the Persian Gulf war of 1991. For each debacle, Gladwell tries to single out a fallacy of reasoning behind it, such as that more information is always better, that pictures offer certainty, that events are distributed in a bell curve around typical cases, that clues available in hindsight should have been obvious before the fact and that the risk of failure in a complex system can be reduced to zero.

The final third are also about augury, this time about individuals rather than events. Why, he asks, is it so hard to prognosticate the performance of artists, teachers, quarterbacks, executives, serial killers and breeds of dogs?

The themes of the collection are a good way to characterize Gladwell himself: a minor genius who unwittingly demonstrates the hazards of statistical reasoning and who occasionally blunders into spectacular failures.

Gladwell is a writer of many gifts. His nose for the untold back story will have readers repeatedly muttering, “Gee, that’s interesting!” He avoids shopworn topics, easy moralization and conventional wisdom, encouraging his readers to think again and think different. His prose is transparent, with lucid explanations and a sense that we are chatting with the experts ourselves. Some chapters are masterpieces in the art of the essay. I particularly liked “Something Borrowed,” a moving examination of the elusive line between artistic influence and plagiarism, and “Dangerous Minds,” a suspenseful tale of criminal profiling that shows how self-anointed experts can delude their clients and themselves with elastic predictions.

An eclectic essayist is necessarily a dilettante, which is not in itself a bad thing. But Gladwell frequently holds forth about statistics and psychology, and his lack of technical grounding in these subjects can be jarring. He provides misleading definitions of “homology,” “sagittal plane” and “power law” and quotes an expert speaking about an “igon value” (that’s eigenvalue, a basic concept in linear algebra). In the spirit of Gladwell, who likes to give portentous names to his aperçus, I will call this the Igon Value Problem: when a writer’s education on a topic consists in interviewing an expert, he is apt to offer generalizations that are banal, obtuse or flat wrong.

The banalities come from a gimmick that can be called the Straw We. First Gladwell disarmingly includes himself and the reader in a dubious consensus — for example, that “we” believe that jailing an executive will end corporate malfeasance, or that geniuses are invariably self-made prodigies or that eliminating a risk can make a system 100 percent safe. He then knocks it down with an ambiguous observation, such as that “risks are not easily manageable, accidents are not easily preventable.” As a generic statement, this is true but trite: of course many things can go wrong in a complex system, and of course people sometimes trade off safety for cost and convenience (we don’t drive to work wearing crash helmets in Mack trucks at 10 miles per hour). But as a more substantive claim that accident investigations are meaningless “rituals of reassurance” with no effect on safety, or that people have a “fundamental tendency to compensate for lower risks in one area by taking greater risks in another,” it is demonstrably false.

The problem with Gladwell’s generalizations about prediction is that he never zeroes in on the essence of a statistical problem and instead overinterprets some of its trappings. For example, in many cases of uncertainty, a decision maker has to act on an observation that may be either a signal from a target or noise from a distractor (a blip on a screen may be a missile or static; a blob on an X-ray may be a tumor or a harmless thickening). Improving the ability of your detection technology to discriminate signals from noise is always a good thing, because it lowers the chance you’ll mistake a target for a distractor or vice versa. But given the technology you have, there is an optimal threshold for a decision, which depends on the relative costs of missing a target and issuing a false alarm. By failing to identify this trade-off, Gladwell bamboozles his readers with pseudoparadoxes about the limitations of pictures and the downside of precise information.
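To make the signal-detection trade-off Pinker invokes concrete, here is a small sketch of my own (not from the review or the book): assuming Gaussian score distributions for noise and signal, plus assumed priors and costs, the expected cost of a decision rule depends on where the threshold sits, and the best threshold moves as the relative costs of misses and false alarms change. All numbers below are purely illustrative.

```python
# A minimal, illustrative sketch of the signal-vs.-noise trade-off described above.
# The distributions, prior, and costs are assumptions chosen for illustration only.
from statistics import NormalDist

noise = NormalDist(mu=0.0, sigma=1.0)    # distractor scores (e.g., static on a radar screen)
signal = NormalDist(mu=1.5, sigma=1.0)   # target scores (e.g., a real missile)

p_signal = 0.10          # assumed prior probability that a given blip is a target
cost_miss = 100.0        # assumed cost of missing a real target
cost_false_alarm = 1.0   # assumed cost of acting on noise

def expected_cost(threshold: float) -> float:
    """Expected cost of the rule: call it a target whenever the score exceeds threshold."""
    p_miss = signal.cdf(threshold)        # probability a real target falls below the threshold
    p_fa = 1.0 - noise.cdf(threshold)     # probability noise rises above the threshold
    return p_signal * p_miss * cost_miss + (1 - p_signal) * p_fa * cost_false_alarm

# Sweep candidate thresholds. A better detector (wider separation between the two
# distributions) lowers the whole cost curve, but some threshold is still optimal,
# and it shifts whenever the miss and false-alarm costs change.
best_cost, best_threshold = min((expected_cost(t / 100), t / 100) for t in range(-200, 401))
print(f"optimal threshold ~ {best_threshold:.2f}, expected cost ~ {best_cost:.3f}")
```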

Another example of an inherent trade-off in decision-making is the one that pits the accuracy of predictive information against the cost and complexity of acquiring it. Gladwell notes that I.Q. scores, teaching certificates and performance in college athletics are imperfect predictors of professional success. This sets up a “we” who is “used to dealing with prediction problems by going back and looking for better predictors.” Instead, Gladwell argues, “teaching should be open to anyone with a pulse and a college degree — and teachers should be judged after they have started their jobs, not before.”

But this “solution” misses the whole point of assessment, which is not clairvoyance but cost-effectiveness. To hire teachers indiscriminately and judge them on the job is an example of “going back and looking for better predictors”: the first year of a career is being used to predict the remainder. It’s simply the predictor that’s most expensive (in dollars and poorly taught students) along the accuracy-cost trade-off. Nor does the absurdity of this solution for professional athletics (should every college quarterback play in the N.F.L.?) give Gladwell doubts about his misleading analogy between hiring teachers (where the goal is to weed out the bottom 15 percent) and drafting quarterbacks (where the goal is to discover the sliver of a percentage point at the top).

The common thread in Gladwell’s writing is a kind of populism, which seeks to undermine the ideals of talent, intelligence and analytical prowess in favor of luck, opportunity, experience and intuition. For an apolitical writer like Gladwell, this has the advantage of appealing both to the Horatio Alger right and to the egalitarian left. Unfortunately he wildly overstates his empirical case. It is simply not true that a quarterback’s rank in the draft is uncorrelated with his success in the pros, that cognitive skills don’t predict a teacher’s effectiveness, that intelligence scores are poorly related to job performance or (the major claim in “Outliers”) that above a minimum I.Q. of 120, higher intelligence does not bring greater intellectual achievements.

The reasoning in “Outliers,” which consists of cherry-picked anecdotes, post-hoc sophistry and false dichotomies, had me gnawing on my Kindle. Fortunately for “What the Dog Saw,” the essay format is a better showcase for Gladwell’s talents, because the constraints of length and editors yield a higher ratio of fact to fancy. Readers have much to learn from Gladwell the journalist and essayist. But when it comes to Gladwell the social scientist, they should watch out for those igon values.

Talking About My Generation

Gen X has gotten a bad rap, says this author - and I agree.

More Than Zero
Often derided as cynical and disengaged, Generation Xers have plenty of public spirit.
13 November 2009

Pity Generation X, the Americans born between 1965 and 1981 who have been described for years as “apathetic,” “cynical,” and “disengaged.” The greatness of the Greatest Generation is clear in its very name. Much laudatory ink has been spilled on the Baby Boomers—usually by Boomers themselves. As for the “Millennials,” those born between 1982 and 1998, the quantity of reportage lauding their public-spiritedness has quickly become tiresome. But a new report casts doubt on the widely accepted stereotype of Gen Xers as inferior to these other groups.

Sociologists Morley Winograd and Michael D. Hais, authors of Millennial Makeover: MySpace, YouTube, and the Future of American Politics, offer a good example of the usual attitude. “Millennials are sharply distinctive from the divided, moralistic Baby Boomers (born 1946–1964) and the cynical, individualistic Gen-Xers (born 1965–1981), the two generations that preceded them and who are their parents,” they write. “Millennials have a deep commitment to community and helping others, putting this belief into action with community service activities.” In a Boston Globe op-ed prior to last year’s presidential election, Harvard’s Robert Putnam took things a step further, comparing Millennials to the earlier cohort that survived the Depression and fought World War II: “The 2008 elections are thus the coming-out party of this new Greatest Generation. Their grandparents of the original Greatest Generation were the civic pillars of American democracy for more than a half-century, and at long last, just as that generation is leaving the scene, reinforcements are arriving.” Would it be unseemly at this point to groan, “Gag me with a spoon”?

All of this stereotyping might be more bearable if it were true, but the latest Civic Health Index study from the congressionally chartered National Conference on Citizenship (NCOC) puts both the Millennials and Generation X in a different light. The report, entitled Civic Health in Hard Times, focuses on the impact of the economic downturn on nationwide civic participation. From the NCOC’s survey results, organized by generational cohort, it appears that much of the derision heaped on Generation X’s withdrawal from the public square has been misplaced.

While Gen Xers fall slightly behind Millennials in volunteering (42.6 percent to 43 percent), the narrowness of the gap is surprising, given the vastly greater number of volunteering opportunities available to (and sometimes mandated for) Millennials in high school and college. Gen Xers far outdistance Baby Boomers (35 percent) in volunteering and even outperform the real “Greatest Generation” of retired seniors (42 percent). And when asked whether they had increased their participation in the past year, Gen-X respondents scored highest, with 39 percent answering yes. This far surpassed Millennials (29 percent), Boomers (26), and seniors (25).

Certainly this outcome might partly reflect small changes in already low engagement levels among Gen Xers, but a deeper look reveals that they flex considerable civic muscle. Boomer respondents took the top spot when asked whether they “had given food or money to someone who isn’t a relative” in the last year, with 52.9 percent responding affirmatively, but Gen Xers finished second (51.2 percent), followed by seniors, with Millennials placing last. When asked about a range of civic involvement activities, from “giving money, food, shelter” to more direct “volunteering,” Gen Xers finished second to seniors in stating that they had done “all of the above”—ahead of both Boomers and Millennials.

As for more general political participation, recently released data from the U.S. Census Bureau show that while Millennials had a statistically significant uptick in voter participation in the November 2008 elections, they still trailed every other generation in percentage turnout. In fact, 18-, 19-, and 20-year-olds voted at the lowest percentages of any age surveyed—39.8 percent, 40.1 percent, and 43 percent, respectively. Gen Xers far surpassed Millennials in percentage voting in 2008, 52.1 percent to 44.5. And the voter turnout for Millennials in 2008 was less than 1 percentage point higher than Gen-Xer turnout figures for 1992—the comparable election, age-wise, for the older cohort. These are hardly numbers befitting a “new Greatest Generation.”

Moving from participation to trust in our largest governing institutions, the NCOC survey shows that there is some truth to the characterization of Gen Xers as cynical. When asked, “Do you trust the government in Washington to do what is right?,” Gen Xers were the most dubious of all generations, with only 20.7 percent responding that they trusted the feds either “most of the time” or “just about always.” Compare this with Millennials (27.9 percent), Boomers (27.8), and seniors (26.2). The same trends pertain to questions about state government, though Gen Xers’ level of trust in their local government was higher. Thirty-six percent of Gen Xers viewed these governments positively—exactly the same fraction as Millennials, and just a percentage point behind seniors.

This divergence between trust and participation makes sense when understood as a rational civic reaction to what are perceived as broken or distant political institutions. Gen Xers, cautious about whether our politics can ameliorate significant societal ills, are nonetheless “voting” with their money and time to address these challenges. We may be skeptical, but we’re not apathetic.

So why all the gushing about the Millennials? Part of the reason is the necessary examination of a new (and very large) generation’s coming of age and of its participation in a democratic society. But it’s also difficult not to see a partisan element. In 2007, as researchers Winograd and Hais point out, Millennials self-identified as Democrats over Republicans by a margin of 52 percent to 30 percent. But Winograd—a former adviser to Vice President Al Gore—uses this snapshot to forecast a Democratic “historic opportunity to become the majority party for at least four more decades.” Michael Connery, author of the recent Youth to Power: How Today’s Young Voters Are Building Tomorrow’s Progressive Majority, recently suggested: “If a ‘post-partisan’ politics is going to be ushered in on a wave of Millennial support, it will have a distinctly progressive character.” Connery concludes that it is “this optimism and belief in their own power to make positive change in country—reflected in many polls and surveys of Millennials taken in the past few years—more than anything that accounts for the incredible surge in youth participation that we are seeing today.” These pundits should be careful, though, for while Millennials have registered predominantly Democratic, they’ve also shown a libertarian streak, expressing significant support for fiscally conservative policies.

Winograd, Connery, and others (like the Center for American Progress’s Ruy Teixeira) seem less interested in touting the Millennials’ civic engagement than in celebrating their political leanings and what these might mean for the Democratic Party. In contrast, Gen Xers began to participate politically as the youngest members of the Reagan Revolution, with most research finding us (to this day) more politically conservative than our Boomer parents. One wonders how much applause we would hear for Millennials if their current affiliation were reversed.

As this year’s Civic Health Index demonstrates, Gen Xers are proving deeply involved in civil society, even as they continue to be suspicious of big government. So while pundits keep handing out participation trophies to the Millennials, maybe this year they should save a few for the enlightened skeptics of Generation X.

Pete Peterson is executive director of Common Sense California, a multipartisan organization that supports citizen participation in policymaking (his views do not necessarily represent those of CSC). He also lectures on State and Local Governance at Pepperdine’s School of Public Policy.


Friday, November 13, 2009

Steve Hagen - Looking for Meaning

Mining the treasures of the Tricycle archive.

Looking for Meaning

A reader from Woodstock, New York, writes: “Is it possible to have a meaningful meditation practice in the absence of a living teacher?”

By Steve Hagen

Sure. Why not? We can find meaning in anything. But no matter what meaning we look for or find, it’s delusion—and the surest way to implant feelings of meaninglessness deep within our minds.

As long as we insist that meditation must be meaningful, we fail to understand it. We meditate with the idea that we’re going to get something from it—that it will lower our blood pressure, calm us down, or enhance our concentration. And, we believe, if we meditate long enough, and in just the right way, it might even bring us to enlightenment.

All of this is delusion.

As my teacher (and many teachers before him) used to say, zazen is useless. By the same token, it’s also meaningless.

I sit in meditation every day. I’ve been doing this for over thirty years. I have no reason to do it, and I feel no need to justify or explain to anyone why I do it, because I know that whatever I would say would be false.

It wasn’t always this way, of course. I had plenty of reasons to meditate when I began this practice back in the mid-sixties. But then I met a good teacher, and with his help I was able to learn the more subtle and profound aspects of this practice. Until I met Katagiri Roshi in 1975, it never occurred to me to look at the mind with which I approached meditation practice. I didn’t notice how greedy it was, or that it was the antithesis of the mind I thought I was seeking. Nor did it occur to me that such a mind was already the very source of the dissatisfaction and confusion I sought to free myself from through meditation.

So why meditate? If it’s not to get some benefit, what’s the point?

We have to look at the mind we bring to this practice. Though we go through the motions of sitting in meditation, generally it’s not the mind of meditation at all. It’s the mind of getting somewhere - which is obviously not the mind of enlightenment. It’s the mind of ego, the mind that seeks gain and keeps coming up short.

This is why zazen is useless and without meaning: meditation is, finally, just to be here. Not over there. Not longing for something else. Not trying to be, or to acquire something new or different.

If you’re sitting in meditation to get something—whether it’s peace, tranquility, low blood pressure, concentration, psychic powers, meaningfulness, or even enlightenment—you’re not here. You’re off in a world of your own mental fabrication, a world of distraction, daydreaming, confusion, and preoccupation. It’s anything but meditation.

It’s probably true, for the most part, that certain health benefits can be found through meditation. But if you’re doing this for that - for some reason, purpose, end, or goal—then you are not actually doing this. You’re distracted and divided.

Meditation is just to be here. This can mean doing the dishes, writing a letter, driving a car, or having a conversation - if we’re fully engaged in this activity of the moment, there is no plotting or scheming or ulterior purpose. This full engagement is meditation. It doesn’t mean anything but itself.

To look for meaning is to look for a model, a representation, an explanation, a justification for something other than this, what’s immediately at hand. Meditation is releasing whatever reasons and justifications we might have, and taking up this moment with no thought that this can or should be something other than just this.

It’s only because we look for meaning—for what we think we can hold in our hands, or in our minds—that we feel dissatisfaction and meaninglessness.

So, is it possible to have a meaningful meditation practice in the absence of a living teacher?

A teacher who truly understands meditation would make every effort to disabuse us of all such gaining ideas.

Steve Hagen is head teacher at Dharma Field Meditation and Learning Center (www.dharmafield.org) in Minneapolis. His new book, Buddhism Is Not What You Think, will be published in September.


David Dobbs - The Science of Success

Interesting article from The Atlantic that suggests some of the genes that can make us "self-destructive and antisocial" can also make us adaptable and creative.

Most of us have genes that make us as hardy as dandelions: able to take root and survive almost anywhere. A few of us, however, are more like the orchid: fragile and fickle, but capable of blooming spectacularly if given greenhouse care. So holds a provocative new theory of genetics, which asserts that the very genes that give us the most trouble as a species, causing behaviors that are self-destructive and antisocial, also underlie humankind’s phenomenal adaptability and evolutionary success. With a bad environment and poor parenting, orchid children can end up depressed, drug-addicted, or in jail—but with the right environment and good parenting, they can grow up to be society’s most creative, successful, and happy people.

by David Dobbs

The Science of Success

In 2004, Marian Bakermans-Kranenburg, a professor of child and family studies at Leiden University, started carrying a video camera into homes of families whose 1-to-3-year-olds indulged heavily in the oppositional, aggressive, uncooperative, and aggravating behavior that psychologists call “externalizing”: whining, screaming, whacking, throwing tantrums and objects, and willfully refusing reasonable requests. Staple behaviors in toddlers, perhaps. But research has shown that toddlers with especially high rates of these behaviors are likely to become stressed, confused children who fail academically and socially in school, and become antisocial and unusually aggressive adults.

At the outset of their study, Bakermans-Kranenburg and her colleagues had screened 2,408 children via parental questionnaire, and they were now focusing on the 25 percent rated highest by their parents in externalizing behaviors. Lab observations had confirmed these parental ratings.

Bakermans-Kranenburg meant to change the kids’ behavior. In an intervention her lab had developed, she or another researcher visited each of 120 families six times over eight months; filmed the mother and child in everyday activities, including some requiring obedience or cooperation; and then edited the film into teachable moments to show to the mothers. A similar group of high-externalizing children received no intervention.





To the researchers’ delight, the intervention worked. The moms, watching the videos, learned to spot cues they’d missed before, or to respond differently to cues they’d seen but had reacted to poorly. Quite a few mothers, for instance, had agreed only reluctantly to read picture books to their fidgety, difficult kids, saying they wouldn’t sit still for it. But according to Bakermans-Kranenburg, when these mothers viewed the playback they were “surprised to see how much pleasure it was for the child—and for them.” Most mothers began reading to their children regularly, producing what Bakermans-Kranenburg describes as “a peaceful time that they had dismissed as impossible.”

And the bad behaviors dropped. A year after the intervention ended, the toddlers who’d received it had reduced their externalizing scores by more than 16 percent, while a nonintervention control group improved only about 10 percent (as expected, due to modest gains in self-control with age). And the mothers’ responses to their children became more positive and constructive.

Few programs change parent-child dynamics so successfully. But gauging the efficacy of the intervention wasn’t the Leiden team’s only goal, or even its main one. The team was also testing a radical new hypothesis about how genes shape behavior—a hypothesis that stands to revise our view of not only mental illness and behavioral dysfunction but also human evolution.

Of special interest to the team was a new interpretation of one of the most important and influential ideas in recent psychiatric and personality research: that certain variants of key behavioral genes (most of which affect either brain development or the processing of the brain’s chemical messengers) make people more vulnerable to certain mood, psychiatric, or personality disorders. Bolstered over the past 15 years by numerous studies, this hypothesis, often called the “stress diathesis” or “genetic vulnerability” model, has come to saturate psychiatry and behavioral science. During that time, researchers have identified a dozen-odd gene variants that can increase a person’s susceptibility to depression, anxiety, attention-deficit hyperactivity disorder, heightened risk-taking, and antisocial, sociopathic, or violent behaviors, and other problems—if, and only if, the person carrying the variant suffers a traumatic or stressful childhood or faces particularly trying experiences later in life.

This vulnerability hypothesis, as we can call it, has already changed our conception of many psychic and behavioral problems. It casts them as products not of nature or nurture but of complex “gene-environment interactions.” Your genes don’t doom you to these disorders. But if you have “bad” versions of certain genes and life treats you ill, you’re more prone to them.

Recently, however, an alternate hypothesis has emerged from this one and is turning it inside out. This new model suggests that it’s a mistake to understand these “risk” genes only as liabilities. Yes, this new thinking goes, these bad genes can create dysfunction in unfavorable contexts—but they can also enhance function in favorable contexts. The genetic sensitivities to negative experience that the vulnerability hypothesis has identified, it follows, are just the downside of a bigger phenomenon: a heightened genetic sensitivity to all experience.

The evidence for this view is mounting. Much of it has existed for years, in fact, but the focus on dysfunction in behavioral genetics has led most researchers to overlook it. This tunnel vision is easy to explain, according to Jay Belsky, a child-development psychologist at Birkbeck, University of London. “Most work in behavioral genetics has been done by mental-illness researchers who focus on vulnerability,” he told me recently. “They don’t see the upside, because they don’t look for it. It’s like dropping a dollar bill beneath a table. You look under the table, you see the dollar bill, and you grab it. But you completely miss the five that’s just beyond your feet.”

Though this hypothesis is new to modern biological psychiatry, it can be found in folk wisdom, as the University of Arizona developmental psychologist Bruce Ellis and the University of British Columbia developmental pediatrician W. Thomas Boyce pointed out last year in the journal Current Directions in Psychological Science. The Swedes, Ellis and Boyce noted in an essay titled “Biological Sensitivity to Context,” have long spoken of “dandelion” children. These dandelion children—equivalent to our “normal” or “healthy” children, with “resilient” genes—do pretty well almost anywhere, whether raised in the equivalent of a sidewalk crack or a well-tended garden. Ellis and Boyce offer that there are also “orchid” children, who will wilt if ignored or maltreated but bloom spectacularly with greenhouse care.

At first glance, this idea, which I’ll call the orchid hypothesis, may seem a simple amendment to the vulnerability hypothesis. It merely adds that environment and experience can steer a person up instead of down. Yet it’s actually a completely new way to think about genetics and human behavior. Risk becomes possibility; vulnerability becomes plasticity and responsiveness. It’s one of those simple ideas with big, spreading implications. Gene variants generally considered misfortunes (poor Jim, he got the “bad” gene) can instead now be understood as highly leveraged evolutionary bets, with both high risks and high potential rewards: gambles that help create a diversified-portfolio approach to survival, with selection favoring parents who happen to invest in both dandelions and orchids.

In this view, having both dandelion and orchid kids greatly raises a family’s (and a species’) chance of succeeding, over time and in any given environment. The behavioral diversity provided by these two different types of temperament also supplies precisely what a smart, strong species needs if it is to spread across and dominate a changing world. The many dandelions in a population provide an underlying stability. The less-numerous orchids, meanwhile, may falter in some environments but can excel in those that suit them. And even when they lead troubled early lives, some of the resulting heightened responses to adversity that can be problematic in everyday life—increased novelty-seeking, restlessness of attention, elevated risk-taking, or aggression—can prove advantageous in certain challenging situations: wars, tribal or modern; social strife of many kinds; and migrations to new environments. Together, the steady dandelions and the mercurial orchids offer an adaptive flexibility that neither can provide alone. Together, they open a path to otherwise unreachable individual and collective achievements.

This orchid hypothesis also answers a fundamental evolutionary question that the vulnerability hypothesis cannot. If variants of certain genes create mainly dysfunction and trouble, how have they survived natural selection? Genes so maladaptive should have been selected out. Yet about a quarter of all human beings carry the best-documented gene variant for depression, while more than a fifth carry the variant that Bakermans-Kranenburg studied, which is associated with externalizing, antisocial, and violent behaviors, as well as ADHD, anxiety, and depression. The vulnerability hypothesis can’t account for this. The orchid hypothesis can.

This is a transformative, even startling view of human frailty and strength. For more than a decade, proponents of the vulnerability hypothesis have argued that certain gene variants underlie some of humankind’s most grievous problems: despair, alienation, cruelties both petty and epic. The orchid hypothesis accepts that proposition. But it adds, tantalizingly, that these same troublesome genes play a critical role in our species’ astounding success.

The orchid hypothesis—sometimes called the plasticity hypothesis, the sensitivity hypothesis, or the differential-susceptibility hypothesis—is too new to have been tested widely. Many researchers, even those in behavioral science, know little or nothing of the idea. A few—chiefly those with broad reservations about ever tying specific genes to specific behaviors—express concerns. But as more supporting evidence emerges, the most common reaction to the idea among researchers and clinicians is excitement. A growing number of psychologists, psychiatrists, child-development experts, geneticists, ethologists, and others are beginning to believe that, as Karlen Lyons-Ruth, a developmental psychologist at Harvard Medical School, puts it, “It’s time to take this seriously.”

With the data gathered in the video intervention, the Leiden team began to test the orchid hypothesis. Could it be, they wondered, that the children who suffer most from bad environments also profit the most from good ones? To find out, Bakermans-Kranenburg and her colleague Marinus van Ijzendoorn began to study the genetic makeup of the children in their experiment. Specifically, they focused on one particular “risk allele” associated with ADHD and externalizing behavior. (An allele is any of the variants of a gene that takes more than one form; such genes are known as polymorphisms. A risk allele, then, is simply a gene variant that increases your likelihood of developing a problem.)

Bakermans-Kranenburg and van Ijzendoorn wanted to see whether kids with a risk allele for ADHD and externalizing behaviors (a variant of a dopamine-processing gene known as DRD4) would respond as much to positive environments as to negative. A third of the kids in the study had this risk allele; the other two-thirds had a version considered a “protective allele,” meaning it made them less vulnerable to bad environments. The control group, who did not receive the intervention, had a similar distribution.

Both the vulnerability hypothesis and the orchid hypothesis predict that in the control group the kids with a risk allele should do worse than those with a protective one. And so they did—though only slightly. Over the course of 18 months, the genetically “protected” kids reduced their externalizing scores by 11 percent, while the “at-risk” kids cut theirs by 7 percent. Both gains were modest ones that the researchers expected would come with increasing age. Although statistically significant, the difference between the two groups was probably unnoticeable otherwise.

The real test, of course, came in the group that got the intervention. How would the kids with the risk allele respond? According to the vulnerability model, they should improve less than their counterparts with the protective allele; the modest upgrade that the video intervention created in their environment wouldn’t offset their general vulnerability.

As it turned out, the toddlers with the risk allele blew right by their counterparts. They cut their externalizing scores by almost 27 percent, while the protective-allele kids cut theirs by just 12 percent (improving only slightly on the 11 percent managed by the protective-allele population in the control group). The upside effect in the intervention group, in other words, was far larger than the downside effect in the control group. Risk alleles, the Leiden team concluded, really can create not just risk but possibility.
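Put as bare arithmetic, using only the percentages quoted in the paragraphs above (the grouping into a dictionary is my own illustration), the asymmetry the Leiden team reported is easy to see:

```python
# Percentage-point reductions in externalizing scores, as quoted in the article above.
reduction = {
    ("control", "protective allele"): 11,
    ("control", "risk allele"): 7,
    ("intervention", "protective allele"): 12,
    ("intervention", "risk allele"): 27,
}

# Downside of carrying the risk allele when nothing is done,
# versus its upside when the environment improves.
downside = reduction[("control", "protective allele")] - reduction[("control", "risk allele")]
upside = reduction[("intervention", "risk allele")] - reduction[("intervention", "protective allele")]

print(f"downside without the intervention: {downside} percentage points")  # 4
print(f"upside with the intervention: {upside} percentage points")         # 15
```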

Can liability really be so easily turned to gain? The pediatrician W. Thomas Boyce, who has worked with many a troubled child in more than three decades of child-development research, says the orchid hypothesis “profoundly recasts the way we think about human frailty.” He adds, “We see that when kids with this kind of vulnerability are put in the right setting, they don’t merely do better than before, they do the best—even better, that is, than their protective-allele peers.” And he asks: “Are there any enduring human frailties that don’t have this other, redemptive side to them?”

As I researched this story, I thought about such questions a lot, including how they pertained to my own temperament and genetic makeup. Having felt the black dog’s teeth a few times over the years, I’d considered many times having one of my own genes assayed—specifically, the serotonin-transporter gene, also called the SERT gene, or 5-HTTLPR. This gene helps regulate the processing of serotonin, a chemical messenger crucial to mood, among other things. The two shorter, less efficient versions of the gene’s three forms, known as short/short and short/long (or S/S and S/L), greatly magnify your risk of serious depression—if you hit enough rough road. The gene’s long/long form, on the other hand, appears to be protective.

In the end, I’d always backed away from having my SERT gene assayed. Who wants to know his risk of collapsing under pressure? Given my family and personal history, I figured I probably carried the short/long allele, which would make me at least moderately depression-prone. If I had it tested I might get the encouraging news that I had the long/long allele. Then again, I might find I had the dreaded, riskier short/short allele. This was something I wasn’t sure I wanted to find out.

But as I looked into the orchid hypothesis and began to think in terms of plasticity rather than risk, I decided maybe I did want to find out. So I called a researcher I know in New York who does depression research involving the serotonin-transporter gene. The next day, FedEx left a package on my front porch containing a specimen cup. I spat into it, examined what I’d produced, and spat again. Then I screwed the cap tight, slid the vial into its little shipping tube, and put it back on the porch. An hour later, the FedEx guy took it away.

Of all the evidence supporting the orchid-gene hypothesis, perhaps the most compelling comes from the work of Stephen Suomi, a rhesus-monkey researcher who heads a sprawling complex of labs and monkey habitats in the Maryland countryside—the National Institutes of Health’s Laboratory of Comparative Ethology. For 41 years, first at the University of Wisconsin and then, beginning in 1983, in the Maryland lab the NIH built specifically for him, Suomi has been studying the roots of temperament and behavior in rhesus monkeys—which share about 95 percent of our DNA, a number exceeded only in apes. Rhesus monkeys differ from humans in obvious and fundamental ways. But their close resemblance to us in crucial social and genetic respects reveals much about the roots of our own behavior—and has helped give rise to the orchid hypothesis.

Read the rest of the article.


ScienceNow - New Neurons Make Room for New Memories

Very cool - the brain is far more resilient than we ever thought it was.

New Neurons Make Room for New Memories

Out with the old. New neurons in the hippocampus (pink label, top panel) may help clear out old memory traces, according to research with rodents with impaired neurogenesis (bottom).

Credit: Adapted from T. Kitamura et al., Cell (13 November 2009)

By Greg Miller

ScienceNOW Daily News
12 November 2009

The discovery that new neurons are born in the adult brain overturned decades-old dogma in neuroscience. But it also raised a host of questions about what exactly these neurons do (Science, 17 February 2006, p. 938). Now the authors of a new study suggest that the newcomers clear away the remnants of old memories to make room for new ones.

The brain's hippocampus is a bit like a secretary's inbox: Although many memories start out here, they eventually get filed to the neocortex for permanent storage. That's why the famous patient H.M., who had his hippocampus removed in experimental surgery for epilepsy, could remember events prior to his operation despite being unable to form any new memories afterward (Science, 26 June, p. 1634).

To investigate whether newly born neurons play a role in memory transfer to the neocortex, researchers led by Kaoru Inokuchi of the University of Toyama in Japan examined rats and mice with impaired hippocampal neurogenesis. The researchers trained the rodents to associate a particular chamber with a mild shock. Like normal animals, they remembered the association for weeks, freezing up any time they were placed in the chamber. This type of memory usually hangs out in the hippocampus for less than a month: When the researchers injected the brains of normal rodents with a drug that essentially turned off the hippocampus after 28 days, it had little to no effect on their freezing behavior—presumably because the memory had already moved on to the neocortex. But in the rats and mice with impaired neurogenesis, the same treatment substantially reduced freezing behavior, suggesting that the fearful memory had lingered longer than usual in the hippocampus instead of being transferred to the neocortex. A similar set of experiments with mice that exercised on a running wheel—an activity previously shown to boost neurogenesis—bolstered the idea that neurogenesis plays a role in transferring memories. In that case, memories appeared to shift from the hippocampus to the neocortex faster than usual.

Finally, recordings of neural activity indicated that long-term potentiation, a physiological strengthening of neural connections thought to underlie this type of learning and memory, persists longer than usual in the hippocampus of the neurogenesis-impaired rodents. Altogether, the findings—reported tomorrow in Cell—suggest that new neurons act like an efficient secretary, making sure the physiological traces of old memories are promptly removed from the hippocampus inbox to make room for new ones.

"The authors went through a lot of experiments to prove their case," says Gerd Kempermann, an expert on neurogenesis at the Center for Regenerative Therapies Dresden in Germany. But Kempermann is not quite convinced that the specific job of new neurons is to clear the hippocampus for new information. An alternative explanation, he says, is that new neurons simply enable the hippocampus to work more quickly. "But their conclusion is certainly interesting and great food for thought."


Probing into Depression - Research Blogging / by Dave Munger

Cool - there has been research into deep brain stimulation for quite some time, and it looks like it's starting to show promise for clinical depression that does not respond to drugs and might otherwise require ECT.

Probing into Depression

Research Blogging / by Dave Munger / November 11, 2009

Deep brain stimulation, already established as a treatment for stubborn Parkinson’s disease, may also be useful as a therapy for drug-resistant clinical depression.

What would it take for you to allow a surgeon to probe deep into your brain to implant permanent electrodes that would administer behavior-altering electric shocks? Anyone undergoing brain surgery risks stroke and possibly death, and even if the surgery is successful there is the potential for infection, which would require even more surgery with all its attendant risks.

Tens of thousands already have electrostimulation devices implanted in their brains, and millions more may join them if the technique, called “deep brain stimulation” (DBS), gains wider acceptance. DBS was originally developed as a treatment for Parkinson’s disease, and it has been remarkably effective. The primary symptom of Parkinson’s is uncontrollable body tremors that can make it nearly impossible to perform basic daily functions like eating and drinking, writing, and even walking. An acquaintance of mine who has Parkinson’s opted for the DBS procedure and now functions perfectly normally—it’s impossible for the casual observer to notice anything unusual about how he moves. He went from being nearly incapacitated to being renewed as a healthy, fully functional person. Perhaps it’s no wonder that he was willing to submit to such an invasive procedure.

In DBS therapy, one or more electrodes the size of a spaghetti strand are precisely positioned in the patient’s brain, then connected by wire around the skull and through the neck to a pacemaker-like device, a neurostimulator, just below the collarbone. The neurostimulator is activated and deactivated by a magnet that the patient carries, so if a tremor is beginning to become disruptive, DBS can be self-administered in an instant, with near-instantaneous results. A video provided by the manufacturer of a DBS device shows how it works in ideal cases.

Now new uses for the treatment are being tested. One observed side effect of DBS for Parkinson’s is excessive happiness, to the point of uncontrollable elation—the sort of unhealthy, personality-changing reaction that everyone fears when they think of electrodes being implanted in their brain. Tuning the device can minimize this side effect, but its very existence suggests that DBS might be a useful therapy for clinical depression.

Read the rest of the article.


Thursday, November 12, 2009

Precision Nutrition - What’s So Healthy About Basil?

With cold and flu season upon us, having another all natural supplement in our arsenal can only be a good thing. Enter Basil.

What’s So Healthy About Basil?

Basil (Ocimum basilicum), an aromatic herb belonging to the mint family, is perhaps best known as the key ingredient in pesto – that savoury Italian sauce made from olive oil, garlic, crushed pine nuts and loads of fresh basil leaves.

The type of basil used in Mediterranean cooking – Italian large-leaf – pairs well with tomato flavours and consequently appears in a wide range of dishes from Caprese salad to marinara sauce. Other common basil varieties like sweet, lemon, Thai and holy basil are used judiciously in Thai, Vietnamese and Indian cuisine.

There are more than 40 cultivars of this pungent plant, each with its own characteristic colour and aroma. Depending on the variety, basil can be green, white or purple with a scent reminiscent of lemon, cloves, cinnamon, anise, camphor or thyme. Some non-edible kinds are cultivated for ornamental purposes or to ward off garden pests.

But it is basil’s medicinal properties, rather than its culinary value, that extend the herb’s uses far beyond the humble pesto. Like other aromatic plants, basil contains essential oils and phytochemicals in the leaves, stem, flowers, roots and seeds that have biological activity in the body.

Throughout history, ancient cultures have used herbal remedies to prevent and treat illness and disease. Basil is just one example of the wide range of medicinal flora historically used in plant-based tinctures, compresses, syrups and ointments.

For instance, holy basil (known as tulsi in Hindi) has been used for centuries in Ayurveda, a traditional Indian system of medicine, as a treatment for gastric, hepatic, respiratory and inflammatory disorders as well as a remedy for headache, fever, anxiety, convulsions, nausea and hypertension. (See Kyra de Vreeze’s article “Holy… Tulsi!”, elsewhere in this issue.)

Fresh roots and leaves of holy basil were prepared as a tea, or sometimes as a topical treatment to speed wound healing. There is also evidence that traditional Chinese medicine used basil. (See Paul O’Brien’s article on TCM and basil, elsewhere in this issue.)

Even though basil has been used therapeutically for many years, are its healing properties simply hearsay or have the herb’s health effects been substantiated by modern science?


From Lab to Lunch: The Benefits of Basil

Read this full-color PDF, straight out of the pages of Spezzatino Magazine. Articles like this are the hallmark of Spezzatino Magazine, a food magazine in which all of the proceeds go directly to the Healthy Food Bank charity. In other words, a subscription to Spezzatino means that not only do you eat better, someone else in your community does too.


From garden to medicine chest

In recent years increased scientific interest in plant phytochemicals (plant chemicals) has brought numerous vegetables, herbs and spices – including basil – to the forefront of nutritional research. Although the study of plant compounds is not new, scientists are only now beginning to characterize the wide range of biologically active components in our food plants and investigate their impact on human health and disease.

In cell culture and animal studies basil has been found to exhibit antimicrobial, anti-inflammatory, anti-diabetic, antioxidant and anti-cancer activity. But how does basil – which nowadays is used as little more than a cooking herb – defend our bodies against chronic disease and illness?

Read more.


A Drop of Water - At 2,000 Frames per Second

Freaking awesome - via 3 Quarks Daily.
In this video, water droplets falling at normal speed look fairly ordinary. But slowed to 2,000 frames per second, H2O reveals something astonishing. Mathematician John Bush explains what's going on.





Marsha Lucas - Nine Ways That a Meditating Brain Creates Better Relationships

Another good article . . . As near as I can tell, the more mindful we are in relationships, the better those relationships will be. And meditation is the best way I know to improve mindfulness skills.

Nine Ways That a Meditating Brain Creates Better Relationships

by Marsha Lucas, Ph.D.

Nine Ways That a Meditating Brain Creates Better Relationships

It's never too late to have a (brain that's wired as if it had a) happy childhood

Therapists get this question a lot: "Okay, so now that I understand how my history made me a mess when it comes to relationships, what now? It's not like I can go back in time and change my childhood."

The "what now" is that there's increasing evidence that the simple practice of mindfulness meditation can re-wire your brain. In key areas, you can literally change and grow neural connections which support finding and creating better relationships. And in nine different ways, your brain can become more like those who grew up knowing how to love and be loved in healthy, sustainable ways.

As a psychologist helping others find their way to greater emotional well-being, I find that the most compelling benefits of a regular mindfulness meditation practice are a set of nine documented results. (I mentioned them in my previous post, Mindfulness Meditation + Neuroscience = Healthier Relationships.) I've seen the results confirmed through my psychology practice, in myself, and in the lives of my friends and colleagues.

At least seven of these nine benefits bear a remarkable resemblance to the characteristics of people who grew up with healthy, attuned attachments. Childhood attachment experiences have a huge impact on how we are wired for relationships, throughout our lives.

So, if we can change our brain to work more like those people with healthy attachment histories, we too can have a brain that's wired as if it had a happy childhood.

Go read the rest of the article to see the nine ways meditation helps in our relationships.

Marina Warner - On Myth

Cool article from The Liberal (what a brave name for a journal in this political climate).

On Myth

by Marina Warner

WRITERS don’t make up myths; they take them over and recast them. Even Homer was telling stories that his audience already knew. If some individuals present weren’t acquainted with Odysseus’s wanderings or the Trojan War, and were listening in for the first time (as I was when a child, enthralled by the gods and goddesses in H.A. Guerber’s classic retelling), they were still aware that this was a common inheritance that belonged to everyone. Its single author – if Homer was one at all – acted as a conduit of collective knowledge, picking up the thread and telling it anew.

In an inspired essay on ‘The Translators of The Arabian Nights’, Jorge Luis Borges praises the murmuring exchanges of writers across time and cultures, and points out that the more literature talks to other literatures, and reweaves the figures in the carpet, the richer languages and expression, metaphors and stories become. Borges wasn’t a believer in anything – not even magic – but he couldn’t do without the fantastic and the mythological. He compiled a wonderfully quixotic and useful bestiary, The Book of Imaginary Beings, to include the fauna of world literature: chimeras and dragons, mermaids and the head-lolling catoblepas whose misfortune is to scorch the earth on which he tries to graze with his pestilential breath. But Borges also included some of his own inventions – The Creatures who Live in Mirrors, for example, a marvelous twist on the idea of the ghostly double.

Borges liked myth because he believed in the principle of ‘reasoned imagination’: that knowing old stories, and retrieving and reworking them, brought about illumination in a different way from rational inquiry. Myths aren’t lies or delusions: as Hippolyta the Amazon queen responds to Theseus’ disparaging remarks about enchantment: ‘But all the story of the night told o’er, / And all their minds transfigured so together, / More witnesseth than fancy’s images / And grows to something of great constancy’ (A Midsummer Night’s Dream, V.i.24-7). One of Borges’s famous stories, ‘The Circular Ruins’, unfolds a pitch-perfect fable of riddling existence in the twentieth century: a magician dreams a child into being, and then discovers, as he walks unscathed through fire in the closing lines of the tale, that he himself has been dreamed.

Borges here annexed and revisioned accounts of shamanic trance voyaging that had been noted down in the depths of the Siberian winter wastelands and transmitted by ethnographers to the great Parisian school of scholars of the sacred (Georges Dumézil, Marcel Mauss, Marcel Granet). Borges translates his protagonist to a ruined temple in a South American jungle, thus grafting the shamans from Siberia onto closer, Latin American Indian counterparts who also held that men and women could metamorphose in their sleep and travel out of their bodies and out of time. Myths are not only held in common; they connect disparate communities over great distances through our common fabulist mental powers – what Henri Bergson called the ‘fonction fabulatrice’: the myth-making faculty.

The word ‘myth’ is usually used to evoke a dead religion (the Greeks’ Olympians, the Norse pantheon) but it’s also applied rather heedlessly to the sacred stories of peoples who are still unconsciously counted as primitive, and therefore somehow unadulteratedly ancient (the Sanskrit epics of the Hindus, Australian aborigines’ tales, Brazilian Indians’ myths). Both Jung and Freud’s diagnostic uses of myth make this assumption – that pure, pre-historical human tendencies, drives and fears, will be detectable through myths. For Freud, the savage story of Zeus castrating and deposing Kronos to become ruler of Olympus illuminated the conflict that besets all fathers and sons in historical time. The way Freud told and retold this story has become so entrenched that few people still know that the same myth also relates how Kronos’s own father Ouranos was deposed without any bloodshed – he went glumly, cast out of heaven by his son as a punishment for exceeding his authority. In the months following Brown’s coming into his own after Blair’s stuttering abdication, the Greek story again demonstrates myth’s inexhaustible illuminating powers. As the Roman poet Sallust wrote about such tales: “These things never happened but are always”. The question is only which story to pick.

“Myths are definitely not guardrails, set up at each dangerous curve to prolong the life of the individual or of the human species”, wrote Roger Caillois, a friend of Borges. Yet very occasionally, a writer like Mary Shelley seizes upon a story and issues a warning that spreads from the page to the world almost instantly. Her Dr. Frankenstein takes up the myth of Faust, himself a figure of human presumption from the lineage of Prometheus and Lucifer and other rebels against gods and the limits they impose. But in a brilliant innovation of compassionate thinking, Shelley focused instead on the Creature, and her Creature – especially through the pathos of Boris Karloff’s filmic incarnation – has migrated into ever more popular festivities and rituals (fancy dress parties, horror video sleepovers, Hallowe’en), as well as into dozens of nightmares about genetic engineering. But if Frankenstein’s Creature embodied for the early Romantics the victims of unchained rational science, what myth could be re-awakened and re-cast as a warning by reasoning imaginations today?

It seems to me that Erysichthon makes a strong candidate in the world of eco-disaster: he’s the tycoon in Ovid who cuts down a whole forest even after he has been warned of the consequences, and is then cursed by the outraged goddess of nature with unappeasable hunger; he ends up selling his daughter for food and, when that no longer works, consuming himself bite by bite.

Other myths of our time could be the wanderers and fugitives – Io chased from country to country; Leto forbidden from resting anywhere to give birth to her children; Aeneas leaving Troy in burning ruins with his father on his back, like Dido leaving Tyre, both of them fleeing westwards.

Last year, the most recently discovered planet, ‘2003-UB313’, was renamed Eris after the Goddess of Strife, whose actions catalyse the Trojan War. The matter of Troy never goes away. However, it turns out that astronomers weren’t inspired to this choice by the state of the world, but by the state of their profession. In a spirit of resistance to Eris’s planetary hold, I hope another body is orbiting into view, dreamed up by a fabulist’s reasoned imagination and bringing with it new creatures out of the mirror of myth.

Marina Warner is a mythographer and cultural historian.

Her most recent book, Phantasmagoria: Spirit Visions, Metaphors, and Media, is published by Oxford University Press.


Roger Walsh - Current Status [of the Integral Enterprise] and Potential Traps

Excellent article from Roger Walsh, which appears in the Fall 2009 issue of the Journal of Integral Theory and Practice - yours for only $29.95. (I remember when this journal was free to paying members of I-I. Good times.)

THE STATE OF THE INTEGRAL ENTERPRISE


ARTICLE REMOVED AT THE REQUEST OF THE MANAGING EDITOR OF JITP.


A Dream Interpretation: Tuneups for the Brain

Interesting article about the purpose of dreams that appeared recently in the New York Times. The piece is clearly slanted toward the research paper it reports on, which itself is biased toward an objective analysis of dreams and ignores their subjective value.

They even reduce lucid dreaming to a mere "warm-up" for the coming day. BS.

A Dream Interpretation: Tuneups for the Brain

Published: November 9, 2009

It’s snowing heavily, and everyone in the backyard is in a swimsuit, at some kind of party: Mom, Dad, the high school principal, there’s even an ex-girlfriend. And is that Elvis, over by the piñata?

Uh-oh.

Dreams are so rich and have such an authentic feeling that scientists have long assumed they must have a crucial psychological purpose. To Freud, dreaming provided a playground for the unconscious mind; to Jung, it was a stage where the psyche’s archetypes acted out primal themes. Newer theories hold that dreams help the brain to consolidate emotional memories or to work through current problems, like divorce and work frustrations.

Yet what if the primary purpose of dreaming isn’t psychological at all?

In a paper published last month in the journal Nature Reviews Neuroscience, Dr. J. Allan Hobson, a psychiatrist and longtime sleep researcher at Harvard, argues that the main function of rapid-eye-movement sleep, or REM, when most dreaming occurs, is physiological. The brain is warming its circuits, anticipating the sights and sounds and emotions of waking.

“It helps explain a lot of things, like why people forget so many dreams,” Dr. Hobson said in an interview. “It’s like jogging; the body doesn’t remember every step, but it knows it has exercised. It has been tuned up. It’s the same idea here: dreams are tuning the mind for conscious awareness.”

Drawing on work of his own and others, Dr. Hobson argues that dreaming is a parallel state of consciousness that is continually running but normally suppressed during waking. The idea is a prominent example of how neuroscience is altering assumptions about everyday (or every-night) brain functions.

“Most people who have studied dreams start out with some predetermined psychological ideas and try to make dreaming fit those,” said Dr. Mark Mahowald, a neurologist who is director of the sleep disorders program at Hennepin County Medical Center, in Minneapolis. “What I like about this new paper is that he doesn’t make any assumptions about what dreaming is doing.”

The paper has already stirred controversy and discussion among Freudians, therapists and other researchers, including neuroscientists. Dr. Rodolfo Llinás, a neurologist and physiologist at New York University, called Dr. Hobson’s reasoning impressive but said it was not the only physiological interpretation of dreams.

“I argue that dreaming is not a parallel state but that it is consciousness itself, in the absence of input from the senses,” said Dr. Llinás, who makes the case in the book “I of the Vortex: From Neurons to Self” (M.I.T., 2001). Once people are awake, he argued, their brain essentially revises its dream images to match what it sees, hears and feels — the dreams are “corrected” by the senses.

These novel ideas about dreaming are based partly on basic findings about REM sleep. In evolutionary terms, REM appears to be a recent development; it is detectable in humans, in other warm-blooded mammals, and in birds. And studies suggest that REM makes its appearance very early in life — in the third trimester for humans, well before a developing child has experience or imagery to fill out a dream.

In studies, scientists have found evidence that REM activity helps the brain build neural connections, particularly in its visual areas. The developing fetus may be “seeing” something, in terms of brain activity, long before the eyes ever open — the developing brain drawing on innate, biological models of space and time, like an internal virtual-reality machine. Full-on dreams, in the usual sense of the word, come much later. Their content, in this view, is a kind of crude test run for what the coming day may hold.

None of this is to say that dreams are devoid of meaning. Anyone who can remember a vivid dream knows that at times the strange nighttime scenes reflect real hopes and anxieties: the young teacher who finds himself naked at the lectern; the new mother in front of an empty crib, frantic in her imagined loss.

But people can read almost anything into the dreams that they remember, and they do exactly that. In a recent study of more than 1,000 people, researchers at Carnegie Mellon University and Harvard found strong biases in the interpretations of dreams. For instance, the participants tended to attach more significance to a negative dream if it was about someone they disliked, and more to a positive dream if it was about a friend.

In fact, research suggests that only about 20 percent of dreams contain people or places that the dreamer has encountered. Most images appear to be unique to a single dream.

Scientists know this because some people have the ability to watch their own dreams as observers, without waking up. This state of consciousness, called lucid dreaming, is itself something of a mystery — and a staple of New Age and ancient mystics. But it is a real phenomenon, one in which Dr. Hobson finds strong support for his argument for dreams as a physiological warm-up before waking.

In dozens of studies, researchers have brought people into the laboratory and trained them to dream lucidly. They do this with a variety of techniques, including auto-suggestion as head meets pillow (“I will be aware when I dream; I will observe”) and teaching telltale signs of dreaming (the light switches don’t work; levitation is possible; it is often impossible to scream).

Lucid dreaming occurs during a mixed state of consciousness, sleep researchers say — a heavy dose of REM with a sprinkling of waking awareness. “This is just one kind of mixed state, but there are a whole variety of them,” Dr. Mahowald said. Sleepwalking and night terrors, he said, represent mixtures of muscle activation and non-REM sleep. Attacks of narcolepsy reflect an infringement of REM on normal daytime alertness.

In a study published in September in the journal Sleep, Ursula Voss of J. W. Goethe-University in Frankfurt led a team that analyzed brain waves during REM sleep, waking and lucid dreaming. It found that lucid dreaming had elements of REM and of waking — most notably in the frontal areas of the brain, which are quiet during normal dreaming. Dr. Hobson was a co-author on the paper.

“You are seeing this split brain in action,” he said. “This tells me that there are these two systems, and that in fact they can be running at the same time.”

Researchers have a way to go before they can confirm or fill out this working hypothesis. But the payoffs could extend beyond a deeper understanding of the sleeping brain. People who struggle with schizophrenia suffer delusions of unknown origin. Dr. Hobson suggests that these flights of imagination may be related to an abnormal activation of a dreaming consciousness. “Let the dreamer awake, and you will see psychosis,” Jung said.

For everyone else, the idea of dreams as a kind of sound check for the brain may bring some comfort, as well. That ominous dream of people gathered on the lawn for some strange party? Probably meaningless.

No reason to scream, even if it were possible.


Wednesday, November 11, 2009

More on the Big 5 Personality Model

This morning I posted an article from World of Psychology on the Big 5 Personality Model. My friend John, over at Facebook, commented on its lack of multiperspectival validation. He's right, of course. Most typologies are not very accurate, often even from a 3rd-person perspective.

Here is what my abnormal psych textbook says on the subject (not at all integral, but interesting). It presents a slightly different and more reliable model than the one from this morning. First, a table giving the five factors, which also differ from this morning's model.

Table 12.4 Sample Items from the Revised NEO Personality Inventory Assessing the Five-Factor Model (Personality Trait: Sample Item)

Neuroticism: I often feel tense or jittery
Extraversion/introversion: I really like most people I meet
Openness to experience: I have a very active imagination
Agreeableness/antagonism: I tend to be cynical and skeptical of others’ intentions (reverse scored)
Conscientiousness: I often come into situations without being prepared (reverse scored)

[Reproduced by special permission of the Publisher, Psychological Assessment Resources, Inc., 16204 North Florida Avenue, Lutz, FL 33549, from the NEO Personality Inventory-Revised by Paul Costa and Robert McCrae, Copyright 1978, 1985, 1989, 1991, 1992 by Psychological Assessment Resources, Inc. (PAR). Further reproduction is prohibited without permission of PAR.]


In contemporary research, a major focus is on the five-factor model (McCrae & Costa, 1990), in which the five factors, or major dimensions, of personality are neuroticism, extraversion/introversion, openness to experience, agreeableness/antagonism, and conscientiousness. Table 12.4 presents questionnaire items that assess each of these dimensions; by reading the items, you can get a sense of what each dimension means. One interesting set of findings has shown that these dimensions of personality are moderately heritable (Jang et al., 2002).

Researchers have summarized the results of several studies linking these personality traits to schizoid, borderline, and avoidant personality disorders (Widiger & Costa, 1994). For example, people with schizoid personality disorder and those with avoidant personality disorder (two disorders that involve social aloofness) tend to be high in introversion but to differ in neuroticism—people with avoidant personality disorder tend to be higher in neuroticism than those with schizoid personality disorder. Rather than forcing each person into a discrete diagnostic category and then struggling with how to distinguish between the two disorders, the dimensional approach would simply describe each person’s levels of neuroticism and introversion.
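To make the dimensional idea concrete, here is a minimal sketch of my own (in Python, not from the textbook and not the NEO-PI-R's actual scoring). The five trait names follow Table 12.4; the two profiles, the 0-100 scale, and the cutoffs are hypothetical and purely illustrative.

# Hypothetical five-factor profiles on a 0-100 scale (higher = more of the trait).
# The numbers are invented for illustration; they are not clinical norms.
FACTORS = ["neuroticism", "extraversion", "openness",
           "agreeableness", "conscientiousness"]

profiles = {
    # Both people are socially aloof (low extraversion), but they differ on
    # neuroticism, mirroring the schizoid vs. avoidant contrast described above.
    "person_a": {"neuroticism": 30, "extraversion": 15, "openness": 45,
                 "agreeableness": 50, "conscientiousness": 55},
    "person_b": {"neuroticism": 80, "extraversion": 18, "openness": 40,
                 "agreeableness": 48, "conscientiousness": 52},
}

def describe(profile, low=33, high=66):
    """Give a dimensional description instead of a discrete diagnostic category."""
    return {f: ("low" if profile[f] < low else "high" if profile[f] > high else "average")
            for f in FACTORS}

for name, profile in profiles.items():
    print(name, describe(profile))
# person_a: low extraversion with low neuroticism (a schizoid-like pattern)
# person_b: low extraversion with high neuroticism (an avoidant-like pattern)

The only point is that each person is located along every dimension rather than being forced into a single category; any real assessment would use the published NEO-PI-R scoring rules, including the reverse-scored items flagged in Table 12.4.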

A recent meta-analysis shows that findings are fairly consistent across a range of studies that have mapped personality disorder diagnoses onto the dimensions of the five-factor model. Most personality disorders are characterized by high neuroticism and antagonism. High extraversion was tied to histrionic and narcissistic disorders (two disorders that involve dramatic behavior), whereas low extraversion was tied to schizoid, schizotypal, and avoidant disorders (Saulsman & Page, 2004).

The five-factor model is not without its critics, however. In a study in which people with personality disorders completed a questionnaire assessing them on the basis of the five factors, the profiles of the various personality disorders turned out to be rather similar to one another (Morey et al., 2000). Some might say this is fine, and that fewer dimensions would simplify things. But proponents of the need to be more specific have responded to this difficulty by claiming that differentiating among the different personality disorders requires breaking down the five factors into their “facets” (Lynam & Widiger, 2001). Each of the five factors has six facets, or components; for example, the extraversion factor includes the facets of warmth, gregariousness, assertiveness, activity, excitement seeking, and positive emotionality. Differentiating among the personality disorders might require a more detailed assessment that includes these specific personality facets. Beyond the need to consider facets, it appears that some disorders, such as schizotypal personality disorder, are more distinct than being just extreme points along a dimension; statistical analyses suggest that people with these disorders tend to be qualitatively different from other people. For example, people with schizotypal personality disorder tend to experience perceptual oddities that others don’t experience even in mild degrees (Haslam & Kim, 2002).
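Continuing the same toy sketch (Python again, not from the textbook), the facet idea adds one more level of detail beneath each factor. The extraversion facets below are the ones named in the passage; the facets of the other factors are omitted rather than invented, and the scores and the simple average are hypothetical.

# Each of the five factors breaks down into six facets (Lynam & Widiger, 2001).
# Extraversion's facets are listed in the passage; the rest are elided here.
facets = {
    "extraversion": ["warmth", "gregariousness", "assertiveness",
                     "activity", "excitement seeking", "positive emotionality"],
    # "neuroticism": [...], "openness": [...], etc. — six facets each, omitted
}

# Invented facet scores (0-100) for one person: a lowish overall extraversion
# total, but driven by assertiveness and excitement seeking rather than warmth.
facet_scores = {"warmth": 20, "gregariousness": 10, "assertiveness": 60,
                "activity": 55, "excitement seeking": 65, "positive emotionality": 25}

overall_extraversion = sum(facet_scores[f] for f in facets["extraversion"]) / 6
print(round(overall_extraversion, 1))  # two people with this same total could differ sharply by facet

The crude average is only there to show why facet-level profiles can separate disorders that look alike at the factor level; the real inventory has its own scoring and norming procedures.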

The five-factor model is certainly not a total solution to the problem of classifying personality disorders, but the important point is that a dimensional model has several distinct advantages. Most importantly, it handles the comorbidity problem, because comorbidity is a difficulty only in a categorical classification system like the one used in DSM-IV-TR. A dimensional system also links normal and abnormal personality, so findings on personality development in general become relevant to the personality disorders.

Problems with classifying personality disorders should not lead us to underestimate the importance of being able to identify them. Personality disorders are prevalent, and they cause severe impairments. Some of the problems with classification and diagnosis stem from the fact that these disorders have been the subject of serious research for less time than have most of the other disorders considered in this book. As research continues, the diagnostic categories will most likely be refined, and many of these problems might be solved. Bear these issues in mind, though, as we now turn to a review of the clinical description and etiology of the personality disorders in cluster A, cluster B, and cluster C. (pg. 390-391)
Kring, A. M., Davison, G. C., Neale, J. M., & Johnson, S. L. (2007). Abnormal Psychology (10th ed.). John Wiley & Sons.