Saturday, March 22, 2014

Giovanni B Caputo - Archetypal-Imaging and Mirror-Gazing


Interesting article from Behavioral Sciences that uses Carl G. Jung's investigation into mirrors in relation to the unconscious (see Psychology and Alchemy) as a jumping-off point for research using mirrors to explore the possible "psychodynamic projection of the subject’s unconscious archetypal contents into the mirror image."

Full Citation:
Caputo, G. B. (2013, December 24). Archetypal-Imaging and Mirror-Gazing. Behavioral Sciences, 4(1), 1-13. doi:10.3390/bs4010001


Archetypal-Imaging and Mirror-Gazing

Giovanni B. Caputo

(This article belongs to the Special Issue Analytical Psychology: Theory and Practice)

Abstract: 
Mirrors have been studied by cognitive psychology in order to understand self-recognition, self-identity, and self-consciousness. Moreover, the relevance of mirrors in spirituality, magic and arts may also suggest that mirrors can be symbols of unconscious contents. Carl G. Jung investigated mirrors in relation to the unconscious, particularly in Psychology and Alchemy. However, the relationship between the conscious behavior in front of a mirror and the unconscious meaning of mirrors has not been clarified. Recently, empirical research found that gazing at one’s own face in the mirror for a few minutes, at a low illumination level, produces the perception of bodily dysmorphic illusions of strange-faces. Healthy observers usually describe huge distortions of their own faces, monstrous beings, prototypical faces, faces of relatives and deceased, and faces of animals. In the psychiatric population, some schizophrenics show a dramatic increase of strange-face illusions. They can also describe the perception of multiple-others that fill the mirror surface surrounding their strange-face. Schizophrenics are usually convinced that strange-face illusions are truly real and identify themselves with strange-face illusions, unlike healthy individuals, who never identify with them. On the contrary, most patients with major depression do not perceive strange-face illusions, or they perceive very faint changes of their immobile faces in the mirror, like death statues. Strange-face illusions may be the psychodynamic projection of the subject’s unconscious archetypal contents into the mirror image. Therefore, strange-face illusions might provide both an ecological setting and an experimental technique for “imaging of the unconscious”. Directions for future research are proposed.


Tanya Luhrmann - The Quest for Heaven is Local: How Spiritual Experience is Shaped by Social Life


Interesting talk. Tanya Marie Luhrmann is currently the Watkins University Professor in the Anthropology Department at Stanford University. She has been elected to the American Academy of Arts and Sciences, and has been the recipient of a John Simon Guggenheim Fellowship. She is the author of When God Talks Back: Understanding the American Evangelical Relationship with God (2012).

The Quest for Heaven is Local: How Spiritual Experience is Shaped by Social Life

Published on Feb 21, 2014


(Visit: http://www.uctv.tv) Drawing on fieldwork in new charismatic evangelical churches in the Bay Area and in Accra, Ghana, Tanya Luhrmann, Stanford University, explores the way that cultural ideas about mind and person alter prayer practice and the experience of God. Luhrmann's work focuses on the way that objects without material presence come to seem real to people, and the way that ideas about the mind affect mental experience. Recorded on 11/12/2013.

Friday, March 21, 2014

Michel Maharbiz - Cyborg Insects and Other Things: Building Interfaces Between the Synthetic and Multicellular


Via UCTV and the University of California at Berkeley, this video talk by Michel Maharbiz (faculty webpage) takes a look at the future of cyborg technology, especially in insects. His work is in developing electronic interfaces to cells, to organisms, and to brains.

Professor Maharbiz (personal webpage) is:
Associate professor of Electrical Engineering and Computer Sciences at UC Berkeley. His current research centers on building micro/nano interfaces to cells and organisms and exploring bio-derived fabrication methods. His research group is also known for developing the world’s first remotely radio-controlled cyborg beetles; this was named one of the top 10 emerging technologies of 2009 by MIT’s Technology Review (TR10) and was among Time magazine’s Top 50 Inventions of 2009. His long-term goal is understanding developmental mechanisms as a way to engineer and fabricate machines. He received his Ph.D. in 2003 from UC Berkeley for his work on microbioreactor systems, which led to the foundation of Microreactor Technologies Inc., which was recently acquired by Pall Corporation.
This technology is both very cool and kind of creepy. I really hate flying beetle-type insects - and now, I'm guessing, we can turn them into drones.

Cyborg Insects and Other Things: Building Interfaces Between the Synthetic and Multicellular

Published on Mar 10, 2014


Prof. Michel Maharbiz presents an overview of ongoing work on the remote control of insects in free flight via implantable, radio-equipped miniature neural-stimulation systems, including recent results with pupally implanted neural interfaces and directions for extreme miniaturization.

'Follow Your Passion' Is Wrong: Cal Newport speaks at World Domination Summit 2012


Cal Newport is the author of So Good They Can't Ignore You: Why Skills Trump Passion in the Quest for Work You Love (2012). In this video from the 2012 World Domination Summit, he debunks the "follow your passion" advice so many of us have been given and passed on to others. This video made the rounds on Facebook for a while - I am just now getting around to sharing it here.

Here is the blurb for his book:
In this eye-opening account, Cal Newport debunks the long-held belief that "follow your passion" is good advice. Not only is the cliché flawed (preexisting passions are rare and have little to do with how most people end up loving their work), but it can also be dangerous, leading to anxiety and chronic job hopping.

After making his case against passion, Newport sets out on a quest to discover the reality of how people end up loving what they do. Spending time with organic farmers, venture capitalists, screenwriters, freelance computer programmers, and others who admitted to deriving great satisfaction from their work, Newport uncovers the strategies they used and the pitfalls they avoided in developing their compelling careers.

Matching your job to a preexisting passion does not matter, he reveals. Passion comes after you put in the hard work to become excellent at something valuable, not before.
In other words, what you do for a living is much less important than how you do it.

With a title taken from the comedian Steve Martin, who once said his advice for aspiring entertainers was to "be so good they can't ignore you," Cal Newport's clearly written manifesto is mandatory reading for anyone fretting about what to do with their life, or frustrated by their current job situation and eager to find a fresh new way to take control of their livelihood. He provides an evidence-based blueprint for creating work you love.

SO GOOD THEY CAN'T IGNORE YOU will change the way we think about our careers, happiness, and the crafting of a remarkable life.
Cal Newport is an Assistant Professor of Computer Science at Georgetown University, who specializes in the theory of distributed algorithms. He earned his Ph.D. from MIT in 2009 and graduated from Dartmouth College in 2004.

In addition to his academic work, Newport is a writer who focuses on contrarian, evidence-based advice for building a successful and fulfilling life in school and after graduation.

'Follow Your Passion' Is Wrong: Cal Newport speaks at World Domination Summit 2012

Published on Jan 29, 2013


"The path to a passionate life is often way more complex than the simple advice 'follow your passion' would suggest."
You've been told you should follow your passion, to do what you love and the money will follow. But how sound is this advice? Cal Newport argues that it's astonishingly wrong.

You can find out more in his book, So Good They Can't Ignore You: Why Skills Trump Passion in the Quest for Work You Love.

Rose Eveleth - The Ancient, Peaceful Art of Self-Generated Hallucination (Nautilus)


This is an interesting article to me because I have experienced chemically induced hallucinations, sensory-deprivation-induced hallucinations, and the much more subtle distraction of lights and images during meditation. In my experience, the three are qualitatively different.

In the Zen tradition, this "self-generated hallucination" is called makyo. Here is a definition from Wikipedia:
The term makyo (魔境 makyō) is a Zen term that means “ghost cave” or “devil’s cave.” It is a figurative reference to the kind of self-delusion that results from clinging to an experience and making a conceptual “nest” out of it for oneself. Makyo is essentially synonymous with illusion, but especially in reference to experiences that can occur within meditation practice.
I have always understood these as experiences to be ignored, as mere distractions along the path. The warning is that it can be very enticing to get caught up in visual pyrotechnics in meditation, but that is simply another form of attachment.

The Ancient, Peaceful Art of Self-Generated Hallucination

Posted By Rose Eveleth on Mar 19, 2014


Cornelia Kopp via Flickr

After five years of practicing meditation, subject number 99003 began to see the lights. “My eyes were closed,” he reported, “[and] there would be what appeared to be a moon-shaped object in my consciousness directly above me, about the same size as the moon if you lay down on the ground and look into the night sky. It was white. When I let go I was totally enveloped inside this light… I was seeing colors and lights and all kinds of things going on… Blue, purple, red. They were globes; they were kind of like Christmas-tree lights hanging out in space, except they were round.”

Subject 99003 described these experiences to Jared Lindahl, a researcher from Warren Wilson College in Asheville, North Carolina, who has spent years scientifically studying meditation. He and his team are in the midst of a large study on meditators and their experiences, and in a recent paper they homed in on a peculiar experience many of them share: mysterious lights that appear in their mind’s eyes as they practice.

To figure out just where these lights might be coming from, Lindahl and his team talked to 28 meditators for an average of 77 minutes each. Nine of them reported “light experiences,” with descriptions much like subject 99003’s. “Sometimes there were, oftentimes, just a white spot, sometimes multiple white spots,” one said. “Sometimes the spots, or ‘little stars’ as I called them, would float together in a wave, like a group of birds migrating, but I would just let those things come and go.”

Another said: “In concentration I’ve had rays of white light that go through everything. They’re either coming from behind me somewhere or coming out of the object that I was concentrating on… I saw it with my eyes open and it wasn’t really seeing it was something else, even though I still was perceiving that I was there.”

Buddhist literature refers to lights and visions in myriad ways. The Theravada tradition refers to nimitta, a vision of a series of lights seen during meditation that can be taken to represent everything from the meditator’s pure mind to a visual symbol of a real object. In one Buddhist text, called The Path of Purification, the nimitta is described this way:
It appears to some as a star or cluster of gems or a cluster of pearls, […] to others like a long braid string or a wreath of flowers or a puff of smoke, to others like a stretched-out cobweb or a film of cloud or a lotus flower or a chariot wheel or the moon’s disk or the sun’s disk.
Other Buddhist traditions also refer to lights during meditation, but Lindahl points out in the paper that “there is no single, consistent interpretation of meditation-induced light experiences in Buddhist traditions.” And yet the appearance of lights isn’t a fluke occurrence—it’s something that many meditators experience, and that many traditions have tried to incorporate and explain.
So where are these lights coming from? They’re clearly not real, physical lights dancing in front of the meditator’s face, but rather a construction of the idle, meditating brain. What is it about meditation that opens the brain up to these kinds of hallucinations?

To answer that question, Lindahl and his team looked for occasions where the descriptions he gathered from meditators intersected with descriptions of neurophysiological disorders. They found that both the first-person accounts and the Buddhist literary descriptions of these lights intersected pretty well with the experiences of people undergoing the intentional practice of sensory deprivation.

Hallucinations are relatively well-documented in the world of sensory deprivation, and they dovetail with the lights seen by meditators. Where meditators describe jewel lights, white spots and little stars, those under sensory deprivation sometimes describe dots and points of light. Where meditators see shimmering ropes, electrical sparks, and rays of light that go through everything, the sensory deprived might see visual snow, bright sunsets, and shimmering, luminous fog. Neuroscientists think that when the eyes and ears are deprived of input, the brain becomes hypersensitive and neurons may fire with little provocation, creating these kinds of light shows. Lindahl suspects that the lights that meditators see are the result of the same phenomenon—that meditating is itself a mild form of sensory deprivation.

In some ways, this is not surprising. Meditation often involves being alone, in a quiet, dimly lit room. Some Tibetan Buddhists practice what’s called “mun mtshams,” or “dark retreat,” in which they close themselves off in the dark. And it’s not just about the physical spaces where meditation happens—many forms of meditation are focused on isolating a single stimulus and shutting out everything else, a kind of mental sensory deprivation. By focusing on breath, a specific vision, a single object, or something else as they get into the zone, meditators are “guarding the sense doors” from the rest of the world. This may be an ancient trick for creating a space of intentional sensory deprivation and opening oneself up to the dazzling light show that often follows.


~ Rose Eveleth is Nautilus’ special media manager.

Thursday, March 20, 2014

Big Bang Discovery Opens Doors to the "Multiverse" (National Geographic)

This short article (considering the subject matter) from National Geographic Daily News is a good explainer about Monday's announcement of gravitational waves and how that discovery opens the door even wider for multiverse theories (that our universe is only one of MANY universes separated by vast distances of space).

Big Bang Discovery Opens Doors to the "Multiverse"

Gravitational waves detected in the aftermath of the Big Bang suggest one universe just might not be enough.


This illustration depicts a main membrane out of which individual universes arise; they then expand in size through time. 
Written by Dan Vergano
National Geographic
Published March 18, 2014

Bored with your old dimensions—up and down, right and left, and back and forth? So tiresome. Take heart, folks. The latest news from Big Bang cosmologists offers us some relief from our humdrum four-dimensional universe.

Gravitational waves rippling through the aftermath of the cosmic fireball, physicists suggest, point to us inhabiting a multiverse, a universe filled with many universes. (See: "Big Bang's 'Smoking Gun' Confirms Early Universe's Exponential Growth.")

That's because those gravitational wave results point to a particularly prolific and potent kind of "inflation" of the early universe, an exponential expansion of the dimensions of space to many times the size of our own cosmos in the first fraction of a second of the Big Bang, some 13.82 billion years ago.
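(A quick aside that is not part of Vergano's article, using standard inflationary-cosmology textbook figures: "exponential expansion" means the scale factor of space, a(t), grows approximately as

\[ a(t) \propto e^{Ht}, \]

with the Hubble rate H nearly constant during inflation. Most models call for at least about 60 "e-folds" of such growth to solve the horizon and flatness problems, a stretch factor of roughly e^60 ≈ 10^26 in every direction, completed in many models within something like the first 10^-32 seconds.)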

"In most models, if you have inflation, then you have a multiverse," said Stanford physicist Andrei Linde. Linde, one of cosmological inflation's inventors, spoke on Monday at the Harvard-Smithsonian Center for Astrophysics event where the BICEP2 astrophysics team unveiled the gravitational wave results.

Essentially, in the models favored by the BICEP2 team's observations, the process that inflates a universe looks just too potent to happen only once; rather, once a Big Bang starts, the process would happen repeatedly and in multiple ways. (Learn more about how universes form in "Cosmic Dawn" on the National Geographic website.)

"A multiverse offers one good possible explanation for a lot of the unique observations we have made about our universe," says MIT physicist Alan Guth, who first wrote about inflation theory in 1980. "Life being here, for example."

Lunchtime

The Big Bang and inflation make the universe look like the ultimate free lunch, Guth has suggested, where we have received something for nothing.

But Linde takes this even further, suggesting the universe is a smorgasbord stuffed with every possible free lunch imaginable.

That means every kind of cosmos is out there in the aftermath of the Big Bang, from our familiar universe chock full of stars and planets to extravaganzas that encompass many more dimensions, but are devoid of such mundane things as atoms or photons of light.

In this multiverse spawned by "chaotic" inflation, the Big Bang is just a starting point, giving rise to multiple universes (including ours) separated by unimaginable gulfs of distance. How far does the multiverse stretch? Perhaps to infinity, suggests MIT physicist Max Tegmark, writing for Scientific American.

That means that spread across space at distances far larger than the roughly 92 billion light-year width of the universe that we can observe, other universes reside, some with many more dimensions and different physical properties and trajectories. (While the light from the most distant stuff we can see has been traveling toward us for nearly 14 billion years, the universe has kept expanding the whole time, stretching the boundaries of the observable universe far beyond 14 billion light-years.)
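For the numerically inclined, here is a rough cross-check of that "roughly 92 billion light-year" figure. It is not from the article; it simply integrates a standard ΛCDM expansion history with commonly quoted parameter values (the exact numbers below are assumptions), which gives a comoving radius near 46 billion light-years and hence a diameter a little over 92 billion.

```python
# A rough cross-check (not part of Vergano's article) of the "roughly 92 billion
# light-year width" figure, assuming standard LambdaCDM parameters. The comoving
# radius of the observable universe is R = c * integral_0^inf dz / H(z); width = 2R.
import numpy as np
from scipy.integrate import quad

c = 299792.458                   # speed of light, km/s
H0 = 67.7                        # Hubble constant, km/s/Mpc (assumed Planck-like value)
Om, Or, OL = 0.31, 9e-5, 0.69    # assumed matter, radiation, dark-energy fractions

def H(z):
    # Expansion rate as a function of redshift in a flat LambdaCDM universe.
    return H0 * np.sqrt(Or * (1 + z)**4 + Om * (1 + z)**3 + OL)

radius_mpc, _ = quad(lambda z: c / H(z), 0, np.inf)
radius_gly = radius_mpc * 3.2616 / 1000      # 1 Mpc is about 3.2616 million light-years
print(f"radius ~ {radius_gly:.0f} Gly, diameter ~ {2 * radius_gly:.0f} Gly")
# -> radius ~ 46 Gly, diameter ~ 92-93 Gly
```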

Cosmic Mismatches

"I'm a fan of the multiverse, but I wouldn't claim it is true," says Guth. Nevertheless, he adds, a multiverse explains a lot of things that now confuse cosmologists about our universe.

For example, there is the 1998 discovery that galaxies in our universe seem to be spreading apart at an accelerating rate, when their mutual gravitational attraction should be slowing them down. This discovery, which garnered the 2011 Nobel Prize in physics, is generally thought to imply the existence of a "dark energy" that counteracts gravity on cosmic scales. Its nature is a profound mystery. About the only thing we understand about dark energy, physicists such as Michael Turner of the University of Chicago have long said, is its name.

"There is a tremendous mismatch between what we calculate [dark energy] ought to be and what we observe," Guth says. According to quantum theory, subatomic particles are constantly popping into existence and vanishing again in the vacuum of space, which should endow it with energy—but that vacuum energy, according to theoretical calculations, would be 120 orders of magnitude (a 1 followed by 120 zeroes) too large to explain the galaxy observations. The discrepancy has been a great source of embarrassment to physicists.

A multiverse could wipe the cosmic egg off their faces. On the bell curve of all possible universes spawned by inflation, our universe might just happen to be one of the few universes in which the dark energy is relatively lame. In others, the antigravity force might conform to physicists' expectations and be strong enough to rip all matter apart.

A multiverse might also explain away another embarrassment: the number of dimensions predicted by modern "superstring" theory. String theory describes subatomic particles as being composed of tiny strings of energy, but it requires there to be 11 dimensions instead of the four we actually observe. Maybe it's just describing all possible universes instead of our own. (It suggests there could be a staggeringly large number of possibilities—a 1 with 500 zeroes after it.)

Join the "multiverse club," Linde wrote in a March 9 review of inflationary cosmology, and what looks like a series of mathematical embarrassments disappears in a cloud of explanation. In a multiverse, there can be more things dreamt of in physicists' philosophy than happen to be found in our sad little heaven and earth.

Life, the Universe, and Everything

The multiverse may even help explain one of the more vexing paradoxes about our world, sometimes called the "anthropic" principle: the fact that we are here to observe it.

To cosmologists, our universe looks disturbingly fine-tuned for life. Without its Goldilocks-perfect alignment of the physical constants—everything from the strength of the force attaching electrons to atoms to the relative weakness of gravity—planets and suns, biochemistry, and life itself would be impossible. Atoms wouldn't stick together in a universe with more than four dimensions, Guth notes.

If ours was the only cosmos spawned by a Big Bang, these life-friendly properties would seem impossibly unlikely. But in a multiverse containing zillions of universes, a small number of life-friendly ones would arise by chance—and we could just happen to reside in one of them.

"Life may have formed in the small number of vacua where it was possible, in a multiverse," says Guth. "That's why we are seeing what we are seeing. Not because we are special, but because we can."



ART BY MOONRUNNER DESIGN 

The Future of Brain Implants - Gary Marcus and Christof Koch

From the Wall Street Journal, this is an interesting article on the state and future of brain implants. As I joked on Facebook, "I would like an implant that accesses the Library of Congress . . . oh, and please make it so that I can search the entire library and sift results subconsciously."

Is this the first step toward a Borg-like future?

The Future of Brain Implants

How soon can we expect to see brain implants for perfect memory, enhanced vision, hypernormal focus or an expert golf swing?


By Gary Marcus and Christof Koch
March 14, 2014


What would you give for a retinal chip that let you see in the dark or for a next-generation cochlear implant that let you hear any conversation in a noisy restaurant, no matter how loud? Or for a memory chip, wired directly into your brain's hippocampus, that gave you perfect recall of everything you read? Or for an implanted interface with the Internet that automatically translated a clearly articulated silent thought ("the French sun king") into an online search that digested the relevant Wikipedia page and projected a summary directly into your brain?

Science fiction? Perhaps not for very much longer. Brain implants today are where laser eye surgery was several decades ago. They are not risk-free and make sense only for a narrowly defined set of patients—but they are a sign of things to come.

Unlike pacemakers, dental crowns or implantable insulin pumps, neuroprosthetics—devices that restore or supplement the mind's capacities with electronics inserted directly into the nervous system—change how we perceive the world and move through it. For better or worse, these devices become part of who we are.

Neuroprosthetics aren't new. They have been around commercially for three decades, in the form of the cochlear implants used in the ears (the outer reaches of the nervous system) of more than 300,000 hearing-impaired people around the world. Last year, the Food and Drug Administration approved the first retinal implant, made by the company Second Sight.

Both technologies exploit the same principle: An external device, either a microphone or a video camera, captures sounds or images and processes them, using the results to drive a set of electrodes that stimulate either the auditory or the optic nerve, approximating the naturally occurring output from the ear or the eye.
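As a concrete (and strictly illustrative) picture of that signal chain, here is a toy version of the cochlear-implant side in Python. It is not any manufacturer's actual algorithm; it just sketches the idea the article describes: split the microphone signal into frequency bands, extract each band's envelope, and map the envelopes to stimulation levels on a handful of electrodes. All parameter values are assumptions.

```python
# Minimal, illustrative sketch of a cochlear-implant-style processor (not a real
# clinical algorithm): audio -> log-spaced bandpass filterbank -> per-band envelope
# -> compressed stimulation level per electrode.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def toy_cochlear_processor(audio, fs, n_electrodes=8, fmin=200.0, fmax=7000.0):
    """Map an audio waveform to per-electrode stimulation envelopes (0..1)."""
    # Logarithmically spaced band edges, loosely mimicking the cochlea's frequency map.
    edges = np.geomspace(fmin, fmax, n_electrodes + 1)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, audio)
        # Envelope: rectify, then smooth with a ~160 Hz low-pass filter.
        sos_env = butter(2, 160.0, btype="lowpass", fs=fs, output="sos")
        envelopes.append(sosfiltfilt(sos_env, np.abs(band)))
    env = np.vstack(envelopes)                       # shape: (n_electrodes, n_samples)
    # Compress and normalize to a 0..1 "stimulation level" per electrode.
    return np.log1p(env) / np.log1p(env.max() + 1e-12)

# Example: a one-second upward chirp moves energy from low to high electrodes.
fs = 16000
t = np.arange(fs) / fs
audio = np.sin(2 * np.pi * (300 + 2500 * t) * t)
levels = toy_cochlear_processor(audio, fs)
print(levels.shape)  # (8, 16000)
```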



Another type of now-common implant, used by thousands of Parkinson's patients around the world, sends electrical pulses deep into the brain proper, activating some of the pathways involved in motor control. A thin electrode is inserted into the brain through a small opening in the skull; it is connected by a wire that runs to a battery pack underneath the skin. The effect is to reduce or even eliminate the tremors and rigid movement that are such prominent symptoms of Parkinson's (though, unfortunately, the device doesn't halt the progression of the disease itself). Experimental trials are now under way to test the efficacy of such "deep brain stimulation" for treating other disorders as well.

Electrical stimulation can also improve some forms of memory, as the neurosurgeon Itzhak Fried and his colleagues at the University of California, Los Angeles, showed in a 2012 article in the New England Journal of Medicine. Using a setup akin to a videogame, seven patients were taught to navigate a virtual city environment with a joystick, picking up passengers and delivering them to specific stores. Appropriate electrical stimulation to the brain during the game increased their speed and accuracy in accomplishing the task.

But not all brain implants work by directly stimulating the brain. Some work instead by reading the brain's signals—to interpret, for example, the intentions of a paralyzed user. Eventually, neuroprosthetic systems might try to do both, reading a user's desires, performing an action like a Web search and then sending the results directly back to the brain.

How close are we to having such wondrous devices? To begin with, scientists, doctors and engineers need to figure out safer and more reliable ways of inserting probes into people's brains. For now, the only option is to drill small burr-holes through the skull and to insert long, thin electrodes—like pencil leads—until they reach their destinations deep inside the brain. This risks infection, since the wires extend through the skin, and bleeding inside the brain, which could be devastating or even fatal.

External devices, like the brainwave-reading skull cap made by the company NeuroSky (marketed to the public as "having applications for wellness, education and entertainment"), have none of these risks. But because their sensors are so far removed from individual neurons, they are also far less effective. They are like Keystone Kops trying to eavesdrop on a single conversation from outside a giant football stadium.


A boy wearing a cochlear implant for the hearing-impaired. A second portion is surgically implanted under the skin. Barcroft Media/Getty Images
Today, effective brain-machine interfaces have to be wired directly into the brain to pick up the signals emanating from small groups of nerve cells. But nobody yet knows how to make devices that listen to the same nerve cells that long. Part of the problem is mechanical: The brain sloshes around inside the skull every time you move, and an implant that slips by a millimeter may become ineffective.

Another part of the problem is biological: The implant must be nontoxic and biocompatible so as not to provoke an immune reaction. It also must be small enough to be totally enclosed within the skull and energy-efficient enough that it can be recharged through induction coils placed on the scalp at night (as with the recharging stands now used for some electric toothbrushes).

These obstacles may seem daunting, but many of them look suspiciously like the ones that cellphone manufacturers faced two decades ago, when cellphones were still the size of shoeboxes. Neural implants will require even greater advances since there is no easy way to upgrade them once they are implanted and the skull is sealed back up.

But plenty of clever young neuro-engineers are trying to surmount these problems, like Michel Maharbiz and Jose Carmena and their colleagues at the University of California, Berkeley. They are developing a wireless brain interface that they call "neural dust." Thousands of biologically neutral microsensors, on the order of one-tenth of a millimeter (approximately the thickness of a human hair), would convert electrical signals into ultrasound that could be read outside the brain.

The real question isn't so much whether something like this can be done but how and when. How many advances in material science, battery chemistry, molecular biology, tissue engineering and neuroscience will we need? Will those advances take one decade, two decades, three or more? As Dr. Maharbiz said in an email, once implants "can be made 'lifetime stable' for healthy adults, many severe disabilities…will likely be chronically treatable." For millions of patients, neural implants could be absolutely transformative.

Assuming that we're able to clear these bioengineering barriers, the next challenge will be to interpret the complex information from the 100 billion tiny nerve cells that make up the brain. We are already able to do this in limited ways.

Based on decades of prior research in nonhuman primates, John Donoghue of Brown University and his colleagues created a system called BrainGate that allows fully paralyzed patients to control devices with their thoughts. BrainGate works by inserting a small chip, studded with about 100 needlelike wires—a high-tech brush—into the part of the neocortex controlling movement. These motor signals are fed to an external computer that decodes them and passes them along to external robotic devices.
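To make the decoding step concrete, here is a minimal sketch of the idea only, not BrainGate's actual decoder: per-channel firing rates are mapped to an intended two-dimensional velocity by a linear model fit on calibration data. Real systems use more sophisticated decoders (Kalman filters and the like) and careful recalibration; the data and parameters below are simulated assumptions.

```python
# Illustrative-only sketch of a motor-BCI decoding idea: firing rates on ~100
# channels -> intended 2-D cursor/arm velocity via a linear map fit by least squares.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 96, 2000                 # ~100 recording channels, calibration period

# Simulated calibration data: each channel is cosine-tuned to movement direction,
# a classic (highly simplified) model of motor-cortex firing.
true_velocity = rng.normal(size=(n_samples, 2))              # intended (vx, vy)
preferred_dirs = rng.normal(size=(2, n_channels))
rates = true_velocity @ preferred_dirs + rng.normal(scale=0.5, size=(n_samples, n_channels))

# Fit a linear decoder W (rates -> velocity) on the calibration data.
W, *_ = np.linalg.lstsq(rates, true_velocity, rcond=None)

# At run time, decode new firing rates into a velocity command for the robot arm.
new_rates = rng.normal(size=(1, 2)) @ preferred_dirs
decoded_velocity = new_rates @ W
print(decoded_velocity.round(2))
```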

Almost a decade ago, this system was used by a tetraplegic to control an artificial hand. More recently, in a demonstration of the technology's possibilities that is posted on YouTube, Cathy Hutchinson, paralyzed years earlier by a brainstem stroke, managed to take a drink from a bottle of coffee by manipulating a robot arm with only her brain and a neural implant that literally read (part of) her mind.

For now, guiding a robot arm this way is cumbersome and laborious, like steering a massive barge or an out-of-alignment car. Given the current state of neuroscience, even our best neuroscientists can read the activity of a brain only as if through a glass darkly; we get the gist of what is going on, but we are still far from understanding the details.

In truth, we have no idea at present how the human brain does some of its most basic feats, like translating a vague desire to return that tennis ball into the torrent of tightly choreographed commands that smoothly execute the action. No serious neuroscientist could claim to have a commercially ready brain-reading device with a fraction of the precision or responsiveness of a computer keyboard.

In understanding the neural code, we have a long way to go. That's why the federally funded BRAIN Initiative, announced last year by President Barack Obama, is so important. We need better tools to listen to the brain and more precise tools for sending information back to the brain, along with a far more detailed understanding of different kinds of nerve cells and how they fit together in complex circuits.

The coarse-grained functional MRI brain images that have become so popular in recent years won't be enough. For one thing, they are indirect; they measure changes not in electrical activity but in local blood flow, which is at best an imperfect stand-in. Images from fMRIs also lack sufficient resolution to give us true mastery of the neural code. Each three-dimensional pixel (or "voxel") in a brain scan contains a half-million to one million neurons. What we really need is to be able to zero in on individual neurons.
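(That half-million-to-one-million figure is easy to sanity-check with rough numbers of my own, not the authors': a typical fMRI voxel is about 3 mm on a side, i.e. 27 mm^3 of tissue, and human cortical gray matter contains on the order of 2–4 × 10^4 neurons per mm^3, so

\[ 27\ \text{mm}^3 \times (2\text{–}4)\times 10^{4}\ \text{neurons/mm}^3 \approx 0.5\text{–}1 \times 10^{6}\ \text{neurons per voxel}. \])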

Zooming in further is crucial because the atoms of perception, memory and consciousness aren't brain regions but neurons and even finer-grained elements. Chemists turned chemistry into a quantitative science once they realized that chemical reactions are (almost) all about electrons making and breaking bonds among atoms. Neuroscientists are trying to do the same thing for the brain. Until we do, brain implants will be working only on the logic of forests, without sufficient understanding of the individual trees.

One of the most promising tools in this regard is a recently developed technique called optogenetics, which hijacks the molecular machinery of the genes found inside every neuron to directly manipulate the brain's circuitry. In this way, any group of neurons with a unique genetic ZIP Code can be switched on or off, with unparalleled precision, by brief pulses of different colored light—effectively turning the brain into a piano that can be played. This fantastic marriage of molecular biology with optics and electronics is already being deployed to build advanced retinal prosthetics for adult-onset blindness. It is revolutionizing the whole field of neuroscience.

Advances in molecular biology, neuroscience and material science are almost certainly going to lead, in time, to implants that are smaller, smarter, more stable and more energy-efficient. These devices will be able to interpret directly the blizzard of electrical activity inside the brain. For now, they are an abstraction, something that people read about but are unlikely to experience for themselves. But someday that will change.

Consider the developmental arc of medical technologies such as breast surgery. Though they were pioneered for post-mastectomy reconstruction and for correcting congenital defects, breast augmentation and other cosmetic procedures such as face-lifts and tummy tucks have become routine. The procedures are reliable, effective and inexpensive enough to be attractive to broad segments of society, not just to the rich and famous.

Eventually neural implants will make the transition from being used exclusively for severe problems such as paralysis, blindness or amnesia. They will be adopted by people with less traumatic disabilities. When the technology has advanced enough, implants will graduate from being strictly repair-oriented to enhancing the performance of healthy or "normal" people. They will be used to improve memory, mental focus (Ritalin without the side effects), perception and mood (bye, bye Prozac).

Many people will resist the first generation of elective implants. There will be failures and, as with many advances in medicine, there will be deaths. But anybody who thinks that the products won't sell is naive. Even now, some parents are willing to let their children take Adderall before a big exam. The chance to make a "superchild" (or at least one guaranteed to stay calm and attentive for hours on end during a big exam) will be too tempting for many.

Even if parents don't invest in brain implants, the military will. A continuing program at Darpa, a Pentagon agency that invests in cutting-edge technology, is already supporting work on brain implants that improve memory to help soldiers injured in war. Who could blame a general for wanting a soldier with hypernormal focus, a perfect memory for maps and no need to sleep for days on end? (Of course, spies might well also try to eavesdrop on such a soldier's brain, and hackers might want to hijack it. Security will be paramount, encryption de rigueur.)

An early generation of enhancement implants might help elite golfers improve their swing by automating their mental practice. A later generation might allow weekend golfers to skip practice altogether. Once neuroscientists figure out how to reverse-engineer the end results of practice, "neurocompilers" might be able to install the results of a year's worth of training directly into the brain, all in one go.

That won't happen in the next decade or maybe even in the one after that. But before the end of the century, our computer keyboards and trackpads will seem like a joke; even Google Glass 3.0 will seem primitive. Why would you project information onto your eyes (partly occluding your view) when you could write information into your brain so your mind can directly interpret it? Why should a computer wait for you to say or type what you mean rather than anticipating your needs before you can even articulate them?

By the end of this century, and quite possibly much sooner, every input device that has ever been sold will be obsolete. Forget the "heads-up" displays that the high-end car manufacturers are about to roll out, allowing drivers to see data without looking away from the road. By the end of the century, many of us will be wired directly into the cloud, from brain to toe.

Will these devices make our society as a whole happier, more peaceful and more productive? What kind of world might they create?

It's impossible to predict. But, then again, it is not the business of the future to be predictable or sugarcoated. As President Ronald Reagan once put it, "The future doesn't belong to the fainthearted; it belongs to the brave."

The augmented among us—those who are willing to avail themselves of the benefits of brain prosthetics and to live with the attendant risks—will outperform others in the everyday contest for jobs and mates, in science, on the athletic field and in armed conflict. These differences will challenge society in new ways—and open up possibilities that we can scarcely imagine.

Dr. Marcus is professor of psychology at New York University and often blogs about science and technology for the New Yorker. Dr. Koch is the chief scientific officer of the Allen Institute for Brain Science in Seattle.

New Issue - Integral Review: Volume 10, No. 1, March 2014


A new issue of Integral Review is online and free to read. This issue features articles by Sara Nora Ross, Bonnitta Roy, and a book review by Zak Stein. An article by Kevin J. Bowman, "Correcting Improper Uses of Perspectives, Pronouns, and Dualities in Wilberian Integral Theory: An Application of Holarchical Field Theory," sounds particularly interesting.

Integral Review
Volume 10, No. 1
March 2014



Editorial
Jonathan Reams (p. 1)

Peer Reviewed

The Complexity of the Practice of Ecosystem-Based Management
Verna G. DeLauer, Andrew A. Rosenberg, Nancy C. Popp, David R. Hiley, and Christine Feurt (p. 4)

Correcting Improper Uses of Perspectives, Pronouns, and Dualities in Wilberian Integral Theory: An Application of Holarchical Field Theory
Kevin J. Bowman

Beyond Social Exchange Theory: An Integrative Look at Transcendent Mental Models for Engagement
Latha Poonamallee and Sonia Goltz (p. 63)

A Developmental Behavioral Analysis of Dual Motives’ Role in Political Economies of Corruption
Sara Nora Ross (p. 91)

Editorially Reviewed

A Brief Overview of Developmental Theory, or What I Learned in the FOLA Course
Jonathan Reams (p. 122)

An Exploration of the Meaning-making of Vehement Hardliners in Controversial Social Issues: Reactions to Youth Unrest in Suburbs of Gothenburg, Sweden
Thomas Jordan (p. 154)

Book Review: On Spiritual Books and their Readers: A Review of Radical Kabbalah by Marc Gafni, 2012
Reviewed by Zachary Stein (p. 168)

Book Review: Business Secrets of the Trappist Monks: One CEO's Quest for Meaning and Authenticity, by August Turak, 2013
Reviewed by Jonathan Reams (p. 179)

Born in the Middle: The Soteriological Streams of Integral Theory and Meta-Reality
Bonnitta Roy (p. 187)

Wednesday, March 19, 2014

Robert Hass, Eva Saulitis & Gary Snyder: Writing Nature


The 2014 AWP Conference and Bookfair was held in Seattle, WA, from February 27 - March 1, 2014. Among the panel discussions, one featured two of my favorites, Robert Hass and Gary Snyder, talking about "writing nature." They were joined by poet and non-fiction author Eva Saulitis, with Peggy Shumaker acting as host and moderator.

Robert Hass, Eva Saulitis, & Gary Snyder: Writing Nature

Event Date: 02.28.14



Robert Hass, Eva Saulitis, & Gary Snyder: Writing Nature from Association of Writers and Writing Programs on FORA.tv

Author and marine biologist Eva Saulitis joins legendary poets Robert Hass and Gary Snyder for a reading followed by a conversation, moderated by Peggy Shumaker, about the task of writing about nature in a culture that often prizes easily commodifiable academic achievement over messier ways of knowing: the lyric, the spiritual, the sublime.

Bio


Robert L. Hass (born March 1, 1941, San Francisco) is an American poet. He served as Poet Laureate of the United States from 1995 to 1997. He won the 2007 National Book Award and shared the 2008 Pulitzer Prize for the collection Time and Materials: Poems 1997-2005.

Eva Saulitis, a writer and marine biologist, has studied the killer whales of Prince William Sound, Alaska, for twenty-five years. She is the author of a book of essays, Leaving Resurrection: Chronicles of a Whale Scientist; the poetry collection Many Ways to Say It; and Into Great Silence: A Memoir of Discovery and Loss among Vanishing Orcas. She has received fellowships from the Rasmuson Foundation and the Alaska State Council on the Arts and is an associate professor in the University of Alaska Low-Residency MFA program.

Peggy Shumaker's newest book is Toucan Nest: Poems of Costa Rica. Her memoir is Just Breathe Normally. A former Alaska State Writer Laureate, she edits the Alaska Literary Series and Boreal Books, publishing literature and fine art from Alaska. She teaches at the Rainier Writing Workshop and the MFA at Pacific Lutheran University.

Gary Snyder, best known as a poet, is an essayist, lecturer, and environmental activist. He is the author of over twenty books, including Turtle Island, winner of the 1975 Pulitzer Prize for Poetry. He served for many years as a faculty member at the University of California, Davis and has been a translator of ancient Chinese and modern Japanese literary texts into English.


George Atwood - The Abyss of Madness

[NOTE: I originally posted this in August of 2012. I am reposting it because I am currently seeing clients who fall within the definition Atwood uses for "madness."]



I am reading George Atwood's The Abyss of Madness (Psychoanalytic Inquiry Book Series) (2011) as part of an intersubjective, relational psychoanalytic study group I have been a part of for the last two and a half years. Aside from a few essays, this is the first work I have read that is specifically Atwood's own thinking (most of his other books were co-authored with Robert Stolorow).

Here is the description of the book from Amazon:
Despite the many ways in which the so-called psychoses can become manifest, they are ultimately human events arising out of human contexts. As such, they can be understood in an intersubjective manner, removing the stigmatizing boundary between madness and sanity. Utilizing the post-Cartesian psychoanalytic approach of phenomenological contextualism, as well as almost 50 years of clinical experience, George Atwood presents detailed case studies depicting individuals in crisis and the successes and failures that occurred in their treatment. Topics range from depression to schizophrenia, bipolar disorder to dreams, dissociative states to suicidality. Throughout is an emphasis on the underlying essence of humanity demonstrated in even the most extreme cases of psychological and emotional disturbance, and both the surprising highs and tragic lows of the search for the inner truth of a life – that of the analyst as well as the patient.
I very much like the way he conceptualizes these issues, even when I do not agree with his perspective on the mind. The way he talks about mental illness feels right in terms of how the client experiences it, and in the relational/intersubjective model, meeting the client in his or her own reality is essential.

When he speaks of madness in these passages, he is referring to psychosis and schizophrenia, or even bipolar disorder in its manic stage. These are not cases of simple depression, although there is certainly some similarity at a much lower intensity. And he rejects all of these diagnostic terms as scientific defense mechanisms against our own fears of the abyss and what it means for our shared sense of being human.

It's worth bearing in mind that the psychoanalytic school refused to treat "psychotics" for decades after Freud, based on his assumption that they were not amenable to treatment. Harry Stack Sullivan, in the late 1920s, was one of the first psychoanalytically trained therapists to work with schizophrenics, and he did so based on his "problems with living" definition of mental illness. Sullivan was also one of the first psychoanalysts to focus on the "self system" as the outcome of relational patterns in the child's life (eventually giving rise to attachment theory). Atwood is definitely a lineage holder in the tradition Sullivan created, which has been expanded upon by Stolorow, Donna Orange, and others.

Here a few quotes that I have highlighted in the text that I think are illustrative of his thinking.
Phenomenologically, going mad is a matter of the fragmentation of the soul, of a fall into nonbeing, of becoming subject to a sense of erasure and annihilation. The fall into the abyss of madness, when it occurs, is felt as something infinite and eternal. One falls away, limitlessly, from being itself,  into utter nonbeing.  (p. 40)

* * * *

Madness is not an illness, and it is not a disorder. Madness is the abyss. It is the experience of utter annihilation. Calling it a disease and distinguishing its forms, arranging its manifestations in carefully assembled lists and charts, creating scientific-sounding pseudo-explanations for it--all of these are intellectually indefensible, and I think they occur because of the terror. What is the terror I am speaking of? It is the terror of madness itself, which is the anxiety that one may fall into nonbeing.

The abyss lies on or just beyond the horizon of every person's world, and there is nothing more frightening. Even death does not hold a terror for us comparable to the one associated with the abyss. (p. 41)
He suggests that death offers a potential reunion with loved ones, or conversely, a release or relief from the sorrows and pains of our lives. We can rage against the dying of the light, or marvel at our capacity to contemplate our own demise, or even imagine the world without us.

But the descent into madness, into the abyss, offers no such relief.
It is the end of all possible responses and meanings, the erasure of a world in which there is anything coherent to respond to, the melting away of anyone to engage in a response. It is much more scary than death, and this is proven by the fact that people in fear of annihilation--the terror of madness--so often commit suicide rather than continue with it. (p. 42)

* * * *

People often fall not because the bad happens, but rather because the good stops happening. Sanity is sustained by a network of validating, affirming connections that exist in a person's life: connections to other beings. If those connections fail, one falls. The beings on whom one relies include, obviously, other people, sometimes animals, often beings known only through memory and creative imagination. In some instances it is the connection to God that protects a person against madness. Strip any person of his or her sustaining links to others, and that person falls. No one is immune, because madness is a possibility of every human life. (p. 43)

* * * *

What a person in the grip of annihilation needs, above all else, is someone's understanding of the horror, which will include a human response assisting in the journey back to some sort of psychological survival. A person undergoing an experience of the total meltdown of the universe, when told that his or her suffering stems from a mental illness, will generally feel confused, invalidated, and undermined. Because there are no resources to fight against such a view, its power will have a petrifying effect on subjectivity and deepen the fall into the abyss. (p. 45)
Atwood contends that an objectified psychiatric diagnosis is the antithesis of what is needed - essentially mirroring and validation. He offers a thought experiment: Imagine a young man, maybe in his early 20s, who is in the midst of a fall into the abyss. This young man finds himself committed to an in-patient psych ward where he is given the diagnosis of a brain disease called schizophrenia.
The annihilating impact of such a view then becomes symbolized in the patient's unfolding experience that vicious, destructive voices are speaking to him over invisible wires and saying repeatedly that he should die. In this way a spiraling effect occurs, wherein the operation of the medical model further injures the already devastated patient, whose reactions to the new injuries in turn reconfirm the correctness of the diagnosis. (p. 45-46)
He prefers to be with the client in whatever space they inhabit, to show them that he is listening and trying to comprehend their experiences as much as he is able - and, above all else, that he is prepared to do whatever is necessary to help.

I have had clients in the past whom I felt unable to help, because I was unable to be with them in their abyss, to extend my empathy into what I experienced as their delusional states. I failed them. Even as I sat with them and tried to understand what they were telling me, I did not understand that their delusions were their psychic organizing principles, were their symbolic truth of how the world had betrayed them.

Atwood, in the many case studies he presents, is revealed as someone who can feel into the annihilation his clients present him with, but he also acknowledges how challenging it is:
Working in the territory of annihilated souls is never easy. To really listen to someone, anyone, to hear the depth of what he or she may have felt, to work one's way into realms of experience never before perceived by anyone and therefore never articulated--all of this is as hard a task as one may undertake. (p. 51)
It is indeed. And it is also rewarding when the therapist can do so successfully and allow the client to feel heard and validated - maybe for the first time in their lives.

I want to wrap up this post with a few more passages that deal with etiology. I posted some thoughts recently on a more relationally based diagnostic manual for counselors and therapists - Atwood conceptualizes cases in a way that fits with what I would like to see.
Those who feel they are not present, and who affirm the existence of a machine that controls their minds and bodies, are often the products of profound enmeshment with their caregivers in childhood. An accommodation has taken place at a very young age in which the agenda of the caregiver--it can be the mother, the father, or both--becomes the supreme principle defining the child's developing sense of personal identity. The experience of the child as an independent person in his or her own right is nullified, so that the child the parents wish for can be brought into being. Very often there are no outward signs of anything amiss, as family life unfolds in a seeming harmony. Somewhere along the way, however, the false self begins to crumble, and a sense of the degree to which the child has been absent from life arises. This emerging sense of never having been there, of having been controlled and regulated by outside forces, is so unstable and fragmentary that it is given concrete form. What is seen from the viewpoint of others as a delusion then begins to crystallize, for example in the image of an influencing machine (Tausk, 1917; Orange et al., 1997, chap. 4). Within the world of the child, now perhaps chronologically an adult, the so-called delusion is a carrier of truth that has up until then been entirely hidden and erased. What looks like a breakdown into psychosis and delusion thus may represent an attempted breakthrough, but the inchoate "I" does require an understanding and responsive "Thou" in order to have a chance to consolidate itself. (p. 60-61)
That last sentence is the essence of the relational model - we are relational beings, the damage to our sense of self that we experience is nearly always relational, and if there is to be healing of that damage, that too must be relational - it requires mirroring, validation, and the sense of human connection that is vital to sanity for all of us.

Omnivore - Society in a Globalizing World

From Bookforum's Omnivore blog, this new collection of links examines a variety of social issues as the world becomes increasingly globalized.

Society in a globalizing world


Mar 17 2014
3:00PM

Tuesday, March 18, 2014

Steven Pinker, Rebecca Newberger Goldstein: The Long Reach of Reason

The Long Reach of Reason - Steven Pinker, Rebecca Newberger Goldstein



Here's a TED first: an animated Socratic dialog! In a time when irrationality seems to rule both politics and culture, has reasoned thinking finally lost its power? Watch as psychologist Steven Pinker is gradually, brilliantly persuaded by philosopher Rebecca Newberger Goldstein that reason is actually the key driver of human moral progress, even if its effect sometimes takes generations to unfold. The dialog was recorded live at TED, and animated, in incredible, often hilarious, detail by Cognitive.

This talk was presented at an official TED Conference. TED's editors featured it among our daily selections on the home page.


Steven Pinker - Linguist
Linguist Steven Pinker questions the very nature of our thoughts — the way we use words, how we learn, and how we relate to others. In his best-selling books, he has brought sophisticated language analysis to bear on topics of wide general interest.


Rebecca Newberger Goldstein - Philosopher and writer
Rebecca Newberger Goldstein writes novels and nonfiction that explore questions of philosophy, morality and being.
* * * * *

Why this might just be the most persuasive TED Talk ever posted


Posted by: Chris Anderson
March 17, 2014


In today’s talk, “The Long Reach of Reason,” Steven Pinker and Rebecca Newberger Goldstein have been animated by RSA.
I want to give you the back story behind today’s TED Talk and make the case that it’s one of the most significant we’ve ever posted. And I’m not just talking about its incredible animation. I’m talking about its core idea.

Two years ago the psychologist Steven Pinker and the philosopher Rebecca Newberger Goldstein, who are married, came to TED to take part in a form of Socratic dialog.

She sought to argue that Reason was a much more powerful force in history than it’s normally given credit for. He initially defended the modern consensus among psychologists and neurologists, that most human behavior is best explained through other means: unconscious instincts of various kinds. But over the course of the dialog, he is persuaded by her, and together they look back through history and see how reasoned arguments ended up having massive impacts, even if those impacts sometimes took centuries to unfold.

The script was clever, the argument powerful. However on the day, they bombed. And I’m mainly to blame.

You see, we gambled that year on seeking to expand our repertoire of presentation formats. Their dialog appeared in a session we called “The Dinner Party.” The idea was that all the speakers at the session would be seated around a table. They would individually give their talks, then come sit back down with the others to debate the talk, and everyone would end up the wiser. Seemed like an interesting idea at the time. But it didn’t work. Somehow the chemistry of the dinner guests never ignited. And perhaps the biggest reason for that was that I, as head of the table trying to moderate the conversation, had my back to the audience. The audience disengaged, the evening fell flat, and Steve and Rebecca’s dialog, which also suffered from some audio issues, was rated too low for us to consider posting it online.


At TED2012, Steven Pinker and Rebecca Newberger Goldstein explored how reason shaped human history. We’ve animated the talk to bring new life to this important idea. Photo: James Duncan Davidson
That would normally have been the end of it. Except that a strange thing happened. I could not get their core idea out of my head. The more I thought about it, the more I realized that TED’s entire mission rested on the premise that ideas really matter. And unless reasoned argument is the prime tool shaping those ideas, they can warp into pretty much anything, good or bad.

And so I tried to figure out if there was a way to rescue the talk. And it turned out that there was. It came in the shape of Andrew Park, who, in my humble-but-true opinion, is the world’s greatest animator of concepts. His RSA Animate series has notched up millions of views for sometimes difficult topics, and we have worked with him before to animate talks from Denis Dutton and some of our TED-Ed lessons (including one from yours truly on Questions No One Knows the Answer To). If he could make me interesting, he sure as hell could do so for Pinker and Goldstein.

And so it turned out. Andrew and his amazing team at Cognitive fixed the audio issue and turned the entire talk into an animated movie of such imagination, humor and, most of all, explanatory power, it took my breath away.

And so here it is. The Long Reach of Reason. A talk in animated dialog form, arguing that Reason is capable of extending its influence across centuries, making it the single most powerful driver of long-term change. Please watch it. A) you’ll be blown away by how it’s animated. B) it may change forever how you think about Reason. And that’s a good thing.

It is a delicious example in favor of the talk’s conclusions that it was the power of its own arguments that kept it alive and turned it into an animation capable of far greater reach than the original.

For me, the argument in this talk is ultimately a profoundly optimistic one. If it turns out to be valid, then there really can be such a thing in the world as moral progress. People are genuinely capable of arguing each other into new beliefs, new mindsets that ultimately will benefit humanity. If you think that’s unlikely, watch the talk. You might just find yourself reasoned to a different opinion.


An experiment I will never try again: hosting a session with my back to the audience. Photo: James Duncan Davidson

Song of the Reed: The Poetry of Rumi


The 2014 AWP Conference and Bookfair was held in Seattle, WA, from February 27 - March 1, 2014. Among the panel discussions, one focused on the life and poetry of Rumi, featuring Coleman Barks (one of the best-known translators), Brad Gooch (author of a forthcoming Rumi biography), and Buddhist poet Anne Waldman (another of my favorite poets).

Song of the Reed: The Poetry of Rumi

Event Date: 03.01.14
Speakers: Coleman Barks, Brad Gooch, Anne Waldman


Song of the Reed: The Poetry of Rumi from Association of Writers and Writing Programs on FORA.tv

Thirteenth-century Persian poet Rumi is now the most popular poet in the United States. In this event, leading Rumi interpreter, Coleman Barks, reads his beloved versions of the Sufi poet’s verse, biographer Brad Gooch shares research into Rumi’s lived experience, and poet Anne Waldman reflects on Rumi’s contribution to poetry’s ecstatic tradition.

Bio


Coleman Barks has taught poetry and creative writing at the University of Georgia for thirty years. He is the author of numerous Rumi translations. His work with Rumi was the subject of an hour-long segment in Bill Moyers' Language of Life series on PBS, and he is a featured poet and translator in Bill Moyers' poetry special, "Fooling with Words." His own books of poetry include Winter Sky: Poems 1968-2008.

Brad Gooch’s Flannery: A Biography of Flannery O’Connor was a 2010 National Book Critics Circle Award finalist and a New York Times notable book. His short story collection Jailbait and Other Stories won the 1985 Writer’s Choice Award, sponsored by the Pushcart Foundation and National Endowment for the Arts. A Guggenheim fellow in biography, he has received a National Endowment for the Humanities fellowship and is a professor of English at William Paterson University. He is currently at work on a biography and translations of Rumi.

Anne Waldman is the author of more than forty books, including Fast Speaking Woman and Vow to Poetry, a collection of essays, and The Iovis Trilogy: Colors in the Mechanism of Concealment, an epic poem and twenty-five-year project. With Allen Ginsberg she co-founded the Jack Kerouac School of Disembodied Poetics at Naropa University, where she is a Distinguished Professor of Poetics. She received a 2013 Guggenheim Fellowship and the Poetry Society of America’s Shelley Memorial Award, and has recently been appointed a Chancellor of the Academy of American Poets.