The New Yorker:

In order to do science, we’ve had to dismiss the mind. This was, in any case, the bargain that was made in the seventeenth century, when Descartes and Galileo deemed consciousness a subjective phenomenon unfit for empirical study. If the world was to be reducible to physical causation, then all mental experiences—intention, agency, purpose, meaning—must be secondary qualities, inexplicable within the framework of materialism. And so the world was divided in two: mind and matter. This dualistic solution helped to pave the way for the Enlightenment and the technological and scientific advances of the coming centuries. But an uneasiness has always hovered over the bargain, a suspicion that the problem was less solved than shelved. At the beginning of the eighteenth century, Leibniz struggled to accept that perception could be explained through mechanical causes—he proposed that if there were a machine that could produce thought and feeling, and if it were large enough that a person could walk inside of it, as he could walk inside a mill, the observer would find nothing but inert gears and levers. “He would find only pieces working upon one another, but never would he find anything to explain Perception,” he wrote.

Today we tend to regard the mind not as a mill but as a computer; otherwise, though, the problem exists in much the same form that Leibniz gave it three hundred years ago. In 1995, David Chalmers, a shaggy-haired Australian philosopher who has been called a “rock star” of the field, famously dubbed consciousness “the hard problem,” to distinguish it from comparatively “easy” problems, such as how the brain integrates information, focusses attention, and stores memories. Neuroscientists have made significant progress on the easier problems, using fMRI and other imaging techniques. Engineers, meanwhile, have created impressive simulations of the brain in artificial neural networks—though the abilities of these machines have only made the difference between intelligence and consciousness starker. Artificial intelligence can now beat us at chess and Go; it can predict the onset of cancer as well as human oncologists and recognize financial fraud more accurately than professional auditors. But, if intelligence and reason can be performed without subjective awareness, then what is responsible for consciousness? Answering this question, Chalmers argued, was not simply a matter of locating a process in the brain that produces consciousness or is correlated with it. Such a discovery would still fail to explain why those correlations exist or why they lead to one kind of experience rather than another—or to nothing at all.

One line of reductionist thinking insists that the hard problem is not really so hard—or that it is, perhaps, simply unnecessary. In his new book, “Rethinking Consciousness: A Scientific Theory of Subjective Experience,” the neuroscientist and psychologist Michael Graziano writes that consciousness is simply a mental illusion: a simplified interface that humans evolved, as a survival strategy, to model the processes of the brain. He calls this interface the “attention schema.” According to Graziano’s theory, the attention schema is an attribute of the brain that allows us to monitor our own mental activity—tracking where our focus is directed and helping us predict where it might be drawn in the future—much the way other mental models oversee, for instance, the position of our arms and legs in space. Because the attention schema streamlines the brain’s noisy churn of calculations and electrochemical signals into a caricature of mental activity, we falsely believe that our minds are amorphous and nonphysical. The body schema can delude a woman who has lost an arm into thinking that it’s still there, and Graziano argues that the “mind” is like a phantom limb: “One is the ghost in the body and the other is the ghost in the head.”
