When we watch a movie or hear an orchestra playing, it seems that we perceive images and sounds as a continuous stream of information. But a new study suggests that the brain makes information conscious only at certain moments in time, each preceded by an interval of unconscious processing that can last up to half a second.
The model, detailed in Trends in Cognitive Sciences, resolves longstanding debates about how consciousness arises and offers a new picture of how the brain becomes aware of information.
The question of when consciousness arises has puzzled philosophers, psychologists, and neuroscientists for centuries. One hypothesis proposes that consciousness is a continuous stream of percepts. “When we’re riding a bike, we feel that we’re moving at each moment of time,” says Michael Herzog, head of the Psychophysics Laboratory at EPFL School of Life Sciences, who led the new study. However, he says, this theory has serious limitations. For example, studies have shown that if a red dot appears on a screen for a fraction of a second, followed by a green dot at the same location for another brief moment of time, a person will perceive only a single yellow dot. If the hypothesis of continuous consciousness were true, one would perceive first the red dot and then the green dot, Herzog says. “But this is not true, you merge the dots and you see a yellow dot,” he says.
Another hypothesis suggests that consciousness happens only at discrete time-points, like a camera taking snapshots. But this idea has also drawn criticism: if the brain processed information only once every half a second, it would be impossible to perform even simple tasks such as riding a bike, Herzog says.
By analyzing data from previous studies that aimed to test whether or not consciousness is continuous, Herzog and his team came up with a new model, according to which the brain processes and integrates information almost continuously during intervals of unconsciousness that last up to 500 milliseconds. During this time, the brain processes the different elements of a scene and analyzes them across many different regions. Some brain areas examine the colors, others the shape and position of objects. These brain regions will then share that information and combine the different features—and when unconscious processing is complete, the conscious experience of all that is in front of us pops out.
“Up to now, some people have believed that when we look at the world, we look at just a series of images. But now we say, what the brain analyzes as a basic unit of perception is not an image, it’s an entire scene that includes features such as motion,” Herzog says. It’s as if the brain stores short movies during the periods of unconscious processing and then calls up the information during the conscious moments, he adds.
Herzog says it’s unclear how multiple units of perception are stitched together in the brain, for example when we listen to a symphony. He also acknowledges that the new model is somewhat counterintuitive because it is at odds with the feeling that the world is continuously unfolding in front of our eyes. However, he adds, the model provides useful insights about how people experience reality. “We need to change our view on perception,” he says.
The new model could also open up avenues to manipulate the way the brain perceives information. During intervals of unconscious processing, when details about the surrounding world are stored in the brain, it’s possible to change those details using techniques that control brain activity with magnetic pulses, Herzog says.
Next, his team plans to find out what starts and terminates the unconscious processing intervals, and whether stress or other environmental factors can influence the durations of such intervals.