Temporal Realizations – A Neuroscientific Understanding of Conscious Processing

The term “human consciousness” is, by nature, weighty. Humanity has long sought global explanations for this seemingly innate peculiarity of our experience, with interpretations woven into cultural notions and delineated in religious scripture. We have been fascinated by our capacity for awareness of our own existence, and with the advent of noninvasive brain imaging over the past century, we have been able to sharpen our analysis of consciousness through a neuroscientific lens.

Previously, neuroanatomical discovery relied on the deficit-lesion method, a technique that attempted to draw parallels between lesion sites observed in patients’ brains post-mortem and particular behaviors those patients had exhibited in life. The method yielded promising generalizations: deficit-lesion studies showed, for instance, that patients with a unilateral lesion of the prefrontal cortex exhibit deficits on visual tasks or impaired visual perception (Nani et al., 2019).

This method was especially useful for gaining insight into particular types of aphasia – disorders of language processing brought about by brain damage, which could offer researchers information about a patient’s altered conscious processing. It was only with the advent of neuroimaging technology, however, that the deficit-lesion method came to be seen as an unreliable measure. It depends on nonspecific damage to cortical areas that the researcher cannot control – we cannot replicate a given lesion, for a whole slew of ethical and practical reasons. Additionally, it is impossible to tell whether a certain area of the brain is sufficient for, or merely involved in, perceptual awareness – a key tenet of conscious processing. Further, lesions provide no temporal information. Researchers curious about whether “consciousness” involves tethering together little chunks of percepts – objects of perception – over time, or whether it is more like one fluid experience with no “time slices” of percepts stitched together, could not answer those questions with the deficit-lesion method (Pylkkänen, 2020 [1]).

The lack of specificity of the deficit-lesion method left us unable to answer some of the meatier questions in the field. To claim that a patient has an impairment in something like “visual perception” is a loaded suggestion, especially without precise knowledge of exactly which stage of stimulus processing is affected. Is the issue in the visual cortices? Or in the subcortical structures, like the thalamus, through which the signals elicited by visual stimuli travel before reaching the cortex? Or is something about the specific activation patterns of these areas the root cause of the impairment? If the latter were true, researchers relying on the deficit-lesion method would never have been able to figure that out.

Clearly, identifying the neural correlates of conscious perception has historically been difficult, but there are a few questions we can ask to guide our understanding: (1) What is the difference between consciousness, perception, and attention, and what roles do perception and attention play in conscious processing? (2) How are these processes related over the time course of conscious processing? (3) Given that the deficit-lesion method is insufficient for our needs, how can we use neuroimaging techniques to gain a more nuanced temporal and spatial perspective on conscious processing in the brain, and what have they shown us thus far? (4) What are the medical implications of this research, and how could we use this knowledge to optimize patient care?

Addressing (1): Bernard J. Baars of the Wright Institute (1997) discussed how our colloquial usage of the terms attention, perception, and consciousness may skew the associations we form between these terms and particular behaviors. He describes attention as “something more obviously active and controllable than consciousness, while consciousness itself seems to be viewed as a receptive taking in of information from the world.” To solidify this distinction, think of the difference between looking and seeing, listening and hearing, touching and feeling. The former terms involve the intake of stimuli by our nervous system, through which we gain access to a potential conscious perceptual experience. The latter terms highlight the resulting experience itself, in which we become conscious of that accessed experience.

The prime reason for this distinction lies in the “access control mechanisms” that determine what will end up conscious and what will not. These mechanisms are best understood through the example of eye movements, which are part of an innate system that controls our direction of gaze by integrating tiny elements of the visual field. Eye movements allow us to become conscious of the surrounding scene, but it would not be fair to call the eye-movement control system itself visual consciousness. Not only are the two governed by different brain areas, but eye movements merely select regions of the visual field for analysis; it is not without “color perception, edge detection, motion analysis, and the like” that conscious visual experience is brought about. In Baars’s terms, “attention is broadly defined as those operations [access control mechanisms] that select and maintain conscious events…”

Now, how do perception and consciousness differ from one another? This distinction is likely a bit more muddled than the one between attention and consciousness, as it is not uncommon for “consciousness” and “conscious perception” to be used interchangeably. It is important to note first that our interest in perception is from a sensory perspective. Baars proposes a general argument to differentiate the two terms: there are conscious contents that are abstractions and not inherently perceptual – inner speech, mental imagery, beliefs, and intentions – and these do not fall under the umbrella of sensory qualities, or qualia, that Baars deems an integral element of perception. Thus, for us to have the conscious experience of, say, an intention, we do not also require sensory perception.

Essentially, perception can be relevant to conscious processing, but it does not need to be. Because much of the literature on the neurobiology of consciousness involves exposing research subjects to auditory or visual stimuli, however, perceptual awareness ends up being an integral part of conscious processing in practice. Attention, too, while not the same as consciousness, is a prerequisite, especially in the realm of auditory and visual stimuli: subjects must first take in the stimuli before they can process them, and that acquisition stage of consciousness is what we think of as attention.

Addressing (2) and (3): Different schools of thought have questioned the temporal structure of our perception – consciousness might be one long, unmodulated stream of information dependent solely on our external experience, or our processing of information might occur in more discrete chunks of time that are tethered together. Still further, theories have arisen that attempt to combine elements of both accounts to describe the relationship between taking in a stimulus and translating it into conscious perception. It helps to clarify here that this approach is a Westernized one, which attempts to slice the experience of consciousness into digestible stages, though it is not the only sentiment on which brain-based consciousness research has been founded.

Today, there are a handful of theories of the temporal structure of conscious perception backed by controlled laboratory research using a whole slew of techniques, such as functional MRI (fMRI), magnetoencephalography (MEG), electroencephalography (EEG), and perturbation methods like transcranial magnetic stimulation (TMS). While I will only discuss a few of the more recent findings, along with current research that follows up on gaps in our understanding of conscious processing, this is a vibrant and fluid area of study that has puzzled researchers for decades.

While we take in stimuli from the world in a relatively stable and constant stream, the translation of those stimuli into conscious perception is not as steady as was previously thought. Intuitively, it would seem that sensory information is immediately and continuously converted into conscious perception: as we watch a dog run across a field, we attend to its movement over time and translate its trajectory into our conscious experience. However, using EEG-sourced alpha-rhythm data to observe the brain’s resting electrical activity, Kristofferson (1967) showed that two successive stimuli presented at different, albeit very close, times are perceived as simultaneous, rather than successively in the order of stimulus onset. This argues against some innate, pre-set processing interval between the onset of a stimulus and our perception of it: if such a fixed interval existed, we would not expect simultaneous perception, but instead a temporal gap between the two perceptual events mirroring the gap between the two stimulus onsets.
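
To make this logic concrete, here is a minimal simulation sketch in Python contrasting the two predictions. The frame length, latency, and onset times are invented for illustration and are not Kristofferson’s values: a fixed-latency model preserves the gap between stimulus onsets, while a discrete-frame model reports two stimuli as simultaneous whenever they fall within the same frame.

```python
# Sketch contrasting two toy models of when stimuli become conscious.
# All numbers (frame length, latency, onsets) are illustrative, not Kristofferson's values.

FRAME_MS = 100.0    # hypothetical duration of one discrete perceptual frame
LATENCY_MS = 80.0   # hypothetical fixed processing delay

def fixed_latency_percepts(onsets_ms):
    """Fixed-latency model: each stimulus becomes conscious a set delay after onset,
    so the gap between stimuli is preserved in perception."""
    return [t + LATENCY_MS for t in onsets_ms]

def discrete_frame_percepts(onsets_ms):
    """Discrete-frame model: stimuli are assigned to the frame they fall in;
    stimuli landing in the same frame are reported as simultaneous."""
    return [int(t // FRAME_MS) for t in onsets_ms]

onsets = [130.0, 155.0]  # two stimuli 25 ms apart (arbitrary example)
print(fixed_latency_percepts(onsets))   # [210.0, 235.0] -> perceived in order, gap preserved
print(discrete_frame_percepts(onsets))  # [1, 1] -> same frame, judged simultaneous
```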

Other literature, using transcranial magnetic stimulation (TMS) – a noninvasive technique that briefly perturbs neural activity in a targeted cortical region at precisely controlled times – has shown that the unconscious processing we would assume normally occurs before the onset of conscious perception can actually extend temporally beyond the beginning of the conscious perception event. This was explored through a “feature fusion” task, in which a red disk stimulus was immediately followed by a green disk; when participants were asked to name what they had seen, they reported that their percept was a yellow disk, rather than experiencing the red disk and the green disk separately. These results indicate unconscious integration of visual stimuli and further argue against a model of individually consecutive conscious percepts (Herzog et al., 2016). Even the popular illusion of Nobuyuki Kayahara’s spinning silhouette of a female dancer – specifically, the ambiguity about the direction in which she is spinning – lends further support to “discrete theories” that view consciousness as a series of separate moments tethered together in some complicated way.
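
As a toy illustration of such unconscious integration (my own sketch, not a model of the actual neural computation), one can imagine the visual system averaging color signals that arrive within a single integration window; the window length and RGB values below are assumptions chosen for the example.

```python
# Toy illustration of feature fusion: color signals whose onsets fall within one
# integration window are averaged into a single fused percept. The window length
# and RGB values are assumptions chosen for the example, not measured quantities.

WINDOW_MS = 40.0

def fuse_within_window(events):
    """events: list of (onset_ms, (r, g, b)) tuples. Colors that fall in the same
    window are averaged; each window yields one fused percept."""
    windows = {}
    for onset, rgb in events:
        windows.setdefault(int(onset // WINDOW_MS), []).append(rgb)
    return [tuple(sum(channel) / len(colors) for channel in zip(*colors))
            for _, colors in sorted(windows.items())]

red, green = (255, 0, 0), (0, 255, 0)
print(fuse_within_window([(0.0, red), (20.0, green)]))  # [(127.5, 127.5, 0.0)] -> a yellowish disk
print(fuse_within_window([(0.0, red), (60.0, green)]))  # two windows -> red and green seen separately
```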

However, Michael Herzog, in his paper Time Slices: What Is the Duration of a Percept?, points out that although these lines of reasoning argue against continuous conscious perception, they also reject what he calls “simple snapshot theories,” in which “the brain collects visual information only at certain discrete points in time, like a camera” (Herzog et al., 2016). He notes that humans can perceive differences in motion between two stimuli presented three milliseconds apart, yet snapshot models sample at a whole range of rates (in keeping with the discrete-points-in-time aspect of simple snapshot theories), and a model that samples only once every 40 milliseconds would not even register the second stimulus as a separate event. Additionally, Herzog argues that a model with this sampling rate cannot explain why the red and green disks were perceived as a single yellow one even though they were displayed 40 milliseconds apart.

Herzog, who rejects both the theory of continuous conscious perception and simple snapshot theories, proposed a two-stage model. This model states that unconscious processing of visual stimuli – somewhere within the “paying attention” phase – occurs with high temporal resolution. Conscious perception then follows in discrete moments, more slowly “representing” the percepts yielded by unconscious processing. The two-stage model thus separates the unconscious analysis of sensory information from the meaningful, post-hoc representation of the events that occurred during the unconscious processing period. Unlike the standard snapshot model, it does not hold that frames of the world are captured at discrete times; rather, the discrete conscious snapshots summarize the ongoing, quasi-continuous output of unconscious processing. These outputs settle into attractor states, and only once an attractor state is reached (essentially signaling the completion of the unconscious processing stage, i.e., the close of the integration window) is a conscious percept formed. These conscious percepts do occur at discrete moments in time.
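
A highly simplified sketch of this two-stage idea (my own illustration, not Herzog’s implementation) might model unconscious processing as quasi-continuous evidence accumulation toward an attractor, with a discrete conscious percept emitted only once the attractor state is reached. The dynamics, step size, and threshold below are assumptions for demonstration only.

```python
# Illustrative sketch of the two-stage idea: quasi-continuous unconscious
# accumulation of evidence, with a discrete conscious percept emitted only once
# an attractor state (modeled here as a simple threshold) is reached.

import random

def run_two_stage(n_steps=500, dt_ms=1.0, drift=0.01, noise=0.05, threshold=1.0, seed=0):
    rng = random.Random(seed)
    evidence = 0.0
    percept_times_ms = []
    for step in range(n_steps):
        # Stage 1: unconscious processing updates at fine temporal resolution.
        evidence += drift + rng.gauss(0.0, noise)
        # Stage 2: a conscious percept appears only when the attractor is reached,
        # closing one integration window and starting the next.
        if evidence >= threshold:
            percept_times_ms.append(step * dt_ms)
            evidence = 0.0
    return percept_times_ms

print(run_two_stage())  # discrete percept times (ms); the intervals between them vary
```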

Current research at the Neuroscience Institute at NYU Langone attempts to build upon this two-stage proposal by taking advantage of a motion-induced blindness (MIB) experimental design. MIB is a phenomenon “in which a small but salient object surrounded by a global moving pattern disappears from visual awareness, only to reappear after several seconds” (Bonneh & Donner, 2011). Essentially, it is an illusion in which a very obvious target stimulus, usually a small shape of some kind, vanishes from awareness because of the movement of the pattern surrounding it. It highlights the concept of multistable perception, the “spontaneous alternation between two or more perceptual states that occurs when sensory information is ambiguous” (Sterzer et al., 2009). MIB is especially useful in experimental settings because it allows researchers to study alterations in subjects’ visual awareness without changing the stimulus itself – all that changes is the subject’s perception of the scene (Gage & Baars, 2018). By taking advantage of the MIB phenomenon, researchers hope to gain insight into the closure of the unconscious integration window – focusing on details such as the frequency, length, and patterning of these closures – which would signify that an attractor state was reached, per Herzog’s two-stage model.
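
In such an MIB design, a first-pass analysis might simply take the subject’s reports of when the target disappears and reappears and summarize the resulting invisibility episodes. The sketch below, with invented timestamps, computes their frequency and durations – the kind of statistics described above.

```python
# Sketch: summarize motion-induced-blindness episodes from a subject's
# (disappearance, reappearance) report times. The timestamps are invented.

def summarize_mib(episodes, trial_length_s):
    """episodes: list of (disappear_s, reappear_s) pairs reported during one trial."""
    durations = [reappear - disappear for disappear, reappear in episodes]
    return {
        "n_episodes": len(episodes),
        "rate_per_min": 60.0 * len(episodes) / trial_length_s,
        "mean_duration_s": sum(durations) / len(durations) if durations else 0.0,
    }

example_episodes = [(3.2, 6.9), (11.4, 13.0), (20.1, 24.8)]  # hypothetical reports
print(summarize_mib(example_episodes, trial_length_s=60.0))
```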

Different paradigms can be designed to test the modularity and flexibility of the unconscious integration window. For example, by changing the color of the target stimulus at various delays relative to its disappearance, researchers can ask what a temporal illusion of this kind depends on – whether it is the “result of the temporal mismatch between its representations at the conscious and unconscious levels,” or instead a “mismatch between the perceptual time courses of two distinct objects (the ‘old’ and the ‘new’)” (Wu et al., 2009). Testing the flexibility and manipulability of these integration windows brings researchers one step closer to understanding how our brains structure and consciously process stimuli.
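
In practice, a paradigm like this might be scheduled as a list of trials in which the target’s color change is yoked to its disappearance at a range of delays. The sketch below simply builds such a trial list; the delay values, colors, and repeat count are placeholders, not the actual design of any study cited here.

```python
# Sketch: build a randomized trial list in which the target's color change is
# scheduled at different delays relative to its (reported) disappearance.
# The delay values, colors, and repeat count are placeholders.

import itertools
import random

delays_ms = [-200, -100, 0, 100, 200, 400]   # color change relative to disappearance onset
new_colors = ["green", "blue"]               # hypothetical replacement colors
repeats_per_condition = 10

conditions = list(itertools.product(delays_ms, new_colors))
trials = [{"delay_ms": d, "new_color": c}
          for d, c in conditions
          for _ in range(repeats_per_condition)]
random.shuffle(trials)

print(len(trials))   # 120 trials
print(trials[:3])    # a peek at the shuffled schedule
```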

MEG is generally the preferred neuroimaging technique for this type of research because of its excellent temporal resolution (on the order of milliseconds), which is necessary for projects that rely on minute temporal differences and delays. MEG measures the tiny magnetic fields generated by neural currents just outside the head using an array of superconducting sensors bathed in liquid helium, and it allows researchers to analyze patterns of neural activity from the recorded sensor data. It is therefore possible to track minuscule shifts in exactly when the attractor state is reached as the stimulus is manipulated (Ahlfors & Mody, 2016).
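
As a rough illustration of how such MEG data might be epoched around stimulus or report events, here is a minimal sketch using the open-source MNE-Python library. The file name, stimulus channel, event code, and time window are placeholder assumptions; a real pipeline would add filtering, artifact rejection, and careful event definitions.

```python
# Minimal MEG epoching sketch using MNE-Python. The file name, stimulus channel,
# event code, and time window are placeholder assumptions, not values from the
# research described above.

import mne

raw = mne.io.read_raw_fif("subject01_mib_raw.fif", preload=True)  # hypothetical recording
events = mne.find_events(raw, stim_channel="STI 014")             # trigger pulses marking events

epochs = mne.Epochs(
    raw, events,
    event_id={"target_disappeared": 1},   # assumed trigger code for the MIB disappearance
    tmin=-0.5, tmax=1.0,                  # window around the event, in seconds
    baseline=(None, 0), preload=True,
)

evoked = epochs["target_disappeared"].average()  # field averaged time-locked to the event
print(evoked)  # millisecond-resolution evoked response across sensors
```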

Addressing (4): The medical implications of this research, which may initially seem abstract and irrelevant to primary care, can give physicians insight into particular psychiatric and neurological conditions. The auditory verbal hallucinations experienced by individuals with schizophrenia can be characterized as “perceptual distortions lateralized to the left hemisphere.” While the literature has focused primarily on visual stimuli in addressing the temporal structure of conscious processing, those findings could translate to conscious processing of auditory stimuli as well. Researchers have characterized autistic perception, too, by “a lack of central coherence,” and autistic individuals have been shown to perform atypically on tasks that “require the integration of global attributes — such as global motion coherence” (Lawson et al., 2014). Temporal knowledge of the processing cascade of stimuli can give care providers the knowledge to best treat and support individuals with neuropsychiatric disorders characterized by aberrant percepts – intrusive, spontaneously occurring perceptual experiences that have no real-world grounds but are still experienced as realistic sensory events (van de Ven & Linden, 2012).

Over the past few decades, the cognitive neuroscience community has attempted to chip away at the age-old conundrum of our conscious processing. Further research in the realm of understanding our perceptual capabilities may involve the integration of audiovisual stimuli, perhaps taking advantage of the ambiguity of the McGurk effect in the same way that the ambiguity of the MIB phenomenon is being utilized (Pylkkänen, 2020 [2]). Regardless, the fascinating findings provided by researchers using cutting-edge neuroimaging techniques can allow us to shift our perspectives on particular neuropsychiatric disorders to enhance and further specialize patient care.

References

Ahlfors, S. P., & Mody, M. (2016). Overview of MEG. Organizational Research Methods, 22(1), 95–115. https://doi.org/10.1177/1094428116676344

Bonneh, Y., & Donner, T. (2011). Motion induced blindness. Scholarpedia. Retrieved March 30, 2022, from http://www.scholarpedia.org/article/Motion_induced_blindness

Gage, N. M., & Baars, B. J. (2018). The art of seeing. Fundamentals of Cognitive Neuroscience, 99–141. https://doi.org/10.1016/b978-0-12-803813-0.00004-0 

Herzog, M. H., Kammer, T., & Scharnowski, F. (2016). Time slices: What is the duration of a percept? PLOS Biology, 14(4). https://doi.org/10.1371/journal.pbio.1002433

Kristofferson, A. B. (1967). Successiveness discrimination as a two-state, quantal process. Science, 158(3806), 1337–1339. https://doi.org/10.1126/science.158.3806.1337

Lawson, R. P., Rees, G., & Friston, K. J. (2014). An aberrant precision account of autism. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00302 

Nani, A., Manuello, J., Mancuso, L., Liloia, D., Costa, T., & Cauda, F. (2019). The neural correlates of consciousness and attention: Two sister processes of the brain. Frontiers in Neuroscience, 13, 1169. https://doi.org/10.3389/fnins.2019.01169

[1] Pylkkänen, L. (2020, August 31). Neural Bases of Language: METHODS 1. Before Neuroimaging. Retrieved February 24, 2023, from https://www.youtube.com/watch?v=n7pI8L8Z8xs 

[2] Pylkkänen, L. (2020, September 4). Neural Bases of Language: SPEECH. Ambiguous Stimuli. Retrieved February 24, 2023, from https://www.youtube.com/watch?v=n7pI8L8Z8xs 

Sterzer, P., Kleinschmidt, A., & Rees, G. (2009). The neural bases of multistable perception. Trends in Cognitive Sciences, 13(7), 310–318. https://doi.org/10.1016/j.tics.2009.04.006  

Van de Ven, V., & Linden, D. E. (2012). The role of mental imagery in aberrant perception: A neurobiological perspective. Journal of Experimental Psychopathology, 3(2), 274–296. https://doi.org/10.5127/jep.017511 

Wu, C.-T., Busch, N. A., Fabre-Thorpe, M., & VanRullen, R. (2009). The temporal interplay between conscious and unconscious perceptual streams. Current Biology, 19(23), 2003–2007. https://doi.org/10.1016/j.cub.2009.10.017
