Sorry, what?! Imaging the hippocampus and neocortex to understand the formation of speech memories.

Last summer, I had the most formative and invaluable research experience thanks to the travel grant I received from LuCiD. I worked as a research intern for two months in the Psychology department at the University of Liverpool under the supervision of Dr Emmanuel Biau, contributing to his project "How does the hippocampus integrate audiovisual theta synchrony in speech to form new memories?".

During the internship, I had the opportunity to pilot the functional magnetic resonance imaging (fMRI) experiment I worked on, learn the code and collect data. I also designed the advertising material, documented the experimental protocol, and created a system for registering interest and signing up participants. Additionally, I got to meet other researchers and PhD students in the department and learn from their research. I hope you find the research just as interesting as I did!

A SUMMER (INTERNSHIP) FULL OF MEMORIES

We hear and see people speak every day. We map auditory cues, like phonemes and tone, to visual cues, like lip and throat movement, most of the time without any awareness. When we watch films, we can easily detect if the sound and video are out of sync. We get annoyed, but we don't really stop to think why. Underlying that annoyance is a message from our brain telling us that there is something wrong with the speech event we are witnessing. Look at the images below: can you tell which speakers are actually saying the word in their corresponding speech bubble?

How could you tell? Did you just intuitively know? Now imagine watching and hearing people speak, where the sound is delayed by only a fraction of a second. It's mind-boggling that in just a few milliseconds, our brain can detect asynchronous visual and auditory stimuli, inform us of the discrepancy, and yet still encode and form a memory of that same speech event.

Dr Biau's project explores this phenomenon: how is the formation of episodic speech memories affected by multisensory speech perception? In the experiment I worked on, we imaged the hippocampus and sensory areas of the neocortex during an audiovisual perception task. The hypothesis: the effect of audiovisual synchrony on theta oscillations extends to the speech domain and enhances the association of coherent information during speech encoding.

A SNAPSHOT OF THE RESEARCH

We conducted the experiment at the Liverpool Magnetic Resonance Imaging Centre (LiMRIC), a research imaging centre with top-of-the-range equipment, including the Siemens Prisma 3 Tesla MR scanner that we used to collect data.

Siemens Prisma 3T MR scanner, LiMRIC.

We worked closely with the radiography team during the development of the triple-echo sequences we used, the set-up of the experiment, participant screening and data collection. Fortunately, participant recruitment progressed quickly, allowing us to collect a significant amount of data before my internship ended. Please see below for the poster we used to recruit participants.

Promotional poster for participant recruitment.

When participants arrived at LiMRIC, we explained the experimental protocol and scanning procedure to them again and obtained informed consent. The radiographer reviewed the screening forms and confirmed that participants were safe to scan. After getting ready and completing the training task, participants started the experiment.

There were two parts to Dr Biau’s novel experimental paradigm. In the first part, participants watched short speech movies with manipulated audiovisual synchrony while undergoing scanning. In the second part, participants completed a memory task that tested their memory of the movies. Data collection is still ongoing.

WHY DOES SPEECH MEMORY MATTER?

Did participants form speech memories when the sound and video were asynchronous? Did they remember the asynchronous speech events? Preliminary cognitive-behavioural results suggest they did, and with a high level of accuracy, which implies participants successfully formed memories of the speech events despite the audiovisual asynchrony. These preliminary findings support Dr Biau's proposal that hippocampal theta rhythms, neural oscillations with a frequency of 4–8 Hz, help integrate audiovisual stimuli during speech events for memory encoding. The brain imaging data will help confirm the role the hippocampus plays in reconciling the asynchronous sound and video and in forming these memories.

Beyond introducing a new experimental paradigm and hypothesis, this pioneering study has major implications for language development and communication research, as the synchrony between lip and throat movements and the auditory signal is a critical mechanism of phoneme acquisition. Neuroatypical populations with atypical hippocampal development or theta rhythms may not benefit from this entrainment mechanism and may present with poorer speech memory, or poorer episodic memory more generally. One such group is autistic people, in whom impaired theta modulation has been linked to deficits in working memory and semantic mapping.

THANK YOU

I had studied brain imaging during my previous education, but I had never been in a research MRI lab or worked on a professional MRI project. By interacting with the equipment, familiarising myself with the facilities and learning to communicate with the radiography team, I gained invaluable insight into the feasibility and vulnerabilities of MRI research. The experience helped me integrate my theoretical knowledge of fMRI experimental design and data analysis with the practical realities and operational dynamics of fMRI data collection. Equally, the creative freedom Dr Biau granted me to design the promotional materials and document the experimental protocol was very valuable, as it allowed me to become familiar with the MRI screening process and learn how to communicate complex research to a lay audience.

The project was especially meaningful to me, as it directly relates to my research area and interests and has enriched my PhD research. I’ve already had the opportunity to apply my new knowledge of audiovisual integration and speech memory when designing my upcoming experiments. I want to say a huge thank you to Dr Biau for this incredible opportunity and to LuCiD for making it all possible.

 
