SDSU/UCSD JDP-LCD Doctoral Candidate – Jacob Momsen Presentations

September 13, 2022

Jacob Momsen, SDSU/UCSD JDP-LCD Doctoral Candidate, will present his Integrative Paper and Dissertation Proposal this week.

Doctoral Candidate – Jacob Momsen presents:

Dissertation Proposal Talk: The Conversational Dance: A test of the SAMP hypothesis

Tuesday, September 13, 2022 at 11:00 a.m. PDT

Hybrid Event: SLHS 204 or via Zoom

Sensitivity to nonverbal cues such as gestures is vital for successful communicative interactions. Behavioral work has shown that co-speech gestures influence the perception of spoken prosody, but extant neuroimaging work aimed at understanding how visually available speaker information influences audition has predominantly focused on multisensory interactions with articulatory information. By contrast, neuroscientific investigations into speech-gesture interactions at the prosodic level are scarce, and current frameworks regarding audiovisual speech processing need to be extended to account for the way co-speech movements such as manual gestures influence the perception of spoken prosody. The proposed studies will implement electrophysiological methods to investigate the neurobiological underpinnings of speech-gesture integration, and specifically the way that co-speech movements offering a visual analogue to spoken prosody shape speech processing in the brain. Aim 1 will determine whether visual prosodic features of co-speech gestures impact the cortical representation of continuous speech. Aim 2 will examine whether the processing of congruent visual prosody conveyed by co-speech gestures entails multisensory interactions with continuous speech, as opposed to being tracked independently of the auditory signal. Lastly, Aim 3 will test a relatively novel hypothesis appealing to biomechanics, which posits that kinetic information conveyed by co-speech movements is explicitly monitored by neural mechanisms independent of those involved in processing other visuospatial or kinematic features, and that these mechanisms are relevant to the way visual prosody is integrated with speech. Together, this work will help expand current understanding of the scope of multisensory interactions that characterize spoken language processing by offering an empirical starting point for explaining audiovisual interactions at the scale of speech and co-speech gestures.

Integrative Paper Talk: A sensorimotor account of multimodal prosody: The role of the vestibular system

Wednesday, September 14, 2022 at 11:00 a.m. PDT

Hybrid Event: SLHS 204 or via Zoom

The temporal signatures that characterize speech, especially its prosodic qualities, are observable in the movements of the hands and bodies of its speakers. A neurobiological account of these prosodic rhythms is thus likely to benefit from insights into the neural coding principles underlying co-speech gestures. Here we consider a role for the information processing capacity of the vestibular system, a sensory system that encodes movements of the body, in the neural representation of prosody. A careful review of the vestibular system's functional organization as a hierarchical, predictive system, its relevance for rhythmic sound sequences, and its involvement in vocalization suggests that vestibular codes may be important for the neural tracking of speech. As the kinematics and time course of co-speech movements often mirror fluctuations in spoken prosody, we argue that the vestibular system helps encode and decode these prosodic features in multimodal discourse, and possibly during auditory speech processing alone.
