Multiparty conversation

In daily life, we constantly adapt our language depending on who we are talking to and how much knowledge we share. We collaboratively establish shared terms to refer efficiently to repeatedly mentioned items — for example, calling a local coffee shop "ER" among friends, but "Espresso Royale on Goodwin Street" to people who are not familiar with that label. This process of designing utterances with respect to a partner's knowledge state is called audience design (Clark, 1996). I examine how this process scales up to multiparty conversation, in which speakers interact with more than two partners who share different degrees of knowledge. My work shows that speakers are sensitive to their partners' knowledge states and tailor utterances with respect to the overall distribution of knowledge among partners.

I am also interested in when and why the ability to engage in audience design declines, and in what factors support efficient conversation in older adults.


Interaction between memory and language use

Memory representations of what is and is not jointly known are necessary for speakers to tailor their utterances to the knowledge and perspective of the addressee. If a speaker fails to remember what they have said in the past, they might repeat this information to the same addressee multiple times, making conversation inefficient and redundant. I take an interdisciplinary approach, aiming to understand the interdependence between memory and language in natural conversation. Specifically, I examine how interlocutors — speakers and listeners — establish memory representations while communicating. In my studies, two naïve participants hold a naturalistic conversation and then perform a memory test. I found that speakers had better memory for past referents than listeners, which is consistent with the generation effect in the memory literature. However, this speaking benefit for memory for past referents did not extend to memory for unmentioned aspects of the discourse context.

I also explore language production in amnesic patients to test how they engage in audience design and form memory representations of what different partners do and do not know. Individuals with amnesia have severe impairment in declarative memory caused by bilateral hippocampal damage (Duff & Brown-Schmidt, 2012). The hippocampus is believed to play a crucial role in the formation of new memories of experienced events. Thus, if the hippocampus is damaged, updating memory representations of joint knowledge during conversation may be difficult for individuals with amnesia. One question that has not been addressed is whether this population is able to establish partner-specific joint knowledge during conversation. I examine whether individuals with amnesia can form partner-specific memory representations of joint knowledge and use them flexibly, as neurotypical individuals do. I also work with individuals with neurodegenerative diseases who have memory and cognitive impairments, such as individuals with dementia (e.g., Alzheimer's disease).


Lexical differentiation (recording eye movements and electrical brain activity (ERPs))

Lexical differentiation refers to the tendency of speakers to elaborate their referring expressions with modifiers, e.g., "the striped shirt", if a different exemplar from the same category has been mentioned in the past, e.g., "the shirt" (Van der Wege, 2009). This suggests that speakers typically take into account both the local context and the discourse history when designing definite referring expressions. Although lexical differentiation has been replicated in production many times, listeners have shown no evidence of sensitivity to it in behavioral studies (e.g., eye-gaze measures; Yoon & Brown-Schmidt, 2014). I revisit lexical differentiation in both production and comprehension by testing (in)appropriately differentiated expressions while recording participants' electrical brain activity.


Processing of disfluency

Speakers are often disfluent (~6 disfluencies per 100 words in spontaneous speech). Disfluencies (e.g., "um" or "uh") do not carry linguistic information, but listeners actively process them and use them to predict upcoming words. It is well established that young adults expect something hard to label, or a discourse-new referent, following a speaker's disfluency (Arnold et al., 2004). They also readily cancel this expectation when there is a plausible reason for the speaker's disfluency (e.g., when a naïve partner has just joined the conversation; Yoon & Brown-Schmidt, 2014). I am interested in how children process disfluency, particularly when they interact with multiple partners who share different knowledge with one another. My research shows that 4-year-old children flexibly process disfluency with respect to the current partner's knowledge state, rather than based on their own egocentric knowledge.