Plenary Talks & Abstracts

A context constructivist account of contextual diversity and word frequency

Michael Tanenhaus, Ph.D.

University of Rochester

Department of Brain and Cognitive Sciences, Department of Linguistics

The diversity of contexts in which a word occurs, operationalized as contextual diversity (CD), accounts for much of the variance in measures of lexical processing previously attributed to word frequency (WF). Building upon Adelman and colleagues’ proposal that CD is a better proxy for need probability than WF, we formalize and test a computational-level, “context constructivist” account of CD and WF. We propose that language users store fine-grained, contextualized statistical information about word distributions and use it to actively construct and update a context model that informs expectations about which words are likely in the current context, resulting in predictability effects. In a relatively constraining context, then, the range of contexts in which a word occurs will be less predictive of its probability than WF. We find support for predictions from our account in five experiments in English and Chinese, using frequency judgments, eye movements for sentences embedded in weakly and strongly constraining contexts, and a corpus analysis of eye-movement data for natural texts. The primary results are: (1) frequency judgments are more accurate for same-category word pairs than for different-category pairs (Experiment 1); (2) when other variables that influence lexical processing are controlled, CD but not WF effects are found in frequency judgments (Experiments 2a and 2b) and in eye movements during reading (Experiment 3) for weakly constrained contexts, whereas WF but not CD effects are found in more constrained contexts; and (3) in a corpus analysis of eye movements in reading natural texts, CD effects diminish as predictability increases, as measured by entropy and surprisal. Moreover, there are some interesting twists with potentially important theoretical implications; for example, there are suggestive differences between entropy and surprisal.
In addition, residual WF effects in low-entropy contexts (when CD is factored out) are most likely predictability effects (Experiment 4). Taken together, these results provide support for the novel predictions generated by our framework.

Speaking to Be Understood: Insights into Speech Processing and Effective Communication

Rajka Smiljanic, Ph.D.

University of Texas at Austin

Department of Linguistics

In our daily interactions, we frequently encounter situations where speech intelligibility varies significantly: conversations occur in noisy classrooms or clinics, talkers may be instructors who speak with a non-native accent or wear a protective face mask, and listeners may be elderly parents with hearing loss or healthcare professionals from different linguistic backgrounds. In response to such challenges, talkers spontaneously adopt listener-oriented clear speech: they slow down, produce wide pitch excursions, and carefully enunciate phonemes with the goal of making communication easier. A robust clear-speech intelligibility benefit is well documented across a variety of talkers, listeners, and communication challenges. In this talk, I will focus on work that moves beyond intelligibility variation. In one line of work, we examine whether clear speech aids the process of speech segmentation. In another, we explore whether intelligibility-enhancing clear speech reduces the listening effort required for speech processing. The combined results provide evidence that clear speech not only aids signal-dependent processing but also enhances deeper linguistic processing, abstracted from the input speech. Moreover, our results suggest that listeners instinctively direct their selective attention toward acoustically salient speech, thereby benefiting speech processing. The long-term goal of this research program is to understand the perceptual processes and cognitive mechanisms that underlie successful perception of the clear speaking style. Understanding how variations in speech clarity affect comprehension in everyday communication is a theoretically interesting problem with direct applications in fields such as education, healthcare, and speech recognition, with an eye toward enhancing daily communication.

The Case Against Phonological Gender Assignment: Crosslinguistic Evidence from Hausa, Guébie and Beyond

Ruth Kramer, Ph.D.

Georgetown University

Department of Linguistics

According to classic typological research, grammatical gender can be assigned to nouns in several different ways. Gender can be assigned semantically (depending on social gender identity, animacy, etc.), morphologically (depending on the presence of a specific affix), or phonologically (e.g., depending on the final segment of the noun). In this talk, I build a case against the last member of this list: phonological gender assignment. I present the results of a crosslinguistic survey of phonological gender assignment, as well as case studies of multiple languages that allegedly use phonological gender assignment, including Hausa (Chadic), Gujarati (Indo-Aryan), Apurinã (Maipurean), and Guébie (Kru), among others. I argue that the crosslinguistic trends and the case studies point towards phonology *not* being involved in grammatical gender assignment and, more importantly, that a phonological gender assignment analysis is less explanatory than alternative approaches. In morphosyntactic theories that assume the Late Insertion of morphophonological material (e.g., Distributed Morphology, nanosyntax, etc.), phonological gender assignment is predicted to be difficult at best, because gender is assigned during the syntactic derivation and the syntax lacks phonological information. This result therefore provides support for Late Insertion, and against theories in which gender is assigned in the lexicon with access to phonological information. I close the talk with plans for future work investigating additional languages with (alleged) phonological gender assignment.

Monolingual expectations, bilingual realities: Sociolinguistic perceptions and Latinx languaging

Salvatore Callesano, Ph.D.

University of Illinois, Urbana-Champaign

Department of Spanish and Portuguese

The co-existence of two or more languages is a global norm, and yet colonial histories continue to reinforce a one nation-one language ideology. Language contact is ever present in the sociopolitical history and contemporary landscape of the U.S., with Spanish being the second most used language. Research on the outcomes of language contact, both linguistic and ideological, is of critical importance for understanding the sociolinguistic dynamics of bilingual U.S. Latinx communities. In this talk, I discuss a series of sociolinguistic studies that, through various methodological approaches, show how the mixed-language realities of Latinx communities are regularly met with and policed by unfounded monolingual expectations. In a perceptual dialectology mapping task concerned with lexical variation, results point to differences when analyzed through bilingual versus monolingual analytical lenses. On social media, young adult Latinx languaging, which shows evidence of well-documented language contact phenomena, is discriminated against under raciolinguistic ideologies. Then, a computational sociolinguistic approach to the comment sections of social media videos reveals a clear pattern of discussing Latinx languaging in English as opposed to Spanish, or even mixed Spanish-English discourse. Finally, interview discourses with second-generation Latinxs from Illinois highlight praise for a global understanding of bilingualism while showing the simultaneous internalization of linguistic insecurity. The results of these studies come together in a discussion of how language is perceived within U.S. Latinx communities and add to ongoing discussions of how to approach the study of Latinx languaging, and bilingualism more broadly, with bilingual research designs.