About
The Cornell Phonetics Lab is a group of students and faculty who are curious about speech. We study patterns in speech — in both movement and sound. We do a variety of research — experiments, fieldwork, and corpus studies. We test theories and build models of the mechanisms that create patterns. Learn more about our Research. See below for information on our events and our facilities.
9th November 2023 04:30 PM
ASL Linguistics Lecture Series: Debbie Chen Pichler to discuss Bimodal Bilingualism
The Department of Linguistics proudly presents Debbie Chen Pichler, Professor of Linguistics at Gallaudet University.
Dr. Pichler will present on "Bimodal Bilingualism and Beyond: What L2 Signers Teach Us about Language Acquisition".
ASL/English interpretation will be provided.
Abstract:
Research on bimodal bilinguals from around the world has broadened our understanding of how humans develop as multilinguals in both spoken and signed modalities.
This talk will overview some of the most significant insights from the last 20 years of bimodal bilingual research, focusing on language acquisition by young bimodal bilingual learners (i.e., kids of deaf adults (KODAs) and deaf children with hearing amplification) and adult L2 signers.
This research also lies at the heart of heated debates about language choice and early intervention practices for deaf and hard of hearing children.
Bio:
Dr. Deborah Chen Pichler became interested in linguistics and bilingualism as a young child, growing up in a bilingual English-Taiwanese family.
She started working in the Linguistics department at Gallaudet University in 2002, where she teaches first and second language acquisition and generative syntax. Her research focuses on the acquisition of American Sign Language (ASL) by deaf children with deaf parents (with and without a CI) and by hearing bilingual children (CODAs: children of deaf adults).
She also researches the acquisition of ASL as a foreign language in adults.
She is especially interested in the use of technology in education, particularly in deaf education.
Location: 106 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
10th November 2023 12:20 PM
Phonetics Lab Meeting
Annabelle will lead a discussion of this paper:
Location: B11 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
15th November 2023 12:20 PM
PhonDAWG - Phonetics Lab Data Analysis Working Group
Sam will show some regressions and visualizations of Fengyue's test data.
Location: B11 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
16th November 2023 04:30 PM
Colloquium Talk by Kyle Johnson on Implicit Objects as Incorporated Theta-roles
The Cornell Linguistic Circle and the Department of Linguistics proudly present Dr. Kyle Johnson of the University of Massachusetts Amherst. Dr. Johnson will give a talk titled "Implicit Objects as Incorporated Theta-roles".
Abstract:
Some verbs are capable of being used without an expression of their arguments. The direct and indirect objects of eat and throw are standard examples.
(1) a. Marlys ate cake.
Marlys ate.
b. Marlys threw the ball to Sam.
Marlys threw the ball.
The meanings of eat and throw preserve the θ-roles that cake and to Sam bear, even when those arguments are not present. Those θ-roles are understood to be existentially closed. They are said to be implicit when this happens. The ability for a θ-role to be implicit seems to be idiosyncratically controlled by the verb, but it does not extend to external arguments.
To make an external θ-role implicit, a valency changing operation is required. An external/internal argument contrast of this sort is also found in many kinds of Noun Incorporation constructions. The lexically idiosyncratic nature of making a θ-role implicit also seems to be a feature of some Noun Incorporation constructions.
Martí (2015) argues that the syntax and semantics of Noun Incorporation underlies making a θ-role implicit. I will pursue that thesis in this talk. I will suggest that we should think of θ-roles as being kinds of nominals, and sketch a syntax that makes sense of that idea. One of its consequences is that θ-roles can undergo Incorporation, and this is how implicit arguments are achieved.
Bio:
Kyle graduated with a BA in psychology from the University of California, Irvine in 1981. UC Irvine had an interesting group of cognitive psychologists at that time working on learning theory, attention, and visual perception. The cognitive science group included linguists -- Mary-Louise Kean, Peter Culicover, Bernard Tranel, Ed Matthei, Ken Wexler, and Stephen Crain, who was a graduate student.
Dr. Johnson learned about linguistics from them and went on to study it at MIT, where he got a PhD in 1985. On his way to the position he now has at UMass, he taught, and mostly learned, at the University of Connecticut at Storrs, UC-Irvine, UCLA, University of Wisconsin at Madison and McGill University. His specialization is in syntactic theory.
The Cornell Phonetics Laboratory (CPL) provides an integrated environment for the experimental study of speech and language, including its production, perception, and acquisition.
Located in Morrill Hall, the laboratory consists of six adjacent rooms and covers about 1,600 square feet. Its facilities include a variety of hardware and software for analyzing and editing speech, for running experiments, for synthesizing speech, and for developing and testing phonetic, phonological, and psycholinguistic models.
Web-Based Phonetics and Phonology Experiments with LabVanced
The Phonetics Lab licenses the LabVanced software for designing and conducting web-based experiments.
LabVanced has particular value for phonetics and phonology experiments.
Students and faculty are currently using LabVanced to design web experiments involving eye-tracking, audio recording, and perception studies.
Subjects are recruited via several online systems.
Computing Resources
The Phonetics Lab maintains two Linux servers that are located in the Rhodes Hall server farm.
In addition to the Phonetics Lab servers, students can request access to additional computing resources of the Computational Linguistics lab.
These servers, in turn, are nodes in the G2 Computing Cluster, which currently consists of 195 servers (82 CPU-only and 113 GPU servers), with roughly 7,400 CPU cores and 698 GPUs in total.
The G2 Cluster uses the SLURM Workload Manager for submitting batch jobs that can run on any available server or GPU on any cluster node.
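As a sketch of what submitting such a batch job looks like, here is a minimal SLURM script; the job name, resource amounts, and script path are illustrative placeholders, not the G2 cluster's actual configuration:

```shell
#!/bin/bash
# example_job.sh -- minimal SLURM batch script (illustrative values)
#SBATCH --job-name=speech-analysis
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=8G
#SBATCH --gres=gpu:1            # request one GPU on any available node
#SBATCH --time=02:00:00
#SBATCH --output=slurm-%j.out   # %j expands to the SLURM job ID

# SLURM schedules the job on whichever node has the requested resources free.
python analyze_speech_data.py --input data/recordings/
```

The script would be submitted with `sbatch example_job.sh`, and `squeue -u $USER` shows its place in the queue.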
Articulate Instruments - Micro Speech Research Ultrasound System
We use the Articulate Instruments Micro Speech Research Ultrasound System to investigate how fine-grained variation in speech articulation connects to phonological structure.
The ultrasound system is portable and non-invasive, making it ideal for collecting articulatory data in the field.
BIOPAC MP-160 System
The Sound Booth Laboratory has a BIOPAC MP-160 system for physiological data collection. This system supports two BIOPAC Respiratory Effort Transducers and their associated interface modules.
Language Corpora
Speech Aerodynamics
Studies of the aerodynamics of speech production are conducted with our Glottal Enterprises oral and nasal airflow and pressure transducers.
Electroglottography
We use a Glottal Enterprises EG-2 electroglottograph for noninvasive measurement of vocal fold vibration.
Real-time vocal tract MRI
Our lab is part of the Cornell Speech Imaging Group (SIG), a cross-disciplinary team of researchers using real-time magnetic resonance imaging to study the dynamics of speech articulation.
Articulatory movement tracking
We use the Northern Digital Inc. Wave motion-capture system to study speech articulatory patterns and motor control.
Sound Booth
Our isolated sound recording booth serves a range of purposes, from basic recording to perceptual, psycholinguistic, and ultrasonic experimentation.
We also have the necessary software and audio interfaces to perform low latency real-time auditory feedback experiments via MATLAB and Audapter.