About
The Cornell Phonetics Lab is a group of students and faculty who are curious about speech. We study patterns in speech, in both movement and sound. We do a variety of research: experiments, fieldwork, and corpus studies. We test theories and build models of the mechanisms that create these patterns. Learn more about our Research. See below for information on our events and our facilities.
28th October 2024 12:20 PM
Phonetics Lab Meeting
We will have two practice talks today:
- Jennifer will present her AMP 2024 (Annual Meeting on Phonology 2024) talk: "The interaction of phonotactics and frequency-matching in alternation learning"
- Jeremy will present a talk that he and Sam are preparing for HDLS16 (High Desert Linguistics Society 16th Biennial Conference): "Polylect: Emergence of dialects in networks of speakers with random, constrained interactions"
Location: B11 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
30th October 2024 12:20 PM
PhonDAWG - Phonetics Lab Data Analysis Working Group
Yao will lead a discussion of the Duanmu reading.
Location: B11 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
31st October 2024 04:30 PM
Linguistics Colloquium Speaker: San Duanmu to speak on "A Gesture-Based Feature Theory and Its Consequences"
The Department of Linguistics proudly presents Dr. San Duanmu, Professor at the University of Michigan, who will give a talk titled "A Gesture-Based Feature Theory and Its Consequences."
Funded in part by the GPSAFC and Open to the Graduate Community.
Abstract:
Using the Principle of Contrast, Duanmu (2016) examined two phoneme inventory databases, UPSID and P-base, with the goal of determining a minimally sufficient system of features that can distinguish all consonants and vowels in the world's languages.
The study found that the set of necessary features is much smaller than previously assumed. In addition, no feature needs more than a binary contrast. Moreover, gesture-based features can yield all necessary distinctions, without the need for acoustic-based, perception-based, or completely abstract features.
The results offer new ways of looking at some age-old controversies. For example, are features or IPA symbols comparable across languages? Are binary features innate, or can they simply emerge? The results also raise some new questions that call for new answers, such as the following:
#1: In gesture-based features, manner features can be performed by different articulators, such as Lip-[+stop] and Tip-[+stop]. Can they really be treated as the same gestures? If not, how do we define sound classes (natural classes) that rely on manner features? For example, [+stop] is commonly used in the rule for the aspiration of [p t k] in English. How do we write this rule without using [+stop]?
#2: Current theories of feature specification (underspecification) suffer from two common problems: (i) failure to yield a solution in some cases and (ii) too many solutions overall. Can gesture-based features avoid such problems?
#3: Why is stress a structural feature, instead of a segmental feature?
#4: If the list of necessary features is smaller than previously assumed, what are examples of unnecessary distinctions the IPA offers and how should we deal with them?
I shall start with an overview of the proposed feature system and then discuss some of its consequences.
References:
Duanmu, San. 2016. A theory of phonological features. Oxford: Oxford University Press.
Bio:
San Duanmu's research focuses on general properties of language, especially those in phonology and the interaction between phonology and morphosyntax. He is the author of The phonology of Standard Chinese (2nd edition, Oxford 2007), Syllable structure: the limits of variation (Oxford 2008), Foot and stress (in Chinese, Beijing Language and Culture University Press 2016), and A theory of phonological features (Oxford 2016).
Location: 106 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
1st November 2024 12:20 PM
Phonetics Lab Meeting: Informal talk with Dr. San Duanmu
Dr. San Duanmu will give an informal talk for P-Lab folks.
Location: B11 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
The Cornell Phonetics Laboratory (CPL) provides an integrated environment for the experimental study of speech and language, including its production, perception, and acquisition.
Located in Morrill Hall, the laboratory consists of six adjacent rooms and covers about 1,600 square feet. Its facilities include a variety of hardware and software for analyzing and editing speech, for running experiments, for synthesizing speech, and for developing and testing phonetic, phonological, and psycholinguistic models.
Web-Based Phonetics and Phonology Experiments with LabVanced
The Phonetics Lab licenses the LabVanced software for designing and conducting web-based experiments.
LabVanced is particularly valuable for phonetics and phonology experiments.
Students and Faculty are currently using LabVanced to design web experiments involving eye-tracking, audio recording, and perception studies.
Subjects are recruited via several online systems.
Computing Resources
The Phonetics Lab maintains two Linux servers located in the Rhodes Hall server farm.
In addition to the Phonetics Lab servers, students can request access to additional computing resources from the Computational Linguistics Lab.
These servers, in turn, are nodes in the G2 Computing Cluster, which currently consists of 195 servers (82 CPU-only servers and 113 GPU servers) with roughly 7,400 CPU cores and 698 GPUs.
The G2 Cluster uses the SLURM Workload Manager for submitting batch jobs that can run on any available server or GPU on any cluster node.
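A typical SLURM workflow wraps the job in a small batch script and submits it with `sbatch`. The sketch below is illustrative only: the partition name, resource amounts, and the `align_corpus.py` script are hypothetical, not actual G2 cluster settings.

```shell
#!/bin/bash
# Illustrative SLURM batch script; partition and script names are assumptions.
#SBATCH --job-name=forced-align     # name shown in the queue
#SBATCH --gres=gpu:1                # request one GPU
#SBATCH --cpus-per-task=4           # request four CPU cores
#SBATCH --mem=16G                   # request 16 GB of RAM
#SBATCH --time=02:00:00             # two-hour wall-clock limit

# Run the analysis (hypothetical script and paths)
python align_corpus.py --input data/recordings --output results/
```

The script would be submitted with `sbatch job.sh`, after which SLURM schedules it on any node with free resources; `squeue -u $USER` shows its position in the queue and `scancel <jobid>` cancels it.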
Articulate Instruments - Micro Speech Research Ultrasound System
We use the Articulate Instruments Micro Speech Research Ultrasound System to investigate how fine-grained variation in speech articulation connects to phonological structure.
The ultrasound system is portable and non-invasive, making it ideal for collecting articulatory data in the field.
BIOPAC MP-160 System
The Sound Booth Laboratory has a BIOPAC MP-160 system for physiological data collection. This system supports two BIOPAC Respiratory Effort Transducers and their associated interface modules.
Language Corpora
Speech Aerodynamics
Studies of the aerodynamics of speech production are conducted with our Glottal Enterprises oral and nasal airflow and pressure transducers.
Electroglottography
We use a Glottal Enterprises EG-2 electroglottograph for noninvasive measurement of vocal fold vibration.
Real-time vocal tract MRI
Our lab is part of the Cornell Speech Imaging Group (SIG), a cross-disciplinary team of researchers using real-time magnetic resonance imaging to study the dynamics of speech articulation.
Articulatory movement tracking
We use the Northern Digital Inc. Wave motion-capture system to study speech articulatory patterns and motor control.
Sound Booth
Our isolated sound recording booth serves a range of purposes, from basic recording to perceptual, psycholinguistic, and ultrasonic experimentation.
We also have the necessary software and audio interfaces to perform low-latency real-time auditory feedback experiments via MATLAB and Audapter.