About
The Cornell Phonetics Lab is a group of students and faculty who are curious about speech. We study patterns in speech, in both movement and sound. We do a variety of research: experiments, fieldwork, and corpus studies. We test theories and build models of the mechanisms that create these patterns. Learn more about our Research. See below for information on our events and our facilities.
28th September 2023 04:30 PM
Linguistics Colloquium and ASL Speaker: Dr. Julie Hochgesang
The Department of Linguistics proudly presents Dr. Julie Hochgesang, Professor at Gallaudet University.
Dr. Hochgesang will present on "Sharing ASL Data Online FAIRly with CARE the ASL Way: MoLo and O5S5 Projects".
Her talk is jointly sponsored by the Cornell Department of Linguistics and the Cornell Linguistics Circle.
Abstract:
As a deaf linguist in North America, I have recently focused on documenting the language use of the ASL communities in North America. In my presentation, I discuss how language documenters share their data publicly, drawing upon the Austin Principles of Data Citation, the FAIR and CARE guidelines, and practices specific to signed language researchers. I also present findings from a recent survey we conducted with the ASL communities about sharing ASL data online.
I focus on two current documentation projects, “Motivated Look at Indicating Verbs in ASL (MoLo)” and “Documenting the experiences of the ASL communities in the time of COVID-19 (O5S5)”, which I am working on sharing as open access. While I describe the projects and showcase some of the data, I specifically highlight the data statements I am creating for both and reflect on what it means to publicly share ASL videos online.
Bio:
Julie A. Hochgesang (/ˈhoʊkˌsæŋ/) is a professor of Linguistics at Gallaudet University. She is a deaf linguist who specializes in the phonetics and phonology of signed languages, fieldwork, documentation, and corpora of signed languages, and the ethics of working with signed language communities. Professor Hochgesang also works towards making linguistics accessible to the communities, especially the ASL communities, sharing multimodal products via social media and digital repositories.
She has contributed to ongoing efforts to create accessible collections for the ASL communities, most notably as an active maintainer of the ASL Signbank. Her most recent ASL documentation projects include the “Philadelphia Signs Project”, “Motivated Look at Indicating Verbs in ASL (MoLo)”, “Gallaudet University Documentation of ASL (GUDA)”, and “Documenting the Experiences of the ASL communities in the time of COVID-19” (O5S5, an ASL name derived from ASL variants for “Document COVID”).
She does this work because of the ASL communities, and she considers herself a member of these communities.
Location: 106 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
29th September 2023 12:20 PM
Phonetics Lab Meeting
Discussion of:
3rd October 2023 04:30 PM
ASL Lecture Series: Carlos Alaiza Javier to give a talk on "Comparing American Sign Language (ASL) with Spanish Sign Language (LSE – Lengua de Signos Española)"
The Department of Linguistics proudly presents Carlos Alaiza Javier as the 2nd speaker of our Fall 2023 ASL Lecture Series.
Carlos, a native signer of Spanish Sign Language, will give a talk titled "Comparing American Sign Language (ASL) with Spanish Sign Language (LSE – Lengua de Signos Española)".
Abstract:
Sign languages around the world often share common properties; it appears that the modality itself has influenced aspects of how sign languages are structured. The field of sign language research is still young, and it is possible that distinguishing grammatical features are being overlooked.
This presentation will compare Spanish Sign Language's grammatical and nonmanual features with those of ASL. The presenter will provide narratives in Spanish Sign Language and their translations in ASL, with opportunities to analyze similarities and differences between the two sign languages. In addition, information about the Deaf community in Spain will be shared.
ASL/English interpretation will be provided.
Bio:
Carlos Alaiza Javier was born and raised in Zaragoza, Spain. His parents and two older siblings are all Deaf and he is a native signer of Spanish Sign Language.
He graduated from La Purisima School for deaf students in Zaragoza. He is employed as a mechanic and is an avid traveler who has been to more than 40 countries. On the side, he is a professional photographer working with a Canon camera.
Location: 106 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
4th October 2023 12:20 PM
PhonDAWG - Phonetics Lab Data Analysis Working Group
We'll play around with OpenAI's Whisper speech-to-text model (see the sketch below).
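For anyone who wants to experiment beforehand, here is a minimal sketch of transcribing a recording with the open-source openai-whisper Python package (assuming it and ffmpeg are installed); the model size and file name are placeholders, not actual lab files:

    # Minimal sketch: transcribe a recording with the open-source openai-whisper package.
    # Assumes `pip install -U openai-whisper` and a working ffmpeg installation.
    import whisper

    model = whisper.load_model("base")          # "base" is a placeholder; larger models trade speed for accuracy
    result = model.transcribe("recording.wav")  # "recording.wav" is a hypothetical file name
    print(result["text"])                       # the full transcript as one string

The returned dictionary also includes segment-level timestamps under result["segments"], which is often more useful for phonetic work than the plain transcript.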
Location: B11 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA

The Cornell Phonetics Laboratory (CPL) provides an integrated environment for the experimental study of speech and language, including its production, perception, and acquisition.
Located in Morrill Hall, the laboratory consists of six adjacent rooms and covers about 1,600 square feet. Its facilities include a variety of hardware and software for analyzing and editing speech, for running experiments, for synthesizing speech, and for developing and testing phonetic, phonological, and psycholinguistic models.
Web-Based Phonetics and Phonology Experiments with LabVanced
The Phonetics Lab licenses the LabVanced software for designing and conducting web-based experiments.
LabVanced has particular value for phonetics and phonology experiments because of its:
Students and faculty are currently using LabVanced to design web experiments involving eye-tracking, audio recording, and perception studies.
Subjects are recruited via several online systems:
Computing Resources
The Phonetics Lab maintains two Linux servers that are located in the Rhodes Hall server farm:
In addition to the Phonetics Lab servers, students can request access to additional computing resources of the Computational Linguistics lab:
These servers, in turn, are nodes in the G2 Computing Cluster, which currently consists of 195 servers (82 CPU-only and 113 GPU servers) with approximately 7,400 CPU cores and 698 GPUs.
The G2 Cluster uses the SLURM Workload Manager to submit batch jobs, which can run on any available server or GPU node in the cluster; a sketch of a batch script is shown below.
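For illustration only (the directives below are placeholder assumptions, not the actual G2 configuration), here is a minimal sketch of a SLURM batch script. Because sbatch reads #SBATCH directives from leading comment lines, the body of the script can be ordinary Python:

    #!/usr/bin/env python3
    #SBATCH --job-name=phonetics-demo   # hypothetical job name
    #SBATCH --gres=gpu:1                # placeholder: request one GPU
    #SBATCH --cpus-per-task=4           # placeholder CPU request
    #SBATCH --mem=16G                   # placeholder memory request
    #SBATCH --time=01:00:00             # placeholder wall-clock limit

    # Everything below runs as ordinary Python on whichever node SLURM allocates.
    import socket

    print("Job running on node:", socket.gethostname())
    # ... analysis code (e.g., batch acoustic processing) would go here ...

Submitting the file with sbatch queues the job, SLURM schedules it on a node that satisfies the requested resources, and squeue reports its status.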
Articulate Instruments - Micro Speech Research Ultrasound System
We use the Articulate Instruments Micro Speech Research Ultrasound System to investigate how fine-grained variation in speech articulation connects to phonological structure.
The ultrasound system is portable and non-invasive, making it ideal for collecting articulatory data in the field.
BIOPAC MP-160 System
The Sound Booth Laboratory has a BIOPAC MP-160 system for physiological data collection. This system supports two BIOPAC Respiratory Effort Transducers and their associated interface modules.
Language Corpora
Speech Aerodynamics
Studies of the aerodynamics of speech production are conducted with our Glottal Enterprises oral and nasal airflow and pressure transducers.
Electroglottography
We use a Glottal Enterprises EG-2 electroglottograph for noninvasive measurement of vocal fold vibration.
Real-time vocal tract MRI
Our lab is part of the Cornell Speech Imaging Group (SIG), a cross-disciplinary team of researchers using real-time magnetic resonance imaging to study the dynamics of speech articulation.
Articulatory movement tracking
We use the Northern Digital Inc. Wave motion-capture system to study speech articulatory patterns and motor control.
Sound Booth
Our isolated sound recording booth serves a range of purposes, from basic recording to perceptual, psycholinguistic, and ultrasonic experimentation.
We also have the necessary software and audio interfaces to perform low-latency, real-time auditory feedback experiments via MATLAB and Audapter.