We’re hosting a virtual event at the UC Davis Phonetics Lab (Dept. of Linguistics) on Thursday, April 28th.
Come learn about speech science with Siri, Alexa, and Google Assistant! Kids (ages 7-12) can participate in a real science experiment with a voice assistant (note that a parent must be present to consent). You will need a computer that can play sound and allow you to type/click (no other devices are needed). The experiment will take about 5 minutes. Afterward, you’ll see a short presentation about our research, including an overview of the lab.
Postdoc Michelle Cohn, former grad student Bruno Ferenc Segedin, & Prof. Georgia Zellou published a paper in the Journal of Phonetics: "Acoustic-phonetic properties of Siri- and human-directed speech" (https://doi.org/10.1016/j.wocn.2021.101123; open access!)
New paper: "Perceptual validation of vowel normalization methods for variationist research" (https://doi.org/10.1017/S0954394521000016)
New paper: "Prosodic differences in human- and Alexa-directed speech, but similar local intelligibility adjustments" (https://doi.org/10.3389/fcomm.2021.675704)
Congrats to Dr. Georgia Zellou, postdoc Michelle Cohn, and graduate student Tyler Kline for their paper, "The influence of conversational role on phonetic alignment toward voice-AI and human interlocutors" published in Language, Cognition and Neuroscience (LCN) today!
Congrats to Dr. Georgia Zellou, postdoc Michelle Cohn, and graduate student Aleese Block for their paper, "Partial compensation for coarticulatory vowel nasalization across concatenative and neural text-to-speech", published in the Journal of the Acoustical Society of America (JASA) today!
Congrats to postdoc Michelle Cohn, research coordinator Melina Sarian, and Dr. Georgia Zellou for their new paper in Frontiers in Communication, "Speech rate adjustments in conversations with an Amazon Alexa socialbot", in the special issue ‘Towards Omnipresent and Smart Speech Assistants’.
Congrats to the two NSF postdoctoral fellows in the Phonetics Lab: Dr. Kayla Palakurthy was nominated for, and Dr. Michelle Cohn was awarded, the UC Davis Award for Excellence in Postdoctoral Research.
New paper in Cognition: "Intelligibility of face-masked speech depends on speaking style: Comparing casual, smiled, and clear speech" (Cohn, Pycha & Zellou, 2021).
Feb. 4, 2021: WFMY coverage of our Cognition paper. They did an "experiment" to test whether listeners could tell if the host was wearing a mask or not.
CBS-13 Sacramento covered our recent face-masked speech paper on Feb. 2, 2021.
Congrats to the grad and undergrad students for their presentations at the 2021 Annual Meeting of the Linguistic Society of America (LSA).