UCD Phonetics Lab

Interspeech 2020

7/25/2020

 
We are thrilled to have several papers accepted to the 2020 Interspeech conference:

  • Social and functional pressures in vocal alignment: Differences for human and voice-AI interlocutors (Georgia Zellou & Michelle Cohn) 
  • Perception of concatenative vs. neural text-to-speech (TTS): Differences in intelligibility in noise and language attitudes (Michelle Cohn & Georgia Zellou)
  • Individual variation in language attitudes toward voice-AI: The role of listeners’ autistic-like traits (Michelle Cohn, Melina Sarian, Kristin Predeck, & Georgia Zellou)

These include one paper from the new collaboration between UC Davis and Saarland University:
  • Differences in Gradient Emotion Perception: Human vs. Alexa Voices (Michelle Cohn, Eran Raveh, Kristin Predeck, Iona Gessinger, Bernd Möbius, & Georgia Zellou)  

LSA 2020

1/4/2020

 
Great job to Ph.D. students Aleese Block, Tyler Kline, and Bruno Ferenc Segedin for presenting at the 2020 Linguistic Society of America Annual Meeting in New Orleans!
  • California listeners’ patterns of partial compensation for coarticulatory /u/-fronting is influenced by the apparent age of the speaker (Aleese Block, Michelle Cohn, Georgia Zellou)
Aleese Block presenting
  • Conversational role influences speech alignment toward digital assistant and human voices (Georgia Zellou, Michelle Cohn, Tyler Kline, Bruno Ferenc Segedin)
(L-R) Bruno Ferenc Segedin, Tyler Kline, & moderator Andries Coetzee
Tyler Kline presenting
Bruno Ferenc Segedin presenting

Visit from Daan van Esch (Google)

11/16/2019

 
Thanks to Daan van Esch (Google) for coming to visit the Phonetics Lab, going out to lunch with us, and giving a guest lecture this Thursday! 
Daan van Esch giving a guest lecture for Intro to Linguistics (LIN 1)
Daan van Esch showing postdoc Michelle Cohn speech synthesized with Tacotron

Interspeech 2019

9/24/2019

 
Georgia Zellou, Michelle Cohn, & Bruno Ferenc Segedin presented four papers at Interspeech 2019 in Graz, Austria.
View of Interspeech coffee break
Georgia Zellou presenting
Michelle Cohn presenting
Michelle Cohn, Georgia Zellou, & Bruno Ferenc Segedin
See below for links to the papers:
  • Cathryn Snyder, Michelle Cohn, & Georgia Zellou (2019). Individual variation in cognitive processing style predicts differences in phonetic imitation of device and human voices.
  • Michelle Cohn & Georgia Zellou (2019). Expressiveness influences human vocal alignment toward voice-AI.
  • Bruno Ferenc Segedin, Michelle Cohn, & Georgia Zellou (2019). Perceptual adaptation to device and human voices: Learning and generalization of a phonetic shift across real and voice-AI talkers.
  • Michelle Cohn, Georgia Zellou, & Santiago Barreda (2019). The role of musical experience in the perceptual weighting of acoustic cues for the obstruent coda voicing contrast in American English.

Amazon VUI Summit

6/17/2019

 
Dr. Georgia Zellou and Dr. Michelle Cohn gave invited talks at the June 2019 Voice User Interface (VUI) Summit at the Amazon headquarters. 
  • Zellou, G. Exploring human “speech rules” during vocal interactions with voice-AI. 
  • Cohn, M. Exploring cognitive-emotional expression: The impact of voice “emojis” in human-Alexa interaction.
Michelle Cohn (left) and Georgia Zellou (right) at the Amazon headquarters
Dr. Georgia Zellou

Congrats to presenters at the UCD Symposium on Language Research

5/28/2019

 

'Most Innovative Research' Panel

Undergraduate researcher Melina Sarian did a fantastic job presenting her research with collaborators Dr. Georgia Zellou and Dr. Michelle Cohn at the 'Most Innovative Research' Panel.
Melina Sarian presenting at the 'Most Innovative Research' Panel

Dynamics of Voice-AI Interaction Panel

Bruno Ferenc Segedin and Michelle Cohn presented two talks in our 'Dynamics of Voice-AI Interaction' panel.

Phon Lab at the UCD Language Symposium

5/23/2019

 

5 Minute Linguist Competition Video

1/24/2019

 
See below for the recording of the 5 Minute Linguist (5ML) competition, emceed by John McWhorter, at the Linguistic Society of America Annual Meeting in New York City. The aim of the competition is to communicate a research project to a general audience in just 5 minutes (and with no notes!).

We are thrilled that two talks selected as finalists were from our lab!

Talk 1 (0:00 - 8:22): Michelle Cohn (University of California, Davis): Phonologically motivated phonetic repair strategies in Siri- and human-directed speech

Talk 2 (9:45 - 15:43): Bruno Ferenc Segedin (University of California, Davis) & Georgia Zellou (University of California, Davis): Lexical frequency mediates compensation for coarticulation: Are the seeds of sound change word-specific?


Congratulations to the other presenters, as well!
  • Andrew Cheng (University of California, Berkeley): Style-shifting, Bilingualism, and the Koreatown Accent
  • Kristin Denlinger (University of Texas, Austin) & Michael Everdell (University of Texas, Austin): A Mereological Approach to Reduplicated Resultatives in O’dam
  • Jessi Grieser (University of Tennessee): Talking Place, Speaking Race: Topic-based style shifting in African American Language as an expression of place identity
  • Kate Mesh (University of Haifa): Gaze decouples from pointing as a result of grammaticalization: Evidence from Israeli Sign Language
  • Jennifer Schechter (University at Buffalo): What Donald Trump’s ‘thoughts’ reveal: An acoustic analysis of 45’s coffee vowel
  • Ai Taniguchi (Carleton University): Why we say stuff




Location:
251 Kerr Hall

Principal Investigators

Georgia Zellou, Ph.D.
Santiago Barreda, Ph.D.

Contact: ucdphonlab@gmail.com