UCD Phonetics Lab

Siri- and human-directed speech paper in the Journal of Phonetics

1/18/2022

Postdoc Michelle Cohn, former grad student Bruno Ferenc Segedin, & Prof. Georgia Zellou published a paper in the Journal of Phonetics: "Acoustic-phonetic properties of Siri- and human-directed speech" (https://doi.org/10.1016/j.wocn.2021.101123; open access!)


Interspeech 2020

7/25/2020
We are thrilled to have several papers accepted to the 2020 Interspeech conference:

  • Social and functional pressures in vocal alignment: Differences for human and voice-AI interlocutors (Georgia Zellou & Michelle Cohn) 
  • Perception of concatenative vs. neural text-to-speech (TTS): Differences in intelligibility in noise and language attitudes (Michelle Cohn & Georgia Zellou)
  • Individual variation in language attitudes toward voice-AI: The role of listeners’ autistic-like traits (Michelle Cohn, Melina Sarian, Kristin Predeck, & Georgia Zellou)

Including one from the new collaboration between UC Davis and Saarland University:
  • Differences in Gradient Emotion Perception: Human vs. Alexa Voices (Michelle Cohn, Eran Raveh, Kristin Predeck, Iona Gessinger, Bernd Möbius, & Georgia Zellou)  

Call for RAs for human-Alexa project

2/27/2020
Call for RAs: Alexa- vs. human-directed speech project (Dr. Michelle Cohn, UCD Phonetics Lab)

Postdoctoral fellow Dr. Michelle Cohn is recruiting undergraduate research assistants (RAs) to serve as confederates in the Alexa- (Amazon Echo) vs. human-directed speech project for Spring 2020 (with the possibility of a paid STDT 2 position in Summer 2020 and LIN 199 units in Fall 2020).

RAs are expected to work approximately 6-8 hours a week on the project and will receive 2 units of LIN 199 credit per quarter (note that 4 units count as an upper-division elective toward the LIN major).

Details (Spring Qtr 2020):
  • Act as a confederate in an interactive psycholinguistic experiment 
  • Receive 2 units of LIN 199 credit (6-8 hours / week)
  • Attend lab meetings (if possible)
The project: Alexa- vs. human-directed speech
  • We are comparing how adults and children talk to Amazon's Alexa versus a real human (you!). There will be both scripted and open-ended interactions between you and the participants (and they will complete the same tasks with Alexa). Note that the scripted portion will not require memorization.

Requirements:
  • Since 'Alexa' has apparent social characteristics ('female', native English speaker), we are aiming to match those properties in our confederates. At this time, we’re seeking only female native English speakers.

Desired qualifications:
  • Interest in research! (Prior experience not required)
  • Linguistics or Cognitive Science major
  • Enthusiasm for talking to new people

Application deadline: March 8, 2020 (by midnight PST)
To apply, email the following to the project PI, Dr. Cohn (mdcohn@ucdavis.edu): 
  • CV / Resume
  • Unofficial academic transcripts
  • A 1-2 paragraph description of your interest in the RA position (Why do you want to be an RA? Why do you think you'll be a good fit?)
  • Spring 2020 course schedule 

Next steps: 
  • After receiving your application, we may offer you an interview with the project PI (Dr. Michelle Cohn).
  • Training will begin the first week of Spring quarter 2020. 

LSA 2020

1/4/2020
Great job to Ph.D. students Aleese Block, Tyler Kline, and Bruno Ferenc Segedin for presenting at the 2020 Linguistic Society of America Annual Meeting in New Orleans!
  • California listeners’ patterns of partial compensation for coarticulatory /u/-fronting is influenced by the apparent age of the speaker (Aleese Block, Michelle Cohn, Georgia Zellou)
Photo: Aleese Block presenting
  • Conversational role influences speech alignment toward digital assistant and human voices (Georgia Zellou, Michelle Cohn, Tyler Kline, Bruno Ferenc Segedin)
Photo: (L-R) Bruno Ferenc Segedin, Tyler Kline, & moderator Andries Coetzee
Photo: Tyler Kline presenting
Photo: Bruno Ferenc Segedin presenting

Visit from Daan van Esch (Google)

11/16/2019
Thanks to Daan van Esch (Google) for coming to visit the Phonetics Lab, going out to lunch with us, and giving a guest lecture this Thursday! 
Photo: Daan van Esch giving a guest lecture for Intro to Linguistics (LIN 1)
Photo: Daan van Esch showing postdoc Michelle Cohn speech synthesized with Tacotron

Collaboration with KTH & Furhat Robotics

9/25/2019
While at the KTH Royal Institute of Technology (Stockholm, Sweden) this September, Michelle Cohn met up with Dr. Jonas Beskow, co-founder of Furhat Robotics, and Ph.D. student Patrik Jonell. Together with Georgia Zellou, they are conducting a study to test the role of embodiment and gender in humans' voice-AI interactions across three platforms: Amazon Echo, Nao, and Furhat.
Photo: Michelle Cohn, Jonas Beskow, & Patrik Jonell at the KTH Studio

Interspeech 2019

9/24/2019
Georgia Zellou, Michelle Cohn, & Bruno Ferenc Segedin presented 4 papers at Interspeech in Graz, Austria. 
Photo: View of Interspeech coffee break
Photo: Georgia Zellou presenting
Photo: Michelle Cohn presenting
Photo: Michelle Cohn, Georgia Zellou, & Bruno Ferenc Segedin
The papers:
  • Cathryn Snyder, Michelle Cohn, & Georgia Zellou (2019). Individual variation in cognitive processing style predicts differences in phonetic imitation of device and human voices.
  • Michelle Cohn & Georgia Zellou (2019). Expressiveness influences human vocal alignment toward voice-AI.
  • Bruno Ferenc Segedin, Michelle Cohn, & Georgia Zellou (2019). Perceptual adaptation to device and human voices: Learning and generalization of a phonetic shift across real and voice-AI talkers.
  • Michelle Cohn, Georgia Zellou, & Santiago Barreda (2019). The role of musical experience in the perceptual weighting of acoustic cues for the obstruent coda voicing contrast in American English.

Congrats to Michelle Cohn, NSF Postdoc

8/1/2019
Congrats to Michelle Cohn for receiving a two-year NSF Postdoctoral Fellowship to work with Dr. Georgia Zellou (PI: UC Davis Phonetics Lab, Dept. of Linguistics), Dr. Zhou Yu (PI: UC Davis Language and Multimodal Interaction Lab, Dept. of Computer Science), and Dr. Katharine Graf Estes (PI: UC Davis Language Learning Lab, Dept. of Psychology).

Click here to see the official NSF posting

Amazon VUI Summit

6/17/2019
Dr. Georgia Zellou and Dr. Michelle Cohn gave invited talks at the June 2019 Voice User Interface (VUI) Summit at the Amazon headquarters.
  • Zellou, G. Exploring human “speech rules” during vocal interactions with voice-AI. 
  • Cohn, M. Exploring cognitive-emotional expression: The impact of voice “emojis” in human-Alexa interaction.
Photo: Michelle Cohn (left) and Georgia Zellou (right) at the Amazon headquarters
Photo: Dr. Georgia Zellou

Congrats to presenters at the UCD Symposium on Language Research

5/28/2019

'Most Innovative Research' Panel

Undergraduate researcher Melina Sarian did a fantastic job presenting her research with collaborators Dr. Georgia Zellou and Dr. Michelle Cohn at the 'Most Innovative Research' Panel.
Photo: Melina Sarian presenting at the 'Most Innovative Research' Panel

Dynamics of Voice-AI Interaction Panel

Bruno Ferenc Segedin and Michelle Cohn presented two talks in our 'Dynamics of Voice-AI Interaction' panel.

Location: 251 Kerr Hall

Principal Investigators

Georgia Zellou, Ph.D.
Santiago Barreda, Ph.D.

Contact: ucdphonlab@gmail.com