UCD Phonetics Lab

Call for RAs for human-Alexa project

2/27/2020

 
Call for RAs: Alexa- vs. human-directed speech project (Dr. Michelle Cohn, UCD Phonetics Lab)

Postdoctoral fellow Dr. Michelle Cohn is recruiting undergraduate research assistants (RAs) to serve as confederates in the Alexa- (Amazon Echo) vs. human-directed speech project for Spring 2020 (with the possibility of a paid STDT 2 position in Summer 2020 and LIN 199 units in Fall 2020).

RAs are expected to work approximately 6-8 hours a week on the project and will receive 2 units of LIN 199 credit per quarter (note that 4 units count as an upper-division elective toward the LIN major).

Details (Spring Qtr 2020):
  • Act as a confederate in an interactive psycholinguistic experiment 
  • Receive 2 units of LIN 199 credit (6-8 hours/week)
  • Attend lab meetings (if possible)
The project: Alexa- vs. human-directed speech
  • We are comparing how adults and children talk to Amazon's Alexa versus a real human (you!). There will be both scripted and open-ended interactions between you and the participants (and they will complete the same tasks with Alexa). Note that the scripted portion will not require memorization.

Requirements:
  • Since 'Alexa' has apparent social characteristics ('female', native English speaker), we are aiming to match those properties in our confederates. At this time, we’re seeking only female native English speakers. 

Desired qualifications:
  • Interest in research! (Prior experience not required) 
  • Linguistics or Cognitive Science major
  • Enthusiastic and interested in talking to new people

Application deadline: March 8, 2020 (by midnight PST)
To apply, email the following to the project PI, Dr. Cohn (mdcohn@ucdavis.edu): 
  • CV / Resume
  • Unofficial academic transcripts
  • 1-2 paragraph description of your interest in the RA position (why do you want to be an RA? Why do you think you'll be a good fit?) 
  • Spring 2020 course schedule 

Next steps: 
  • After receiving your application, we may offer you an interview with the project PI (Dr. Michelle Cohn).
  • Training will begin the first week of Spring quarter 2020. 

Interspeech 2019

9/24/2019

 
Georgia Zellou, Michelle Cohn, & Bruno Ferenc Segedin presented 4 papers at Interspeech in Graz, Austria. 
[Image: View of Interspeech coffee break]
[Image: Georgia Zellou presenting]
[Image: Michelle Cohn presenting]
[Image: Michelle Cohn, Georgia Zellou, & Bruno Ferenc Segedin]
See below for links to the papers:
  • Cathryn Snyder, Michelle Cohn, & Georgia Zellou (2019). Individual variation in cognitive processing style predicts differences in phonetic imitation of device and human voices.
  • Michelle Cohn & Georgia Zellou (2019). Expressiveness influences human vocal alignment toward voice-AI.
  • Bruno Ferenc Segedin, Michelle Cohn, & Georgia Zellou (2019). Perceptual adaptation to device and human voices: learning and generalization of a phonetic shift across real and voice-AI talkers.
  • Michelle Cohn, Georgia Zellou, & Santiago Barreda (2019). The role of musical experience in the perceptual weighting of acoustic cues for the obstruent coda voicing contrast in American English.

Congrats to Michelle Cohn, NSF Postdoc

8/1/2019

 
Congrats to Michelle Cohn for receiving a two-year NSF Postdoctoral Fellowship to work with Dr. Georgia Zellou (PI: UC Davis Phonetics Lab, Dept. of Linguistics), Dr. Zhou Yu (PI: UC Davis Language and Multimodal Interaction Lab, Dept. of Computer Science), and Dr. Katharine Graf Estes (PI: UC Davis Language Learning Lab, Dept. of Psychology).

Click here to see the official NSF posting.

Amazon VUI Summit

6/17/2019

 
Dr. Georgia Zellou and Dr. Michelle Cohn gave invited talks at the June 2019 Voice User Interface (VUI) Summit at the Amazon headquarters.
  • Zellou, G. Exploring human “speech rules” during vocal interactions with voice-AI. 
  • Cohn, M. Exploring cognitive-emotional expression: The impact of voice “emojis” in human-Alexa interaction.
[Image: Michelle Cohn (left) and Georgia Zellou (right) at the Amazon headquarters]
[Image: Dr. Georgia Zellou]

How do people talk to Alexa?

7/15/2018

 
A team led by Georgia Zellou has begun a collaborative research effort exploring how people adjust their speech to digital devices, such as Siri or Alexa. 

For example, recent Ph.D. graduate Michelle Cohn has been recording conversations with Gunrock, a social bot created by Zhou Yu's lab that is currently in the semifinals for the Amazon Alexa Prize. You can talk to it yourself if you have an Alexa-enabled device: just say "Let's chat!" and it will randomly invoke one of the three social bots in the running for the grand prize.
[Image: Our microphone next to the Amazon Echo, capturing the interaction]


Location: 251 Kerr Hall

Principal Investigators

Georgia Zellou, Ph.D.
Santiago Barreda, Ph.D.

Contact: ucdphonlab@gmail.com