Rivka Levitan ’09, an assistant professor in the Department of Computer and Information Sciences, knows that you’re more likely to engage with an avatar, or virtual assistant, if it talks like you—in pitch, pace, and a variety of other factors. For her dissertation, Levitan designed a game that required users to ask for help from an avatar, and found that people preferred an avatar that began to match their speaking style.
What’s not well understood is how all the variables that contribute to entrainment, or the phenomenon by which conversational partners start to talk more like each other, combine to increase trust and rapport.
Earlier this year, Levitan received a National Science Foundation CAREER award to study entrainment in order to improve human-to-computer communication, and to learn more about human-to-human conversation while she’s at it.
“The focus of the grant is to take a bunch of different factors that we previously identified as being important to entrainment, and to start building a more integrated model so that we know how all these things work together and we can be more specific about how people behave in different circumstances,” explains Levitan, who earned her undergraduate degree in computer science and went on to a Ph.D. in the field at Columbia University.
The CAREER award supports junior faculty who show exceptional promise and exemplify the role of teacher-scholars. It comes with $500,000 in funding over five years. Levitan was one of five CUNY professors to receive the award in 2019.
As part of the research, Levitan intends to build a corpus, or collection, of audio recordings of dialogue that includes multiple conversations per speaker. She will then tease apart who is entraining to whom.
“It will allow us to answer more definitively things about gender dynamics and power dynamics, and to look at people’s intrinsic behavior and how it is modified by their partner,” she explains.
The research has implications for analyzing corporate dialogue for quality control and for technology, such as virtual assistants.
“Entrainment is known to be associated with rapport and trust, so there’s this idea that if Alexa entrains, you might like it better, you might use it more, or you might have longer conversations,” Levitan says. “There’s a lot we still don’t understand about how or why this happens. This research aims to contribute to that understanding.”