True or False? Automatic Detection of Deception and Trust in Spoken Dialogue
Spoken language processing (SLP) aims to teach computers to understand human speech. Automatic deception detection from speech is one of the few problems in AI where machines can potentially perform significantly better than humans, who detect lies only about 50% of the time, roughly at chance. In this talk, I will discuss my work on training computers to distinguish between deceptive and truthful speech using language features. My work combines machine learning with insights from psychology and linguistics to develop robust techniques for detecting deceptive speech. I will also present ongoing research aimed at understanding the characteristics of trustworthy language. This work improves our scientific understanding of deception and trust, and it has implications for security applications and for increasing trust in human-computer interaction.
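To make the setup concrete, the pipeline described above can be caricatured as a text classifier trained on labeled deceptive and truthful utterances. The sketch below uses a hand-rolled naive Bayes model over word features; the data, features, and model are invented for illustration and are far simpler than the acoustic-prosodic and linguistic features discussed in the talk.

```python
# Toy sketch: classify utterances as "truth" or "lie" from word features.
# The training examples below are invented; real systems use large
# labeled corpora and much richer feature sets.
from collections import Counter
import math

train = [
    ("i was at home all evening watching a movie", "truth"),
    ("honestly i would never ever take anything believe me", "lie"),
    ("we met for coffee and talked about work", "truth"),
    ("to be honest i swear i definitely was not there", "lie"),
]

def train_nb(examples):
    """Collect per-class word counts and class priors for naive Bayes."""
    counts = {"truth": Counter(), "lie": Counter()}
    priors = Counter()
    for text, label in examples:
        priors[label] += 1
        counts[label].update(text.split())
    return counts, priors

def classify(text, counts, priors):
    """Return the most likely label, using add-one smoothing."""
    vocab = {w for c in counts.values() for w in c}
    best, best_lp = None, float("-inf")
    for label, wc in counts.items():
        total = sum(wc.values())
        lp = math.log(priors[label] / sum(priors.values()))
        for w in text.split():
            lp += math.log((wc[w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

counts, priors = train_nb(train)
```

The interesting research questions, of course, are which features actually signal deception and how well such models generalize across speakers, which is what the talk addresses.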
Sarah Ita Levitan is a postdoctoral research scientist in the Department of Computer Science at Columbia University. Her research interests are in spoken language processing, and she is currently working on identifying acoustic-prosodic and linguistic indicators of trustworthy speech, as well as identifying linguistic characteristics of trustworthy news. She received her PhD in Computer Science at Columbia University, advised by Dr. Julia Hirschberg; her dissertation addressed the problem of automatic deception detection from speech. Sarah Ita was a 2018 Knight News Innovation Fellow and a recipient of the NSF Graduate Research Fellowship and the NSF IGERT From Data to Solutions fellowship. She has interned at Google Research and at Interactions LLC.
Mark Santolucito, Yale University
Program Synthesis for Software Systems
Program synthesis is the process of automatically generating code from specifications. This specification, which describes the intended behavior of the code, can be expressed explicitly as logical formulas, given in the form of illustrative examples, or inferred from context. There are decades of research on program synthesis, but only recently has synthesis scaled to industrial benchmarks. However, these applications have been limited to simple data transformations and automation tasks.
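The "specification by examples" idea above can be sketched as a tiny enumerative synthesizer: search over compositions of primitive operations until one is consistent with every input-output pair. The DSL of string operations below is invented for illustration; real synthesizers use far more sophisticated search, pruning, and specification languages.

```python
# Toy programming-by-example synthesizer over a small string DSL.
from itertools import product

# The DSL: a few primitive string operations (chosen for illustration).
OPS = {
    "upper":   str.upper,
    "lower":   str.lower,
    "strip":   str.strip,
    "first":   lambda s: s.split()[0],
    "reverse": lambda s: s[::-1],
}

def synthesize(examples, max_depth=3):
    """Return op names whose composition matches all examples, or None."""
    for depth in range(1, max_depth + 1):
        for names in product(OPS, repeat=depth):
            def run(s, names=names):
                for n in names:
                    s = OPS[n](s)
                return s
            try:
                if all(run(inp) == out for inp, out in examples):
                    return names
            except IndexError:  # e.g. "first" applied to an empty string
                continue
    return None

# Specification by example: extract the first word, uppercased.
examples = [("  hello world ", "HELLO"), ("program synthesis", "PROGRAM")]
prog = synthesize(examples)
```

Even this brute-force search illustrates the core challenge the talk addresses: the search space grows exponentially with program size, so scaling synthesis to real software systems requires much smarter techniques than enumeration.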
In this talk, I outline new directions in software synthesis aimed at increasing scalability and expressivity, so that synthesis tools can assist in the development of large, real-world software systems. With these advances, we have successfully synthesized systems such as mobile apps, self-driving car controllers, and embedded systems. We have also applied synthesis to novel domains, including configuration file analysis and digital signal processing. I conclude by describing future work on the usability of program synthesis and the challenges of integrating synthesis into developer workflows.
Mark Santolucito is completing his PhD at Yale University under the supervision of Ruzica Piskac. Mark’s work focuses on program synthesis and computer music. His research has been published at top conferences, including CAV, OOPSLA, CHI, and SIGCSE. His work has also been recognized by industry, including by Amazon Web Services, where he interned and applied his work on configuration file analysis. He was invited to the Heidelberg Laureate Forum and received the Advanced Graduate Leadership award from Yale. He helped found the computer science department at Geumgang University in South Korea and has taught a Creative Embedded Systems course at Yale.