Computer Science Seminars
Smaranda Muresan, Columbia University - October 7th, 2021
Knowledge-enhanced Text Generation: The Curious Case of Figurative Language and Argumentation
Large-scale language models based on transformer architectures, such as GPT-3 or BERT, have advanced the state of the art in Natural Language Understanding and Generation. However, even though these models have shown impressive performance on a variety of tasks, they often struggle to model implicit and/or non-compositional meaning, such as figurative language and argumentative text. In this talk, I will present some of our recent work on text generation models for figurative language and argumentation. There are two main challenges we have to address to make progress in this space: 1) the need to model the common-sense and/or connotative knowledge required for these tasks; and 2) the lack of large training datasets. I will discuss our proposed theoretically grounded, knowledge-enhanced text generation models for figurative language such as metaphor and simile, as well as for enthymeme reconstruction and, if time permits, argument reframing. I will conclude by discussing opportunities and remaining challenges for incorporating knowledge in neural text generation systems.
Smaranda Muresan is a Research Scientist at the Data Science Institute at Columbia University and an Amazon Scholar. Before joining Columbia, she was a faculty member in the School of Communication and Information at Rutgers University, where she co-founded the Laboratory for the Study of Applied Language Technologies and Society. At Rutgers, she was the recipient of the Distinguished Achievements in Research Award. Her research interests are in computational semantics and discourse, particularly figurative language understanding and generation, argument mining and generation, and fact-checking. Most recently, she has been interested in applying NLP to education and public health, as well as in building NLP technologies for low-resource languages. She received best paper awards at SIGDIAL 2017 and ACL 2018 (short paper). She is currently serving as a board member of the North American Chapter of the Association for Computational Linguistics (NAACL) and as a Program Co-Chair for ACL 2022.
Joshua Hodges, Audio Programmer Ltd. - April 15th, 2020
Building the Audio Programmer: Power through Inexperience, Transparency, and Sharing
In 2017, Joshua Hodge started a YouTube channel called The Audio Programmer to teach a skill that he knew hardly anything about - audio software development. Since then, The Audio Programmer has become a central hub for audio developers of all levels. How is this possible?
In this talk, Joshua Hodge will discuss lessons learned while creating The Audio Programmer - how his inexperience became the best experience, how being excluded allowed him to be inclusive, and the ultimate power of sharing your work.
Ilya Volkovich, University of Michigan - March 3rd, 2020
Algebraic Problems: The Frontier of Efficient Randomized Computation
Randomness is a valuable resource in many computational tasks. Indeed, the security and/or the accuracy of many randomized algorithms and protocols rely on the random bits being truly random and independent. However, in practice such random bits are elusive, which may compromise the performance of the underlying systems. This motivates the following fundamental question:
Can every computational task that requires randomness be carried out deterministically, paying perhaps only a small overhead?
Meanwhile, the nature of many algebraic problems makes them amenable to randomized algorithms. For example: a random set of vectors is linearly independent, and a non-zero low-degree polynomial evaluates to a non-zero value at a random point. Thus, one can easily find a set of independent vectors, or a non-zero assignment, by sampling uniformly at random. Indeed, it is not surprising that the frontier of efficient randomized computation consists of algebraic problems. Among the frontier problems are Polynomial Identity Testing, Polynomial Factorization, and others.
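The "random assignment to a non-zero low-degree polynomial is non-zero" observation is the basis of the classic randomized algorithm for Polynomial Identity Testing. As a rough illustration (not the speaker's own formulation), the following sketch tests a black-box polynomial against zero by evaluating it at random points, relying on the Schwartz-Zippel lemma: a non-zero polynomial of total degree d over a field of size q vanishes at a uniformly random point with probability at most d/q. All names here (`schwartz_zippel_pit`, the example polynomials) are illustrative:

```python
import random

def schwartz_zippel_pit(p, num_vars, degree, field_size=2**31 - 1, trials=20):
    """Randomized test of whether the black-box polynomial p is identically zero.

    p           : callable taking a tuple of ints, evaluated mod field_size
    num_vars    : number of variables of p
    degree      : an upper bound on the total degree of p
    field_size  : size of the (prime) field we sample from
    trials      : number of independent random evaluations

    By the Schwartz-Zippel lemma, each trial misses a non-zero p with
    probability at most degree / field_size, so the error probability
    shrinks exponentially in the number of trials.
    """
    if field_size <= degree:
        raise ValueError("field must be larger than the degree bound")
    for _ in range(trials):
        point = tuple(random.randrange(field_size) for _ in range(num_vars))
        if p(point) % field_size != 0:
            return False  # found a witness point: p is provably non-zero
    return True  # every random evaluation was zero: p is almost surely zero


# (x + y)^2 - (x^2 + 2xy + y^2) is identically zero ...
zero_poly = lambda v: (v[0] + v[1]) ** 2 - (v[0] ** 2 + 2 * v[0] * v[1] + v[1] ** 2)
# ... while (x + y)^2 - (x^2 + y^2) = 2xy is not.
nonzero_poly = lambda v: (v[0] + v[1]) ** 2 - (v[0] ** 2 + v[1] ** 2)

print(schwartz_zippel_pit(zero_poly, num_vars=2, degree=2))     # True
print(schwartz_zippel_pit(nonzero_poly, num_vars=2, degree=2))  # False (w.h.p.)
```

Note the asymmetry the abstract alludes to: refuting identity is easy with randomness (one witness point suffices), whereas no deterministic polynomial-time algorithm for the general black-box problem is known.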
In this talk, I will discuss my research on the relationship between randomness, computation and algebra. Time permitting, I will also discuss the problems I have been working on and some recent connections to cryptography and machine learning.
Dr. Ilya Volkovich is a Senior Lecturer in the Department of Computer Science and Engineering at the University of Michigan, where he has taught courses in the theory of computation for several years. Previously, he was a Postdoctoral Research Associate in the Computer Science Department at Princeton University and held a visiting position at the Institute for Advanced Study. In 2012, he obtained his Ph.D. in Computer Science from the Technion - Israel Institute of Technology, advised by Prof. Amir Shpilka. His research interests are in the broad area of theoretical computer science and discrete mathematics. More specifically, he is interested in aspects of algebraic complexity, randomness in computation, computational learning theory, and their applications to cryptography and machine learning.