Apr 20

Computer Science Seminar: Smaranda Muresan (Columbia University)

11:00 AM - 12:00 PM
Milstein 912 and Zoom

Speaker: Smaranda Muresan (Columbia University)
Title: Human-centric Natural Language Processing for Social Good and Responsible Computing

The seminar will be available for in-person and Zoom participation. If you would like to receive the Zoom link, please register using the “Register” button above.

Large language models (LLMs) constitute a paradigm shift in Natural Language Processing and its applications across all domains. To move toward human-centric NLP designed for social good and responsible computing, I argue we need knowledge-aware NLP systems and human-AI collaboration frameworks. NLP systems that interact with humans need to be knowledge aware (e.g., of linguistic, commonsense, and sociocultural norms) and context aware (e.g., of social and perceptual context) so that they communicate with humans more effectively, safely, and responsibly. Moreover, NLP systems should be able to collaborate with humans to create high-quality datasets for training and/or evaluating NLP models, to help humans solve tasks, and ultimately to align better with human values. In this talk, I will give a brief overview of my lab’s research on NLP for social good and responsible computing (e.g., misinformation detection, NLP for education and public health, and building NLP technologies with language and cultural diversity in mind). I will highlight key innovations in theory-guided and knowledge-aware models that allow us to address two important challenges: the lack of training data and the need to model commonsense knowledge. I will also present some of our recent work on human-AI collaboration frameworks for building high-quality datasets for tasks such as generating visual metaphors and modeling similarities and differences in cross-cultural norms.


Smaranda Muresan is a Research Scientist at the Data Science Institute at Columbia University and an Amazon Scholar. Before joining Columbia, she was a faculty member in the School of Communication and Information at Rutgers University, where she co-founded the Laboratory for the Study of Applied Language Technologies and Society. At Rutgers, she received the Distinguished Achievements in Research Award. Her research focuses on human-centric Natural Language Processing for social good and responsible computing. She develops theory-guided and knowledge-aware computational models for understanding and generating language in context (e.g., visual, social, multilingual, multicultural), with applications to computational social science, education, and public health. Research topics she has worked on over the years include argument mining and generation, fact-checking and misinformation detection, figurative language understanding and generation (e.g., sarcasm, metaphor, idioms), and multilingual language processing for low-resource and endangered languages. Her recent research interests include explainable models and human-AI collaboration frameworks for high-quality dataset creation. She received best paper awards at SIGDIAL 2017 and ACL 2018 (short paper). She served as a board member of the North American Chapter of the Association for Computational Linguistics (NAACL) in 2020-2021, as co-founder and co-chair of the New York Academy of Sciences’ Annual Symposium on NLP/Dialog/Speech (2019-2020), and as Program Co-Chair for SIGDIAL 2020 and ACL 2022.