Colocated with the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024)
Recent advances in Natural Language Processing (NLP), and the emergence of pretrained Large Language Models (LLMs) in particular, have made NLP systems omnipresent in everyday life. In addition to traditional examples such as personal voice assistants and recommender systems, more recent developments include content-generation models such as ChatGPT, text-to-image models such as DALL-E, and so on. While these emerging technologies have unquestionable potential to power innovative NLP and AI applications, they also pose a number of challenges for safe and ethical use. To address these challenges, NLP researchers have formulated various objectives, e.g., to make models more fair, safe, and privacy-preserving. However, these objectives are often pursued separately, which is a major limitation, since it is often important to understand the interplay and/or tension between them. For instance, meeting a fairness objective might require access to users’ demographic information, which creates tension with privacy objectives. The goal of this workshop is to move toward a more comprehensive notion of Trustworthy NLP by bringing together researchers working on these distinct yet related topics, as well as on their intersections.
We invite papers which focus on different aspects of safe and trustworthy language modeling. Topics of interest include (but are not limited to):
All submissions undergo double-blind peer review (with author names and affiliations removed) by the program committee and are assessed on their relevance to the workshop themes.
All submissions go through the Softconf START conference management system. To submit, use this Softconf submission link.
Submitted manuscripts must be 8 pages for full papers and 4 pages for short papers; both may have unlimited pages for references and appendices. Please follow NAACL submission policies. Note that at least one author of each accepted paper must register for the workshop and present the paper. Template files can be found here.
We also ask authors to include a limitations section and a broader impact statement, following the guidelines from the main conference.
If your paper has been reviewed by ACL, EMNLP, EACL, or ARR and its average rating (either the average soundness or the excitement score) is higher than 2.5, it qualifies for the fast track. In the appendix, please include the reviews and a short statement describing which parts of the paper have been revised.
NAACL workshops are traditionally archival. To allow dual submission of work, we also offer a non-archival track. If accepted, authors of these submissions will still participate in and present their work at the workshop. A reference to the paper will be hosted on the workshop website (if desired) but will not be included in the official proceedings. Please submit through Softconf and indicate that this is a cross submission at the bottom of the submission form. You can also skip this step and inform us of your non-archival preference after the reviews.
Papers that have been accepted elsewhere or are currently under review may be submitted to the workshop but will not be included in the proceedings.
Per the latest updates to the ACL anonymity policy, no anonymity period is required for papers submitted to the workshop. However, submissions must still be fully anonymized.
Jieyu Zhao is an assistant professor in the Computer Science Department at the University of Southern California. Prior to that, she was an NSF Computing Innovation Fellow at the University of Maryland, College Park. Jieyu received her Ph.D. from the Computer Science Department at UCLA. Her research focuses on the fairness of ML/NLP models. Her work received the EMNLP 2017 Best Long Paper Award. She was a recipient of a 2020 Microsoft PhD Fellowship and was selected to participate in the 2021 Rising Stars in EECS workshop. Her research has been covered by news media such as Wired and The Daily Mail. She was invited by UN Women Beijing to a panel discussion on gender equality and social responsibility.
The rapid advancement of natural language processing (NLP) technologies has unlocked a myriad of possibilities for positive societal impact, ranging from enhancing accessibility and communication to supporting disaster response and public health initiatives. However, the deployment of these technologies also raises critical concerns regarding accountability, fairness, transparency, and ethical use. In this talk, I will discuss our efforts to audit NLP models, detect and mitigate biases, and understand how LLMs make decisions. We hope to open a conversation that fosters a community-wide effort toward more accountable and inclusive NLP practices.
Prasanna Sattigeri is a Principal Research Scientist at IBM Research AI and the MIT-IBM Watson AI Lab, where his primary focus is on developing reliable AI solutions. His research interests encompass areas such as generative modeling, uncertainty quantification, and learning with limited data. His current projects are focused on the governance and safety of large language models (LLMs), aiming to establish both theoretical frameworks and practical systems that ensure these models are reliable and trustworthy. He has played a significant role in the development of several well-known open-source trustworthy AI toolkits, including AI Fairness 360, AI Explainability 360, and Uncertainty Quantification 360.
TBD
Organizers
Program Committee
If you are interested in reviewing submissions, please fill out this form.
Please contact us at trustnlp24naaclworkshop@googlegroups.com.