TrustNLP: Fourth Workshop on Trustworthy Natural Language Processing

Colocated with the 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024)

About

Recent advances in Natural Language Processing, and the emergence of pretrained Large Language Models (LLMs) specifically, have made NLP systems omnipresent in various aspects of our everyday life. In addition to traditional examples such as personal voice assistants and recommender systems, more recent developments include content-generation models such as ChatGPT and text-to-image models such as DALL-E. While these emergent technologies have unquestionable potential to power various innovative NLP and AI applications, they also pose a number of challenges regarding their safe and ethical use. To address such challenges, NLP researchers have formulated various objectives, e.g., making models more fair, safe, and privacy-preserving. However, these objectives are often pursued separately, which is a major limitation since it is often important to understand the interplay and/or tension between them. For instance, meeting a fairness objective might require access to users’ demographic information, which creates tension with privacy objectives. The goal of this workshop is to move toward a more comprehensive notion of Trustworthy NLP by bringing together researchers working on these distinct yet related topics, as well as their intersection.

Call for Papers

Topics

We invite papers which focus on different aspects of safe and trustworthy language modeling. Topics of interest include (but are not limited to):

  • Secure, Faithful & Trustworthy Generation with LLMs
  • Fairness in LLM alignment, Human Preference Elicitation, Participatory NLP
  • Data Privacy Preservation and Data Leakage Issues in LLMs
  • Toxic Language Detection and Mitigation
  • Red-teaming, backdoor or adversarial attacks and defenses for LLM safety
  • Explainability and Interpretability of LLM generation
  • Robustness of LLMs
  • Mitigating LLM Hallucinations & Misinformation
  • Fairness and Bias in multi-modal generative models: Evaluation and Treatments
  • Industry applications of Trustworthy NLP
  • Trustworthy NLP challenges and opportunities for Latin American and Caribbean languages
  • Regionally-relevant NLP fairness applications (toxicity, sentiment, content moderation, translation, etc.)
We welcome contributions that draw upon interdisciplinary knowledge to advance Trustworthy NLP. This may include working with, synthesizing, or incorporating knowledge across areas of expertise, sociopolitical systems, cultures, or norms.

Important Dates

  • Wed, March 27th, 2024: Workshop Paper Due Date (Direct Submission via Softconf) — EXTENDED to Tues, April 2nd, 2024
  • Friday, April 5th, 2024: Workshop Paper Due Date (Fast-Track)
  • April 23rd, 2024: Notification of Acceptance
  • May 3rd, 2024: Camera-ready Papers Due
  • June 21/22, 2024: TrustNLP Workshop day

Submission Information

All submissions undergo double-blind peer review (with author names and affiliations removed) by the program committee and will be assessed based on their relevance to the workshop themes.

All submissions go through the Softconf START conference management system. To submit, use this Softconf submission link.

Submitted manuscripts may be up to 8 pages long for full papers and up to 4 pages long for short papers. Please follow the NAACL submission policies. Both full and short papers may have unlimited pages for references and appendices. Please note that at least one author of each accepted paper must register for the workshop and present the paper. Template files can be found here.

We also ask authors to include a limitations section and a broader impact statement, following the guidelines from the main conference.

Fast-Track Submission

If your paper has been reviewed by ACL, EMNLP, EACL, or ARR and its average rating is higher than 2.5 (in either average soundness or excitement score), it qualifies for fast-track submission. In the appendix, please include the reviews and a short statement discussing which parts of the paper have been revised.

Non-Archival Option

NAACL workshops are traditionally archival. To allow dual submission of work, we are also including a non-archival track. If accepted, authors of these submissions will still participate in the workshop and present their work. A reference to the paper will be hosted on the workshop website (if desired), but the paper will not be included in the official proceedings. Please submit through Softconf and indicate that this is a cross submission at the bottom of the submission form. You can also skip this step and inform us of your non-archival preference after the reviews.

Policies

Papers that have been accepted elsewhere or are under review may be submitted to the workshop, but they will not be included in the proceedings.

No anonymity period will be required for papers submitted to the workshop, per the latest updates to the ACL anonymity policy. However, submissions must still remain fully anonymized.

Invited Speakers

Jieyu Zhao

Assistant Professor, University of Southern California

Jieyu Zhao is an assistant professor in the Computer Science Department at the University of Southern California. Prior to that, she was an NSF Computing Innovation Fellow at the University of Maryland, College Park. Jieyu received her Ph.D. from the Computer Science Department at UCLA. Her research interest lies in the fairness of ML/NLP models. She received the EMNLP 2017 Best Long Paper Award, was a recipient of a 2020 Microsoft PhD Fellowship, and was selected to participate in the 2021 Rising Stars in EECS workshop. Her research has been covered by news media such as Wired and The Daily Mail. She was invited by UN Women Beijing to a panel discussion about gender equality and social responsibility.

Talk Title: Building Accountable NLP Models for Social Good

The rapid advancement of natural language processing (NLP) technologies has unlocked a myriad of possibilities for positive societal impact, ranging from enhancing accessibility and communication to supporting disaster response and public health initiatives. However, the deployment of these technologies also raises critical concerns regarding accountability, fairness, transparency, and ethical use. In this talk, I will discuss our efforts for auditing NLP models, detecting and mitigating biases, and understanding how LLMs make decisions. We hope to open the conversation to foster a community-wide effort towards more accountable and inclusive NLP practices.


Prasanna Sattigeri

Principal Research Scientist, IBM Research

Prasanna Sattigeri is a Principal Research Scientist at IBM Research AI and the MIT-IBM Watson AI Lab, where his primary focus is on developing reliable AI solutions. His research interests encompass areas such as generative modeling, uncertainty quantification, and learning with limited data. His current projects are focused on the governance and safety of large language models (LLMs), aiming to establish both theoretical frameworks and practical systems that ensure these models are reliable and trustworthy. He has played a significant role in the development of several well-known open-source trustworthy AI toolkits, including AI Fairness 360, AI Explainability 360, and Uncertainty Quantification 360.

Talk Title: TBD


Ahmad Beirami

Research Scientist, Google Research

Ahmad Beirami is a research scientist at Google Research, co-leading a research team on building safe, helpful, and scalable generative language models. At Meta AI, he led research to power the next generation of virtual digital assistants with AR/VR capabilities through robust generative language modeling. At Electronic Arts, he led the AI agent research program for automated playtesting of video games and cooperative reinforcement learning. Before moving to industry in 2018, he held a joint postdoctoral fellowship at Harvard and MIT, focused on problems at the intersection of core machine learning and information theory. He is the recipient of the 2015 Sigma Xi Best PhD Thesis Award from Georgia Tech.

Talk Title: TBD

Schedule

TBD

Committee

Organizers

Program Committee

  • Saied Alshahrani
  • Connor Baumler
  • Gagan Bhatia
  • Keith Burghardt
  • Yang Trista Cao
  • Javier Carnerero Cano
  • Canyu Chen
  • Xinyue Chen
  • Jwala Dhamala
  • Árdís Elíasdóttir
  • Aram Galstyan
  • Usman Gohar
  • Zihao He
  • Pengfei He
  • Qian Hu
  • Satyapriya Krishna
  • Jooyoung Lee
  • Yanan Long
  • Subho Majumdar
  • Ninareh Mehrabi
  • Sahil Mishra
  • Isar Nejadgholi
  • Huy Nghiem
  • Anaelia Ovalle
  • Jieyu Zhao
  • Aishwarya Padmakumar
  • Kartik Perisetla
  • Salman Rahman
  • Chahat Raj
  • Anthony Rios
  • Patricia Thaine
  • Simon Yu
  • Xinlin Zhuang
  • Chupeng Zhang
  • Chenyang Zhu
Interested in reviewing for TrustNLP?

If you are interested in reviewing submissions, please fill out this form.

Questions?

Please contact us at trustnlp24naaclworkshop@googlegroups.com.