TrustNLP: First Workshop on Trustworthy Natural Language Processing

Colocated with the Annual Conference of the North American Chapter of the Association for Computational Linguistics

About

Recent progress in Artificial Intelligence (AI) and Natural Language Processing (NLP) has greatly increased the presence of these technologies in everyday consumer products over the last decade. Common examples include virtual assistants, recommendation systems, and personal healthcare management systems. Advances in these fields have historically been driven by the goal of improving model performance as measured by accuracy, but recently the NLP research community has begun incorporating additional constraints to ensure that models are fair and privacy-preserving. These constraints are rarely considered together, however, even though critical questions arise at their intersection: for example, meeting fairness objectives typically requires knowledge of the demographic groups users belong to, which can be in tension with privacy objectives. In this workshop, we aim to bring together these distinct yet closely related topics.

Call for papers

Overview

We invite papers that focus on developing models which are “explainable, fair, privacy-preserving, causal, and robust” (Trustworthy ML Initiative). Topics of interest include (but are not limited to):

  • Differential Privacy
  • Fairness and Bias: Evaluation and Treatments
  • Model Explainability and Interpretability
  • Accountability
  • Ethics
  • Industry applications of Trustworthy NLP
  • Causal Inference
  • Secure and trustworthy data generation

Important Dates

  • March 29, 2021: Submission deadline

  • April 15, 2021: Notification of Acceptance

  • April 26, 2021: Camera-ready papers due

  • June 10, 2021: Workshop on Trustworthy NLP (TrustNLP)

Submission Policy

All submissions will receive double-blind peer review (with author names and affiliations removed) by the program committee and will be judged on their relevance to the workshop themes.
Papers that have been accepted at or are under review at another venue may also be submitted to the workshop. Full papers may be up to 8 pages and short papers up to 4 pages; both may have unlimited pages for references and appendices. Please note that at least one author of each accepted paper must register for the workshop and present the paper. Template files can be found here: https://www.overleaf.com/latex/templates/naacl-hlt-2021-latex-template/kvjhhyjsvmxf.

We also ask authors to include a broader impact and ethical concerns statement, following guidelines from the main conference.

Please submit to https://www.softconf.com/naacl2021/trustnlp2021/

Non-Archival option

NAACL workshops are traditionally archival. To allow dual submission of work, we also offer a non-archival track. Authors of accepted non-archival submissions will still participate in the workshop and present their work; a reference to the paper will be hosted on the workshop website (if desired), but the paper will not be included in the official proceedings. Please submit through Softconf and indicate that this is a cross submission at the bottom of the submission form. You may also skip this step and inform us of your non-archival preference after the reviews.

Anonymity Period

We will follow NAACL’s anonymity policy and require full anonymity until the time of acceptance notification (April 15, 2021).


Program

The tentative program (subject to change) is below.

Time                Event
9:00-9:10 am        Opening Address
9:10-9:50 am        Keynote 1: Richard Zemel
10:00-11:00 am      Paper Presentations
11:00-11:15 am      Break
11:15 am-12:15 pm   Paper Presentations
12:15-1:30 pm       Lunch break
1:00-2:00 pm        Mentorship Meeting
2:00-2:50 pm        Keynote 2: Mandy Korpusik
2:50-3:00 pm        Break
3:00-4:00 pm        Poster session
4:15-5:05 pm        Keynote 3: Robert Munro
5:05-5:15 pm        Closing Address

Speakers

Measuring and Mitigating Bias in Training Data

Robert Munro,
Author, Human-in-the-Loop Machine Learning

Annotators are the largest and most diverse workforce in machine learning. They can teach the broader technical community much about inclusive participation for designing machine learning applications, especially when the annotators are subject matter experts building training data for applications that they will use. This talk covers three aspects of annotation: working with different annotation workforces; sampling data to improve diversity; and quality control for annotations when there are multiple subjective points of view. In each case, the talk will cover some common approaches to measuring and mitigating bias and the relative strengths and limitations of each approach.
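One common quality-control approach in this setting, sketched below under our own assumptions rather than taken from the talk, is to aggregate annotations into soft label distributions instead of forcing a majority vote, and to use the entropy of each item's distribution to flag genuinely subjective items. The function names and example annotations are hypothetical.

```python
from collections import Counter
import math

def soft_labels(annotations):
    """Aggregate raw annotations into a label distribution instead of
    forcing a single majority label, so minority viewpoints are preserved."""
    counts = Counter(annotations)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def disagreement_entropy(distribution):
    """Shannon entropy of the label distribution; high values flag items
    where annotators genuinely disagree (subjective or ambiguous items)."""
    return -sum(p * math.log2(p) for p in distribution.values() if p > 0)

# Hypothetical annotations for one item from five annotators.
item_annotations = ["offensive", "offensive", "not_offensive",
                    "offensive", "not_offensive"]
dist = soft_labels(item_annotations)     # {'offensive': 0.6, 'not_offensive': 0.4}
print(dist, disagreement_entropy(dist))  # ~0.97 bits: substantial disagreement
```

Preserving the full distribution, rather than discarding minority labels, is one way to keep legitimate differences of opinion visible downstream.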

Trustworthy Spoken Dialogue Systems: Application to Nutrition

Mandy Korpusik,
Assistant Professor, Loyola Marymount University

In this talk, I will give an overview of ethical issues concerning chatbots and spoken dialogue systems. Since neural generative models are trained on vast amounts of text from the internet, including Reddit, chatbots and dialogue systems may learn stereotypes and offensive language, such as the racist Tay chatbot deployed by Microsoft in 2016. As an application of dialogue systems for good, I will discuss a spoken diet tracking system to help with weight loss. In our work, deep learning techniques perform a semantic mapping from raw, unstructured, human natural language directly to a structured, relational database, without any intermediate pre-processing steps or string matching heuristics. Specifically, I will show that a novel, weakly supervised convolutional neural architecture learns a shared latent space, where vector representations of natural language queries lie close to embeddings of database entries that have semantically similar meanings. We are currently exploring personalized meal recommendations, computer vision for logging photos of food, a nutrition-specific speech recognizer, and exercise tracking. For future work on nutrition and fitness spoken dialogue systems, we will need to consider fairness and mitigate bias by ensuring food recommendations take into consideration each user's goals and dietary restrictions, expanding the food database to include other cuisines beyond American foods, and building a speech recognizer that works well for all accents.
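As a rough illustration of the shared latent space described above, here is a minimal two-tower sketch in PyTorch. It is not the architecture from the talk: the described work uses a weakly supervised convolutional encoder, while this sketch swaps in simple mean-of-embedding encoders and a margin ranking loss. All class names, token ids, and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSpaceRanker(nn.Module):
    """Toy two-tower model: a query encoder and a database-entry encoder
    map text into one latent space where matching pairs score highly."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.query_emb = nn.EmbeddingBag(vocab_size, dim)  # mean of token vectors
        self.entry_emb = nn.EmbeddingBag(vocab_size, dim)

    def score(self, query_tokens, entry_tokens):
        q = F.normalize(self.query_emb(query_tokens), dim=-1)
        e = F.normalize(self.entry_emb(entry_tokens), dim=-1)
        return (q * e).sum(-1)  # cosine similarity in the shared space

model = SharedSpaceRanker(vocab_size=1000)
# Hypothetical token ids: a meal description, a matching database entry,
# and a mismatched entry.
query = torch.tensor([[4, 17, 256]])
pos, neg = torch.tensor([[4, 17]]), torch.tensor([[901, 2]])
# Margin ranking loss pushes the matching entry above the mismatched one.
loss = F.relu(0.5 - model.score(query, pos) + model.score(query, neg)).mean()
loss.backward()
```

Training with such a ranking objective pulls embeddings of queries and their matching database entries together, which is the "shared latent space" property the abstract describes.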

Fairness and Invariant Learning

Richard Zemel,
Industrial Research Chair in Machine Learning, University of Toronto

Robustness is of central importance in machine learning and has given rise to the fields of domain generalization and invariant learning, which are concerned with improving performance on a test distribution distinct from but related to the training distribution. In this talk I will focus on links between research on invariant learning and algorithmic fairness and show how the two fields can be mutually beneficial. While invariant learning methods typically rely on knowledge of disjoint domains or environments, sensitive label information indicating which demographic groups are at risk of discrimination is often used in the fairness literature. Drawing inspiration from recent fairness approaches that improve worst-case performance without knowledge of sensitive groups, I will present a novel domain generalization method that handles the more realistic scenario where environment partitions are not provided. We will then see how this approach can outperform invariant-learning approaches with handcrafted environments in multiple cases. I will also describe how invariant learning methods can be applied to a fairness task: predicting the toxicity of internet comments using the Civil Comments dataset. This work reveals potential benefits as well as limitations in the interaction between robust machine learning methods and algorithmic fairness.
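As one concrete example of improving worst-case performance without group labels, the sketch below implements a CVaR-style distributionally robust objective that averages the worst fraction of per-example losses. This is a known group-agnostic technique from the fairness literature, not necessarily the method presented in the talk; the loss values shown are hypothetical.

```python
import torch

def cvar_loss(per_example_losses, alpha=0.2):
    """Conditional value-at-risk objective: average the worst alpha-fraction
    of per-example losses, focusing training on the hardest examples
    without needing group or environment labels."""
    k = max(1, int(alpha * per_example_losses.numel()))
    worst, _ = torch.topk(per_example_losses, k)
    return worst.mean()

# Hypothetical per-example cross-entropy losses for one minibatch.
losses = torch.tensor([0.1, 2.3, 0.4, 1.8, 0.2, 0.05, 3.1, 0.6])
print(cvar_loss(losses, alpha=0.25))  # mean of the two largest losses: 2.7
```

Because the objective never references group membership, it can improve outcomes for underperforming groups even when demographic labels are unavailable, which is the scenario the abstract highlights.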

Contact us

For questions, please contact us at trustnlpworkshoporganizers@gmail.com.