Bio

I’m a second-year PhD student at the University of Copenhagen in the CopeNLU group, studying natural language processing and machine learning. Specifically, my research focuses on automated fact checking, scientific language understanding, and domain adaptation. Before joining CopeNLU, I was a research intern at IBM Research in Almaden and received my master’s degree from the University of California, San Diego for work on disease name normalization, which won a best application paper award at AKBC 2019. Outside of NLP I climb rocks, play Dungeons & Dragons :dragon_face:, and play dance games :arrow_left::arrow_down::arrow_up::arrow_right:.

News

  • (15/09/2020) 2 main conference papers and 1 Findings paper accepted to EMNLP 2020. Announcement thread

  • (19/07/2020) New website is now live!

  • (08/07/2020) We hosted a series of capacity-limited meetups at the University of Copenhagen to watch the ACL 2020 live sessions, with plenty of interesting discussions and good company :smile:

  • (05/03/2020) A preprint of our work on claim check-worthiness detection (w/ Isabelle Augenstein) is now available: https://arxiv.org/pdf/2003.02736.pdf

  • (01/10/2019) Started my PhD in natural language processing and machine learning at the University of Copenhagen

Featured Publications

Transformer Based Multi-Source Domain Adaptation

Dustin Wright and Isabelle Augenstein

Published in EMNLP, 2020

We demonstrate that when using large pretrained transformer models, mixture-of-experts methods can lead to significant gains in multi-source domain adaptation settings, while domain adversarial training does not. We provide evidence that such models are relatively robust across domains, making homogeneous predictions despite being fine-tuned on different domains.

Download here
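
As a rough illustration of the mixture-of-experts idea, here is a minimal PyTorch sketch: one expert per source domain, with a gating network that mixes their logits. This is not the implementation from the paper; the `DomainExpert` class is a placeholder standing in for a full fine-tuned transformer, and the softmax gate is an illustrative assumption.

```python
# Minimal sketch of mixture-of-experts prediction over per-domain experts.
# `DomainExpert` is a hypothetical stand-in for a fine-tuned transformer.
import torch
import torch.nn as nn

class DomainExpert(nn.Module):
    """Placeholder for a transformer fine-tuned on one source domain."""
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.encoder = nn.Linear(hidden_dim, hidden_dim)  # stands in for a transformer encoder
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(torch.tanh(self.encoder(x)))

class MixtureOfExperts(nn.Module):
    """Combine per-domain expert logits with learned mixing weights."""
    def __init__(self, experts: list, hidden_dim: int):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        # The gate scores how relevant each domain expert is to the input.
        self.gate = nn.Linear(hidden_dim, len(experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)           # (batch, n_experts)
        logits = torch.stack([e(x) for e in self.experts], 1)   # (batch, n_experts, labels)
        return (weights.unsqueeze(-1) * logits).sum(dim=1)      # weighted average of logits

experts = [DomainExpert(hidden_dim=768, num_labels=2) for _ in range(3)]
model = MixtureOfExperts(experts, hidden_dim=768)
print(model(torch.randn(4, 768)).shape)  # torch.Size([4, 2])
```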

Generating Label Cohesive and Well-Formed Adversarial Claims

Pepa Atanasova*, Dustin Wright*, and Isabelle Augenstein (*equal contribution)

Published in EMNLP, 2020

We propose a novel method that uses universal adversarial triggers and GPT-2 to generate difficult adversarial claims for fact-checking models, preserving label direction and semantic coherence. We show that these generated claims easily fool fact-checking models.
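
As a hypothetical sketch of the generation step only: condition GPT-2 (via the HuggingFace transformers API) on a claim prefix plus adversarial trigger tokens and sample continuations. The trigger string below is made up for illustration; real triggers come from a gradient-guided search, which is omitted here.

```python
# Minimal sketch: sample claim continuations from GPT-2 conditioned on a
# prefix plus (hypothetical) adversarial trigger tokens.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

trigger = "nothing never incredibly"  # placeholder; real triggers are searched, not hand-picked
prompt = f"The earth is flat {trigger}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=30,
    do_sample=True,            # sample rather than greedy-decode for varied claims
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```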

Claim Check-Worthiness Detection as Positive Unlabelled Learning

Dustin Wright and Isabelle Augenstein

Published in Findings of EMNLP, 2020

We show that positive-unlabelled learning improves claim check-worthiness detection across multiple domains. Additionally, we highlight key similarities and differences between check-worthiness detection datasets.

Download here
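
For intuition, here is a minimal sketch of one standard positive-unlabelled recipe (Elkan & Noto, 2008) on synthetic data; the exact PU variant used in the paper may differ, and all data below is fabricated for illustration.

```python
# Minimal Elkan & Noto-style PU learning sketch on synthetic data:
# train on labelled-vs-unlabelled, then rescale by the label frequency c.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 16))
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ground truth
# Only some positives are labelled (s = 1); the rest are "unlabelled" (s = 0).
s = y_true * (rng.random(1000) < 0.3)

X_tr, X_ho, s_tr, s_ho = train_test_split(X, s, test_size=0.2, random_state=0)

# Step 1: train a "non-traditional" classifier to predict labelled vs. unlabelled.
clf = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)

# Step 2: estimate c = P(s=1 | y=1) as the mean score on held-out labelled positives.
c = clf.predict_proba(X_ho[s_ho == 1])[:, 1].mean()

# Step 3: recover P(y=1 | x) by rescaling the classifier's scores by c.
p_y = np.clip(clf.predict_proba(X)[:, 1] / c, 0, 1)
print(f"estimated c = {c:.2f}")
```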