Publications

Transformer Based Multi-Source Domain Adaptation

Dustin Wright and Isabelle Augenstein

Published in EMNLP, 2020

We demonstrate that when using large pretrained transformer models, mixture-of-experts methods can lead to significant gains in multi-source domain adaptation settings, while domain adversarial training does not. We provide evidence that such models are relatively robust across domains, making homogeneous predictions despite being fine-tuned on different domains.

Download here

Generating Label Cohesive and Well-Formed Adversarial Claims

Pepa Atanasova*, Dustin Wright*, and Isabelle Augenstein

Published in EMNLP, 2020

We propose a novel method using universal adversarial triggers and GPT-2 to generate difficult adversarial claims for fact checking models. The generated claims preserve label direction and are semantically coherent, yet easily fool fact checking models.

Claim Check-Worthiness Detection as Positive Unlabelled Learning

Dustin Wright and Isabelle Augenstein

Published in Findings of EMNLP, 2020

We show that positive-unlabelled learning improves claim check-worthiness detection across multiple domains. Additionally, we highlight key similarities and differences among check-worthiness detection datasets.

Download here

NormCo: Deep Disease Normalization for Biomedical Knowledge Base Construction

Dustin Wright, Yannis Katsis, Raghav Mehta, and Chun-Nan Hsu

Published in Automated Knowledge Base Construction, 2019

We develop a lightweight model for disease name normalization that utilizes pretrained word embeddings, distant supervision, and a dictionary of disease terms to outperform the state of the art on two datasets.

AKBC 2019 Best Application Paper

Download here