I’m a third-year PhD student at the University of Copenhagen in the CopeNLU group, studying natural language processing and machine learning. My research focuses on automated fact checking, scientific language understanding, and domain adaptation. Before joining CopeNLU, I was a research intern at IBM Research Almaden and received my master’s degree from the University of California, San Diego, where my work on disease name normalization won a best application paper award at AKBC 2019. Outside of NLP I climb rocks, play Dungeons and Dragons :dragon_face:, and play dance games :arrow_left::arrow_down::arrow_up::arrow_right:.


  • (15/03/2022) Gave an invited talk about science communication and misinformation detection at Elsevier

  • (24/02/2022) One paper accepted to ACL on generating scientific claims for zero-shot scientific fact checking! This work was done during my internship at AI2

  • (21/01/2022) Gave an invited talk about exaggeration detection in science for Search Engines Amsterdam

  • (01/09/2021) Our paper on few-shot learning for exaggeration detection in science was accepted to EMNLP 2021

  • (02/08/2021) One paper published in Findings of ACL

  • (01/06/2021) Started an internship at AI2 with Lucy Wang at Semantic Scholar on scientific claim generation

  • (01/03/2021) Gave a talk at ETH Zürich about cite-worthiness detection

  • (15/09/2020) Two main conference papers and one Findings paper accepted to EMNLP 2020. Announcement thread

  • (19/07/2020) New website is now live!

  • (08/07/2020) We hosted a series of in-person, limited-attendance meetups at the University of Copenhagen to view the live sessions of ACL, with plenty of interesting discussions and good company :smile:

  • (05/03/2020) Preprint of our work on claim check-worthiness detection (w/ Isabelle Augenstein) is now available

  • (01/10/2019) Started my PhD in natural language processing and machine learning at the University of Copenhagen


Featured Publications

Transformer Based Multi-Source Domain Adaptation

Dustin Wright and Isabelle Augenstein

Published in EMNLP, 2020

We demonstrate that when using large pretrained transformer models, mixture-of-experts methods can lead to significant gains in domain adaptation settings while domain adversarial training does not. We provide evidence that such models are relatively robust across domains, making homogeneous predictions despite being fine-tuned on different domains.

Download here