Portfolio item number 1
Short description of portfolio item number 1
Short description of portfolio item number 2
Eunjeong Stella Koh, Shlomo Dubnov, Dustin Wright
Published in IEEE 20th International Workshop on Multimedia Signal Processing (MMSP), 2018
We present promising results for symbolic music generation using a variational autoencoder with a CNN encoder.
Download here
Dustin Wright, Yannis Katsis, Raghav Mehta, Chun-Nan Hsu
Published in Automated Knowledge Base Construction, 2019
We develop a lightweight model for disease name normalization that utilizes pretrained word embeddings, distant supervision, and a dictionary of disease terms, outperforming the state of the art on two datasets. AKBC 2019 Best Application Paper
Download here
Varsha Dave Badal, Dustin Wright, Yannis Katsis, Ho-Cheol Kim, Austin D Swafford, Rob Knight, Chun-Nan Hsu
Published in Microbiome, 2019
We describe and highlight challenges in the construction of knowledge bases for human microbiome-disease associations, surveying relevant literature and providing recommendations for KB construction in this domain going forward.
Download here
Dustin Wright and Isabelle Augenstein
Published in Findings of EMNLP, 2020
We show that positive-unlabelled learning helps with claim check-worthiness detection across multiple domains. Additionally, we highlight key similarities and differences among check-worthiness detection datasets.
Download here
Pepa Atanasova*, Dustin Wright*, and Isabelle Augenstein
Published in EMNLP, 2020
We propose a novel method using universal adversarial triggers and GPT-2 to generate difficult, semantically coherent adversarial claims that preserve label direction, showing that such generated claims easily fool fact checking models.
Dustin Wright and Isabelle Augenstein
Published in EMNLP, 2020
We demonstrate that when using large pretrained transformer models, mixture-of-experts methods can lead to significant gains in domain adaptation settings while domain adversarial training does not. We provide evidence that such models are relatively robust across domains, making homogeneous predictions despite being fine-tuned on different domains.
Download here
Dustin Wright and Isabelle Augenstein
Published in Findings of ACL, 2021
We introduce the CiteWorth dataset for cite-worthiness detection, provide several strong baselines for the task, and demonstrate the downstream usefulness of pre-training on cite-worthiness detection.
Download here
Dustin Wright and Isabelle Augenstein
Published in EMNLP, 2021
We formalize the task of exaggeration detection in health science and introduce a test set for it, and we propose MT-PET, an extension of Pattern Exploiting Training, to perform the task in a few-shot setting.
Download here
Andreas Nugaard Holm, Barbara Plank, Dustin Wright, and Isabelle Augenstein
Published in AAAI 2022 Workshop on Scientific Document Understanding (SDU 2022), 2022
We present a method and dataset for the novel task of predicting the trajectory of citations a paper will receive over time.
Download here
Dustin Wright, David Wadden, Kyle Lo, Bailey Kuehl, Arman Cohan, Isabelle Augenstein, and Lucy Lu Wang
Published in ACL, 2022
We develop methods for generating and evaluating atomic, valid scientific claims from citation texts.
Download here
Dustin Wright*, Jiaxin Pei*, David Jurgens, and Isabelle Augenstein
Published in EMNLP, 2022
We develop a new dataset and models for measuring information change in science communication, achieving improved performance on scientific evidence retrieval and enabling several large-scale analyses of science communication.
Download here
Published:
This is a description of your talk, which is a markdown file that can be markdown-ified like any other post. Yay markdown!
Published:
This is a description of your conference proceedings talk; note the different field in the type. You can put anything in this field.
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.