Bio
I’m currently a Danish Data Science Academy postdoctoral fellow at the University of Copenhagen. My research centers on socially sustainable natural language processing (NLP), with the following mission: I want to help make the world’s knowledge reliable and accessible. This includes both real-world and scientific knowledge, with an additional focus on knowledge diversity. The methods I use come from machine learning and NLP. Reliability concerns the factuality and faithfulness of different types of knowledge, as well as ensuring that ML systems themselves are safe and interpretable. Accessibility concerns making ML systems that process different types of knowledge both efficient and sustainable.
Previously, I was a visitor at the BlaBlaBlab at the University of Michigan and a postdoc in the Saints Lab working on sustainable AI. I did my PhD with CopeNLU, where I worked on automated fact checking, automatic understanding and analysis of science communication, and domain adaptation. I received my master’s degree from the University of California, San Diego, and have worked at IBM Research and at the Allen Institute for Artificial Intelligence on the Semantic Scholar project. I also write occasionally on Substack. Outside of science, I enjoy making music and playing tabletop role-playing games and rhythm games.
Featured Publications
Efficiency is Not Enough: A Critical Perspective of Environmentally Sustainable AI
Dustin Wright, Christian Igel, Gabrielle Samuel, and Raghavendra Selvan
Published in Communications of the ACM, 2025
We present a perspective on why efficiency will not make AI sustainable and propose systems thinking as a paradigm for the AI community to adopt.
Download here
Unstructured Evidence Attribution for Long Context Query Focused Summarization
Dustin Wright, Zain Muhammad Mujahid, Lu Wang, Isabelle Augenstein, David Jurgens
arXiv preprint, 2025
We propose the task of unstructured evidence attribution for long context query focused summarization and generate a synthetic dataset (SUnsET) to improve model performance on it.
Download here
LLM Tropes: Revealing Fine-Grained Values and Opinions in Large Language Models
Dustin Wright*, Arnav Arora*, Nadav Borenstein, Shrishti Yadav, Serge Belongie, and Isabelle Augenstein
Published in EMNLP Findings, 2024
We generate 156,000 responses to 62 political propositions across 6 language models and demonstrate systematic biases in their stances and plain-text responses.
Download here
Press
- Feature on Montreal AI Ethics Blog
- Interview on the NVIDIA AI podcast
- Exaggeration Detector Could Lead to More Accurate Health Science Journalism (NVIDIA blog)
- An NLP Approach to Exaggeration Detection in Science Journalism (unite.ai)
News
(23/06/2025) Gave a keynote at the ICWSM workshop on Misinformation titled The Many Faces of Science Misinformation
(23/06/2025) Co-led the tutorial Addressing AI Driven Misinformation at ICWSM 2025
(01/06/2025) I was awarded a 1.7M DKK Carlsberg Internationalization Postdoctoral Fellowship!
(01/05/2025) Our paper “Efficiency and Effectiveness of LLM-Based Summarization of Evidence in Crowdsourced Fact-Checking” was accepted to SIGIR!
(25/04/2025) Gave an invited talk at the Pioneer Center for AI in Copenhagen titled Revealing Political Opinions in Large Language Models
(07/04/2025) Gave a research seminar at University of Minnesota titled Socially Sustainable NLP
(19/11/2024) Gave a DIKU Bits talk at University of Copenhagen titled LLM Tropes: Revealing Fine-Grained Values and Opinions in Large Language Models
(26/09/2024) Our paper BMRS: Bayesian Model Reduction for Structured Pruning was accepted to NeurIPS as a spotlight paper!
(20/09/2024) Our paper Revealing Fine-Grained Values and Opinions in Large Language Models was accepted to EMNLP Findings!
(09/08/2024) Gave a talk at University of Michigan titled Revealing Fine-Grained Values and Opinions in Large Language Models
(03/07/2024) Our paper Efficiency is Not Enough: A Critical Perspective of Environmentally Sustainable AI was accepted to Communications of the ACM!
(05/05/2024) Started a research stay in the BlaBlaBlab at the University of Michigan with David Jurgens, working on long-context summarization
(20/04/2024) Our paper Understanding Fine-grained Distortions in Reports of Scientific Findings was accepted to ACL findings!
(15/01/2024) I’ve started my two-year Danish Data Science Academy postdoctoral fellowship
(20/07/2023) “Modeling Information Change in Science Communication with Semantically Matched Paraphrases” received an honorable mention (top 5 submission) at the International Conference on Computational Social Science!
(25/06/2023) I was awarded a two-year postdoctoral fellowship from the Danish Data Science Academy to work on NLP for science communication!
(01/02/2023) Started a postdoc at University of Copenhagen in the Saints Lab working on sustainable machine learning
(06/10/2022) “Modeling Information Change in Science Communication with Semantically Matched Paraphrases” was accepted to EMNLP 2022!
(15/03/2022) Gave an invited talk about science communication and misinformation detection at Elsevier
(24/02/2022) One paper accepted to ACL on generating scientific claims for zero-shot scientific fact checking! This work was done during my internship at AI2.
(21/01/2022) Gave an invited talk about exaggeration detection in science for Search Engines Amsterdam
(01/09/2021) Our paper on few-shot learning for exaggeration detection in science was accepted to EMNLP 2021!
(02/08/2021) One paper published in Findings of ACL
(01/06/2021) Started an internship at AI2 with Lucy Wang at Semantic Scholar on scientific claim generation
(01/03/2021) Gave a talk at ETH Zürich about cite-worthiness detection.
(15/09/2020) Two main conference papers and one Findings paper accepted to EMNLP 2020. Announcement thread
(19/07/2020) New website is now live!
(08/07/2020) We hosted a series of limited-capacity, in-person meetups at the University of Copenhagen to view the live sessions of ACL, with plenty of interesting discussion and good company
(05/03/2020) Preprint of our work on claim check-worthiness detection (w/ Isabelle Augenstein) is now available: https://arxiv.org/pdf/2003.02736.pdf
(01/10/2019) Started my PhD in natural language processing and machine learning at the University of Copenhagen