About the Trust in Science project
Trust in Science is a flagship project of the Harvard Data Science Initiative (HDSI), conducted in collaboration with the Harvard Kennedy School’s Program on Science, Technology & Society (STS). At a time of seemingly widespread loss of confidence in science and expertise, the Project seeks to illuminate the varied factors that currently impede trusting relations between the producers and users of scientific information. It leverages data science, science and technology studies, and related disciplines to analyze the breakdowns in public trust, and to ask what steps could be taken to promote better mutual understanding.
The Project supports faculty-led research efforts, workshops, conferences, symposia, and external engagement to amplify the impact of funded work.
Supporting the Trust in Science project
The HDSI’s Trust in Science project is funded in large part by philanthropic support from donors who share our desire to advance understanding of trust and mistrust in science by leveraging data science and other relevant disciplines, toward the goal of creating actionable insights. Reflecting the breadth and potential impact of this work, we welcome additional support from both industry and individuals who seek to catalyze this progress. Contribute to the Trust in Science project here.
Current supporters of the Trust in Science project include Bayer and Microsoft.
Funded Research
Trust in Science during the Vaccine Phase of the COVID-19 pandemic in the U.S.
Matthew Baum (Harvard Kennedy School), Roy Perlis (Harvard Medical School), Mauricio Santillana (Harvard Medical School)
The US is engaged in a race to vaccinate its population against COVID-19 in time to generate herd immunity before more transmissible and morbid variants become widespread. This has created a series of unprecedented challenges, ranging from production to distribution to administration of the vaccine. Ultimately, it is the proverbial last mile that will determine whether and when the US is able to win this race; that is, getting the vaccines into arms before current vaccines become inadequate to prevent morbidity and mortality. The last mile depends to a significant extent on persuading Americans to accept the vaccine, which, in turn, rests on the very topic for which this HDSI initiative is named: trust in science.
We propose to address three primary questions in this research: (1) What drives individuals’ decisions to accept or decline COVID-19 vaccination? (2) To what extent is this decision influenced by individual-level trust in science and expertise? (3) Which types of trust matter most in affecting individual behavior regarding vaccines? We are requesting funds to hire a full-time postdoctoral fellow for one year to aid in conducting the proposed research, analyzing the results, and preparing public reports and academic papers.
Mapping Covid-19 Misinformation
Yochai Benkler (Berkman Klein Center for Internet and Society, Harvard Law School)
This project addresses the widespread misinformation and disinformation surrounding COVID-19, particularly in politically polarized communities in the U.S., where simple myth-busting has proven ineffective. Researchers at the Berkman Klein Center’s Media Cloud team are developing a machine learning-based classifier, trained on human-coded data, to identify and track COVID-related disinformation at the story level—focusing on topics like conspiracy theories about the virus’s origin, false cures, and attacks on scientific institutions. By distinguishing between spreaders, superspreaders, debunkers, and irrelevant content, the project aims to monitor how misinformation spreads across distinct online communities. The ultimate goal is to scale this system to track millions of stories, enabling more targeted and timely interventions to combat disinformation and rebuild trust in science.
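To make the classifier idea concrete, here is a minimal sketch of a story-level classifier of the kind described, assuming human-coded labels such as spreader, debunker, and irrelevant. The texts, labels, and model choice are illustrative assumptions, not the Media Cloud team’s actual pipeline; identifying superspreaders would additionally require sharing-volume data alongside the story label.

```python
# Minimal sketch: a story-level disinformation classifier in the spirit of
# the project description. Texts, labels, and model choice are illustrative
# assumptions, not the Media Cloud team's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Human-coded training data (toy examples; real training data would be
# thousands of labeled stories).
stories = [
    "Lab leak proof suppressed by scientists, insiders say",
    "Miracle cure kills the virus in hours, doctors stunned",
    "Fact check: no, the vaccine does not alter your DNA",
    "Local bakery reopens after renovation",
]
labels = ["spreader", "spreader", "debunker", "irrelevant"]

# TF-IDF features feeding a linear classifier: a common, auditable baseline.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(stories, labels)

print(clf.predict(["New study debunks false claims about vaccine safety"]))
```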
Countering COVID-19 Misinformation Via WhatsApp in Zimbabwe
Kevin Croke (Harvard T.H. Chan School of Public Health), Jeremy Bowles (Harvard Faculty of Arts and Sciences), Horacio Larreguy (Harvard Faculty of Arts and Sciences), John Marshall (Columbia University), Shelly Liu (UC Berkeley)
Misinformation about health is a serious problem in the COVID-19 crisis, including in developing countries, where access to credible scientific news sources is limited and misinformation spreads virally via social media. We propose to experimentally evaluate two methods to address this in Zimbabwe, where fake news regarding COVID-19 has been identified as a major problem. This study builds on a previous collaboration with the Zimbabwean NGO Kubatana, which demonstrated a promising role for WhatsApp messages in correcting misconceptions about COVID-19 and encouraging preventive behavior: an experiment using WhatsApp in Zimbabwe increased knowledge about COVID-19 by 0.26 standard deviations and increased compliance with social distancing. WhatsApp chatbots have been described as a promising tool to fight COVID misinformation, including by the World Health Organization, but evidence on their effectiveness is limited, and scaling the dissemination of fact-checking is costly because it is labor-intensive. This project will innovate by using a WhatsApp chatbot, disseminated by the same trusted local NGO, to take fact-checking dissemination to scale. It will also test the effectiveness of a popular mass-media fact-checking tool distributed via WhatsApp.
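For readers unfamiliar with standardized effect sizes like the 0.26 reported above, the sketch below shows one common way such a figure is computed: the treatment-control difference in mean knowledge scores divided by the control group’s standard deviation. The scores are simulated for illustration and are not the study’s data.

```python
# Illustration of a standardized effect size like the 0.26 SD cited above:
# the treatment-control difference in means, scaled by the control group's
# standard deviation. Scores are simulated, not the study's data.
import numpy as np

rng = np.random.default_rng(0)
control = rng.normal(loc=6.0, scale=2.0, size=500)   # knowledge scores, no messages
treated = rng.normal(loc=6.5, scale=2.0, size=500)   # knowledge scores, after messages

effect_sd = (treated.mean() - control.mean()) / control.std(ddof=1)
print(f"standardized effect: {effect_sd:.2f} SD")
```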
Identifying and Correcting Confirmation Bias in Model Selection
Elena Glassman (Harvard John A. Paulson School of Engineering and Applied Sciences)
We identify several key human decision-making moments in the selection and deployment of interpretable models: choosing a model, overriding a model's predictions, and pulling a model out of deployment. We propose a family of experiments to study these moments and the measures that can mitigate cognitive bias at each. By understanding and developing recommendations that reduce faulty or suboptimal human decisions in the ML model life cycle, we hope to reduce downstream harms and increase trust in a (revised) scientific process that incorporates and reports on appropriately trusted ML models without over-reliance.
Protesting Expert Authorities: Autonomy and the Democratic Challenge to Evidence-Based Policy
Jennifer Hochschild (Harvard John F. Kennedy School of Government)
This project funds a U.S.-representative survey, in both Spanish and English, primarily focused on vaccination but also covering abortion and biobanks, to explore different dimensions of bodily autonomy, political alignment, and trust in medical expertise. These topics were chosen because they represent contrasting views on intervention, privacy, and the collective good across ideological and demographic lines. The survey will gather both open-ended responses to understand motivations (e.g., autonomy, self-interest, care for others) and closed-ended data on trust and political views. In addition, the study will analyze public communications from elite actors (e.g., officials, scientists) to assess how their messaging may shape public opinion, with the broader goal of informing more effective and trust-building health policymaking.
Seeing is Believing? How Data Visualization Affects Trust in Science
Hanspeter Pfister (Harvard John A. Paulson School of Engineering and Applied Sciences), Carolina Nobre (Harvard Data Science Initiative), Bo Yun Park (Harvard Faculty of Arts and Sciences)
How is trust in data visualizations modulated by varying degrees of complexity and by the use of different visual encodings across ethnoracial groups? We approach this question with a case study of trust in the data around the COVID vaccines. We will conduct a survey in which we embed different types of visualizations to examine how visual complexity and the choice of visual encodings affect people’s trust in the safety and effectiveness of the COVID vaccines. This is particularly relevant for African Americans and Hispanics, who, despite having been hit harder by the pandemic, are being vaccinated at lower rates than white people. This interdisciplinary study will contribute to a better understanding of how scholars can and should use data visualizations for scientific persuasion and behavioral compliance.
Trust at First Sight: How We Process and Accept Scientific Information through Visualizations
Hanspeter Pfister (Harvard John A. Paulson School of Engineering and Applied Sciences), Carolina Nobre (University of Toronto)
Following on from their previous award, this project explores how people process and trust scientific information when it is presented through data visualizations such as graphs and charts. Using eye-tracking technology, researchers will observe how 120 participants interpret visualizations of varying complexity and connect that information to their existing beliefs. The study builds on previous findings showing that simpler visuals are generally more trusted than complex ones. Over the course of a year, the team will design 12 different visualizations and analyze how viewers engage with them. The goal is to develop practical guidelines for scientists, journalists, and others who create scientific visualizations, helping them design graphics that are both easy to understand and trustworthy.
Charting coronavirus vaccine confidence in the United States
Minttu Roenn (Harvard T.H. Chan School of Public Health)
A growing number of surveys collect indicators of coronavirus vaccine attitudes, creating a rich and varied source of data. In this project, we propose to search for and collate data from surveys that measure confidence in coronavirus vaccination in the United States. The data will be used to perform an evidence synthesis of the characteristics associated with vaccine confidence over time.
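As a concrete illustration of one evidence-synthesis step, the sketch below pools confidence estimates from several surveys using fixed-effect inverse-variance weighting. The survey values are invented, and the project’s actual synthesis will be richer (e.g., modeling change over time and survey characteristics).

```python
# Minimal sketch of one evidence-synthesis step: pooling vaccine-confidence
# proportions from several surveys with fixed-effect inverse-variance
# weighting. Survey values are invented for illustration.
import numpy as np

# (proportion expressing vaccine confidence, sample size) per survey
surveys = [(0.62, 1200), (0.58, 800), (0.66, 1500)]

props = np.array([p for p, n in surveys])
variances = np.array([p * (1 - p) / n for p, n in surveys])  # binomial variance
weights = 1.0 / variances

pooled = np.sum(weights * props) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled confidence: {pooled:.3f} (SE {pooled_se:.3f})")
```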
Ensuring privacy in COVID-19 epidemiological mobility data sets
Salil Vadhan (Harvard John A. Paulson School of Engineering and Applied Sciences), Satchit Balsari (Harvard T.H. Chan School of Public Health), Caroline Buckee (Harvard T.H. Chan School of Public Health), Merce Crosas (Institute for Quantitative Social Science), Gary King (Institute for Quantitative Social Science)
This project is a collaboration between the COVID-19 Mobility Data Network, co-founded by Harvard faculty Caroline Buckee and Satchit Balsari, and the OpenDP initiative, led by faculty directors Salil Vadhan and Gary King. It aims to apply differential privacy to vast amounts of COVID-19 mobility data to study the movement of individuals during social distancing restrictions while preserving their privacy.
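The core idea of differential privacy can be illustrated with the Laplace mechanism applied to a simple count query over mobility-style records. The epsilon value and data below are illustrative, and the OpenDP library provides vetted implementations with a much richer API than this sketch.

```python
# Minimal sketch of differential privacy's core idea, applied to a
# mobility-style count query: add Laplace noise calibrated to the query's
# sensitivity. Epsilon and the data are illustrative only; OpenDP provides
# vetted implementations.
import numpy as np

def dp_count(values, predicate, epsilon):
    """Differentially private count of records satisfying `predicate`.

    A count query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy data: distance (km) each device moved in a day.
distances = [0.2, 5.1, 0.0, 12.3, 0.4, 3.3, 0.1]
print(dp_count(distances, lambda d: d < 1.0, epsilon=0.5))  # noisy "stayed home" count
```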
Explainable AI for Promoting Trust in Science
Marinka Zitnik (Harvard Medical School), Himabindu Lakkaraju (Harvard Business School)
The Coronavirus Disease 2019 (COVID-19) pandemic has placed severe strain on health care systems and economies worldwide. The rapid spread and disruptive nature of this pandemic call for renewed public trust in science, because trust has been found to be a critical factor in determining whether the general public will comply with the health recommendations outlined by authorities. This compliance, in turn, is key to resolving the current crisis. To build, promote, and maintain public trust in science, we propose to develop novel computational frameworks that leverage explainable AI, an emerging area of artificial intelligence research that provides interpretable, easy-to-understand, yet highly accurate predictions. We will deploy our explainable AI toolset in several applications where trust in science is critical to curbing the spread of the virus and rapidly deploying new interventions. First, our AI tools will help doctors and healthcare professionals understand the functionality of complex ML models so they can decide if and when to trust these models. We believe this can have a significant impact on enabling medical professionals to leverage computational research in making informed decisions about diagnosis and treatment. Second, our AI tools will build trust more broadly by identifying which studies and scientific articles carry the most credible findings on COVID-19, so that the general public has reliable information to draw on. In doing so, this project will provide a clear pathway to a more trustworthy scientific enterprise, promoting trust in science among the general public as well as among scientists themselves.
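As a flavor of the kind of interpretability involved, the sketch below uses permutation importance, a model-agnostic technique that scores each feature by how much shuffling it degrades a model’s accuracy. The clinical-style features and data are synthetic, and the project’s actual methods may differ.

```python
# Sketch of one common explainable-AI technique: permutation importance,
# which scores each feature by how much randomly shuffling it degrades the
# model's accuracy. Features and data are synthetic; the project's actual
# methods may differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.normal(70, 15, n),   # age
    rng.normal(98, 2, n),    # oxygen saturation
    rng.normal(37, 1, n),    # body temperature
])
# Synthetic outcome: low oxygen saturation drives risk in this toy example.
y = (X[:, 1] + rng.normal(0, 1, n) < 96).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["age", "o2_sat", "temp"], result.importances_mean):
    print(f"{name}: {imp:.3f}")  # o2_sat should dominate, explaining the model
```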