Trust in Science Round 2 RFP
The US is engaged in a race to vaccinate its population against COVID-19 in time to generate herd immunity before more transmissible and more virulent variants become widespread. This has created a series of unprecedented challenges, ranging from production to distribution to administration of the vaccine. Ultimately, it is the proverbial last mile that will determine whether and when the US wins this race: getting vaccines into arms before the current vaccines become inadequate to prevent morbidity and mortality. The last mile depends to a significant extent on persuading Americans to accept the vaccine, which, in turn, rests on the very topic for which this HDSI initiative is named: trust in science.
We propose to address three primary questions in this research: (1) How do Americans decide whether to accept the COVID-19 vaccine? (2) To what extent is this decision influenced by individual-level trust in science and expertise? And (3) which types of trust matter most in affecting individual behavior regarding vaccines? We are requesting funds to hire a full-time postdoctoral fellow for one year to aid in conducting the proposed research, analyzing the results, and preparing public reports and academic papers.
Identifying and Correcting Confirmation Bias in Model Selection
Elena Glassman (Harvard John A. Paulson School of Engineering and Applied Sciences)
We identify several key human decision-making moments in the selection and deployment of interpretable models: choosing a model, overriding a model's predictions, and pulling a model out of deployment. We propose a family of experiments to study these moments and the measures that can mitigate cognitive bias at each of them. By understanding and developing method recommendations that reduce faulty or suboptimal human decisions across the ML model life cycle, we hope to reduce downstream harms and increase trust in the (revised) scientific process that incorporates and reports on appropriately trusted ML models without over-reliance.
Seeing is Believing? How Data Visualization Affects Trust in Science
Hanspeter Pfister (Harvard John A. Paulson School of Engineering and Applied Sciences), Carolina Nobre (Harvard Data Science Initiative), Bo Yun Park (Harvard Faculty of Arts and Sciences)
How is trust in data visualizations modulated by varying degrees of complexity and by the use of different visual encodings across ethnoracial groups? We approach this question with a case study of trust in the data around the COVID-19 vaccines. We will conduct a survey embedding different types of visualizations to examine how visual complexity and the choice of visual encodings affect people's trust in the safety and effectiveness of the COVID-19 vaccines. This is particularly relevant for African-Americans and Hispanics, who, despite having been hit harder by the pandemic, are being vaccinated at lower rates than white Americans. This interdisciplinary study will contribute to a better understanding of how scholars can and should use data visualizations for scientific persuasion and behavioral compliance.
Charting coronavirus vaccine confidence in the United States
Minttu Roenn (Harvard T.H. Chan School of Public Health)
An increasing number of surveys collect indicators of coronavirus vaccine attitudes, creating a rich and varied source of data. In this project, we propose to search for and collate data from surveys that measure confidence in coronavirus vaccination in the United States. These data will be used to perform an evidence synthesis of the characteristics associated with vaccine confidence over time.
Special RFP for COVID-19 Trust in Science Projects
Mapping COVID-19 Misinformation
Yochai Benkler (Berkman Klein Center for Internet and Society, Harvard Law School)
Countering COVID-19 Misinformation Via WhatsApp in Zimbabwe
Kevin Croke (Harvard T.H. Chan School of Public Health), Jeremy Bowles (Harvard Faculty of Arts and Sciences), Horacio Larreguy (Harvard Faculty of Arts and Sciences), John Marshall (Columbia University), Shelly Liu (UC Berkeley)
Misinformation about health is a serious problem in the COVID-19 crisis, including in developing countries, where access to credible scientific news sources is limited and misinformation spreads virally via social media. We propose to experimentally evaluate two methods to address this problem in Zimbabwe, where fake news regarding COVID-19 has been identified as a major problem. This study builds on a previous collaboration with the Zimbabwean NGO Kubatana, which demonstrated a promising role for WhatsApp messages in correcting misconceptions about COVID-19 and encouraging preventive behavior. WhatsApp chatbots have been described as a promising tool to fight COVID-19 misinformation, including by the World Health Organization, but evidence on their effectiveness is limited. An experiment using WhatsApp in Zimbabwe increased knowledge about COVID-19 by 0.26 standard deviations and increased compliance with social distancing. However, scaling the dissemination of fact-checking is costly because it is labor-intensive. This project will innovate by using a WhatsApp chatbot, disseminated by the same trusted local NGO, to take fact-checking dissemination to scale. It will also test the effectiveness of a popular mass-media fact-checking tool distributed via WhatsApp.
Ensuring privacy in COVID-19 epidemiological mobility data sets
Salil Vadhan (Harvard John A. Paulson School of Engineering and Applied Sciences), Satchit Balsari (Harvard T.H. Chan School of Public Health), Caroline Buckee (Harvard T.H. Chan School of Public Health), Merce Crosas (Institute for Quantitative Social Science), Gary King (Institute for Quantitative Social Science)
This project is a collaboration between the COVID-19 Mobility Data Network, co-founded by Harvard faculty Caroline Buckee and Satchit Balsari, and the OpenDP initiative, led by faculty directors Salil Vadhan and Gary King. It aims to apply differential privacy to vast amounts of COVID-19 mobility data to study the movement of individuals during social distancing restrictions while preserving their privacy.
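To give a concrete sense of the guarantee involved, here is a minimal sketch of the Laplace mechanism, the textbook building block of differential privacy. This is an illustration only, not the OpenDP library's API; the mobility records and the epsilon value below are hypothetical.

import numpy as np

def dp_count(records, predicate, epsilon):
    """Release a count under epsilon-differential privacy via the Laplace
    mechanism. A counting query has sensitivity 1: adding or removing one
    individual's record changes the true count by at most 1, so noise
    drawn from Laplace(0, 1/epsilon) suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical mobility records: trips taken per device during a lockdown week.
trips_per_device = [0, 3, 1, 0, 7, 2, 0, 5]
estimate = dp_count(trips_per_device, lambda trips: trips > 0, epsilon=0.5)
print(f"DP estimate of devices that traveled: {estimate:.1f}")

Aggregates released this way let epidemiologists track population-level movement trends while the calibrated noise masks the contribution of any single person's location history.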
The Coronavirus Disease 2019 (COVID-19) pandemic has placed severe strain on health care systems and economies worldwide. The rapid spread and disruptive nature of this pandemic call for renewed public trust in science, because such trust is a critical factor in determining whether the general public will comply with the health recommendations outlined by authorities. This compliance, in turn, is key to resolving the current crisis. To build, promote, and maintain public trust in science, we propose to develop novel computational frameworks that leverage explainable AI, an emerging area of artificial intelligence research that provides interpretable, easy-to-understand, yet highly accurate predictions. We will deploy our explainable AI toolset in several applications where trust in science is critical to curbing the spread of the virus and rapidly deploying new interventions. First, our AI tools will help doctors and health care professionals understand the functioning of complex ML models so they can decide if and when to trust those models. We believe this can have a significant impact on enabling medical professionals to leverage computational research in making informed decisions about diagnosis and treatment. Second, our AI tools will build trust more broadly by identifying which studies and scientific articles carry the most credible findings on COVID-19, so that the general public has reliable information to draw on. In doing so, this project will provide a clear pathway to a more trustworthy scientific enterprise. It will promote trust in science among the general public as well as between scientists themselves.
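As a rough illustration of what an explainability tool can surface for a clinician, the sketch below uses permutation feature importance, one widely used model-agnostic technique. The proposal does not specify the project's actual methods; the model, features, and data here are synthetic and purely illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical data: four measurements per patient and a
# binary outcome that actually depends only on the first two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy: large
# drops mark features the model relies on, which a clinician can check
# against domain knowledge before deciding whether to trust a prediction.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature {i}: importance {importance:.3f}")

In this toy setup the first two features should dominate the ranking, matching the rule that generated the labels; a mismatch between a model's influential features and established medical knowledge is exactly the kind of red flag such tools are meant to expose.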