Trust in Science

About Trust in Science Programming at the HDSI

The Harvard Data Science Initiative is currently planning the launch of a research program on trust and mistrust in science. Our overarching goal: to increase public trust in science by leveraging data science to shed light on the sources of public mistrust and by developing interventions that support enduring cultural change.

Work supported by this program and the related HDSI Research Fund for Trust in Science will focus on advancing a data-driven understanding of trust and mistrust in science. Recognizing the immediate importance of public trust in science during the novel coronavirus pandemic, however, the HDSI has accelerated the availability of funding to support research that examines trust in science in the context of the COVID-19 pandemic. Additional funding for work not limited to COVID-19 will be made available later this year through the broader Trust in Science program. Details about such funding will be posted here and advertised to the Harvard community through the HDSI mailing list and social media channels.

Ethics and Governance

The HDSI views relationships with industry as critical to our mission of transformation through data science. The problems we tackle can be informed by the most difficult challenges facing industry, challenges that can be solved in partnership with academia. At the same time, we recognize that industry-academia collaboration raises its own ethical and governance questions, and these questions are more pronounced when partnering around a topic as complex and consequential as trust in science. The HDSI's governance of this research program will therefore be advised by an independent advisory board whose members are external to Harvard. More information about the composition and work of this board will follow as the program takes shape.

The HDSI Research Fund for Trust in Science

The Harvard Data Science Initiative Research Fund for Trust in Science will provide philanthropic support for research that advances understanding of trust and mistrust in science by leveraging data science, toward the goal of creating actionable insights.  Reflecting the breadth and potential impact of this work, the Fund is open to gifts from multiple donors who seek to catalyze this progress. Initial seed funding for the Fund has been provided by Bayer, an HDSI Corporate Member.

Support for projects related to COVID-19 and Trust in Science now available

In 2020, a portion of the fund will be used to support projects that employ data-driven approaches to advance our understanding of trust and mistrust in science in the context of the novel coronavirus pandemic.  Details about how to apply are provided here.

Funded COVID-19 Projects

Mapping COVID-19 Misinformation
Yochai Benkler (Berkman Klein Center for Internet and Society, Harvard Law School)

Countering COVID-19 Misinformation Via WhatsApp in Zimbabwe
Kevin Croke (Harvard T.H. Chan School of Public Health), Jeremy Bowles (Harvard Faculty of Arts and Sciences), Horacio Larreguy (Harvard Faculty of Arts and Sciences), John Marshall (Columbia University), Shelly Liu (UC Berkeley)

Misinformation about health is a serious problem in the COVID-19 crisis, particularly in developing countries where access to credible scientific news sources is limited and false claims spread virally via social media. We propose to experimentally evaluate two methods of addressing this problem in Zimbabwe, where fake news about COVID-19 has been identified as a major concern. The study builds on a previous collaboration with the Zimbabwean NGO Kubatana, which demonstrated a promising role for WhatsApp messages in correcting misconceptions about COVID-19 and encouraging preventive behavior. WhatsApp chatbots have been described as a promising tool for fighting COVID-19 misinformation, including by the World Health Organization, but evidence on their effectiveness is limited. An experiment using WhatsApp in Zimbabwe increased knowledge about COVID-19 by 0.26 standard deviations and increased compliance with social distancing; however, scaling the dissemination of fact-checking is costly because it is labor-intensive. This project will innovate by using a WhatsApp chatbot, disseminated by the same trusted local NGO, to take fact-checking dissemination to scale. It will also test the effectiveness of a popular mass-media fact-checking tool distributed via WhatsApp.

Ensuring privacy in COVID-19 epidemiological mobility data sets
Salil Vadhan (Harvard John A. Paulson School of Engineering and Applied Sciences), Satchit Balsari (Harvard T.H. Chan School of Public Health), Caroline Buckee (Harvard T.H. Chan School of Public Health), Merce Crosas (Institute for Quantitative Social Science), Gary King (Institute for Quantitative Social Science)

This project is a collaboration between the COVID-19 Mobility Data Network, co-founded by Harvard faculty Caroline Buckee and Satchit Balsari, and the OpenDP initiative, led by faculty directors Salil Vadhan and Gary King. It aims to apply differential privacy to vast amounts of COVID-19 mobility data to study the movement of individuals during social distancing restrictions while preserving their privacy.
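
For illustration only (this is not the project's actual pipeline), the sketch below shows one standard way a differentially private release can work: adding Laplace noise, calibrated to a privacy parameter epsilon, to aggregated mobility counts before they are shared. The region names, counts, and the one-trip-per-person assumption are hypothetical choices made for the example.

```python
# Illustrative sketch only: releasing aggregated mobility counts under
# epsilon-differential privacy with the Laplace mechanism.
# Assumption (hypothetical): each individual contributes at most one trip to
# the released counts, so the L1 sensitivity of the count vector is 1.

import numpy as np

rng = np.random.default_rng(0)

def laplace_release(counts, epsilon):
    """Return the counts with Laplace(1/epsilon) noise added to each entry."""
    scale = 1.0 / epsilon  # noise scale = sensitivity / epsilon, sensitivity = 1
    return {region: count + rng.laplace(0.0, scale)
            for region, count in counts.items()}

# Hypothetical trip counts between two districts on a given day.
raw_counts = {"district_A -> district_B": 1520, "district_B -> district_A": 1432}
print(laplace_release(raw_counts, epsilon=0.5))
```

Smaller values of epsilon inject more noise and give stronger privacy guarantees; the project itself works with far richer mobility data and purpose-built tooling rather than a hand-rolled mechanism like this one.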

Explainable AI for Promoting Trust in Science
Marinka Zitnik (Harvard Medical School), Himabindu Lakkaraju (Harvard Business School)

The Coronavirus Disease 2019 (COVID-19) pandemic has severely strained health care systems and economies worldwide. The rapid spread and disruptive nature of this pandemic call for renewed public trust in science, because trust is a critical factor in determining whether the general public will comply with the health recommendations issued by authorities. This compliance, in turn, is key to solving the current crisis. To build, promote, and maintain public trust in science, we propose to develop novel computational frameworks that leverage explainable AI, an emerging area of artificial intelligence research that provides interpretable, easy-to-understand, yet highly accurate predictions. We will deploy our explainable AI toolset in several applications where trust in science is critical to curbing the spread of the virus and rolling out new interventions rapidly. First, our AI tools will help doctors and healthcare professionals understand the functionality of complex ML models so they can decide whether and when to trust these models. We believe this can have a significant impact on enabling medical professionals to leverage computational research in making informed decisions about diagnosis and treatment. Second, our AI tools will build trust more broadly by identifying which studies and scientific articles carry the most credible findings on COVID-19, so that the general public has reliable information to draw on. In doing so, this project will provide a clear pathway to a more trustworthy scientific enterprise. It will promote trust in science among the general public as well as between scientists themselves.
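
As a toy illustration of the kind of explanation explainable AI can provide (not the project's actual toolset), the sketch below fits a black-box classifier on a public dataset and uses permutation feature importance to surface which inputs the model relies on most. The dataset and model choices are assumptions made purely for the example.

```python
# Illustrative sketch only: explaining a black-box classifier with
# permutation feature importance, one common building block of explainable AI.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public toy dataset standing in for real clinical data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black-box" model whose internals are hard to read directly.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy:
# features whose permutation hurts the model most are the ones it relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: -pair[1])[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```

Explanations like these are one way a clinician could sanity-check whether a model's predictions rest on medically plausible signals before deciding how much to trust it.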