Objectivity in Science Part I - Who Conducts Science?

The following two posts draw inspiration from the incredible series Science Under the Scope by Sophie Wang of Free Radicals. We highly encourage you to check out her comic, as well as the rest of our suggested readings (listed below).

Sometime in grade school, all of us learned what it means to be a “good” scientist. Our teachers described how good scientists set aside their personal biases and leverage the scientific method. We learned that impartiality is important because it helps scientists systematically understand the world, rather than seeing only what they want or expect to see. Objectivity is especially important, our teachers may have explained, when scientific research is translated into technological and medical advances because it helps us ensure that science benefits everyone equally.

This notion of the impartial scientist lodged in our brains and sprouted into stereotypes of the white-coated, fact-obsessed, emotion-free nerd. Often, this idea has become so deeply wedged in our collective psyche that we forget about the scientists altogether, seeing only the science that they have produced. Scientific knowledge spreads before our eyes like the branches of a great tree and the root system of human scientists, hidden as it is from our daily gaze, is easily forgotten.

But of course no tree could survive without its roots. Those roots—and the soil they settle in, the sunshine, the rain, the wind—shape it as it grows. We can examine how science is similarly shaped by the scientists that produce it and by how it is conducted. Each scientist is a human being who brings their personal set of expectations, emotions, and experiences to the laboratory, the clinic, or the field every time they conduct a research study. They also conduct their science in the greater context of our society. In this pair of articles, we will consider how scientific knowledge grows out of the work of individual scientists and how it is shaped by its context to get a better sense for what objectivity in scientific research looks like.

The Muir Woods at dusk. We see a stand of trees embedded in a complex ecosystem, the roots invisible beneath the soil. Photo credit: Avery Krieger

The Root System of Scientific Knowledge

As a scientist, I am acutely aware of how little resemblance I bear to the grade school caricature of an impartial observer. I am a human being. Specifically, I am a white, able-bodied, cisgender woman with a high degree of education, living in relative economic comfort in suburban California. Just like all other human beings from all other walks of life, as much as I strive to be impartial, I will never be able to fully divorce myself from my emotions, my expectations, my prior experiences, my community, and every other factor that could contribute to my explicit and implicit biases.

How do individual biases like these impact science? We can take artificial intelligence (AI) research as one example. Many different types of people are involved in AI research, but the top tech companies and computer science departments studying AI are very white and very male. Only 2.3% of Google’s 2020 hires were Black women, while 30.1% of the hires were white men. Google’s attrition rate for Black employees was also higher than average in 2020. Here at Stanford, just 14 of the 944 computer science majors in 2020 were Black women. Across Stanford departments, representation declines even further at later stages of the academic pipeline.

What effect might this imbalance have? Let’s take a look at automated facial recognition technology, which uses AI, and see if we can find the bias. Facial recognition is a ubiquitous part of modern life. For example, it helps Google Photos tag your pictures, and it helps law enforcement identify suspects from security camera footage. In a 2018 study, computer scientists Joy Buolamwini and Dr. Timnit Gebru found that facial recognition algorithms are highly accurate for lighter-skinned faces, and especially for lighter-skinned men. In contrast, the algorithms made many errors when presented with darker-skinned women. This bias may arise because the gold-standard database of images for training these algorithms is about 80% white and male (Buolamwini and Gebru). In other words, the algorithms and the data they draw from are racially biased, perhaps due to the biases of the researchers who designed them.
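
To see how researchers surface this kind of disparity, it helps to look at the bookkeeping behind an audit: rather than reporting a single overall accuracy number, you break the results down by subgroup, as Buolamwini and Dr. Gebru did. Here is a minimal sketch of that idea in Python; the handful of records and the subgroup labels are hypothetical placeholders, not data from their study.

```python
# Minimal sketch of a disaggregated accuracy audit. The records below are
# hypothetical; a real audit would use thousands of labeled benchmark images.
from collections import defaultdict

# Each record: (predicted_label, true_label, subgroup)
results = [
    ("male", "male", "lighter-skinned man"),
    ("female", "female", "lighter-skinned woman"),
    ("male", "female", "darker-skinned woman"),   # an error
    ("female", "female", "darker-skinned woman"),
]

correct = defaultdict(int)
total = defaultdict(int)
for predicted, true, subgroup in results:
    total[subgroup] += 1
    correct[subgroup] += int(predicted == true)

# Reporting per-subgroup accuracy is what makes the disparity visible.
for subgroup in sorted(total):
    print(f"{subgroup}: {correct[subgroup] / total[subgroup]:.0%} accuracy "
          f"({total[subgroup]} images)")
```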

Biased science can have serious societal impacts. According to the NAACP, Black Americans are five times more likely to be stopped by police and five times more likely to be arrested compared to white Americans. As a result, they are more likely to be subject to facial recognition processing and their picture is more likely to be included in the database of potential suspects. If you add racially biased facial recognition algorithms to this mix, Black women become more likely to be falsely accused of committing a crime because, as far as the algorithm is concerned, their face “looks like” other Black faces. In other words, while AI may feel more objective than human agents, biased AI is just one more ingredient in a recipe for discriminatory policing.
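
To get a feel for why this combination is so troubling, we can do a bit of back-of-envelope arithmetic. In the sketch below, the five-fold stop rate echoes the NAACP figure above, but every other number is a hypothetical placeholder chosen only to show how two separate disparities multiply rather than add.

```python
# Back-of-envelope sketch of compounding disparities. Only the 5x stop rate
# reflects the NAACP figure cited above; all other numbers are hypothetical.
stop_rate = {"white": 0.02, "Black": 5 * 0.02}        # chance of a police stop
false_match_rate = {"white": 0.001, "Black": 0.010}   # algorithm false-match rates

risk = {group: stop_rate[group] * false_match_rate[group] for group in stop_rate}
for group, value in risk.items():
    print(f"{group}: combined false-match risk ~ {value:.3%}")

# A 5x gap in stops and a 10x gap in false matches multiply into a 50x gap overall.
print(f"Relative risk: {risk['Black'] / risk['white']:.0f}x")
```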

Mitigating Bias with Strong Objectivity

Faced with these dangers, scientists use a few common strategies to reduce the influence of bias on our research. We can use blinded or double-blind experiments (in which the subject, and in a double-blind design the experimenter as well, does not know which experimental condition is being applied), random sampling, statistical analysis methods, and repeated trials. Each of these methods creates distance between the experimenter and the experiment, making it less likely that our personal biases will influence the outcome. For example, if you were trying to identify the best-tasting wine, you would hide the price of each bottle to prevent your expectation (that the priciest wine should be the tastiest) from affecting your results. Repeating the tasting and analyzing the results statistically would also help confirm that an exciting result did not occur by random chance. These are all important strategies for reducing individual bias, but none of them can completely eliminate it.
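
The wine example makes it easy to picture how these safeguards fit together. Below is a toy simulation in Python; the bottles, prices, and the random stand-in for the taster’s scores are all made up, and the point is only to show blinding, randomization, and repeated trials working in concert.

```python
import random

# Hypothetical bottles and prices; in a blinded tasting the taster never sees these.
wines = {"Wine A": 60, "Wine B": 15, "Wine C": 30}

# Blinding: give each wine an anonymous code so price expectations cannot bias scores.
codes = {wine: f"Sample {i + 1}" for i, wine in enumerate(wines)}

# Randomization: shuffle the pour order so position effects do not favor one bottle.
pour_order = list(wines)
random.shuffle(pour_order)

def blinded_rating(sample_code):
    """Stand-in for the taster's 1-10 score; they only ever see the code."""
    return random.randint(1, 10)

# Repeated trials: several ratings per wine, averaged, to damp random chance.
ratings = {codes[w]: [blinded_rating(codes[w]) for _ in range(3)] for w in pour_order}

# Un-blind only after all scores are recorded.
reveal = {code: wine for wine, code in codes.items()}
for code, scores in ratings.items():
    print(f"{reveal[code]} ({code}): mean score {sum(scores) / len(scores):.1f}")
```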

There is also an effort to work with data that is more representative of the general population. For example, as part of their study, Buolamwini and Dr. Gebru put together a new dataset of face images that includes a wider range of skin tones than existing databases (Buolamwini and Gebru). This more diverse dataset will help train facial recognition algorithms to accurately identify all kinds of faces, making it less likely that the algorithms will make mistakes on darker-skinned faces. Scientists, doctors, and community leaders are spearheading similar efforts to diversify participants in genetic and neuroscience research studies (Weinberger et al.). Because these studies include a broader range of individuals, their scientific insights and medical advances are more likely to benefit more people. But these efforts can be fraught in themselves. Even as it is important to diversify our data, human biodiversity risks becoming a modern-day stand-in for race science (Saini, Ch. 6), and researchers, well-meaning or otherwise, can easily stumble into problematic racial stereotypes (Benjamin). And, of course, diverse datasets are just one piece of the scientific process; there is still plenty of room for bias to obstruct equity in research and medicine.
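
Setting those caveats aside for a moment, the mechanical part of diversifying a dataset is simple to picture. Buolamwini and Dr. Gebru carefully curated and annotated their benchmark; the sketch below is not their pipeline, just a minimal illustration of enforcing equal representation per subgroup when assembling an evaluation set. The balanced_sample helper, subgroup labels, and file names are hypothetical.

```python
import random

def balanced_sample(images_by_subgroup, per_group, seed=0):
    """Draw the same number of examples from every subgroup, then shuffle."""
    rng = random.Random(seed)
    sample = []
    for subgroup, images in images_by_subgroup.items():
        if len(images) < per_group:
            raise ValueError(f"only {len(images)} images available for {subgroup}")
        sample.extend(rng.sample(images, per_group))
    rng.shuffle(sample)
    return sample

# Hypothetical usage: image file paths grouped by annotated subgroup.
images_by_subgroup = {
    "darker-skinned women":  ["dw_001.jpg", "dw_002.jpg", "dw_003.jpg"],
    "darker-skinned men":    ["dm_001.jpg", "dm_002.jpg", "dm_003.jpg"],
    "lighter-skinned women": ["lw_001.jpg", "lw_002.jpg", "lw_003.jpg"],
    "lighter-skinned men":   ["lm_001.jpg", "lm_002.jpg", "lm_003.jpg"],
}
evaluation_set = balanced_sample(images_by_subgroup, per_group=2)
print(evaluation_set)
```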

Another way to mitigate the effect of individual bias is to increase the number and diversity of scientists investigating a particular question. This approach accepts that each individual scientist will inevitably bring their unique perspective to their work. Rather than trying to remove this bias completely, it combines many different perspectives to approach objectivity in aggregate, a concept that Dr. Sandra Harding called “strong objectivity” (Harding). Critically, strong objectivity depends on diverse perspectives in every sense of the term (more viewpoints don’t do us much good if they are all very similar to each other).

Systemic Racism Makes Science Homogeneous

Unfortunately, as we saw for AI research, most scientific institutions have a long way to go to achieve strong objectivity, and some are regressing. At Stanford, Black, Indigenous, and people of color (BIPOC) are increasingly underrepresented as you go up the academic hierarchy, and this trend is consistent across academia and medicine (National Academies of Sciences, Engineering, and Medicine, Ch. 5, “Attacks on Diversity, Equity, and Inclusion in Education”). Black graduate students are less likely to continue on to an academic position than their white peers (Jackson et al.), and BIPOC faculty are less likely to be promoted and more likely to leave their academic institution (Fang et al.). Dr. Gebru herself is one of the many Black researchers who will not be returning to Google next year; she says that she was fired for calling out problematic minority hiring practices and biased search algorithms.

Why are BIPOC scientists underrepresented? Systemic racism at individual, institutional, and societal levels is likely to blame (Barber et al.): BIPOC students face greater adversity and receive less support than their white peers at elite, predominantly white institutions, while historically Black colleges and universities (HBCUs) are systematically underfunded (Gasman and Nguyen). When Black graduate students apply for post-doctoral positions, professors show both racial and gender biases when considering their CVs—faculty rank hypothetical Black, female post-doctoral candidates as less competent and less hireable than their white male counterparts even if their CVs are identical (Eaton et al.). Black professors receive worse scores on their grant proposals, making them less likely than their white colleagues to receive awards and funding (Erosheva et al.; Hoppe et al.; Ginther, Schaffer, et al.). Publications and citations are critical academic currency, yet BIPOC researchers are published less often and in less prestigious journals (Ginther, Basner, et al.), and they are less likely to be cited than their white colleagues (Bertolero et al.).

Moving Forward Towards Diversity in Science

There is a path towards more diverse (and therefore more objective) science. To achieve strong objectivity, Harding and others argue that we have an obligation, not only to proportionally include minority viewpoints, but to center and amplify those perspectives to make up for centuries of exclusion (Harding).

There are concrete actions we can implement right now to work towards strong objectivity. Educators can take responsibility for the achievement of historically marginalized students; design inclusivity-focused curricula; see each student’s inherent capacity for success; explicitly link STEM education and social justice; and adopt the many other best practices for helping Black students thrive in STEM that HBCUs have already perfected (Gasman and Nguyen). Academic institutions can celebrate the work of BIPOC scholars through symposia; elevate late-stage BIPOC scholars who are soon to enter the job market; and reward trainees and faculty who have shown a commitment to promoting diversity, equity, and inclusion. Graduate admissions and hiring committees can implement thoughtful, evidence-based strategies to improve diversity. The National Institutes of Health and other major funding agencies can reduce bias in grant funding by publicly acknowledging that racism exists in science; enacting policies to eliminate racial funding disparities; prioritizing diversity in scoring criteria, funding teams, and review panels; and implementing anti-racism training for their leaders, staff, and reviewers (Stevens et al.). Researchers can diversify their citations and include diversity statements in their published works to increase transparency around citation diversity (Zurn et al.).

These are just some of the many actions we can take to improve diversity in STEM and make our way towards more objectivity in science. But of course, the root system of human scientists is not the only factor that shapes the tree of science. Just as the soil, sunlight, wind, and water can shape a tree as it grows, science is also impacted by how it is conducted. To achieve strong objectivity, we also need a diversity of scientific approaches, rather than training every scientist to think and act in the same way. In our next piece, we will consider this other element of strong objectivity as we examine the different ways that science is produced.

Senior Editor: Tia Donaldson, M.A.

Photo: Avery Krieger averykriegerphotography.com

References and Further Reading

Barber, Paul H., et al. “Systemic Racism in Higher Education.” Science, vol. 369, no. 6510, Sept. 2020, pp. 1440–41.

Benjamin, Ruha. “Race for Cures: Rethinking the Racial Logics of ‘Trust’ in Biomedicine.” Sociology Compass, vol. 8, no. 6, Wiley, June 2014, pp. 755–69.

Bertolero, Maxwell A., et al. “Racial and Ethnic Imbalance in Neuroscience Reference Lists and Intersections with Gender.” bioRxiv, 12 Oct. 2020, doi:10.1101/2020.10.12.336230.

*Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of the 1st Conference on Fairness, Accountability and Transparency, edited by Sorelle A. Friedler and Christo Wilson, vol. 81, PMLR, 2018, pp. 77–91.

Eaton, Asia A., et al. “How Gender and Race Stereotypes Impact the Advancement of Scholars in STEM: Professors’ Biased Evaluations of Physics and Biology Post-Doctoral Candidates.” Sex Roles, vol. 82, no. 3-4, Springer Science and Business Media LLC, Feb. 2020, pp. 127–41.

Erosheva, Elena A., et al. “NIH Peer Review: Criterion Scores Completely Account for Racial Disparities in Overall Impact Scores.” Science Advances, vol. 6, no. 23, June 2020, p. eaaz4868.

Fang, D., et al. “Racial and Ethnic Disparities in Faculty Promotion in Academic Medicine.” JAMA: The Journal of the American Medical Association, vol. 284, no. 9, Sept. 2000, pp. 1085–92.

*Gasman, Marybeth, and Thai-Huy Nguyen. Making Black Scientists. Harvard University Press, 2019, doi:10.4159/9780674242364.

Ginther, Donna K., Jodi Basner, et al. “Publications as Predictors of Racial and Ethnic Differences in NIH Research Awards.” PloS One, vol. 13, no. 11, Nov. 2018, p. e0205929.

Ginther, Donna K., Walter T. Schaffer, et al. “Race, Ethnicity, and NIH Research Awards.” Science, vol. 333, no. 6045, Aug. 2011, pp. 1015–19.

Harding, Sandra. “Rethinking Standpoint Epistemology: What Is ‘Strong Objectivity?’” The Centennial Review, vol. 36, no. 3, JSTOR, 1992, pp. 437–70.

Hoppe, Travis A., et al. “Topic Choice Contributes to the Lower Rate of NIH Awards to African-American/Black Scientists.” Science Advances, vol. 5, no. 10, Oct. 2019, p. eaaw7238.

Jackson, Joanna R., et al. “Graduation and Academic Placement of Underrepresented Racial/Ethnic Minority Doctoral Recipients in Public Health Disciplines, United States, 2003-2015.” Public Health Reports, vol. 134, no. 1, 2019, pp. 63–71.

National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Policy and Global Affairs; Roundtable on Black Men and Black Women in Science, Engineering, and Medicine. The Impacts of Racism and Bias on Black People Pursuing Careers in Science, Engineering, and Medicine: Proceedings of a Workshop. Edited by Camara P. Jones et al., National Academies Press (US), 2020.

*Saini, Angela. Superior: The Return of Race Science. Beacon Press, 2019.

*Science Under the Scope: Full Series. 11 Mar. 2016, https://freerads.org/science-scope-full/.

Stevens, Kelly R., et al. “Fund Black Scientists.” Cell, vol. 184, no. 3, Feb. 2021, pp. 561–65.

Weinberger, Daniel R., et al. “Missing in Action: African Ancestry Brain Research.” Neuron, vol. 107, no. 3, Aug. 2020, pp. 407–11.

Zurn, Perry, et al. “The Citation Diversity Statement: A Practice of Transparency, A Way of Life.” Trends in Cognitive Sciences, vol. 24, no. 9, Sept. 2020, pp. 669–72.

* highly recommended reading