Why does transparency matter? As the so-called "replication crisis" places the methods, measures, and culture of science under ever-greater scrutiny, what can social scientists do to maintain faith in their research? Such questions fascinate Simine Vazire, associate professor of psychology and transparency advocate.
In a recent survey of 1,500 scientists from multiple fields, published in Nature, 90 percent of respondents believed there to be a crisis of reproducibility in their field. More than half believed that crisis to be “significant.”
For Vazire, an interest in refining measures, improving methods, and increasing transparency has been central to her research from the start. Her substantive area of research is personality and self-knowledge. “From graduate school onwards,” she says, “I had to struggle with this question of what is the best way to measure what someone’s really like, independent of what they say they’re like?”
Overcoming flaws
Vazire was already asking questions about methodological rigor when, in 2011, two major events brought questions of transparency and replicability home to the field of psychology. First, a paper claiming evidence of extrasensory perception was published in a top journal. Because this claim was unexpected, it prompted scrutiny that revealed serious flaws.
“You could say, How did it get published?” Vazire says. “But when you looked that closely at other papers, many of them had the same flaws. So then we had to think, Maybe we should be just as unsure about those other papers as we are about this one.”
Published the same year, “False-Positive Psychology” presented evidence that common methods in the field allowed for cherry-picking of results. “That paper made us say, 'Wow, the way we’ve been doing things provides almost no evidence for our conclusions.'”
Vazire saw a need to focus on transparency. “The movement caught my attention pretty early on, because it spoke to some of the same questions I’ve been struggling with about how do we actually know things, about epistemology, and about the right way to do science, and flaws in our methods and how to overcome them. It became clear to me that we needed to deal with this before we were really going to be able to make progress.”
Pausing to think
For Vazire, transparency improves methodological rigor by exerting pressure on researchers while revealing to journal reviewers and editors whether or not a study’s findings are well supported.
“If we had greater transparency, people would no longer be able to convince themselves and others of extraordinary claims based on small samples, because they’d have to report everything they did.”
The biggest challenge facing the replication movement is, Vazire believes, “to get people to pause and think that maybe what we’re talking about is what they’re doing. It’s so hard for people to accept that the thing you think is really solid, really good — the thing you were taught to do — might actually be a bad practice.”
“It’s not intentional,” Vazire says. “So often, when people are doing those things, they don’t know that they’re doing them, or they don’t know that doing them increases the false positive rate by as much as it does. So it feels innocent — like you are just describing what you observed in your data.”
The problem is that by presenting “only the most exciting, most significant stuff, you’re finding patterns that are actually noise.” Lack of awareness — compounded by disciplinary pressures to generate novel, exciting, or counter-intuitive results — can encourage practices like “p-hacking”: mining the data for any statistically significant pattern. When hundreds or thousands of possible relationships are tested, some are bound to look significant by chance alone.
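The inflation of false positives under multiple testing can be seen in a minimal simulation (a sketch using only the Python standard library; the 1.98 cutoff is an approximation of the two-sided .05 critical value for these sample sizes):

```python
import random
import statistics

random.seed(0)

def fake_study(n=50):
    """Run one 'study' comparing two groups drawn from the SAME
    distribution, so any apparent effect is pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # Welch's t statistic for the difference in means
    va, vb = statistics.variance(a), statistics.variance(b)
    t = (statistics.mean(a) - statistics.mean(b)) / ((va / n + vb / n) ** 0.5)
    # Approximate two-sided test at alpha = .05
    return abs(t) > 1.98

n_tests = 1000
hits = sum(fake_study() for _ in range(n_tests))
print(f"{hits} of {n_tests} pure-noise comparisons came out 'significant'")
```

Roughly 5 percent of these pure-noise comparisons clear the significance threshold, so a researcher who quietly tests 100 relationships and reports only the “hits” will almost always have something to report: the chance of at least one false positive among 100 independent tests at the .05 level is 1 − 0.95¹⁰⁰, or about 99.4 percent.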
Rethinking “publish or perish”
More ominously, the “publish or perish” pressures of academic life — the same pressures that engender non-transparent practices — may also incentivize dishonesty. While instances of outright fraud — such as the now-infamous case of Diederik Stapel — make for splashy headlines, they are far less common than unintentional bad practices.
Still, Vazire says, “I don’t think fraud is nearly as rare as people thought it was. I think we underestimated how much pressure our incentive structure exerts on people.”
For Vazire, a key to the solution is giving researchers stronger incentives to be transparent than to p-hack.
“If we require more transparency, then it will be obvious all the gymnastics you had to do to get your beautiful counterintuitive result, and then it will be less impressive.”
Taking steps
To this end, journals can encourage authors to provide replication packages; to pre-register studies (that is, register research rationale, design and hypotheses before collecting data); or to explain how sample size was determined. Such steps may undercut the bias toward striking results.
Changes to review and editorial practice could improve transparency in publication. Blinding (currently more common for reviewers than for editors) can eliminate subconscious biases based on an author’s status. In her own experience as an editor, Vazire has found partly blind editorial review eye-opening, even scary at first. “Sometimes, you write the desk rejection letter and you’re about to hit send and you look and see who the author is, and it’s like, Oh no! I’ll get in trouble. But that feeling is actually validating to me. Or I go to send the letter and I see who the author is and think, Yeah, I might have been negatively influenced against them if I’d known, and I’m really glad to know that didn’t affect my decision.”
Vazire also feels that encouraging transparency in original research will improve disciplinary culture around replication. Currently, replicating others’ work is viewed with suspicion. As Vazire discusses in a blog post, "I Have Found the Solution and It Is Us," replication studies are held to higher standards of transparency than original studies. Authors (often suspected of being “out to get” the original researchers) are expected to pre-register; sample sizes are interrogated; data are expected to be public. If we hold replicators to such a high standard, wonders Vazire, why don't we feel the need to do this with original studies?
Changing the culture
Greater transparency, Vazire thinks, will facilitate debate and discussion of published work. She notes, “With more transparency, I think, will come a more open culture of criticism and debate. If I post all of my measures, post all my data, someone else can come along and say, Oh you found this and that’s cool, but I looked in this other subgroup of your data and actually found the opposite effect so that would be a really cool question to test in the future.”
Finally, hiring, promotion, and award committees could update their practices, helping to change the culture by “praising transparency, so that any costs can be outweighed by positive attention.” Normalizing such scrutiny could promote faster, more responsible accumulation of knowledge.
“If we considered a published paper something that not only makes a claim but also is accountable for that claim, then it would become more normal for others to come along and reanalyze it. I think that would be really good for science.”
Returning to the core
As the transparency movement broadens its reach, it is increasingly difficult for a given field to claim immunity — at least, until its own methods have faced scrutiny.
“In fields where this hasn’t hit yet,” Vazire notes, “we don't know if there’s not a problem, or if we just don’t know about it yet. I suspect no field of science will be untouched by this.”
But Vazire also sees science’s crisis as an opportunity to return scientific inquiry to its core aim of accumulating knowledge — even when that process sometimes involves returning null results, proving oneself wrong, or investing time to see if the same study can be reproduced. In other words, a call for transparency is also a call to reclaim the soul of science.
Many scientists interpret that call as an accusation of dishonest practice. But when Vazire talks to people outside of science, she is struck that most take transparency for granted. “They say, 'That’s what I thought scientists were doing all along!' So, to people who feel these measures are draconian or extreme I would say: 'Think about when you were first learning about science. What did you imagine scientists did? Did you imagine scientists were trying to keep their process secret and just publishing their most beautiful results? Or did you think that they were open?'”
Vazire wants science to more closely resemble the approach she teaches her undergraduate students. “I tell them that science is about falsification and trying to prove yourself wrong. But what we actually do is try to defend our little finding against any attack. I think we’ve lost our way in trying to defend our current practices. We need to take a step back and think about what science is supposed to be about.”
Simine Vazire is currently editor-in-chief of Social Psychological and Personality Science and a senior editor at Collabra: Psychology. She devotes a substantial part of her work to questions of transparency and meta-science: through publications such as “The N-factor: evaluating the quality of empirical journals with regard to sample size and statistical power” (with co-author R. Chris Fraley, an alumnus of UC Davis’ doctoral program in Social-Personality Psychology); through her work with the Association for Psychological Science; and through her blog, Sometimes I’m Wrong. Learn more about Simine Vazire at her website.
Watch videos from the Making Social Science Transparent conference, hosted by the UC Davis Institute for Social Sciences in April 2016.
— Phyllis Jeffrey