Discredited rape data overshadow what’s accurate

A Washington Times headline on Nov. 9 declared, “Pentagon ‘gay’ rape debacle: Report alleging male-on-male sexual trauma retracted.” In an almost unprecedented move, the American Psychological Association (APA) retracted an article it had published a week earlier in its journal, Psychological Services. “Preliminary Data Suggest Rates of Male Sexual Trauma May be Higher than Previously Reported” had claimed that the rate of rape for military males might be 15 times higher than acknowledged. The media trumpeted another rape crisis.

Why were the data retracted? An APA press release explained, “Although the article went through our standard peer-review process, other scholars have … raised valid concerns regarding the design and statistical analysis, which compromise the findings.” In other words, flawed methodology rendered the results useless. This often occurs with rape research, whether it is conducted inside the military, at police stations or on campuses.

The most remarkable aspect of the APA retraction may be that it was mentioned by mainstream media. Most discredited assault studies are invisibly corrected, which allows the original, sensational conclusions to be repeated as fact. For example, the campus survey “Denying Rape but Endorsing Forceful Intercourse” exploded across the airwaves in early 2015. One in three male students would rape, researchers maintained, “if nobody would ever know and there wouldn’t be any consequences.” Activists cried out for tighter controls on campus.

A correction by the study’s publisher went virtually unnoticed. It read, “[A]fter publication of the article, Dr. Edwards [the lead researcher] contacted the editorial office to explain that the data presented inadvertently duplicated a dataset that was previously published in Problems of Psychology in the 21st Century. Two similar datasets (the other focusing on rape perceptions and how they differ in individual vs. group judgments) were collected at the same time, but from different individuals.”

The conclusions of “Denying Rape” are based on a different set of participants than the ones the researchers had surveyed. The massive error invalidates the results. But the publisher claimed, “The error does not affect the results or conclusions.” Analyzing an entirely different population from the one surveyed does not “affect the results”?

Restating discredited results is a standard tactic of politically motivated researchers. Even as the APA acknowledged the unreliability of its data and peer review, it insisted that other articles in the same issue of the journal supported the claims of “Preliminary Data Suggest Rates.” “One article having some problems with its statistical analysis,” the editor argued, “should not undo the power and facts of the other 12 articles as a collection.”

Yes, it should. The retracted article was the anchor piece. Moreover, the supporting articles presumably went through the same incompetent review as “Preliminary Data Suggest Rates,” and they should be viewed with equal skepticism. An honest researcher would set the data dial back to zero.

More likely, the original conclusions will be used to forge law. One media source explained, “Effective treatment for male MST [military sexual trauma] should combat male rape myths and dispel the notion that military sexual trauma among men is rare, the researchers argue.” Researchers who argue for government policy on the basis of a suggestive and now debunked study are advancing vested interests. It is a path by which bad data become bad law. In the process, solid studies are elbowed out of the way.

The most egregious example of bad data driving out good is the rape statistic that one in five women will be attacked. The National Crime Victimization Survey (NCVS), conducted by the Department of Justice, has been called the gold standard of rape statistics because its sound methodology is applied consistently year after year. In December 2014, the Bureau of Justice Statistics issued a special report based on NCVS data. It found that, from 2007 to 2013, the mean annual rate at which women were sexually assaulted was 4.7 per 1,000. Multiplying that figure by four approximates the risk a female student confronts over four years on campus: roughly 20 per 1,000, or one in 50.
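As a back-of-envelope check (my arithmetic, not the BJS’s; it assumes the annual rate simply accumulates over four years):

$$
4 \times \frac{4.7}{1{,}000} = \frac{18.8}{1{,}000} \approx \frac{20}{1{,}000} = \frac{1}{50}
$$

A more careful compounding calculation, $1 - (1 - 0.0047)^4 \approx 0.0187$, gives essentially the same answer, so rounding to one in 50 is fair.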

The one in five figure comes from the National Intimate Partner and Sexual Violence Survey (NIPSVS) by the Centers for Disease Control, which defines sexual violence so broadly that unwanted and nonphysical experiences, such as an offensive joke, are included. And yet the NIPSVS stat drives out the NCVS one.

The APA and the media have a chance to shed honest light on a much-neglected area: male rape. Or they can create more myths. The track record is not heartening.

McElroy is a research fellow at the Independent Institute.
