For years, traditional “peer review” has come under fire. A jury of three experts, the peer reviewers, assesses each article and recommends only those it judges to represent the most significant new work.
At many elite scientific journals, fewer than 10 percent of the articles submitted are accepted. Many of the rejected articles eventually travel down the “food chain” to be published in a plethora of less prestigious (and less noticed) specialty journals.
A year ago, the respected US journal Science was forced to retract two papers it had published about stem cells. The articles had been submitted by a South Korean team led by Hwang Woo-Suk.
Peer reviewers, as well as the editors, had failed to detect the fraud. In general, peer reviewers, themselves researchers pressed for time, do not attempt to re-create experiments and rarely ask to see the raw data supporting a paper’s conclusions.
While peer review is expected to separate the wheat from the chaff, it’s “slow, expensive, profligate of academic time, highly subjective, prone to bias, easily abused, poor at detecting gross defects, and almost useless for detecting fraud,” summed up one critic in BMJ, the British medical journal, in 1997.