During my first internship I worked at a lab, and at one point a researcher asked all the staff to be kind enough to read his draft and spot mistakes. I obliged, not understanding that he only meant typos. I asked him about an equation in his paper that I did not understand, because I could not link some of its terms to the rest of the text. He answered that he had just copied it from another paper, that it only made sense in the original publication, and that he did not really understand it either, only enough to know it had to be there.
On another internship I worked for a researcher who bragged about having published almost 10 papers on one of his algorithmic discoveries without ever revealing it fully (which allowed him to start a for-profit company around it).
You know, the whole publication + review + reproduction thing really helped science become a more solid process, but we need something more elaborate now. Probably some kind of reputation system that would not just be the number of citations by friends and colleagues.
> You know, the whole publication + review + reproduction thing really helped science become a more solid process, but we need something more elaborate now.
No, we don’t. Increasing demands for rigour in pre-publication peer review are why publication times from submission in sociology and economics reach and exceed two years, and why papers that start off thirty pages long end up at eighty after adding robustness checks and citing every tangentially relevant paper in the literature. We know that post-publication peer review works perfectly well, because it was the norm until after WWII, when the rise of state funding of big science, and the accompanying ass-covering, form-filling proceduralism, made pre-publication review popular.
As always, only replication counts, whether that’s checking that an experiment has the results claimed or that an argument follows from its premises.
We certainly don’t need to lean on reputation more. Science isn’t law; arguments from authority aren’t valid.
I think the accessibility of publication has led to a broader sampling of the normal curve, and as a result the overall quality of scientific literature is in decline. I imagine that just 50 years ago university was for a select elite, whether by nature or nurture, and the cost of running and printing journals pre-internet ensured prioritization of a scarce resource. Nowadays publishing is relatively cheap, and that, coupled with what I imagine is a modern use of publication counts as a KPI, means lots of noise.
I felt it personally in grad school. If you objectively observe your own work and that of your peers in such an environment, you may notice that there's a reason none of you got into the Ivy League.
Actually, I totally botched the intent of my first message. My point is that the publication process is used as a reputation metric, which is something the research world (not science itself) needs. It needs it for valid purposes, and using citation counts and impact factor for it is a hack that is now becoming very noisy due to the various ways those metrics can be gamed.
The publication+review+reproduction process is fine for discovering scientific facts, I totally agree.
> We know that post-publication peer review works perfectly well, because it was the norm until after WWII, when the rise of state funding of big science, and the accompanying ass-covering, form-filling proceduralism, made pre-publication review popular.
Do you know if there's anything written about the history of the modern scientific process, specifically the rise of state funding of science? I'm particularly interested in when academics became essentially required to bring in external funding. I've only read offhand remarks like this and don't feel I have the full story.
This is an active area of interest in History of Science scholarship these days, which is steadily dismantling a lot of myths about how long peer review has existed and where it came from. It is in fact linked to the need to bring in funds from big grants agencies during the Cold War. You might try, for example, Melinda Baldwin, Scientific Autonomy, Public Accountability, and the Rise of “Peer Review” in the Cold War United States, which has a lot of references to recent scholarship: https://www.journals.uchicago.edu/doi/pdfplus/10.1086/700070
But there is zero reward for publishing replication results. Not novel enough, so you won't get published. And if you're unable to replicate a result, then maybe you just did it wrong, or there was a small trick in their code that they left out of the paper, etc.
Having replication studies/papers be on par with "innovation" studies/papers at academic conferences and in journals. Or maybe not on par, but treating them as something worthy of publication.
The idea is that you post your paper on some common repository like arxiv. A "reviewer entity" or RE, which is a self-organized group with an interest in a sub-field, can be invited to review it. The RE accepts or rejects it. The citation graph then propagates back to the RE as a reputation score (see the sketch after the list below).
The problems with his proposal:
1. Authors are not anonymous, which can bias prestigious REs toward avoiding papers from unknown authors.
2. Everything still depends on citations, most of which are worthless, since people mostly cite to fill obligatory related-work sections, not because they are actually using your work.
3. There is no obligation for REs to review anything. Doing reviews is tiresome, and busy people may just avoid it unless there is a clear obligation/honor system involved.
4. Prestigious REs will be invited to review by everyone, causing a highly uneven distribution of the workload.
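For concreteness, here is a minimal sketch of how the citation graph could propagate into an RE reputation score, assuming a PageRank-style iteration. The function names, the damping factor, and the paper-to-RE mapping are all illustrative assumptions, not anything from the actual proposal:

```python
# Hypothetical sketch: propagate reputation to REs over a citation graph.
# The PageRank-style iteration, damping factor, and iteration count are
# arbitrary assumptions, not part of the original proposal.

def paper_scores(citations: dict[str, list[str]],
                 damping: float = 0.85,
                 iters: int = 50) -> dict[str, float]:
    """citations maps each paper to the list of papers it cites."""
    papers = set(citations) | {p for refs in citations.values() for p in refs}
    score = {p: 1.0 / len(papers) for p in papers}
    for _ in range(iters):
        nxt = {p: (1 - damping) / len(papers) for p in papers}
        for src, refs in citations.items():
            if refs:
                share = damping * score[src] / len(refs)
                for dst in refs:
                    nxt[dst] += share  # each citation passes credit downstream
        score = nxt  # papers with no outgoing citations simply drop their mass
    return score

def re_reputation(reviewed_by: dict[str, str],
                  scores: dict[str, float]) -> dict[str, float]:
    """An RE's reputation is the summed score of the papers it accepted."""
    rep: dict[str, float] = {}
    for paper, re_name in reviewed_by.items():
        rep[re_name] = rep.get(re_name, 0.0) + scores.get(paper, 0.0)
    return rep

# Toy example with three papers and two hypothetical REs.
cites = {"A": ["B", "C"], "B": ["C"], "C": []}
s = paper_scores(cites)
print(re_reputation({"A": "RE-ml", "B": "RE-ml", "C": "RE-theory"}, s))
```

A real system would also have to deal with self-citation rings and the obligatory-citation noise from point 2; this toy version treats every citation as equally meaningful.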
I believe the OpenReview system was started based on LeCun's ideas, but needless to say, we haven't found a good system that resolves the above issues. More importantly, any change to the system needs to come from community leaders who sit on conference organizing committees, or from the developers of sites like arXiv and Google Scholar. Unfortunately, the last two have been in virtual stasis for a long time.
It's astonishing how little investment goes into our main engines of scientific progress. Compare the number of developers working on arXiv or Google Scholar with the number working on Twitter!
> You know, the whole publication + review + reproduction thing really helped science become a more solid process,
I think the model is still good. The examples you gave really do fail at the reproduction part. One of the difficulties here is that reproduction is difficult and costly, and no one is willing to pay for it. A common metric for a paper is "can a smart PhD student replicate the experiment from the contents of the paper?" How many advisors ask their students to replicate a paper? How many students can?
I had this experience with multi-objective evolutionary algorithms. I worked with a researcher who was hung up on the idea of "rotationally invariant search operators." This idea made little sense in the first place. To the extent that it meant anything at all, the properties he ascribed to the search operators were more or less the opposite of their actual behavior.
Chapter 6 is the only place you'll find me talking about rotational invariance, and unfortunately I had to recite the B.S. party line there. If you read any of it, read chapter 5 and chapter 2.