Hacker News

Interesting discussion on the ICLR openreview, resulting in a reject:

https://openreview.net/forum?id=PdauS7wZBfC



The review is great: it contains all the interesting points and counterpoints, in a much more succinct format than the article itself.


Another paper [1] was well received, but I want to point out that ICLR should really have an industry track.

The type of research in [1] (an exhaustive analytic study of various parameters in RL training) is clearly beyond a typical academic environment, and probably beyond normal industry labs as well. Note the paper was from Google Brain.

The study consumed a great deal of researcher time and computing time. It's no doubt very useful and valuable. But I don't think such work should be judged by the same pool of reviewers as work from normal universities.

[1] https://openreview.net/forum?id=nIAxjsniDzg


While it wouldn't hurt, I don't think it is necessary. As with any large ML conference, many reviewers are in industry. I don't know the mix of industry to academic reviewers, but I would not be surprised if it were biased towards industry-supported research.


Copied from that URL, the final meta-review, which 1) summarizes the other reviews and 2) describes the rationale for rejection:

```
This paper extends recent work (Whittington & Bogacz, 2017, Neural computation, 29(5), 1229-1262) by showing that predictive coding (Rao & Ballard, 1999, Nature neuroscience 2(1), 79-87) as an implementation of backpropagation can be extended to arbitrary network structures. Specifically, the original paper by Whittington & Bogacz (2017) demonstrated that for MLPs, predictive coding converges to backpropagation using local learning rules. These results were important/interesting as predictive coding has been shown to match a number of experimental results in neuroscience and locality is an important feature of biologically plausible learning algorithms.

The reviews were mixed. Three out of four reviews were above threshold for acceptance, but two of those were just above. Meanwhile, the fourth review gave a score of clear reject. There was general agreement that the paper was interesting and technically valid. But, the central criticisms of the paper were:

(1) Lack of biological plausibility: The reviewers pointed to a few biologically implausible components of this work. For example, the algorithm uses local learning rules in the same sense that backpropagation does, i.e., if we assume that there exist feedback pathways with symmetric weights to feedforward pathways, then the algorithm is local. Similarly, it is assumed that there are paired error neurons, which is biologically questionable.

(2) Speed of convergence: The reviewers noted that this model requires many more iterations to converge on the correct errors, and questioned the utility of a model that involves this much additional computational overhead.

The authors included some new text regarding biological plausibility and speed of convergence. They also included some new results to address some of the other concerns. However, there is still a core concern about the importance of this work relative to the original Whittington & Bogacz (2017) paper. It is nice to see those original results extended to arbitrary graphs, but is that enough of a major contribution for acceptance at ICLR? Given that there are still major issues related to (1) in the model, it is not clear that this extension to arbitrary graphs is a major contribution for neuroscience. And, given the issues related to (2) above, it is not clear that this contribution is important for ML. Altogether, given these considerations, and the high bar for acceptance at ICLR, a "reject" decision was recommended. However, the AC notes that this was a borderline case.
```

The core reason is that the proposed model lacks biological plausibility; and even setting that weakness aside, the model is computationally more intensive than backpropagation.

I HAVE NOT read the paper, but the review seems mostly based on "feeling"; i.e., the reviewers feel that this work is not above the bar. Note that I am not criticizing the reviewers here: in my past reviewing career (maybe 100+ papers, which I did until 6 years ago), most submissions were junk. For the ones that were truly good work, checking all the boxes (new result, hard problem, solid validation), it was easy to accept.

Yet a few other papers, which all seemed to fall into the "feeling" category, looked right in every way but were always borderline. And the review results can vary substantially based on the reviewers' own backgrounds.



