Should we apply high standards to those who speak about high standards?

Dmytro Mishkin
Nov 19, 2020 · 4 min read


Let’s speak about ethics.

Recently we have submitted our position paper “ArXiving Before Submission Helps Everyone” to NeurIPS 2020 Workshop on “Navigating the Broader Impacts of AI Research” (NBIAIR).

We had two motivations for the submission.

1. The main motivation was to continue the discussion that started with our two blog posts, followed by the arXiv paper discussed on Twitter and Reddit.

2. The secondary motivation was to get meaningful feedback from the reviewers and, based on that feedback, possibly correct our position and improve our arguments.

The NBIAIR call for papers also specified that submissions may “include case studies, surveys, analyses, and position papers,” and we structured our submission as a position paper. Here is a quote from the workshop’s introduction:

This workshop therefore aims to examine how concerns with harmful impacts should affect the way the research community develops its research agendas, conducts its research, evaluates its research contributions, and handles the publication and dissemination of its findings. This event complements other NeurIPS workshops this year (e.g., [17] [18]) devoted to normative issues in AI and builds on others from years past (e.g., [19] [20] [21]), but adopts a distinct focus on the ethics of research practice and the ethical obligations of researchers.

And the quote from the Call for Proposals:

Challenges of AI research practice and responsible publication: What practices are appropriate for the responsible development, conduct, and dissemination of AI research? How can we ensure wide-spread adoption?

- Surveys of responsible research practice in AI research, including common practices around data collection, crowdsourced labeling, documentation and reporting requirements, declaration of conflict of interest, etc. [25] [26] [27] [28]

- Limitations and benefits of the conference-based publication format, peer review, and other characteristics of AI publication norms, including alternative proposals (e.g., gated or staged release) [29] [30] [31]

Given these preliminaries, we were surprised by the content of the reviews and meta-review (see the screenshots below), which rejected our paper for low novelty, insufficient citations, and (R2) “irrelevance to the workshop topic”.

Yet, against the good review practices recommended at CVPR, ICML, and NeurIPS, neither the reviewers nor the meta-reviewer provided a single reference that we supposedly failed to cite.

So we wrote a letter to the Program Chairs pointing out the issues with the reviews and asked them to reconsider the decision.

We were even more surprised by the answer from the PCs, who said:

Nevertheless, to preserve the integrity of the review process, we must ultimately defer to the reviews that we’ve received.

This is false. Quoting Jiri Matas, my supervisor and a co-author of this paper, who has served as a PC at CVPR and ECCV:

It’s the other way round — one of the PCs’ roles is to handle complaints and deal with reviewing mistakes. It is the responsibility of the PCs not to defer to reviews which are problematic or make false claims. The PCs’ oversight of the reviewing process is its integral part. A PC in this situation has many options, e.g. asking for an independent opinion, or asking the meta-reviewer to reconsider in light of the facts brought up by the authors.

Moreover, the PCs agreed with R2 that our paper is of low relevance to the workshop — which can, by itself, be a valid reason for rejection, no doubt about that.

Yet (you guessed it!) we were even more surprised that “An Open Review of OpenReview: A Critical Analysis of the Machine Learning Conference Review Process”, which covers the same topic as our submission, was accepted — i.e., it is apparently of high relevance to the workshop.

This looks like a double standard.

I am not saying that our paper cannot be rejected. It has flaws, some of which, for example, were pointed out by Ozan Sener in the discussion on Twitter.
However, a paper should not be rejected based on unbacked claims of failing to cite previous literature when the reviews do not name a single such reference.
And a paper should not be rejected for “low relevance to the workshop” while another paper of the same relevance is accepted to the same workshop.

I believe that one cannot credibly discuss “ethics” and “good standards” unless one behaves according to those standards.

P.S. Last but not least, I would like to thank the NBIAIR reviewers, meta-reviewer, and PCs for so vividly illustrating and supporting our point: one should post the work on arXiv to avoid being killed by low-quality reviews.

P.P.S. Interestingly, the discussion on social media turned out to be more constructive and deeper than the official workshop reviews. We are working on a revised version that addresses the issues raised on social media.

The reviews we got (screenshots): “Unclear if this is relevant to the workshop topic. They do no novel work.”
The meta-review: low novelty and insufficient literature review — in particular, not citing past literature making similar claims.
