What Can We Do to Improve Peer Review in NLP?

Abstract

Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that part of the problem is that reviewers face a poorly defined task that forces apples-to-oranges comparisons. As a community experienced in annotation, we can improve at least that.

Publication
In Findings of the Association for Computational Linguistics: EMNLP 2020
Date