Acceptance Criteria

Acceptance of a submission to TMLR should be based on positive answers to the following two questions.

Are the claims made in the submission supported by accurate, convincing and clear evidence?

This is the most important criterion. Assessing it involves evaluating the technical soundness of the work as well as the clarity of the narrative and arguments presented.

Any gap between claims and evidence should be addressed by the authors. Often, this will lead reviewers to ask the authors to provide more evidence by running additional experiments. However, this is not the only way to address such concerns: the authors can also simply adjust (i.e., reduce the scope of) their claims.

Would some individuals in TMLR's audience be interested in the findings of this paper?

This is arguably the most subjective criterion and therefore needs to be treated carefully. Generally, a reviewer who is unsure whether a submission satisfies this criterion should assume that it does.

Crucially, it should not be used as a reason to reject work that isn't considered “significant” or “impactful” because it doesn't achieve a new state of the art on some benchmark. Nor should it form the basis for rejecting work on a method considered not “novel enough”, as novelty of the studied method is not a necessary criterion for acceptance. We explicitly avoid these terms (“significant”, “impactful”, “novel”) and focus instead on the notion of “interest”. If the authors make it clear that there is something to be learned by some researchers in their area from their work, then the criterion of interest is considered satisfied. TMLR instead relies on certifications (such as “Featured” and “Outstanding”) to annotate submissions with (more speculative) assertions about significance or potential for impact.

Here is an example of how to apply the criteria above. A machine learning class report that re-runs the experiments of a published paper has educational value for the students involved. But if it doesn't surface generalizable insights, it is unlikely to be of interest to (even a subset of) the TMLR audience, and so could be rejected on this criterion. On the other hand, a proper reproducibility report that systematically studies the robustness or generalizability of a published method and lays out actionable lessons for its audience could satisfy this criterion.
