Reviewer Guidelines - AI4ABM Workshop @ ICLR 2023
Please make your best effort to keep your identity hidden from the paper’s authors. Likewise, we have done our best to keep author identities hidden from you. Please do not search for preprints or other materials that the authors may have submitted non-anonymously.
If you suspect or know the identity of the authors of any paper that you have been assigned to review, please assess honestly whether this affects your ability to conduct an impartial review. If it does, please contact the Program Chair as soon as possible so you may be assigned a different paper.
Immediate steps to take
After receiving your reviewing assignments, please look over all the papers you are assigned, check for possible conflicts of interest, and confirm that you feel comfortable reviewing each assigned paper.
If you believe a paper may be plagiarized, or may otherwise violate submission standards, please contact the Program Chair, but continue reviewing as normal in the meantime.
What to watch out for
We at AI4ABM value anything that is useful for the community, as long as it is technically sound. Minor flaws should not normally lead to rejection; instead, focus on the good and stimulating aspects of each paper. Beating the state of the art is not, in and of itself, required for acceptance; rather, focus on the potential impact and novelty of the submission.
Any submission exceeding a length of 4 pages (excluding references and supplementary material) is counted as a full paper submission. Full papers should detail projects at the intersection of artificial intelligence and agent-based modeling. Such projects can include academic research, as well as deployed results from academic institutions, startups and industry, public institutions, etc. Full paper submissions should be up to eight pages excluding references, acknowledgements, and supplementary material.
Papers of up to 4 pages fall under either the proposal category or the benchmark category. Submissions that fall between or outside these categories may be permissible as long as they contribute to the field.
Submissions in the proposal track constitute early-stage or proposed work at the intersection of artificial intelligence and agent-based modeling. This submission category will be subject to very strict reviewing standards, meaning that the ideas presented need to be thoroughly justified. Proposal submissions should be up to four pages excluding references, acknowledgements, and supplementary material.
The benchmark track asks for submissions that detail benchmark environments or datasets for research at the intersection of artificial intelligence and agent-based modeling. Benchmark environments may be based on real-world or synthetic data. Submissions need to justify why the described benchmark tasks or datasets are suitable for measuring progress in the field, detail the benchmark tasks themselves, and provide an easily accessible introduction to the underlying domain-specific questions to be answered. Benchmark submissions should be up to four pages excluding references, acknowledgements, and supplementary material, and should ideally be accompanied by open-source implementations.
Relevance to the field
We are inclusive regarding the scope of work considered relevant to the field. Any submission that treats some form of multi-agent setting with AI methodology is considered relevant, as long as it meets the reviewing standards defined above. Submissions do not need to specifically point out the way in which they are relevant to AI4ABM research.
How to write a good review
For information on how to write a good review, please see the official ICLR guidelines. In general, reviews can be short and concise, but they need to be sufficiently complete.