Call for Papers
Please submit papers through OpenReview via this link.
We invite high-quality submissions of new work on the following topics at the intersection of artificial intelligence and agent-based modelling:
- Large-scale agent-based modelling;
- Multi-agent learning;
- Multi-agent inverse learning;
- Continual learning;
- Program synthesis;
- Machine programming;
- Large-scale automatic differentiation;
- Simulation-based inference;
- Large-scale probabilistic inference;
- Simulation intelligence;
- Probabilistic programming;
- Explainable AI, causal inference and discovery;
- ML-aware domain-specific languages for agent-based modelling;
- Any other machine learning approaches to the creation, calibration, and validation of agent-based models.
To be eligible, all submissions must be directly relevant to agent-based modelling.
Accepted papers will be presented during joint virtual poster sessions and made publicly available as non-archival reports, allowing future submission to archival conferences or journals. All submissions should follow the ICML format (except for the page-length requirements, which differ by track). Submissions must not have been previously accepted to the main ICML 2022 conference. The review process will be double-blind.
You can submit papers to one of the following tracks:
Papers Track
Work that is in progress, published, or deployed.
Submissions in the papers track should detail projects at the intersection of artificial intelligence and agent-based modelling. Such projects can include academic research as well as deployed results from academic institutions, startups, industry, public institutions, etc. Paper submissions should be up to eight pages, excluding references, acknowledgements, and supplementary material.
Proposals Track
Work that is in its early stages, and/or descriptions of ideas for future work.
Submissions in the proposals track constitute early-stage or proposed work at the intersection of artificial intelligence and agent-based modelling. This submission category will be subject to very strict reviewing standards, meaning that the ideas presented need to be thoroughly justified.
Proposal submissions should be up to four pages, excluding references, acknowledgements, and supplementary material.
Benchmarks Track
Work detailing suitable benchmark environments for research at the intersection of AI and agent-based modelling.
This track invites submissions detailing benchmark environments or datasets for research at the intersection of artificial intelligence and agent-based modelling. Benchmark environments may be based on real-world or synthetic data. Submissions need to justify why the described benchmark tasks or datasets are suitable for measuring progress in the field, describe the benchmark tasks, and provide an easily accessible introduction to the underlying domain-specific questions to be answered.
Benchmark submissions should be up to four pages, excluding references, acknowledgements, and supplementary material, and should ideally be accompanied by open-source implementations.
Please indicate your preferred track by adding “paper”, “proposal”, or “benchmark” as a keyword in your OpenReview submission form.
Guidelines for Reviewers
Please see here.