A scientific paper consists of a constellation of artifacts beyond the document itself: software, data sets, scripts, hardware, evaluation data and documentation, raw survey results, mechanized test suites, benchmarks, and so on. Often, the quality of these artifacts is as important as that of the document itself. Building on the growing success of the Availability & Reproducibility Initiative (ARI) at previous SIGMOD conferences, we will again run an optional artifact evaluation process this year. All papers presented at SIGMOD 2025 -- accepted for publication in PACMMOD Vol. 2(6) and Vol. 3(1) -- are encouraged to participate in the artifact evaluation process, as are papers from the SIGMOD 2025 Industry Track.
Submissions should follow these guidelines so that the artifact associated with a paper can be evaluated for availability and functionality, along with the reproducibility of the paper's key results and claims. Please see this quick guide, which summarizes the key requirements and guidelines for your submission.
The artifact evaluation has two phases: a single-anonymous phase reviewing the overall quality of the artifact, and a non-anonymous phase reproducing the results, during which reviewers are invited to collaborate with authors. At the end of the process, for every successfully reproduced paper, the reviewers (and optionally some or all of the authors) co-author a reproducibility report that documents the process, the core reproduced results, and any success stories, i.e., cases in which the artifact's quality was improved during the reproducibility review.
All accepted SIGMOD research and industry papers are encouraged to participate in artifact evaluation.
Submitting the artifacts associated with your accepted SIGMOD paper is a two-step process.
The ARC recommends that you create a single web page at a stable URL that contains your artifact package. The ARC may contact you with questions about your artifacts as needed.
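As a purely illustrative sketch of what a "functional" artifact entry point might look like, the following minimal script reruns a set of experiments with a single command. The directory layout (experiments/, results/) and script names are hypothetical placeholders, not requirements of the ARI process.

    #!/usr/bin/env python3
    """Illustrative single-command entry point for an artifact package.

    Runs every experiment script under experiments/ and collects their
    outputs under results/, so a reviewer can reproduce the paper's key
    results in one step. All paths and names here are placeholders.
    """
    import pathlib
    import subprocess

    EXPERIMENTS_DIR = pathlib.Path("experiments")  # hypothetical layout
    RESULTS_DIR = pathlib.Path("results")

    def main() -> None:
        RESULTS_DIR.mkdir(exist_ok=True)
        for script in sorted(EXPERIMENTS_DIR.glob("*.py")):
            print(f"Running {script.name} ...")
            # Each experiment script is assumed to write its own output
            # into results/; check=True aborts on the first failure so a
            # reviewer sees exactly which step broke.
            subprocess.run(["python3", str(script)], check=True)
        print(f"Done. Outputs are under {RESULTS_DIR}/")

    if __name__ == "__main__":
        main()

A short README explaining hardware and software requirements and expected running times typically accompanies such a script.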
We are looking for members of the Availability and Reproducibility Committee (ARC), who will contribute to the SIGMOD 2025 Availability and Reproducibility review process by evaluating submitted artifacts. ARC membership is especially suitable for researchers early in their career, such as PhD students. Even as a first-year PhD student, you are welcome to join the ARC, provided you are working in a topic area covered by SIGMOD (broadly data management). You can be located anywhere in the world as all committee discussions will happen online.
As an ARC member, you will not only help promote the reproducibility of experimental results in systems research, but also get to familiarize yourself with research papers just accepted for publication at SIGMOD 2025 and explore their artifacts. For a given artifact, you may be asked to evaluate its public availability, its functionality, and/or its ability to reproduce the results from the paper. You will be able to discuss with other ARC members and interact with the authors as necessary, for instance if you are unable to get an artifact to work as expected. Finally, you will write a review that gives constructive feedback to the artifact's authors, discuss the artifact with fellow reviewers, and help award the paper its artifact evaluation badges. For each successfully reproduced artifact, you will co-author a reproducibility report with your co-reviewers (and optionally the authors of the paper) to document the process, the core reproduced results, and any success stories, i.e., cases in which the artifact's quality was improved during the reproducibility review.
We expect each member to evaluate 2-3 artifacts. The time needed to evaluate an artifact may vary with its computational cost (to be checked during the "Sanity-Check" period). ARC members are expected to allocate time to choose the artifacts they want to review, to read the chosen papers, to evaluate and review the corresponding artifacts, and to be available for online discussion until the artifact notification deadline. Please ensure that you have sufficient time and availability for the ARC during the evaluation period, September 10 to November 10, 2025. Please also ensure that you can carry out the evaluation independently, without sharing artifacts or related information with others, and that you limit all discussions to within the ARC. We expect that evaluations can be done on your own computer (any moderately recent desktop or laptop computer will do). In other cases, and to the extent possible, authors will arrange their artifacts to run in community research testbeds or will provide remote access to their systems (e.g., via SSH). Please also see this quick guide for reviewers.
If you are interested in taking part in the ARC, please complete this online self-nomination form.
Deadline: June 30, 2025, Anywhere on Earth
You can contact the chairs for any questions.
Chairs
Boris Glavic, University of Illinois, USA
Dirk Habich, TU Dresden, Germany
Holger Pirk, Imperial College London, UK
Manos Athanassoulis, Boston University, USA
Advisory Committee
Juliana Freire, New York University, USA
Stratos Idreos, Harvard University, USA
Dennis Shasha, New York University, USA
Availability and Reproducibility Committee (ARC)
Alexis Schlomer, CMU, USA
Amedeo Pachera, Lyon 1 University, France
Anas Ait aomar, Mohammed VI Polytechnic University, Morocco
Aneesh Raman, Boston University, USA
Anjiang Wei, Stanford University, USA
Anwesha Saha, Boston University, USA
Aoqian Zhang, Beijing Institute of Technology, China
Asim Nepal, University of Oregon, USA
Chi Zhang, Tsinghua University, China
Christos Panagiotopoulos, Harokopio University of Athens, Greece
Chuqing Gao, Purdue University, USA
Dakai Kang, University of California, Davis, USA
Daniel Kocher, University of Salzburg, Austria
Donghyun Sohn, Northwestern University, USA
Feng Yu, National University of Singapore, Singapore
Gaurav Tarlok Kakkar, Georgia Tech, USA
Giorgio Vinciguerra, University of Pisa, Italy
Gourab Mitra, Datometry, USA
Guozhang Sun, Northeastern University, China
Hengfeng Wei, Nanjing University, China
Hubert Mohr-Daurat, Imperial College London, United Kingdom
Ilin Tolovski, Hasso Plattner Institute, Germany
Jiashen Cao, Georgia Tech, USA
Johannes Pietrzyk, TU Dresden, Germany
Junchang Wang, Nanjing University of Posts and Telecommunications, China
Kai Chen, University of Virginia, USA
Kaiqiang Yu, Nanyang Technological University, Singapore
Kriti Goyal, Apple, USA
Kyle Deeds, University of Washington, USA
Kyoseung Koo, Seoul National University, South Korea
Lam-Duy Nguyen, Technical University of Munich, Germany
Lisa Ehrlinger, Hasso Plattner Institute, Germany
Longlong Lin, Southwest University, China
Lukas Schwerdtfeger, BIFOLD/TU-Berlin, Germany
Maxwell Norfolk, Penn State University, USA
Meng Li, Nanjing University, China
Minxiao Chen, Beijing University of Posts and Telecommunications, China
Mo Sha, Alibaba Cloud, Singapore
Moe Kayali, University of Washington, USA
Muhammad Farhan, Australian National University, Australia
Mukul Singh, Microsoft Research, USA
Niccolò Meneghetti, University of Michigan-Dearborn, USA
Nihal Balivada, University of Oregon, USA
Ouael Ben Amara, University of Michigan-Dearborn, USA
Qi Lin, Arizona State University, USA
Qiangqiang Dai, Beijing Institute of Technology, China
Qihao Cheng, Tsinghua University, China
Qilong Li, Southern University of Science and Technology, China
Qing Chen, University of Zurich, Switzerland
Qiuyang Mang, UC Berkeley, USA
Rico Bergmann, TU Dresden, Germany
Sabyasachi Behera, University of Illinois Chicago, USA
Sadeem Alsudais, King Saud University, Saudi Arabia
Saeed Fathollahzadeh, Concordia University, Canada
Sebastian Baunsgaard, Technische Universität Berlin, Germany
Serafeim Chatzopoulos, ATHENA Research Center, Greece
Sheng Yao, Hong Kong University of Science and Technology, Hong Kong
Shistata Subedi, University of Oregon, USA
Shuhao Liu, Shenzhen Institute of Computing Sciences, China
Steven Purtzel, Humboldt-Universität zu Berlin, Germany
Supawit Chockchowwat, Google, USA
Suyang Zhong, National University of Singapore, Singapore
Sven Helmer, University of Zurich, Switzerland
Tapan Srivastava, The University of Chicago, USA
Ted Shaowang, The University of Chicago, USA
Thomas Bodner, Hasso Plattner Institute, University of Potsdam, Germany
Tianxing Wu, Southeast University, China
Varun Jana, Penn, USA
Vassilis Stamatopoulos, ATHENA Research Center, Greece
Viktor Sanca, Oracle, USA
Viraj Thakkar, Arizona State University, USA
Wei Zhou, Shanghai Jiao Tong University, China
Wenshao Zhong, TikTok Inc., USA
William Zhang, Carnegie Mellon University, USA
Xiang Lian, Kent State University, USA
Xianghong Xu, ByteDance, China
Xiao He, ByteDance, China
Xiaodong Li, The University of Hong Kong, Hong Kong
Xiaojun Dong, University of California, Riverside, USA
Xiaozhen Liu, University of California, Irvine, USA
Xilin Tang, Cornell, USA
Yafan Huang, University of Iowa, USA
Yan Pang, University of Virginia, USA
Yangshen Deng, Southern University of Science and Technology, China
Yesdaulet Izenov, Nazarbayev University, Kazakhstan
Yi Li, Beijing Jiaotong University, China
Yichuan Wang, UC Berkeley, USA
Yihong Zhang, University of Washington, USA
Yiming Qiao, Tsinghua University, China
Yin Lou, Ant Group, USA
Yingli Zhou, The Chinese University of Hong Kong, Shenzhen, China
Yujie Hui, The Ohio State University, USA
Yuke Li, University of California, Merced, USA
Yuvaraj Chesetti, Northeastern University, USA
Yuxuan Zhu, University of Illinois Urbana-Champaign, USA
Zechao Chen, Chongqing University, China
Zezhong Ding, USTC, China
Zhanghan Wang, New York University, USA
Zheng Wang, Huawei Singapore, Singapore
Ziyang Men, University of California, Riverside, USA
Ziyang Zhang, Southern University of Science and Technology, China