
ACM SIGMOD 2021 Reproducibility

October 10, 2020

SIGMOD Reproducibility 2021 is about to start. All authors of research papers in SIGMOD 2020 are invited to submit an entry by December 18, 2020.

The material needed for reproducibility will be submitted through the CMT website. The CMT submission should contain two PDF files: the original accepted paper and a PDF with at least the following information: (1) a link to download the code, data, and scripts; and (2) a step-by-step description of how to use the scripts for (a) code compilation, (b) data generation, and (c) repeating the paper's experiments. In addition, please include a link to the ACM Digital Library page for the paper and a detailed description of the hardware used. A readme template can be found here.

October 8, 2020

The ACM and the ACM Digital Library are updating their reproducibility badging system, and soon after that all reproduced papers will be directly accessible via the ACM DL. The main difference between the old system and the new system is that the label naming of Reproduced and Replicable is reversed, bringing it closer to the standard terminology used in reproducibility efforts. Hence, the label awarded to papers whose results are reproduced under the same conditions to support the findings of the paper will be called "ACM Results Reproduced".

Until this process is completed, all successfully reproduced papers from the ACM SIGMOD 2020 Reproducibility effort (papers presented in SIGMOD 2019) can be found here.

README

Quick guides for authors and reviewers.

What is SIGMOD Reproducibility?

SIGMOD Reproducibility has three goals:

  • Highlight the impact of database research papers.
  • Enable easy dissemination of research results.
  • Enable easy sharing of code and experimentation set-ups.

In short, the goal is to assist in building a culture where sharing results, code, and scripts of database research is the norm rather than the exception. The challenge is to do this efficiently, which means building technical expertise on how to do better research by making it repeatable and shareable. The SIGMOD Reproducibility committee is here to help you with this.

The SIGMOD Reproducibility effort works in coordination with the PVLDB Reproducibility effort to encourage the database community to develop a culture of sharing and cross-validation.


Why should I be part of this?

You will be making it easy for other researchers to compare with your work and to adopt and extend your research. This instantly means more recognition for your work, directly visible through ACM badges, and higher impact.


Taking part in the SIGMOD Reproducibility process enables your paper to receive the ACM Results Reproduced label. This label is embedded in the PDF of your paper in the ACM Digital Library.

There is also an option to host your data, scripts, and code in the ACM Digital Library to make them available to a broad audience, which earns the paper the ACM Artifacts Available label.


ACM Results Reproduced label

The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.

ACM Artifacts Available label

Author-created artifacts relevant to this paper (data, code, scripts) have been placed on a publicly accessible archival repository. A DOI or link to this repository along with a unique identifier for the object is provided.

Both the ACM Results Reproduced label and the ACM Artifacts Available label are visible in the ACM digital library.

Successful papers will be advertised on DBworld, and the list of award winners is maintained on the main SIGMOD website. In addition, the official ACM Digital Library maintains all reproduced SIGMOD papers and, when possible, the experimentation material of papers with available artifacts. The official SIGMOD Reproducibility website serves as a centralized location where researchers can learn about the reproducibility process and find all reproduced papers. We will continue to enhance the functionality and material on this website to make it attractive and useful for the community, so stop by often!


ACM SIGMOD Most Reproducible Paper Award

This is a new award that recognizes the best papers in terms of reproducibility. Every year, up to three most reproducible papers are picked, and the awards are presented during the awards session of the following year's SIGMOD conference. Each award comes with a $750 honorarium. The criteria are as follows:

  • Reproducibility (ideal: all results can be verified)
  • Ease of Reproducibility (ideal: just works)
  • Portability (ideal: Linux, macOS, Windows)
  • Replicability (ideal: can change workloads, queries, data and get similar behavior with published results)

The awards are selected by the Reproducibility Awards Committee, chaired by Dennis Shasha. The committee is formed after all submissions are received so that there are no conflicts. Decisions are made based on scores that reviewers assign to each paper for all factors described above.




How much overhead is it?

At first, making research shareable seems like an extra overhead for authors. You just had your paper accepted in a major conference; why should you spend more time on it? The answer is to have more impact!

If you ask any experienced researcher in academia or in industry, they will tell you that they in fact already follow the reproducibility principles on a daily basis! Not as an afterthought, but as a way of doing good research.

Maintaining easily reproducible experiments simply makes working on hard problems much easier: you can repeat your analysis for different data sets, different hardware, different parameters, etc. Like other leading system designers, you will save significant amounts of time because you will minimize the set-up and tuning effort for your experiments. In addition, such practices will help bring new students up to speed after a project has lain dormant for a few months.

Ideally reproducibility should be close to zero effort.


Criteria and Process

Availability

Each submitted experiment should contain: (1) a prototype system provided as a white box (source, configuration files, build environment) or a fully specified black-box system; (2) input data: either the process to generate the input data should be made available or, when the data is not generated, the actual data itself or a link to the data should be provided; (3) the set of experiments (system configuration and initialization, scripts, workload, measurement protocol) used to produce the raw experimental data; and (4) the scripts needed to transform the raw data into the graphs included in the paper.

Reproducibility

The central results and claims of the paper should be supported by the submitted experiments, meaning we can recreate result data and graphs that demonstrate behavior similar to that shown in the paper. Typically, when the results are about response times, the exact numbers will depend on the underlying hardware. We do not expect to get results identical to the paper's unless we happen to get access to identical hardware. Instead, what we expect to see is that the overall behavior matches the conclusions drawn in the paper, e.g., that a given algorithm is significantly faster than another one, or that a given parameter affects the behavior of a system negatively or positively.
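
As an illustration of what "similar behavior" means in practice, a check over the reproduced measurements can compare relative orderings rather than absolute numbers. The sketch below is only a hypothetical example; the results file and the column and algorithm names are placeholders, not part of the official process.

    # Minimal sketch: verify that the *relative* behavior reported in the paper
    # (e.g., "the new algorithm is faster than the baseline") also holds in the
    # reproduced measurements, without expecting identical absolute numbers.
    # The CSV name and columns ("algorithm", "runtime_ms") are hypothetical.
    import csv
    from collections import defaultdict

    def mean_runtimes(path):
        totals, counts = defaultdict(float), defaultdict(int)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["algorithm"]] += float(row["runtime_ms"])
                counts[row["algorithm"]] += 1
        return {algo: totals[algo] / counts[algo] for algo in totals}

    reproduced = mean_runtimes("reproduced_results.csv")
    # The paper's claim, expressed as an ordering rather than as exact values.
    assert reproduced["new_algorithm"] < reproduced["baseline"], \
        "Reproduced runs do not show the speedup claimed in the paper"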

Replicability

One important characteristic of strong research results is how flexible and robust they are in terms of the parameters and the tested environment. For example, testing a new algorithm for several input data distributions, workload characteristics, and even hardware with diverse properties provides a complete picture of the properties of the algorithm. Of course, a single paper cannot always cover the whole space of possible scenarios; typically the opposite is true. For this reason, we expect authors to provide a short description as part of their submission about different experiments that one could run to test their work on top of what already exists in the paper. Ideally, the scripts provided should enable such functionality so that reviewers can test these cases. This allows reviewers to assess how "replicable" the results of the paper are under different conditions. Replicability is not mandatory for getting the ACM Reproducibility and Availability labels. It is, though, the ultimate goal of this effort and an essential criterion for the Most Reproducible Paper Award.

We do not expect the authors to perform any additional experiments on top of the ones in the paper. Any additional experiments submitted will be considered and tested, but they are not required. As long as the flexibility report shows that there is a reasonable set of existing experiments, a paper meets the flexibility criteria. What counts as reasonable will be judged on a case-by-case basis depending on the topic of each paper; in practice, all accepted papers in top database conferences meet these criteria. You should see the flexibility report mainly as a way to describe the design space covered by the paper, as well as the design space that would be interesting to cover in future analysis and that may inspire others to work on open problems triggered by your work.

Process

Each paper is reviewed by one database group. The process happens in communication with the reviewers so that authors and reviewers can iron out any technical issues that arise. The end result is a short report which describes the result of the process. For successful papers the report will be hosted in the ACM digital library along with the data and code.

The goal of the committee is to properly assess and promote database research! While we expect authors to do their best to prepare a submission that works out of the box, we know that sometimes unexpected problems appear and that in certain cases experiments are very hard to fully automate. The committee will not dismiss submissions if something does not work out of the box; instead, they will contact the authors to get their input on how to properly evaluate their work.




Packaging Guidelines

Every case is slightly different. Sometimes the reproducibility committee can simply rerun software (e.g., rerun some existing benchmark). At other times, obtaining raw data may require special hardware (e.g., sensors in the Arctic). In the latter case, the committee will not be able to reproduce the acquisition of raw data, but you can instead provide the committee with a protocol, including detailed procedures for system set-up, experiment set-up, and measurements.

Whenever the acquisition of raw data can be reproduced, the following information should be provided.

Environment

Authors should explicitly specify the OS and tools that should be installed as the environment. This specification should include dependencies on specific hardware features (e.g., 25 GB of RAM are needed) and dependencies within the environment (e.g., the required compiler must be run on a specific version of the OS).
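
For example, a submission might ship a small check script that fails early when the environment does not match the specification. The sketch below is only illustrative; the minimum RAM, required tools, and Linux-only assumption are placeholders that a submission would replace with its own requirements.

    # Illustrative environment check (the thresholds and tool names are examples):
    # fail fast if the machine or toolchain does not match the specification.
    import platform
    import shutil
    import sys

    MIN_RAM_GB = 25                     # e.g., "25 GB of RAM are needed"
    REQUIRED_TOOLS = ["gcc", "cmake"]   # hypothetical toolchain

    def ram_gb():
        try:
            with open("/proc/meminfo") as f:   # Linux-specific
                kb = int(next(l for l in f if l.startswith("MemTotal")).split()[1])
            return kb / 1024 / 1024
        except OSError:
            return None

    if platform.system() != "Linux":
        sys.exit("This artifact was tested on Linux only.")
    ram = ram_gb()
    if ram is not None and ram < MIN_RAM_GB:
        sys.exit(f"At least {MIN_RAM_GB} GB of RAM are required.")
    for tool in REQUIRED_TOOLS:
        if shutil.which(tool) is None:
            sys.exit(f"Required tool not found on PATH: {tool}")
    print("Environment check passed.")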

System

System setup is one of the most challenging aspects when repeating experiments. System setup will be easier to conduct if it is automatic rather than manual. Authors should test that the system they distribute can actually be installed in a new environment. The documentation should detail every step in system setup:

  • How to obtain the system?
  • How to configure the environment if need be (e.g., environment variables, paths)?
  • How to compile the system? (existing compilation options should be mentioned)
  • How to use the system? (What are the configuration options and parameters to the system?)
  • How to make sure that the system is installed correctly?

The above tasks should be achieved by executing a set of scripts provided by the authors that will download the needed components (systems, libraries), initialize the environment, check that the software and hardware are compatible, and deploy the system.
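
A rough sketch of such a setup script is shown below; the repository URL, build commands, and self-test flag are placeholders that each submission would replace with its own components.

    # Sketch of an author-provided setup script: obtain, build, and verify the
    # system. All commands, URLs, and paths below are placeholders.
    import subprocess

    def run(cmd, cwd=None):
        print("+", " ".join(cmd))
        subprocess.run(cmd, cwd=cwd, check=True)

    # 1. Obtain the system (placeholder repository URL).
    run(["git", "clone", "https://example.org/author/system.git", "system"])
    # 2. Compile it, making the compilation options used in the paper explicit.
    run(["cmake", "-B", "build", "-DCMAKE_BUILD_TYPE=Release"], cwd="system")
    run(["cmake", "--build", "build", "-j"], cwd="system")
    # 3. Check that the system is installed correctly with a quick smoke test.
    run(["./build/system_binary", "--self-test"], cwd="system")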

Tools

The committee strongly suggests using ReproZip to streamline this process. ReproZip can be used to capture the environment, the input files, the expected output files, and the required libraries. A detailed how-to guide (installing, packing experiments, unpacking experiments) can be found in the ReproZip Documentation. ReproZip will help both the authors and the evaluators to seamlessly rerun experiments. If using ReproZip to capture the experiments proves to be difficult for a particular paper, the committee will work with the authors to find the proper solution based on the specifics of the paper and the environment needed. More tools are available here: https://reproduciblescience.org/reproducibility-directory/.

Experiments

Given a system, the authors should provide the complete set of experiments to reproduce the paper's results. Typically, each experiment will consist of the following parts.

  1. A setup phase where parameters are configured and data is loaded.
  2. A running phase where a workload is applied and measurements are taken.
  3. A clean-up phase where the system is prepared to avoid interference with the next round of experiments.

The authors should document (i) how to perform the setup, running and clean-up phases, and (ii) how to check that these phases complete as they should. The authors should document the expected effect of the setup phase (e.g., a cold file cache is enforced) and the different steps of the running phase, e.g., by documenting the combination of command line options used to run a given experiment script.

Experiments should be automatic, e.g., via a script that takes a range of values for each experiment parameter as arguments, rather than manual, e.g., via a script that must be edited so that a constant takes the value of a given experiment parameter.
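
The sketch below illustrates this structure with a small parameterized driver; the helper scripts, binary name, and parameter names are hypothetical placeholders.

    # Hypothetical experiment driver: setup, running, and clean-up phases, with
    # the parameter ranges passed as arguments instead of hard-coded constants.
    import argparse
    import subprocess

    def setup(scale):
        # e.g., load data at the requested scale and enforce a cold file cache
        subprocess.run(["./scripts/load_data.sh", str(scale)], check=True)

    def run_one(scale, threads, out):
        with open(out, "a") as f:
            subprocess.run(["./build/system_binary", "--scale", str(scale),
                            "--threads", str(threads)], stdout=f, check=True)

    def cleanup():
        # e.g., drop generated data so the next round starts from a clean state
        subprocess.run(["./scripts/drop_data.sh"], check=True)

    if __name__ == "__main__":
        p = argparse.ArgumentParser()
        p.add_argument("--scales", type=int, nargs="+", default=[1, 10, 100])
        p.add_argument("--threads", type=int, nargs="+", default=[1, 4, 16])
        p.add_argument("--out", default="results/runtime.csv")
        args = p.parse_args()
        for scale in args.scales:
            setup(scale)
            for threads in args.threads:
                run_one(scale, threads, args.out)
            cleanup()

With a driver like this, a reviewer can rerun, say, only the smallest scale by passing a different --scales value, without editing the script.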

Graphs and Plots

For each graph in the paper, the authors should describe how the graph is obtained from the experimental measurements. The submission should contain the scripts (or spreadsheets) that are used to generate the graphs. We strongly encourage authors to provide scripts for all their graphs using a tool such as Gnuplot or Matplotlib. Here are two useful tutorials for Gnuplot: a brief manual and tutorial, and a tutorial with details about creating eps figures and embedding them using LaTeX; and two for Matplotlib: examples from SciPy, and a step-by-step tutorial discussing many features.
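
As an example of the kind of plotting script we have in mind, the Matplotlib sketch below regenerates a single figure from a results file; the file name, column names, and output figure name are placeholders.

    # Illustrative plot script: rebuild one of the paper's figures from the raw
    # measurements. File, column, and figure names are placeholders.
    import csv
    import matplotlib
    matplotlib.use("Agg")            # render to a file; no display needed
    import matplotlib.pyplot as plt

    threads, runtime = [], []
    with open("results/runtime.csv", newline="") as f:
        for row in csv.DictReader(f):
            threads.append(int(row["threads"]))
            runtime.append(float(row["runtime_ms"]))

    plt.plot(threads, runtime, marker="o", label="new_algorithm")
    plt.xlabel("Threads")
    plt.ylabel("Runtime (ms)")
    plt.legend()
    plt.savefig("figure3.pdf")       # hypothetical figure name from the paper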

Ideal Reproducibility Submission

At a minimum, the authors should provide a complete set of scripts to install the system, produce the data, run the experiments, and produce the resulting graphs, along with a detailed Readme file that describes the process step by step so that it can be easily reproduced by a reviewer.

The ideal reproducibility submission consists of a master script that:

  1. installs all systems needed,
  2. generates or fetches all needed input data,
  3. reruns all experiments and generates all results,
  4. generates all graphs and plots, and finally,
  5. recompiles the sources of the paper

... to produce a new PDF for the paper that contains the new graphs. It is possible!
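
A minimal sketch of such a master script, assuming the individual step scripts exist under the (hypothetical) names shown, could look as follows:

    # Minimal master-script sketch: run every step end to end. The individual
    # script names are assumptions; a submission would substitute its own.
    import subprocess

    STEPS = [
        ["python", "scripts/install_systems.py"],   # 1. install all systems needed
        ["python", "scripts/fetch_data.py"],        # 2. generate or fetch all input data
        ["python", "scripts/run_experiments.py"],   # 3. rerun all experiments
        ["python", "scripts/make_plots.py"],        # 4. regenerate all graphs and plots
        ["latexmk", "-pdf", "paper/paper.tex"],     # 5. recompile the paper sources
    ]

    for step in STEPS:
        print("+", " ".join(step))
        subprocess.run(step, check=True)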


Best Practices

A good source of dos and don’ts can be found in the ICDE 2008 tutorial by Ioana Manolescu and Stefan Manegold (and a subsequent EDBT 2009 tutorial).

They include a road-map of tips and tricks on how to organize and present code that performs experiments, so that an outsider can repeat them. In addition, the ICDE 2008 tutorial discusses good practices on experiment design more generally, addressing, for example, how to choose which parameters to vary and in what domain.

A discussion about reproducibility in research including guidelines and a review of existing tools can be found in the SIGMOD 2012 tutorial by Juliana Freire, Philippe Bonnet, and Dennis Shasha.


Reproducibility Committee

Chair: Manos Athanassoulis, Boston University [email]

Advisory Committee

Juliana Freire, New York University, USA

Stratos Idreos, Harvard University, USA

Dennis Shasha, New York University, USA

Committee

Angelos-Christos Anadiotis, Ecole Polytechnique, France

Raja Appuswamy, Eurecom, France

Joy Arulraj, Georgia Institute of Technology, USA

Dmytro Bogatov, Boston University, USA

Renata Borovica-Gajic, University of Melbourne, Australia

Shimin Chen, Institute of Computing Technology, Chinese Academy of Sciences, China

Raul Castro Fernandez, University of Chicago, USA

Thomas Heinis, Imperial College, UK

Asterios Katsifodimos, Delft University of Technology, Netherlands

Andreas Kipf, Massachusetts Institute of Technology, USA

Wolfgang Lehner, TU Dresden, Germany

John Paparrizos, University of Chicago, USA

Ilia Petrov, Reutlingen University, Germany

Mirek Riedewald, Northeastern University, USA

Yingjun Wu, Amazon, USA

Dong Xie, Penn State University, USA

Huanchen Zhang, Snowflake, USA

Kostas Zoumpatianos, Snowflake, USA







More questions

Dispute: Can I dispute the reproducibility results?

You will not have to! If any problems appear during the reproducibility testing phase, the committee will contact you directly, so we can work with you to find the best way to evaluate your work.

Rejects: What happens if my work does not pass?

Although we expect that we can help all papers pass the reproducibility process, in the rare event that a paper does not go through the process successfully, this information will not be made public in any way. So there is no downside to submitting your work!

Quick Guides for Authors

  1. The chair of the committee will put you in touch with one of the reviewers.
  2. Provide access to your code through a repository or an archive.
  3. Provide access to the necessary datasets either through (i) links to data sources, (ii) direct access to archives, and/or (iii) data generators.
  4. You can also provide alternate data inputs (e.g., smaller ones), especially for experiments where the original datasets lead to very long running times.
  5. The most important part of the submission is a main script that brings all of the above together: include in your code a main script that will install all needed systems, download or generate all data, and run all experiments from your paper. If for technical reasons this is too hard to achieve in your case, provide multiple scripts that perform the above steps individually, along with examples of the input needed for each script/step.
  6. Provide a detailed specification of the hardware needed to run these experiments. You may also provide alternate hardware specifications that will work in case the original is not available. If the required hardware is rare, you may work with the reviewer to provide direct remote access to your hardware, if possible.
  7. Provide an estimate of the time needed to run the experiments on the tested hardware. If the time needed is significant – e.g., more than a couple of days – provide individual times per experiment/graph as well. If the process includes downloading data or lengthy system installation steps, please provide time estimates for those as well (for your hardware).
  8. Provide a detailed specification of the software stack needed (OS, kernel, libraries).
  9. The committee strongly encourages you to use ReproZip to package your submissions. ReproZip automatically detects all dependencies for your experiments and packs your experiment along with all necessary data files, libraries, environment variables and options. ReproZip also supports Vagrant-built virtual machines and Docker containers, making it possible to run your experiments on different architectures. Make sure to inform your reviewer that you are using ReproZip.

A great submission contains code, data, and a main script that repeats all experiments, along with a short (1-2 page) PDF note that contains examples of how to use the script, as well as hardware/software specifications and time estimates.

The reviewer will contact you directly if any issues occur so you can work together to reproduce the results.

For successful papers, the process will result in a short report that you may edit together with the reviewer, while the paper will receive the ACM Results Reproduced label, visible in the ACM Digital Library, and materials (code, reports, etc.) will be maintained online and accessible to the rest of the community.

This is a quick guide. Please read the main SIGMOD reproducibility webpage for more details, best practices, and access to tutorials.


Quick Guides for Reviewers

  1. The chair of the committee will suggest papers for you to review. Before accepting to review a paper, take a look at the experiments, specifically at the hardware and software required, to ensure you can fulfill the basic requirements.
  2. The chair of the committee will put you in touch with the contact author of the paper you agreed to review. If necessary, initiate a discussion with the authors to ensure you are on the same page regarding specific details of the experiments that may not be clear.
  3. After you receive access to code, datasets, and scripts, quickly verify that the submission is complete and has enough documentation before proceeding further.
  4. Encourage authors to use ReproZip, which makes it easier to handle software dependencies.
  5. When an issue appears, immediately contact the authors to help resolve it. The authors’ collaboration is both welcome and expected because they know and understand the specifics of their algorithms and code. In most cases, issues have to do with automating a script (e.g., making it path independent) or dealing with dependencies.
  6. The goal is both to go through the process of reproducing the results and to provide enough feedback to the authors so that they can create a shareable instance of their code and experiments.
  7. When running experiments, eliminate any interference in your systems; i.e., unless otherwise stated by the authors, make sure that the machines used are dedicated to the reproducibility experiments.
  8. If the results you gathered during the reproducibility process do not support the claims, contact the authors to address this. This may be due to issues that have to do with the set-up of the experiments or the hardware specifications.
  9. Write a short report (1-2 pages) to document your findings and whether you were able to reproduce the results. Please see here for a reproducibility report LaTeX template (and here for its pdf version). If you can reproduce only part of the results, please indicate that in your document. Also, make a note of any specific details regarding the set-up or analysis that are important. Communicate this report to the authors to make sure they think this is a fair result, and take into account any suggestions.
  10. If a paper cannot be reproduced contact the chair.

This is a quick guide. Please read the main SIGMOD reproducibility webpage for more details and good practices we expect authors to follow.
