The IO500 Foundation Steering Committee Rules - Version 2.0

Submission Rules for the Research and Production Lists

The following rules are intended to ensure a fair comparison of IO500 results between systems and configurations; they serve to reduce mistakes and improve accuracy.

  1. Submissions must be made using the latest version of the IO500 application on GitHub, and all binaries must be built according to the instructions in Running.
  2. Read-after-write semantics: The system must be able to correctly read freshly written data from a different client node after the close operation on the writer has been completed.
    1. The stonewall flag must be set to 300 to ensure that all create/write phases run for at least 300 seconds.
      1. We defined a very high workload for all benchmarks that should satisfy this requirement, but you may have to set higher values.
      2. No edits may be made to the source code, including that of bundled tools such as IOR, beyond changing the allowed variables and adding commands to configure the storage system (e.g. setting striping parameters).
      3. An exception to this rule is possible for submitters who have a legitimate reason, by requesting an exception from the committee via
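As a sketch of where the stonewall setting lives, a typical io500 ini configuration looks like the following; the section and option names follow recent io500 releases and should be checked against the config-*.ini files in your checkout:

```ini
# Hypothetical excerpt from an io500 configuration file.
[global]
datadir = ./datafiles

[debug]
# Must remain at 300 seconds for a valid submission;
# lower values are permitted only for test runs.
stonewall-time = 300
```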
  3. The file names for the mdtest and IOR output files may not be pre-created.
  4. You must run all phases of the benchmark on a single storage system without interruption.
  5. There is no limitation on the number of storage nodes; the storage servers may optionally be co-located on the client nodes.
  6. All data must be written to persistent storage within the measured time for the individual benchmark, e.g. if a file system caches data, it must ensure that data is persistently stored before acknowledging the close.
  7. Data and metadata must be written in their entirety and not reduced based on their contents. The goal of the IO500 is to provide dataset-independent performance results; techniques such as deduplication, compression, and other lossless or lossy reductions would bias performance, as the IO500 benchmark uses partially predictable content that is not representative of real-world workloads.
  8. Submitting the results must be done in accordance with the instructions on our submission page. Please verify the correctness of your submission before you submit it.
  9. If a tool other than the included pfind is used for the find phase, then it must follow the same input and output behavior as the included pfind, and its source code must be included in the submission. It is not required to capture the list of matched files.
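For illustration, a replacement find tool would need to accept the same invocation the io500 script issues to pfind. The arguments below reflect the typical io500 find configuration and should be verified against your checkout; the placeholders are filled in by the script at run time:

```
# Hypothetical invocation; <workdir> and the timestamp file are
# created by the io500 script during the run.
mpiexec -np <N> ./pfind <workdir> -newer <timestamp-file> -size 3901c -name "*01*"

# A replacement must accept these arguments and report the number of
# matched files in the same way pfind does; capturing the list of
# matched files itself is not required.
```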
  10. Please also refer to the README documents in the GitHub repo.
  11. Please read the file for the new changes to the IO500 benchmark.
  12. Only submissions using at least 10 physical client nodes are eligible to win IO500 awards and at least one benchmark process must run on each client.
    1. We accept results on fewer nodes for documentation purposes, but they are not eligible for awards.
    2. Virtual machines can be used but the above rule must be followed. More than one virtual machine can be run on each physical node.
    3. For the 10 node challenge, there must be exactly 10 physical client nodes and at least one benchmark process must run on each node.
    4. The only exception to this rule is the find benchmark, which may optionally use fewer nodes/processes.
  13. Each of the four main phases (IOR easy and hard, mdtest easy and hard) has a directory which can be precreated and tuned (e.g. using tools such as "lfs setstripe" or "beegfs-ctl"); however, additional subdirectories within these directories cannot be precreated.
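For example, on a Lustre system the four per-phase directories might be precreated and tuned as follows; the paths and stripe settings here are illustrative assumptions, not recommended values, and must be adapted to your system:

```
# Precreate the four per-phase directories (subdirectories below
# these may NOT be precreated).
mkdir -p datafiles/ior-easy datafiles/ior-hard \
         datafiles/mdtest-easy datafiles/mdtest-hard

# Illustrative Lustre tuning via lfs setstripe:
lfs setstripe -c -1 datafiles/ior-easy        # stripe across all OSTs
lfs setstripe -c 4 -S 16M datafiles/ior-hard  # 4 stripes, 16 MiB stripe size
lfs setstripe -c 1 datafiles/mdtest-easy      # single stripe for metadata
lfs setstripe -c 1 datafiles/mdtest-hard
```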
  14. Submissions received after the posted deadline for a list may be accepted, at the discretion of the committee, upon request. If a late submission would be the winner in any category, then a key consideration for acceptance is ensuring there is enough time before publication of the final list. Any submission not accepted for the current list will automatically be submitted for the following list.

Please send any requests for changes to these rules or clarifying questions to our mailing list.

The IO500 committee will assign a reproducibility score to each submission.

This will affect a submission as follows:

  • Undefined/Limited Score - Lowest levels of reproducibility. The IO500 committee will work with submitters to try and gather more information and raise the score, but if additional information is not received the submission may be rejected.
  • Proprietary/Fully Reproducible Score - Eligible for IO500. Entries with a 'Proprietary' score are eligible for the Research List, and entries with a 'Fully Reproducible' score are eligible for the Production List (if they also meet the other requirements for the Production List).

Additional Eligibility Rules for the Production List

Each submission will appear on only one of the Research and Production lists. The following additional requirements must be satisfied for a submission to be eligible for the Production List.

The IO500 steering committee has final say on whether a submission meets the above requirements.

All information, including fault tolerance mechanisms, will be listed prominently on the IO500 list so it is clear to everyone what tradeoffs are employed to achieve the published score. Further, multiple submissions with different fault-tolerance/reliability mechanisms may be submitted and published in order to demonstrate the capabilities of a submission along different dimensions (although we may limit the total number that can be on a list).

For further details, please see both the IO500 Reproducibility Proposal and List Split Proposal.