IO500

The IO500 benchmark is developed together with the community, and its development is ongoing. It is essentially a benchmark suite bundled with execution rules, harnessing existing and trusted open-source benchmarks.

The goal of the benchmark is to capture user-experienced performance. It aims to be:

  • Representative
  • Understandable
  • Scalable
  • Portable
  • Inclusive
  • Lightweight
  • Trustworthy

The Lists

We publish multiple lists for each BoF at SC and ISC, as well as maintaining the current, most up-to-date lists. We do not intend to modify a list after its release date except in exceptional circumstances; however, we do allow list metadata to be improved and clarified at the request of the submitters. We publish a Historic List of all submissions received and multiple lists filtered from the Historic List. We maintain a Full List, which is the subset of submissions that were valid according to the set of list-specific rules in place at the time of the list's publication.

Our primary lists are Ranked Lists which show only opted-in submissions from the Full List and only the best submission per storage system. We have two ranked lists: the IO500 List for submissions which ran on any number of client nodes and the 10 Node Challenge list for only those submissions which ran on exactly ten client nodes.

In summary, for each BoF, we have the following lists:

  • Historic list: all submissions ever received
  • Full list: the subset of the Historic list of submissions that are currently valid
  • IO500 List: the subset of the Full list of submissions marked for inclusion in the IO500 ranked list, showing only one highest-scoring result per storage system
  • 10 Node Challenge List: the subset of the Full list of submissions run on exactly ten client nodes and marked for inclusion in the 10 Node Challenge ranked list, showing only one highest-scoring result per storage system

Please note that the Ranked Lists only show the best submission for each storage system, so if a storage system has multiple submissions, only the one with the highest overall score is shown in the Ranked Lists. All submissions appear in the Full and Historic Lists. However, please note that at the semi-annual BoFs we present the IO500 Bandwidth and IO500 Metadata awards based on the highest bandwidth and metadata scores. In some cases, the highest bandwidth and metadata scores belong to submissions that do not have the highest overall score and are therefore only visible in the Full List.
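As an illustration of how a Ranked List relates to the Full List, the sketch below groups submissions by storage system and keeps only the highest-scoring, opted-in entry per system, with an optional ten-node filter for the 10 Node Challenge. The record fields (`system`, `score`, `opted_in`, `nodes`) are hypothetical placeholders rather than the schema of our database; the published lists are generated by our own tooling.

```python
def ranked_list(full_list, ten_node_only=False):
    """Keep only the best opted-in submission per storage system."""
    best = {}
    for sub in full_list:
        if not sub["opted_in"]:
            continue                                  # only opted-in submissions are ranked
        if ten_node_only and sub["nodes"] != 10:
            continue                                  # 10 Node Challenge: exactly ten client nodes
        current = best.get(sub["system"])
        if current is None or sub["score"] > current["score"]:
            best[sub["system"]] = sub                 # remember the highest overall score per system
    return sorted(best.values(), key=lambda s: s["score"], reverse=True)

# Illustrative submissions only (not real results).
submissions = [
    {"system": "SystemA", "score": 140.2, "opted_in": True,  "nodes": 32},
    {"system": "SystemA", "score": 120.5, "opted_in": True,  "nodes": 10},
    {"system": "SystemB", "score":  95.0, "opted_in": False, "nodes": 10},
    {"system": "SystemC", "score":  88.7, "opted_in": True,  "nodes": 10},
]

print(ranked_list(submissions))                       # IO500 List: best per system
print(ranked_list(submissions, ten_node_only=True))   # 10 Node Challenge List
```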

Workloads

The benchmark covers various workloads and computes a single score for comparison. The workloads are:

  • IOEasy: Applications with well optimized I/O patterns
  • IOHard: Applications that require a random workload
  • MDEasy: Metadata/small objects
  • MDHard: Small files (3901 bytes) in a shared directory
  • Find: Finding relevant objects based on patterns

The individual performance numbers are preserved and accessible via the web interface or the raw data. This allows other relevant metrics to be derived.
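For orientation, here is a minimal sketch of how a single score can be derived from the individual phase results, assuming the commonly described IO500 scoring scheme: a bandwidth score (GiB/s) as the geometric mean of the IOR phases, a metadata score (kIOPS) as the geometric mean of the mdtest and find phases, and an overall score as the geometric mean of those two. The phase names and numbers below are illustrative; the benchmark source and execution rules remain the authoritative definition.

```python
from math import prod

def geomean(values):
    """Geometric mean of a sequence of positive numbers."""
    return prod(values) ** (1.0 / len(values))

# Illustrative phase results only (not real measurements).
bandwidth_gib_s = {            # IOEasy/IOHard write and read phases
    "ior-easy-write": 12.3,
    "ior-easy-read":  15.1,
    "ior-hard-write":  0.8,
    "ior-hard-read":   1.9,
}
metadata_kiops = {             # MDEasy/MDHard phases and find
    "mdtest-easy-write":  55.0,
    "mdtest-easy-stat":  210.0,
    "mdtest-easy-delete": 48.0,
    "mdtest-hard-write":  12.0,
    "mdtest-hard-stat":   95.0,
    "mdtest-hard-read":   30.0,
    "mdtest-hard-delete":  9.0,
    "find":              800.0,
}

bw_score = geomean(list(bandwidth_gib_s.values()))    # GiB/s
md_score = geomean(list(metadata_kiops.values()))     # kIOPS
total    = (bw_score * md_score) ** 0.5               # geometric mean of the two sub-scores

print(f"BW={bw_score:.2f} GiB/s  MD={md_score:.2f} kIOPS  SCORE={total:.2f}")
```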

We are in the process of establishing a procedure to extend the current workloads with further meaningful metrics.

Further Reading

Using the IO500 Logo

We welcome the promotion of the IO500 using the logo.

IO500 Logo License Terms

The IO500 logo is copyrighted by us but may be used under the following conditions:

  1. The logo is used for its intended purpose, namely to promote the IO500. You may use it:
    1. together with results obtained by running the IO500 benchmark, accompanied by a statement that you are using the benchmark
    2. together with opinions about the benchmark
  2. The appearance of the logo shall not be modified. You may change the file format and resolution.
  3. The logo must be placed on a white or gray background.

If you are in doubt, contact the steering board.

Download the Logo