Please describe the purpose and general usage of the submitted system. This would include the types of typical applications it supports (e.g., defense applications, molecular dynamics, benchmarking, system test, systems research), and the general use and purpose of the data generated by the applications running on it.
Please provide the deployment timeframe of the submitted system, or for on-demand cloud systems, the general period over which it is deployed and destroyed.
Please describe the availability of the system to users and who its most regular users are.
How is this storage software available? (e.g., commercially, open-source, not publicly available) Note that if the storage software is neither open-source nor commercially available, a general description is still requested, but this will limit the submission’s reproducibility score.
Can anyone download/purchase this software?
List either the product webpage or the open-source repository of the exact software used in IO500.
Give any and all additional details of this specific storage deployment (e.g., the type of storage server product, such as IBM ESS or DDN SFA400X2; use of ext4 or another file system on each storage node; dual-connected storage media to the storage servers).
State here that you provided all scripts/documentation that would allow someone else to reproduce your environment and attempt to achieve an IO500 score similar to that of the submitted system.
NOTE: provide all files, documentation, and scripts that would enable a user to build your environment and deploy the custom scripts, software, or config files once they have obtained the appropriate storage system hardware and software. These may be included in the io500.tgz or in the extraCodes upload on the submission form.
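As one way to prepare such an upload, the supplementary material can be bundled into a single archive for the extraCodes field. The sketch below is illustrative only; every file name (io500.sh, config-custom.ini) is a hypothetical stand-in for the actual scripts and configs from your run.

```shell
#!/bin/sh
set -eu

# Stand-in files for illustration only; a real submission would copy in
# the actual driver script and configuration used for the run.
mkdir -p demo
printf '#!/bin/sh\n./io500 config-custom.ini\n' > demo/io500.sh
printf '[global]\ndatadir=/mnt/fs/io500\n'      > demo/config-custom.ini

# Gather everything a third party would need into one directory...
mkdir -p demo/extraCodes
cp demo/io500.sh demo/config-custom.ini demo/extraCodes/

# ...then archive it for the extraCodes upload and verify its contents.
tar -C demo -czf demo/extraCodes.tgz extraCodes
tar tzf demo/extraCodes.tgz
```

The final `tar tzf` listing is a quick self-check that nothing was left out of the archive before uploading.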
Several examples include:
Does your system have a single point of failure as defined by “Definition 7” of a Production System? Please describe all mechanisms that provide fault tolerance for the submitted storage system. Be specific to your submission, not general storage system capabilities.
Please list any additional information needed to determine whether this system has a single point of failure.
Please provide a description of how the IO500 benchmark was executed, e.g., submitted via the system scheduler (such as Slurm) as a job on the compute cluster that first ran a setup process to configure the clients and file system and then started the full benchmark.
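For instance, a scheduler-driven execution might resemble the following Slurm batch script. This is a minimal sketch, not taken from any actual submission: the partition name, node counts, helper script, and config file name are all hypothetical.

```shell
#!/bin/bash
#SBATCH --job-name=io500
#SBATCH --nodes=10              # hypothetical client node count
#SBATCH --ntasks-per-node=8
#SBATCH --partition=work        # hypothetical partition name

# Setup phase: one task per node configures the client and file system
# (mount options, striping, etc.); setup-clients.sh is a hypothetical
# site-specific helper.
srun --ntasks-per-node=1 ./setup-clients.sh

# Then launch the full benchmark with the configuration file used for
# the submission (file name is a placeholder).
srun ./io500 config-used-for-submission.ini
```

A description at this level of detail (scheduler, task layout, setup step, launch command) is usually enough for a reviewer to understand how the run was orchestrated.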
During the IO500 benchmark execution was the system entirely dedicated to running the benchmark or were there other jobs running in the same cluster and storage system?
Please describe all caching mechanisms in the clients and servers that were utilized during the IO500 run. This could include caching in any storage medium (e.g., SSD, RAM).
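As one concrete example of a cache-related disclosure, some sites drop the Linux page cache on each client between setup and the measured phases. A minimal sketch of that step is shown below; it assumes Linux clients, and the write to /proc/sys/vm/drop_caches requires root, so the sketch only reports what it would do when run unprivileged.

```shell
#!/bin/sh
# Flush dirty pages, then drop the page cache, dentries, and inodes on a
# Linux client node. Dropping caches requires root privileges.
sync
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches
    MSG="caches dropped"
else
    MSG="not root: skipping write to /proc/sys/vm/drop_caches"
fi
echo "$MSG"
```

Whether such a step was (or was not) performed is exactly the kind of detail this question asks submitters to state explicitly.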
A few examples would include:
Is the submitted system a stand-alone storage system or an acceleration layer in front of another storage system that is the source of truth for all application data? This question relates to whether the submitted system is a burst buffer layered on primary storage or primary storage itself.
Please describe any steps taken to ensure that the results are trustworthy.