Press Release for April 14, 2021

The organizers of the IO500 competition have several changes to announce to the public. First, John Bent has resigned from the IO500 board. We wish to thank John for co-founding IO500 and for his passionate contributions to community outreach and to ensuring the highest-quality, fairest competition possible.

Second, replacing John is Dean Hildebrand, who is currently a member of the Google Office of the CTO. Prior to Google, Dean spent over 10 years at IBM Research working in HPC and developing IBM Spectrum Scale (GPFS). Dean brings both traditional and cloud HPC perspectives to the team, broadening our scope.

Third, with our formal incorporation as a non-profit, we have moved all of the IO500 assets, including the website, from the Virtual Institute for I/O (VI4IO) into the new corporate entity. While there were a few hiccups with this transition for the SC20 IO500 list, it provides a stronger, independent foundation for IO500 going forward. The corrected IO500 lists for SC20 are available at io500.org. The 10 Node Challenge results changed from those announced at SC20 and can be seen at https://io500.org/list/sc20/ten.

Fourth, with the move to the new hosting service, we have brought on a new member of the IO500 foundation to manage our web presence. Jean Luca Bez (Lawrence Berkeley National Laboratory) has agreed to take on this role. His excellent work building the new website and managing the transition demonstrates that our digital presence is in good hands.

Fifth, IO500 is now implementing a set schedule for releasing the new competition benchmark twice a year, roughly halfway between the ISC and SC events (March and September), with a 2-week testing period. As long as no significant problems are identified during that period, all tests run during that time will be considered official entries. This additional lead time will let more sites schedule time to test, run the benchmark, and participate. The testing period should also catch early any issues that might otherwise shorten the competition period, enabling smoother participation for everyone.

Sixth, we are now testing expanded, automated metadata collection about the storage systems being tested. This will make the system details more complete and consistent, allowing better insight into the configurations that achieved each result and improving our ability to compare systems at different scales (e.g., bandwidth per server).
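As an illustration of the kind of derived comparison this metadata enables, here is a minimal sketch in Python. The field names and values are hypothetical, not the actual IO500 metadata schema; it simply shows how a normalized metric such as bandwidth per server could be computed once system details are collected consistently.

```python
# Hypothetical illustration only: these field names and values are not the
# actual IO500 metadata schema, just a sketch of a normalized comparison.

systems = [
    # system name, aggregate write bandwidth in GiB/s, number of storage servers
    {"name": "SystemA", "bw_gib_s": 1200.0, "servers": 300},
    {"name": "SystemB", "bw_gib_s": 150.0, "servers": 24},
]

for s in systems:
    # Normalizing by server count lets systems of very different sizes
    # be compared on per-server efficiency rather than raw scale.
    per_server = s["bw_gib_s"] / s["servers"]
    print(f'{s["name"]}: {per_server:.1f} GiB/s per server')
```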

We thank the community for its continued support and look forward to everyone's participation in the future.