Summit

Institution Oak Ridge National Laboratory
Client Procs Per Node 16
Client Operating System RHEL
Client Operating System Version 7.5
Client Kernel Version

DATA SERVER

Storage Type NLSAS
Volatile Memory 4GB
Storage Interface 12Gbit SAS
Network InfiniBand EDR
Software Version Spectrum Scale 5.x
OS Version RHEL 7.5

INFORMATION

Client Nodes 10
Client Total Procs 160
Metadata Nodes 154
Metadata Storage Devices 211
Data Nodes 154
Data Storage Devices 211

METADATA

Easy Write 29.07 kIOP/s
Easy Stat 1,807.74 kIOP/s
Easy Delete 282.13 kIOP/s
Hard Write 10.77 kIOP/s
Hard Read 737.56 kIOP/s
Hard Stat 916.31 kIOP/s
Hard Delete 27.16 kIOP/s

Submitted Files

io500
#!/bin/bash
#
# INSTRUCTIONS:
# Edit this file as needed for your machine.
# This simplified version is intended for running on a single node.
# It is a simplified version of site-configs/sandia/startup.sh, which includes SLURM directives.
# Most of the variables set here are needed by io500_fixed.sh, which is sourced at the end of this script.
# Please also edit the 'extra_description' function.

set -euo pipefail  # better error handling

# Set these to "True" one at a time while you debug and tune this benchmark.
# For each flag you enable, edit the corresponding setup function; to find the
# function name, see the 'main' function. The flags are listed in the order
# the phases run. (A quick-debug example follows the flag list below.)
io500_run_ior_easy="True" # does the write phase and enables the subsequent read
io500_run_md_easy="True"  # does the create phase and enables the subsequent stat
io500_run_ior_hard="True" # does the write phase and enables the subsequent read
io500_run_md_hard="True"  # does the create phase and enables the subsequent stat and read
io500_run_find="True"
io500_run_ior_easy_read="True"
io500_run_md_easy_stat="True"
io500_run_ior_hard_read="True"
io500_run_md_hard_stat="True"
io500_run_md_hard_read="True"
io500_run_md_easy_delete="True" # turn this off if you want to just run find by itself
io500_run_md_hard_delete="True" # turn this off if you want to just run find by itself
io500_run_mdreal="False"  # this one is optional
io500_cleanup_workdir="False"  # this flag is currently ignored. You'll need to clean up your data files manually if you want to.
io500_stonewall_timer=300 # stonewalling timer: stop the write phases (with wear-out) after 300 s; set to 0 to disable early termination
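
# Example of a quick debug pass (illustration only, not part of this submission):
# leave only io500_run_ior_easy and io500_run_ior_easy_read set to "True",
# set io500_stonewall_timer to something small (e.g. 30), and keep the other
# phases "False" until the easy write/read pair looks sane.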

# to run this benchmark, find and edit each of these functions.
# please also edit 'extra_description' function to help us collect the required data.
function main {
  setup_directories
  setup_paths
  setup_ior_easy # required if you want a complete score
  setup_ior_hard # required if you want a complete score
  setup_mdt_easy # required if you want a complete score
  setup_mdt_hard # required if you want a complete score
  setup_find     # required if you want a complete score
  setup_mdreal   # optional
  run_benchmarks
}

function setup_directories {
  # set directories for where the benchmark files are created and where the results will go.
  # If you want to set up stripe tuning on your output directories or anything similar, then this is a good place to do it.
  timestamp=`date +%Y.%m.%d-%H.%M.%S`           # create a uniquifier
  io500_workdir=$PWD/datafiles/io500.$timestamp # directory where the data will be stored
  io500_result_dir=$PWD/results/$timestamp      # the directory where the output results will be kept
  mkdir -p $io500_workdir $io500_result_dir
}

function setup_paths {
  # Set the paths to the binaries.  If you ran ./utilities/prepare.sh successfully, then binaries are in ./bin/
  io500_ior_cmd=$PWD/bin/ior
  io500_mdtest_cmd=$PWD/bin/mdtest
  io500_mdreal_cmd=$PWD/bin/md-real-io
  io500_mpirun="jsrun"
  io500_mpiargs="-n 10 -a 16 -c 16 -r 1" 
#  io500_mpiargs="-n 10 -c ALL_CPUS -a 1"
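  # jsrun launch geometry used above: -n 10 resource sets, -a 16 MPI tasks per
  # resource set, -c 16 CPUs per resource set, -r 1 resource set per node,
  # i.e. 10 nodes x 16 ranks = 160 ranks total (matches the client counts above).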

}

function setup_ior_easy {
  # io500_ior_easy_size is the amount of data written per rank in MiB units,
  # but it can be any number as long as it is somehow used to scale the IOR
  # runtime as part of io500_ior_easy_params
  io500_ior_easy_size=700000
  # 16 MiB transfers, 700,000 MiB (~684 GiB) per proc, file per proc
  io500_ior_easy_params="-t 16m -b ${io500_ior_easy_size}m -F"
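  # With 160 ranks this asks for 160 x 700,000 MiB = ~106.81 TiB in aggregate
  # (the "aggregate filesize" in the ior_easy logs below); the 300 s stonewall
  # cut the data actually moved to ~65 TiB (see the WARNING lines in that log).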
}

function setup_mdt_easy {
  io500_mdtest_easy_params="-u -L" # unique dir per thread, files only at leaves
  io500_mdtest_easy_files_per_proc=1750000
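  # 160 ranks x 1,750,000 files each = 280,000,000 files requested
  # ("160 tasks, 280000000 files" in the mdtest_easy logs below).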
}

function setup_ior_hard {
  io500_ior_hard_writes_per_proc=9500
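  # ior_hard issues fixed 47,008-byte transfers to a single shared file:
  # 160 ranks x 9,500 segments x 47,008 bytes = ~66.55 GiB in aggregate,
  # matching the "aggregate filesize" in the ior_hard logs below.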
  io500_ior_hard_other_options="" # e.g., -E to keep precreated files using lfs setstripe, or -a MPIIO
}

function setup_mdt_hard {
  io500_mdtest_hard_files_per_proc=1250000
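  # 160 ranks x 1,250,000 files each = 200,000,000 files requested
  # ("160 tasks, 200000000 files" in the mdtest_hard logs below).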
  io500_mdtest_hard_other_options=""
}

function setup_find {
  #
  # setup the find command. This is an area where innovation is allowed.
  #    Three default options are provided: a serial find, a parallel Python
  #    version, and a parallel C version. The current default is the serial
  #    version, but it is very slow; we recommend either customizing it or
  #    using the parallel C version. For GPFS, we recommend the provided
  #    mmfind wrapper described below. Instructions follow.
  #    If a custom approach is used, please provide enough information so that
  #    others can reproduce it.

  # the serial version that should run (SLOWLY) without modification
  #io500_find_mpi="False"
  #io500_find_cmd=$PWD/bin/sfind.sh
  #io500_find_cmd_args=""

  # a parallel version in C, the -s adds a stonewall
  #   for a real run, turn -s (stonewall) off or set it at 300 or more
  #   to prepare this (assuming you've run ./utilities/prepare.sh already):
  #   > cd build/pfind
  #   > ./prepare.sh
  #   > ./compile.sh
  #   > cp pfind ../../bin/
  #   If you use io500_find_mpi="True", then this will run with the same
  #   number of MPI nodes and ranks as the other phases.
  #   If you prefer a different number (fewer ranks can be better here),
  #   set io500_find_mpi to "False" and write a wrapper script that sets up
  #   MPI as you like, then change io500_find_cmd to point to your wrapper
  #   script.
  io500_find_mpi="True"
  io500_find_cmd="$PWD/bin/pfind"
  # run pfind with stonewalling and write its results into the result directory
  io500_find_cmd_args="-s $io500_stonewall_timer -r $io500_result_dir/pfind_results"

  # for GPFS systems, you should probably use the provided mmfind wrapper
  # if you used ./utilities/prepare.sh, you'll find this wrapper in ./bin/mmfind.sh
  #io500_find_mpi="False"
  #io500_find_cmd="$PWD/bin/mmfind.sh"
  #io500_find_cmd_args=""
}

function setup_mdreal {
  io500_mdreal_params="-P=5000 -I=1000"
}

function run_benchmarks {
  # Important: source the io500_fixed.sh script. Do not change it. If you
  # discover a need to change it, please email the mailing list to discuss it.
  source ./utilities/io500_fixed.sh 2>&1 | tee $io500_result_dir/io-500-summary.$timestamp.txt
}

# Add key/value pairs defining your system
# Feel free to add extra ones if you'd like
function extra_description {
  # top level info
  io500_info_system_name='xxx'      # e.g. Oakforest-PACS
  io500_info_institute_name='xxx'   # e.g. JCAHPC
  io500_info_storage_age_in_months='xxx' # not install date but age since last refresh
  io500_info_storage_install_date='xxx'  # MM/YY
  io500_info_filesystem='xxx'     # e.g. BeeGFS, DataWarp, GPFS, IME, Lustre
  io500_info_filesystem_version='xxx'
  io500_info_filesystem_vendor='xxx'
  # client side info
  io500_info_num_client_nodes='xxx'
  io500_info_procs_per_node='xxx'
  # server side info
  io500_info_num_metadata_server_nodes='xxx'
  io500_info_num_data_server_nodes='xxx'
  io500_info_num_data_storage_devices='xxx'  # if you have 5 data servers, and each has 5 drives, then this number is 25
  io500_info_num_metadata_storage_devices='xxx'  # if you have 2 metadata servers, and each has 5 drives, then this number is 10
  io500_info_data_storage_type='xxx' # HDD, SSD, persistent memory, etc, feel free to put specific models
  io500_info_metadata_storage_type='xxx' # HDD, SSD, persistent memory, etc, feel free to put specific models
  io500_info_storage_network='xxx' # infiniband, omnipath, ethernet, etc
  io500_info_storage_interface='xxx' # SAS, SATA, NVMe, etc
  # miscellaneous
  io500_info_whatever='WhateverElseYouThinkRelevant'
}

main
ior_easy_read
IOR-3.2.0: MPI Coordinated Test of Parallel I/O
Began               : Thu Nov  8 16:19:21 2018
Command line        : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/ior -r -R -C -Q 1 -g -G 27 -k -e -t 16m -b 700000m -F -o /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_easy/ior_file_easy -O stoneWallingStatusFile=/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_easy/stonewall
Machine             : Linux g26n07
TestID              : 0
StartTime           : Thu Nov  8 16:19:21 2018
Path                : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_easy
FS                  : 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

Options: 
api                 : POSIX
apiVersion          : 
test filename       : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_easy/ior_file_easy
access              : file-per-process
type                : independent
segments            : 1
ordering in a file  : sequential
ordering inter file : constant task offset
task offset         : 1
tasks               : 160
clients per node    : 16
repetitions         : 1
xfersize            : 16 MiB
blocksize           : 683.59 GiB
aggregate filesize  : 106.81 TiB

Results: 

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   --------   ----
WARNING: Expected aggregate file size       = 117440512000000.
WARNING: Stat() of aggregate file size      = 71368934686720.
WARNING: Using actual aggregate bytes moved = 71368934686720.
read      148884     716800000  16384      0.005292   457.15     0.000703   457.15     0   
Max Read:  148884.18 MiB/sec (156116.38 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev   Max(OPs)   Min(OPs)  Mean(OPs)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt   blksiz    xsize aggs(MiB)   API RefNum
read       148884.18  148884.18  148884.18       0.00    9305.26    9305.26    9305.26       0.00  457.15212     0    160  16    1   1     1        1         0    0      1 734003200000 16777216 68062720.0 POSIX      0
Finished            : Thu Nov  8 16:26:58 2018
ior_easy_write
IOR-3.2.0: MPI Coordinated Test of Parallel I/O
Began               : Thu Nov  8 15:56:07 2018
Command line        : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/ior -w -C -Q 1 -g -G 27 -k -e -t 16m -b 700000m -F -o /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_easy/ior_file_easy -O stoneWallingStatusFile=/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_easy/stonewall -O stoneWallingWearOut=1 -D 300
Machine             : Linux g26n07
TestID              : 0
StartTime           : Thu Nov  8 15:56:07 2018
Path                : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_easy
FS                  : 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

Options: 
api                 : POSIX
apiVersion          : 
test filename       : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_easy/ior_file_easy
access              : file-per-process
type                : independent
segments            : 1
ordering in a file  : sequential
ordering inter file : constant task offset
task offset         : 1
tasks               : 160
clients per node    : 16
repetitions         : 1
xfersize            : 16 MiB
blocksize           : 683.59 GiB
aggregate filesize  : 106.81 TiB
stonewallingTime    : 300
stoneWallingWearOut : 1

Results: 

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   --------   ----
stonewalling pairs accessed min: 26121 max: 26587 -- min data: 408.1 GiB mean data: 412.2 GiB time: 300.2s
WARNING: Expected aggregate file size       = 117440512000000.
WARNING: Stat() of aggregate file size      = 71368934686720.
WARNING: Using actual aggregate bytes moved = 71368934686720.
WARNING: maybe caused by deadlineForStonewalling
write     223562     716800000  16384      0.106599   304.34     0.000947   304.45     0   
Max Write: 223562.21 MiB/sec (234421.97 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev   Max(OPs)   Min(OPs)  Mean(OPs)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt   blksiz    xsize aggs(MiB)   API RefNum
write      223562.21  223562.21  223562.21       0.00   13972.64   13972.64   13972.64       0.00  304.44645     0    160  16    1   1     1        1         0    0      1 734003200000 16777216 68062720.0 POSIX      0
Finished            : Thu Nov  8 16:01:12 2018
ior_hard_read
IOR-3.2.0: MPI Coordinated Test of Parallel I/O
Began               : Thu Nov  8 16:27:30 2018
Command line        : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/ior -r -R -C -Q 1 -g -G 27 -k -e -t 47008 -b 47008 -s 9500 -o /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_hard/IOR_file -O stoneWallingStatusFile=/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_hard/stonewall
Machine             : Linux g26n07
TestID              : 0
StartTime           : Thu Nov  8 16:27:30 2018
Path                : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_hard
FS                  : 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

Options: 
api                 : POSIX
apiVersion          : 
test filename       : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_hard/IOR_file
access              : single-shared-file
type                : independent
segments            : 9500
ordering in a file  : sequential
ordering inter file : constant task offset
task offset         : 1
tasks               : 160
clients per node    : 16
repetitions         : 1
xfersize            : 47008 bytes
blocksize           : 47008 bytes
aggregate filesize  : 66.55 GiB

Results: 

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   --------   ----
read      1914.20    45.91      45.91      0.003667   35.59      0.000998   35.60      0   
Max Read:  1914.20 MiB/sec (2007.18 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev   Max(OPs)   Min(OPs)  Mean(OPs)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt   blksiz    xsize aggs(MiB)   API RefNum
read         1914.20    1914.20    1914.20       0.00   42698.69   42698.69   42698.69       0.00   35.59828     0    160  16    1   0     1        1         0    0   9500    47008    47008   68142.1 POSIX      0
Finished            : Thu Nov  8 16:28:06 2018
ior_hard_write
IOR-3.2.0: MPI Coordinated Test of Parallel I/O
Began               : Thu Nov  8 16:06:48 2018
Command line        : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/ior -w -C -Q 1 -g -G 27 -k -e -t 47008 -b 47008 -s 9500 -o /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_hard/IOR_file -O stoneWallingStatusFile=/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_hard/stonewall -O stoneWallingWearOut=1 -D 300
Machine             : Linux g26n07
TestID              : 0
StartTime           : Thu Nov  8 16:06:48 2018
Path                : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_hard
FS                  : 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

Options: 
api                 : POSIX
apiVersion          : 
test filename       : /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/ior_hard/IOR_file
access              : single-shared-file
type                : independent
segments            : 9500
ordering in a file  : sequential
ordering inter file : constant task offset
task offset         : 1
tasks               : 160
clients per node    : 16
repetitions         : 1
xfersize            : 47008 bytes
blocksize           : 47008 bytes
aggregate filesize  : 66.55 GiB
stonewallingTime    : 300
stoneWallingWearOut : 1

Results: 

access    bw(MiB/s)  block(KiB) xfer(KiB)  open(s)    wr/rd(s)   close(s)   total(s)   iter
------    ---------  ---------- ---------  --------   --------   --------   --------   ----
stonewalling pairs accessed min: 2800 max: 9500 -- min data: 0.1 GiB mean data: 0.2 GiB time: 300.2s
write     162.04     45.91      45.91      0.038519   420.49     0.001035   420.53     0   
Max Write: 162.04 MiB/sec (169.91 MB/sec)

Summary of all tests:
Operation   Max(MiB)   Min(MiB)  Mean(MiB)     StdDev   Max(OPs)   Min(OPs)  Mean(OPs)     StdDev    Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt   blksiz    xsize aggs(MiB)   API RefNum
write         162.04     162.04     162.04       0.00    3614.46    3614.46    3614.46       0.00  420.53319     0    160  16    1   0     1        1         0    0   9500    47008    47008   68142.1 POSIX      0
Finished            : Thu Nov  8 16:13:49 2018
mdtest_easy_delete
-- started at 11/08/2018 16:28:15 --

mdtest-1.9.3 was launched with 160 total task(s) on 10 node(s)
Command line used: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/mdtest "-r" "-F" "-d" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_easy" "-n" "1750000" "-u" "-L" "-x" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_easy-stonewall"
Path: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05
FS: 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

160 tasks, 280000000 files

SUMMARY rate: (of 1 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :          0.000          0.000          0.000          0.000
   File stat         :          0.000          0.000          0.000          0.000
   File read         :          0.000          0.000          0.000          0.000
   File removal      :     282126.218     282126.218     282126.218          0.000
   Tree creation     :          0.000          0.000          0.000          0.000
   Tree removal      :          1.082          1.082          1.082          0.000

-- finished at 11/08/2018 16:28:50 --
mdtest_easy_stat
-- started at 11/08/2018 16:27:21 --

mdtest-1.9.3 was launched with 160 total task(s) on 10 node(s)
Command line used: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/mdtest "-T" "-F" "-d" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_easy" "-n" "1750000" "-u" "-L" "-x" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_easy-stonewall"
Path: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05
FS: 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

160 tasks, 280000000 files

SUMMARY rate: (of 1 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :          0.000          0.000          0.000          0.000
   File stat         :    1807735.855    1807735.855    1807735.855          0.000
   File read         :          0.000          0.000          0.000          0.000
   File removal      :          0.000          0.000          0.000          0.000
   Tree creation     :          0.000          0.000          0.000          0.000
   Tree removal      :          0.000          0.000          0.000          0.000

-- finished at 11/08/2018 16:27:26 --
mdtest_easy_write
-- started at 11/08/2018 16:01:14 --

mdtest-1.9.3 was launched with 160 total task(s) on 10 node(s)
Command line used: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/mdtest "-C" "-F" "-d" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_easy" "-n" "1750000" "-u" "-L" "-x" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_easy-stonewall" "-W" "300"
Path: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05
FS: 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

160 tasks, 280000000 files
Continue stonewall hit min: 46858 max: 60319 avg: 53925.5 
stonewall rank 0: 54274 of 60319 

SUMMARY rate: (of 1 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :     846368.737     846368.737     846368.737          0.000
   File stat         :          0.000          0.000          0.000          0.000
   File read         :          0.000          0.000          0.000          0.000
   File removal      :          0.000          0.000          0.000          0.000
   Tree creation     :          1.259          1.259          1.259          0.000
   Tree removal      :          0.000          0.000          0.000          0.000

-- finished at 11/08/2018 16:06:46 --
stonewall rank 105: 54826 of 60319 
stonewall rank 45: 53334 of 60319 
stonewall rank 66: 55229 of 60319 
stonewall rank 156: 57745 of 60319 
stonewall rank 60: 55431 of 60319 
stonewall rank 113: 56644 of 60319 
stonewall rank 95: 58956 of 60319 
stonewall rank 43: 55968 of 60319 
stonewall rank 104: 52707 of 60319 
stonewall rank 94: 53105 of 60319 
stonewall rank 99: 54280 of 60319 
stonewall rank 148: 56041 of 60319 
stonewall rank 17: 51015 of 60319 
stonewall rank 31: 51956 of 60319 
stonewall rank 29: 56023 of 60319 
stonewall rank 16: 56927 of 60319 
stonewall rank 47: 54184 of 60319 
stonewall rank 83: 57733 of 60319 
stonewall rank 67: 52102 of 60319 
stonewall rank 52: 58405 of 60319 
stonewall rank 50: 57796 of 60319 
stonewall rank 77: 50999 of 60319 
stonewall rank 112: 50805 of 60319 
stonewall rank 39: 49487 of 60319 
stonewall rank 71: 53435 of 60319 
stonewall rank 49: 50253 of 60319 
stonewall rank 37: 54307 of 60319 
stonewall rank 107: 54631 of 60319 
stonewall rank 103: 52145 of 60319 
stonewall rank 130: 55891 of 60319 
stonewall rank 48: 55127 of 60319 
stonewall rank 108: 54190 of 60319 
stonewall rank 102: 58023 of 60319 
stonewall rank 4: 57119 of 60319 
stonewall rank 53: 57050 of 60319 
stonewall rank 119: 55402 of 60319 
stonewall rank 11: 52563 of 60319 
stonewall rank 92: 54703 of 60319 
stonewall rank 141: 58879 of 60319 
stonewall rank 46: 54915 of 60319 
stonewall rank 58: 55688 of 60319 
stonewall rank 21: 53753 of 60319 
stonewall rank 129: 54760 of 60319 
stonewall rank 86: 52478 of 60319 
stonewall rank 78: 52321 of 60319 
stonewall rank 151: 58194 of 60319 
stonewall rank 54: 54056 of 60319 
stonewall rank 41: 54579 of 60319 
stonewall rank 124: 56420 of 60319 
stonewall rank 57: 55560 of 60319 
stonewall rank 125: 55782 of 60319 
stonewall rank 27: 57682 of 60319 
stonewall rank 76: 54925 of 60319 
stonewall rank 121: 52795 of 60319 
stonewall rank 63: 53913 of 60319 
stonewall rank 106: 54402 of 60319 
stonewall rank 93: 55936 of 60319 
stonewall rank 22: 53214 of 60319 
stonewall rank 89: 54313 of 60319 
stonewall rank 61: 51250 of 60319 
stonewall rank 126: 49969 of 60319 
stonewall rank 159: 52765 of 60319 
stonewall rank 38: 52218 of 60319 
stonewall rank 18: 53593 of 60319 
stonewall rank 14: 52480 of 60319 
stonewall rank 132: 55443 of 60319 
stonewall rank 13: 54913 of 60319 
stonewall rank 32: 56790 of 60319 
stonewall rank 44: 55126 of 60319 
stonewall rank 73: 57196 of 60319 
stonewall rank 117: 52353 of 60319 
stonewall rank 149: 50581 of 60319 
stonewall rank 19: 53663 of 60319 
stonewall rank 158: 52125 of 60319 
stonewall rank 139: 52120 of 60319 
stonewall rank 147: 50851 of 60319 
stonewall rank 42: 52478 of 60319 
stonewall rank 122: 49492 of 60319 
stonewall rank 2: 51049 of 60319 
stonewall rank 133: 52213 of 60319 
stonewall rank 70: 55893 of 60319 
stonewall rank 142: 50994 of 60319 
stonewall rank 154: 51133 of 60319 
stonewall rank 79: 54636 of 60319 
stonewall rank 109: 55349 of 60319 
stonewall rank 146: 51182 of 60319 
stonewall rank 140: 53125 of 60319 
stonewall rank 116: 50136 of 60319 
stonewall rank 24: 55043 of 60319 
stonewall rank 68: 57754 of 60319 
stonewall rank 64: 54498 of 60319 
stonewall rank 120: 52584 of 60319 
stonewall rank 36: 53788 of 60319 
stonewall rank 115: 53358 of 60319 
stonewall rank 127: 54109 of 60319 
stonewall rank 153: 56515 of 60319 
stonewall rank 6: 51653 of 60319 
stonewall rank 97: 58689 of 60319 
stonewall rank 128: 54394 of 60319 
stonewall rank 134: 51943 of 60319 
stonewall rank 114: 50313 of 60319 
stonewall rank 51: 52619 of 60319 
stonewall rank 136: 55346 of 60319 
stonewall rank 56: 57654 of 60319 
stonewall rank 23: 51797 of 60319 
stonewall rank 10: 54375 of 60319 
stonewall rank 74: 53930 of 60319 
stonewall rank 118: 54921 of 60319 
stonewall rank 8: 54746 of 60319 
stonewall rank 137: 58282 of 60319 
stonewall rank 59: 55680 of 60319 
stonewall rank 20: 54665 of 60319 
stonewall rank 96: 55248 of 60319 
stonewall rank 138: 50548 of 60319 
stonewall rank 100: 51601 of 60319 
stonewall rank 75: 57064 of 60319 
stonewall rank 72: 54657 of 60319 
stonewall rank 82: 54805 of 60319 
stonewall rank 5: 54599 of 60319 
stonewall rank 15: 48974 of 60319 
stonewall rank 144: 52066 of 60319 
stonewall rank 1: 55351 of 60319 
stonewall rank 110: 52935 of 60319 
stonewall rank 90: 47788 of 60319 
stonewall rank 131: 52230 of 60319 
stonewall rank 26: 52406 of 60319 
stonewall rank 3: 47498 of 60319 
stonewall rank 28: 52202 of 60319 
stonewall rank 135: 57281 of 60319 
stonewall rank 98: 53977 of 60319 
stonewall rank 30: 49283 of 60319 
stonewall rank 157: 54176 of 60319 
stonewall rank 123: 54463 of 60319 
stonewall rank 145: 49125 of 60319 
stonewall rank 35: 55818 of 60319 
stonewall rank 101: 52190 of 60319 
stonewall rank 40: 55559 of 60319 
stonewall rank 152: 53503 of 60319 
stonewall rank 12: 49605 of 60319 
stonewall rank 81: 53618 of 60319 
stonewall rank 7: 55069 of 60319 
stonewall rank 69: 50759 of 60319 
stonewall rank 91: 52317 of 60319 
stonewall rank 55: 51071 of 60319 
stonewall rank 25: 52953 of 60319 
stonewall rank 62: 53921 of 60319 
stonewall rank 84: 53626 of 60319 
stonewall rank 87: 52520 of 60319 
stonewall rank 111: 54577 of 60319 
stonewall rank 65: 57619 of 60319 
stonewall rank 88: 55274 of 60319 
stonewall rank 85: 58912 of 60319 
stonewall rank 150: 53969 of 60319 
stonewall rank 80: 54818 of 60319 
stonewall rank 33: 46858 of 60319 
stonewall rank 9: 52550 of 60319 
stonewall rank 155: 52776 of 60319 
stonewall rank 34: 54460 of 60319 
mdtest_hard_delete
-- started at 11/08/2018 16:29:05 --

mdtest-1.9.3 was launched with 160 total task(s) on 10 node(s)
Command line used: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/mdtest "-r" "-t" "-F" "-w" "3901" "-e" "3901" "-d" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_hard" "-n" "1250000" "-x" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_hard-stonewall"
Path: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05
FS: 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

160 tasks, 200000000 files

SUMMARY rate: (of 1 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :          0.000          0.000          0.000          0.000
   File stat         :          0.000          0.000          0.000          0.000
   File read         :          0.000          0.000          0.000          0.000
   File removal      :      27163.111      27163.111      27163.111          0.000
   Tree creation     :          0.000          0.000          0.000          0.000
   Tree removal      :          0.029          0.029          0.029          0.000

-- finished at 11/08/2018 16:31:43 --
mdtest_hard_read
-- started at 11/08/2018 16:28:57 --

mdtest-1.9.3 was launched with 160 total task(s) on 10 node(s)
Command line used: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/mdtest "-E" "-t" "-F" "-w" "3901" "-e" "3901" "-d" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_hard" "-n" "1250000" "-x" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_hard-stonewall"
Path: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05
FS: 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

160 tasks, 200000000 files

SUMMARY rate: (of 1 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :          0.000          0.000          0.000          0.000
   File stat         :          0.000          0.000          0.000          0.000
   File read         :     737556.969     737556.969     737556.969          0.000
   File removal      :          0.000          0.000          0.000          0.000
   Tree creation     :          0.000          0.000          0.000          0.000
   Tree removal      :          0.000          0.000          0.000          0.000

-- finished at 11/08/2018 16:29:02 --
mdtest_hard_stat
-- started at 11/08/2018 16:28:09 --

mdtest-1.9.3 was launched with 160 total task(s) on 10 node(s)
Command line used: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/mdtest "-T" "-t" "-F" "-w" "3901" "-e" "3901" "-d" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_hard" "-n" "1250000" "-x" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_hard-stonewall"
Path: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05
FS: 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

160 tasks, 200000000 files

SUMMARY rate: (of 1 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :          0.000          0.000          0.000          0.000
   File stat         :     916310.643     916310.643     916310.643          0.000
   File read         :          0.000          0.000          0.000          0.000
   File removal      :          0.000          0.000          0.000          0.000
   Tree creation     :          0.000          0.000          0.000          0.000
   Tree removal      :          0.000          0.000          0.000          0.000

-- finished at 11/08/2018 16:28:13 --
mdtest_hard_write
-- started at 11/08/2018 16:13:51 --

mdtest-1.9.3 was launched with 160 total task(s) on 10 node(s)
Command line used: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/bin/mdtest "-C" "-t" "-F" "-w" "3901" "-e" "3901" "-d" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_hard" "-n" "1250000" "-x" "/gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05/mdt_hard-stonewall" "-W" "300"
Path: /gpfs/alpine/stf007/scratch/gmarkoma/io-500-dev/datafiles/io500.2018.11.08-15.56.05
FS: 225668.2 TiB   Used FS: 1.2%   Inodes: 30000.0 Mi   Used Inodes: 0.4%

160 tasks, 200000000 files
Continue stonewall hit min: 19357 max: 20942 avg: 19914.6 
stonewall rank 0: 19404 of 20942 

SUMMARY rate: (of 1 iterations)
   Operation                      Max            Min           Mean        Std Dev
   ---------                      ---            ---           ----        -------
   File creation     :     643679.338     643679.338     643679.338          0.000
   File stat         :          0.000          0.000          0.000          0.000
   File read         :          0.000          0.000          0.000          0.000
   File removal      :          0.000          0.000          0.000          0.000
   Tree creation     :       6188.464       6188.464       6188.464          0.000
   Tree removal      :          0.000          0.000          0.000          0.000

-- finished at 11/08/2018 16:19:02 --
stonewall rank 71: 20906 of 20942 
stonewall rank 121: 20930 of 20942 
stonewall rank 142: 19572 of 20942 
stonewall rank 112: 19603 of 20942 
stonewall rank 40: 19403 of 20942 
stonewall rank 125: 20375 of 20942 
stonewall rank 65: 20373 of 20942 
stonewall rank 1: 20937 of 20942 
stonewall rank 59: 20195 of 20942 
stonewall rank 145: 20371 of 20942 
stonewall rank 140: 19404 of 20942 
stonewall rank 91: 20937 of 20942 
stonewall rank 39: 20194 of 20942 
stonewall rank 70: 19405 of 20942 
stonewall rank 90: 19403 of 20942 
stonewall rank 135: 20369 of 20942 
stonewall rank 55: 20340 of 20942 
stonewall rank 62: 19601 of 20942 
stonewall rank 42: 19603 of 20942 
stonewall rank 114: 19624 of 20942 
stonewall rank 124: 19616 of 20942 
stonewall rank 20: 19388 of 20942 
stonewall rank 134: 19639 of 20942 
stonewall rank 56: 19643 of 20942 
stonewall rank 64: 19639 of 20942 
stonewall rank 150: 19405 of 20942 
stonewall rank 155: 20349 of 20942 
stonewall rank 30: 19403 of 20942 
stonewall rank 120: 19377 of 20942 
stonewall rank 84: 19637 of 20942 
stonewall rank 89: 20191 of 20942 
stonewall rank 12: 19600 of 20942 
stonewall rank 108: 19884 of 20942 
stonewall rank 88: 19893 of 20942 
stonewall rank 153: 19722 of 20942 
stonewall rank 149: 20192 of 20942 
stonewall rank 14: 19639 of 20942 
stonewall rank 141: 20905 of 20942 
stonewall rank 23: 19758 of 20942 
stonewall rank 5: 20373 of 20942 
stonewall rank 66: 19663 of 20942 
stonewall rank 111: 20893 of 20942 
stonewall rank 105: 20352 of 20942 
stonewall rank 136: 19639 of 20942 
stonewall rank 102: 19598 of 20942 
stonewall rank 36: 19662 of 20942 
stonewall rank 74: 19639 of 20942 
stonewall rank 101: 20913 of 20942 
stonewall rank 22: 19603 of 20942 
stonewall rank 43: 19728 of 20942 
stonewall rank 52: 19604 of 20942 
stonewall rank 33: 19757 of 20942 
stonewall rank 104: 19616 of 20942 
stonewall rank 130: 19406 of 20942 
stonewall rank 98: 19897 of 20942 
stonewall rank 122: 19543 of 20942 
stonewall rank 10: 19407 of 20942 
stonewall rank 21: 20908 of 20942 
stonewall rank 32: 19605 of 20942 
stonewall rank 69: 20190 of 20942 
stonewall rank 53: 19758 of 20942 
stonewall rank 99: 20189 of 20942 
stonewall rank 49: 20188 of 20942 
stonewall rank 44: 19639 of 20942 
stonewall rank 118: 19894 of 20942 
stonewall rank 94: 19617 of 20942 
stonewall rank 148: 19896 of 20942 
stonewall rank 85: 20327 of 20942 
stonewall rank 19: 20185 of 20942 
stonewall rank 82: 19583 of 20942 
stonewall rank 31: 20938 of 20942 
stonewall rank 86: 19663 of 20942 
stonewall rank 132: 19607 of 20942 
stonewall rank 96: 19656 of 20942 
stonewall rank 51: 20933 of 20942 
stonewall rank 106: 19660 of 20942 
stonewall rank 95: 20346 of 20942 
stonewall rank 151: 20894 of 20942 
stonewall rank 50: 19377 of 20942 
stonewall rank 41: 20935 of 20942 
stonewall rank 116: 19640 of 20942 
stonewall rank 57: 19742 of 20942 
stonewall rank 103: 19742 of 20942 
stonewall rank 119: 20196 of 20942 
stonewall rank 156: 19668 of 20942 
stonewall rank 138: 19898 of 20942 
stonewall rank 113: 19738 of 20942 
stonewall rank 126: 19609 of 20942 
stonewall rank 72: 19604 of 20942 
stonewall rank 16: 19662 of 20942 
stonewall rank 8: 19900 of 20942 
stonewall rank 115: 20342 of 20942 
stonewall rank 46: 19665 of 20942 
stonewall rank 48: 19855 of 20942 
stonewall rank 54: 19638 of 20942 
stonewall rank 13: 19759 of 20942 
stonewall rank 79: 20195 of 20942 
stonewall rank 143: 19756 of 20942 
stonewall rank 109: 20119 of 20942 
stonewall rank 146: 19663 of 20942 
stonewall rank 61: 20938 of 20942 
stonewall rank 158: 19899 of 20942 
stonewall rank 159: 20158 of 20942 
stonewall rank 15: 20373 of 20942 
stonewall rank 6: 19643 of 20942 
stonewall rank 58: 19896 of 20942 
stonewall rank 76: 19667 of 20942 
stonewall rank 75: 20373 of 20942 
stonewall rank 17: 19769 of 20942 
stonewall rank 78: 19898 of 20942 
stonewall rank 11: 20934 of 20942 
stonewall rank 100: 19408 of 20942 
stonewall rank 92: 19598 of 20942 
stonewall rank 38: 19899 of 20942 
stonewall rank 2: 19604 of 20942 
stonewall rank 123: 19752 of 20942 
stonewall rank 80: 19406 of 20942 
stonewall rank 45: 20375 of 20942 
stonewall rank 26: 19666 of 20942 
stonewall rank 3: 19758 of 20942 
stonewall rank 152: 19599 of 20942 
stonewall rank 24: 19640 of 20942 
stonewall rank 9: 20191 of 20942 
stonewall rank 129: 20144 of 20942 
stonewall rank 68: 19895 of 20942 
stonewall rank 27: 19770 of 20942 
stonewall rank 35: 20371 of 20942 
stonewall rank 133: 19759 of 20942 
stonewall rank 131: 20940 of 20942 
stonewall rank 34: 19637 of 20942 
stonewall rank 63: 19732 of 20942 
stonewall rank 47: 19746 of 20942 
stonewall rank 25: 20368 of 20942 
stonewall rank 28: 19902 of 20942 
stonewall rank 139: 20164 of 20942 
stonewall rank 144: 19637 of 20942 
stonewall rank 107: 19769 of 20942 
stonewall rank 4: 19633 of 20942 
stonewall rank 60: 19357 of 20942 
stonewall rank 128: 19899 of 20942 
stonewall rank 157: 19768 of 20942 
stonewall rank 18: 19893 of 20942 
stonewall rank 83: 19757 of 20942 
stonewall rank 93: 19756 of 20942 
stonewall rank 67: 19768 of 20942 
stonewall rank 29: 20193 of 20942 
stonewall rank 154: 19637 of 20942 
stonewall rank 73: 19761 of 20942 
stonewall rank 110: 19410 of 20942 
stonewall rank 87: 19746 of 20942 
stonewall rank 147: 19751 of 20942 
stonewall rank 37: 19742 of 20942 
stonewall rank 127: 19768 of 20942 
stonewall rank 137: 19770 of 20942 
stonewall rank 97: 19718 of 20942 
stonewall rank 77: 19750 of 20942 
stonewall rank 7: 19769 of 20942 
stonewall rank 117: 19771 of 20942 
result_summary
[RESULT] BW   phase 1            ior_easy_write              218.322 GB/s : time 304.45 seconds
[RESULT] IOPS phase 1         mdtest_easy_write              846.369 kiops : time 334.15 seconds
[RESULT] BW   phase 2            ior_hard_write                0.158 GB/s : time 420.53 seconds
[RESULT] IOPS phase 2         mdtest_hard_write              643.679 kiops : time 313.27 seconds
[RESULT] IOPS phase 3                      find              854.990 kiops : time  15.21 seconds
[RESULT] BW   phase 3             ior_easy_read              145.395 GB/s : time 457.15 seconds
[RESULT] IOPS phase 4          mdtest_easy_stat             1807.740 kiops : time  24.69 seconds
[RESULT] BW   phase 4             ior_hard_read                1.869 GB/s : time  35.60 seconds
[RESULT] IOPS phase 5          mdtest_hard_stat              916.311 kiops : time   6.97 seconds
[RESULT] IOPS phase 6        mdtest_easy_delete              282.126 kiops : time  40.03 seconds
[RESULT] IOPS phase 7          mdtest_hard_read              737.557 kiops : time   9.41 seconds
[RESULT] IOPS phase 8        mdtest_hard_delete               27.163 kiops : time 160.81 seconds
[SCORE] Bandwidth 9.84383 GB/s : IOPS 506.93 kiops : TOTAL 70.6409
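
The [SCORE] line follows from the twelve [RESULT] lines above: the bandwidth figure is the geometric mean of the four BW phases (GB/s), the IOPS figure is the geometric mean of the eight IOPS phases (kiops), and the total is the square root of their product (sqrt(9.84383 × 506.93) ≈ 70.64). A minimal sketch that recomputes these values from a saved summary file (the glob assumes the results layout created by setup_directories and run_benchmarks in the script above):

awk '/^\[RESULT\] BW/   { bw += log($6); nb++ }
     /^\[RESULT\] IOPS/ { md += log($6); nm++ }
     END {
       bw = exp(bw / nb); md = exp(md / nm)
       printf "[SCORE] Bandwidth %.5f GB/s : IOPS %.2f kiops : TOTAL %.4f\n", bw, md, sqrt(bw * md)
     }' results/*/io-500-summary.*.txt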