TY - GEN
T1 - I/O performance of the SX-Aurora TSUBASA
AU - Yokokawa, Mitsuo
AU - Nakai, Ayano
AU - Komatsu, Kazuhiko
AU - Watanabe, Yuta
AU - Masaoka, Yasuhisa
AU - Isobe, Yoko
AU - Kobayashi, Hiroaki
N1 - Funding Information:
This work is supported partially by MEXT Next Generation High-Performance Computing Infrastructure and Applications R&D Program, entitled R&D of a Quantum-Annealing-Assisted Next Generation HPC Infrastructure and Its Applications.
Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - File outputs or checkpoints of intermediate results are frequently written at appropriate time intervals in large-scale time-advancement numerical simulations, where they are utilized for post-processing and/or for restarting subsequent simulations. However, file input/output (I/O) for large-scale data often takes excessive time due to bandwidth limitations between processors and/or secondary storage systems such as hard disk drives (HDDs) and solid-state drives (SSDs). Accordingly, efforts are ongoing to reduce the time required for file I/O operations in order to speed up such simulations, which makes it necessary to acquire detailed knowledge of the I/O performance of the high-performance computing systems used. In this study, the I/O performance with respect to the connection bandwidth between the vector host (VH) server and the vector engines (VEs) was measured and evaluated for three configurations of the SX-Aurora TSUBASA supercomputer system, specifically the A300-2, A300-4, and A300-8 configurations. The accelerated I/O function, a distinctive feature of the SX-Aurora TSUBASA I/O system, was demonstrated to deliver excellent performance compared with the normal I/O function.
AB - File outputs or checkpoints of intermediate results are frequently written at appropriate time intervals in large-scale time-advancement numerical simulations, where they are utilized for post-processing and/or for restarting subsequent simulations. However, file input/output (I/O) for large-scale data often takes excessive time due to bandwidth limitations between processors and/or secondary storage systems such as hard disk drives (HDDs) and solid-state drives (SSDs). Accordingly, efforts are ongoing to reduce the time required for file I/O operations in order to speed up such simulations, which makes it necessary to acquire detailed knowledge of the I/O performance of the high-performance computing systems used. In this study, the I/O performance with respect to the connection bandwidth between the vector host (VH) server and the vector engines (VEs) was measured and evaluated for three configurations of the SX-Aurora TSUBASA supercomputer system, specifically the A300-2, A300-4, and A300-8 configurations. The accelerated I/O function, a distinctive feature of the SX-Aurora TSUBASA I/O system, was demonstrated to deliver excellent performance compared with the normal I/O function.
KW - Accelerated I/O function
KW - DMA engine
KW - I/O performance
KW - SX-Aurora TSUBASA
KW - Vector supercomputer
UR - http://www.scopus.com/inward/record.url?scp=85091560581&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85091560581&partnerID=8YFLogxK
U2 - 10.1109/IPDPSW50202.2020.00014
DO - 10.1109/IPDPSW50202.2020.00014
M3 - Conference contribution
AN - SCOPUS:85091560581
T3 - Proceedings - 2020 IEEE 34th International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2020
SP - 27
EP - 35
BT - Proceedings - 2020 IEEE 34th International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2020
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 34th IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2020
Y2 - 18 May 2020 through 22 May 2020
ER -