Machine Learning Based Parallel I/O Predictive Modeling: A Case Study on Lustre File Systems
AI/Machine Learning/Deep Learning
Performance Analysis and Optimization
Time: Wednesday, June 27th, 11:30am - 12pm
Description: Parallel I/O hardware and software infrastructure is a key contributor to performance variability for applications running on large-scale HPC systems. This variability confounds efforts to predict application performance for characterization, modeling, optimization, and job scheduling. We propose a modeling approach that improves predictive ability by explicitly treating the variability and by leveraging the sensitivity of performance to application parameters in order to group applications with similar characteristics.
We develop a Gaussian Process-based machine learning algorithm to model I/O performance and its variability as a function of application and file system characteristics. We demonstrate the effectiveness of the proposed approach using data collected from the Edison system at the National Energy Research Scientific Computing Center (NERSC). The results show that the proposed sensitivity-based models predict I/O performance more accurately than application-partitioned or unpartitioned models. We highlight modeling techniques that are robust to the outliers that can occur in production parallel file systems. Using the developed metrics and modeling approach, we provide insights into which file system metrics have a significant impact on I/O performance.
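To illustrate the kind of model described above, the following is a minimal, self-contained sketch of Gaussian Process regression with a squared-exponential kernel, applied to synthetic stand-ins for application/file-system features and measured I/O bandwidth. The feature names, kernel hyperparameters, and data are all hypothetical; the talk's actual models, kernels, and Edison/Lustre features are not specified here.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel between two sets of feature vectors.
    # length_scale and variance are hyperparameters (chosen arbitrarily here).
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-2, length_scale=1.0):
    # Standard GP regression posterior: mean and per-point variance.
    # The noise term models the run-to-run I/O variability the text discusses.
    K = rbf_kernel(X_train, X_train, length_scale) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test, length_scale)
    K_ss = rbf_kernel(X_test, X_test, length_scale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Hypothetical features, e.g. (stripe count, transfer size) after scaling,
# and a synthetic bandwidth response with small observation noise.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 5.0, size=(40, 2))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1] + 0.05 * rng.standard_normal(40)

# Predict mean bandwidth and uncertainty at a new configuration.
X_new = np.array([[2.5, 1.0]])
mean, var = gp_predict(X, y, X_new)
```

The posterior variance returned alongside the mean is what makes a GP a natural fit for modeling variability rather than a single point estimate: wide variance flags configurations where observed I/O performance is unstable or data is sparse.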