Hands-on Practical Hybrid Parallel Application Performance Engineering
Performance Analysis and Optimization
Scientific Software Development
Time: Sunday, June 24th, 9am - 6pm
Description: This tutorial presents state-of-the-art performance tools for leading-edge HPC systems, founded on the community-developed Score-P instrumentation and measurement infrastructure, and demonstrates how they can be used for performance engineering of scientific applications based on standard MPI, OpenMP, hybrid combinations of the two, and the increasingly common use of accelerators. Parallel performance tools from the Virtual Institute – High Productivity Supercomputing (VI-HPS) are introduced and featured in hands-on exercises with Score-P, Scalasca, Vampir, and TAU. We present the complete performance-engineering workflow: instrumentation, measurement (profiling and tracing, with timing and PAPI hardware counters), data storage, analysis, tuning, and visualization. Emphasis is placed on how the tools are used in combination to identify performance problems and investigate optimization alternatives. Using their own notebook computers with a provided HPC Linux [http://www.hpclinux.org] OVA image containing all of the necessary tools (running within a virtual machine), participants will conduct exercises on the Stampede system at TACC, where remote access to Intel Xeon Phi (KNL) nodes will be provided for the hands-on sessions. This will prepare participants to locate and diagnose performance bottlenecks in their own parallel programs.
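The instrument-measure-analyze workflow outlined above is typically driven from the command line. The following is a minimal sketch, assuming Score-P and Scalasca are installed and an MPI+OpenMP application; the application name `myapp` and the process count are placeholders, not part of the tutorial materials.

```shell
# Instrument at build time by prefixing the compiler command with "scorep"
# (myapp.c is a hypothetical hybrid MPI+OpenMP source file)
scorep mpicc -fopenmp -O2 myapp.c -o myapp

# Measurement is configured via SCOREP_* environment variables:
# collect a call-path profile first, deferring tracing until after
# the profile has been inspected (keeps measurement overhead low)
export SCOREP_ENABLE_PROFILING=true
export SCOREP_ENABLE_TRACING=false
mpirun -np 4 ./myapp        # writes a scorep-* experiment directory

# Alternatively, let Scalasca drive the measurement and, when tracing
# is enabled, run its automatic parallel trace analysis afterwards
scalasca -analyze mpirun -np 4 ./myapp

# Explore the resulting analysis report interactively
scalasca -examine scorep_myapp_4_sum
```

Traces collected with `SCOREP_ENABLE_TRACING=true` can likewise be opened in Vampir for timeline visualization, which is the combined-tool usage the tutorial emphasizes.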
Content Level: Introductory 50%, Intermediate 35%, Advanced 15%
Target Audience: 1. Application developers striving for best application performance on HPC systems; 2. HPC support staff who assist application developers with performance tuning; 3. System managers and administrators responsible for operational aspects of HPC systems and concerned about the usability and scalability of optimization tools; and 4. Computer system manufacturers.
Prerequisites: General familiarity with Linux, MPI, and OpenMP.