Advanced OpenMP: Performance and 4.5 Features
Performance Analysis and Optimization
Programming Models & Languages
System Software & Runtime Systems
Time: Sunday, June 24th, 2pm - 6pm
Description: With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model. Developers usually find OpenMP easy to learn; however, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP itself but from the lack of depth with which it is employed. Our “Advanced OpenMP Programming” tutorial addresses this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance.
While we quickly review the basics of OpenMP programming, we assume attendees understand basic parallelization concepts and will easily grasp those basics. In two parts, we discuss language features in depth, with emphasis on advanced features such as vectorization and compute acceleration. The first part focuses on performance aspects, such as data and thread locality on NUMA architectures, and the exploitation of comparatively new language features. The second part presents the directives for attached compute accelerators.
Content Level: 10% introductory (quick overview of OpenMP), 50% intermediate (advanced language features), 40% advanced (correctness pitfalls, performance pitfalls, performance use cases)
Target Audience: Our primary audience is HPC programmers with some knowledge of OpenMP who want to implement efficient shared-memory code.
Prerequisites: Familiarity with general computer architecture concepts (e.g., SMT, multi-core, and NUMA), basic knowledge of OpenMP, and good knowledge of C, C++, or Fortran.