Mastering Tasking with OpenMP
Performance Analysis and Optimization
Programming Models & Languages
Scientific Software Development
System Software & Runtime Systems
Time: Sunday, June 24th, 9am - 1pm
Description: With the increasing prevalence of multi-core processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported, and easy-to-use shared-memory model. Since version 3.0, released in 2008, OpenMP has offered tasking to support the creation of composable parallel software blocks and the parallelization of irregular algorithms. Developers usually find OpenMP easy to learn. However, mastering the tasking concept of OpenMP requires a change in the way developers reason about the structure of their code and how to expose its parallelism. Our tutorial addresses this critical aspect by examining the tasking concept in detail and presenting patterns as solutions to many common problems.
We assume attendees understand basic parallelization concepts and know the fundamentals of OpenMP. We present the OpenMP tasking language features in detail and focus on performance aspects, such as introducing cut-off mechanisms, exploiting task dependencies, and preserving locality. All aspects are accompanied by extensive case studies. Throughout all topics we present the recent additions of OpenMP 4.5 and extensions that have since been adopted by the OpenMP Language Committee.
Content Level: 25% introductory (quick overview of OpenMP features), 50% intermediate (tasking language features), 25% advanced (performance pitfalls, performance use cases)
Target Audience: Our primary target is HPC programmers with some knowledge of OpenMP who want to implement efficient shared-memory code, particularly for irregular algorithms or composable parallel software components.
Prerequisites: Common knowledge of general computer architecture concepts (e.g., SMT, multi-core, and NUMA). Basic knowledge of OpenMP. Good knowledge of either C, C++, or Fortran.
Presenters: HPC Group Lead; OpenMP CEO; Chief Technology Officer (CTO)