PhD Forum
(PhD03) Accelerators in a Hybrid HPC World: How Can Applications Benefit?
Session: PhD Forum
Event Type: PhD Forum
Containerized HPC
HPC Accelerators
HPC workflows
Time: Monday, June 25th, 1:13pm - 1:17pm
Location: Analog 1, 2
Description: For quite some time, so-called accelerators have been considered core components of successful high-performance computing systems. While the power of GPU integration paved the way, today's approaches based on FPGA, ASIC, and even quantum computing technology are becoming more and more important. The result is a powerful but complex, hybrid HPC world. As a consequence, programming such systems is THE challenge. The de-facto HPC programming standards (OpenMP and MPI) that computer scientists still use are not appropriate for researchers from other fields. Life-science researchers prefer high-level support such as that offered by Python and R environments, while in industry we still find demands for application-specific Java interfaces. Our approach bridges the needs of all three communities by providing tailor-made interface layers that share commonly needed system components.

Research objectives can be summarized as follows:

- Allow scientists to exploit the real computational capabilities of an HPC system transparently, hiding the complexity of the system/environment configuration.

- Empower users from different communities to access HPC resources without changing the way they carry out their experiments.

- Achieve reproducibility by proposing a systematic approach to experiments.

- Use a modular architecture to better interface with the diverse systems and technologies currently in use, and to reduce the effort needed to include emerging ones.

The proposed approach is based on designing and implementing an architecture composed of three main parts: an “interface” for defining an experiment and sending the execution request to a “web server”, which then establishes and manages the communication with the “remote HPC system” that will run the experiment. Re-using the components provided by PROVA! for managing HTTP requests from the interface (the PROVA! Web Server) and SSH connections to the remote system (the PROVA! Back-End), it is possible to implement different interfaces, creating additional execution workflows.
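The three-part flow above can be sketched as follows. All names here (the function, field names, and system identifiers) are illustrative assumptions, not the actual PROVA! request schema.

```python
import json

# Hypothetical payload an interface layer might send to the web server;
# the field names are illustrative, not the real PROVA! API.
def build_experiment_request(name, system, command):
    """Package an experiment definition for submission over HTTP."""
    return {
        "experiment": name,       # experiment identifier
        "target_system": system,  # remote HPC system that will run it
        "command": command,       # what the back-end should execute via SSH
    }

req = build_experiment_request("stencil-bench", "cluster-a", "make && ./run.sh")
payload = json.dumps(req)  # the interface would POST this to the web server

# The web server would then open an SSH connection to "cluster-a"
# (the PROVA! Back-End role) and execute the command there.
print(payload)
```

The point of the intermediate web server is that every interface (Python, R, Java, Jupyter) only needs to produce such a request; the SSH handling is shared.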

To evaluate the Ph.D. work, a set of experiments/applications from both academia and industry has been selected. The prototype system has already been used to run a stencil-compiler benchmark: using PROVA!, the experiment was reproduced on remote HPC systems at different universities and the results were compared. Applications from other fields, such as life science (deep neural networks for anomaly detection in lung imaging), finance (supervised machine learning for American option pricing), and business management (stochastic optimization for supply-chain management), will be accelerated and executed through the system to evaluate the other execution workflows.

The results currently achieved are:

- Experiment execution and reproduction on different remote systems: it is possible to choose between a solution based on a software build and installation framework (EasyBuild) and one based on containers (Docker/Singularity).
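The choice between the two deployment options could be sketched as a dispatcher that maps the selected backend to the command preparing the experiment environment. The command lines below are simplified illustrations; real EasyBuild and Singularity invocations take many more options, and the artifact names are hypothetical.

```python
# Map a user's backend choice to a (simplified) setup command.
def setup_command(backend, artifact):
    if backend == "easybuild":
        # build and install the software stack from an easyconfig file,
        # resolving dependencies automatically (--robot)
        return ["eb", artifact, "--robot"]
    if backend == "singularity":
        # run inside a pre-built container image instead of building
        return ["singularity", "exec", artifact, "run_experiment"]
    raise ValueError(f"unknown backend: {backend}")

print(setup_command("easybuild", "app-1.0.eb"))
print(setup_command("singularity", "app.sif"))
```

Keeping the backend behind one function is the modular-architecture goal stated above: adding an emerging technology means adding one more branch, not a new workflow.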

- A client/server extension for Jupyter Notebook that interfaces with the PROVA! Web Server and allows remote execution of Jupyter Notebook cells.
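A minimal sketch of how such an extension could forward a cell: a cell magic captures the cell source and wraps it into a request for the web server. The magic name, request fields, and system name below are assumptions for illustration, not the extension's actual interface.

```python
# Client-side sketch: the cell body is not executed locally but
# packaged for remote execution via the web server.
def make_remote_request(cell_source, target_system):
    """Wrap a notebook cell's source for submission to the server."""
    return {
        "action": "execute_cell",  # illustrative action name
        "system": target_system,
        "code": cell_source,
    }

# With IPython available, this could be registered as a cell magic:
#
#   from IPython.core.magic import register_cell_magic
#
#   @register_cell_magic
#   def remote(line, cell):
#       req = make_remote_request(cell, line.strip())
#       # POST req to the PROVA! Web Server and display the returned output
#
req = make_remote_request("print(2 + 2)", "cluster-a")
print(req["action"], req["system"])
```

The user would then write `%%remote cluster-a` at the top of a cell, and the notebook experience stays unchanged while the computation runs on the HPC system.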

At the end of the work, a quantitative and qualitative analysis of system performance and user acceptance will be needed.