Now that we have successfully released the new Aion supercomputer, and before leaving my current position at the University, I wanted to share the lessons learned from these long-running HPC developments (and to make sure they will later be credited to the right people at the origin of these works).

Complementing the ACM PEARC’22 and IEEE ISPDC’22 papers, the article “Management of an Academic HPC & Research Computing Facility: The ULHPC Experience 2.0” is meant as a wrap-up of all the recent HPC developments carried out under my supervision until the release of Aion, and is intended to serve as the new reference article to cite when using the ULHPC facility [1].

The paper was presented at the ACM HPCCT 2022 conference (6th ACM High Performance Computing and Cluster Technologies Conference), held in Fuzhou, China, on July 8-10, 2022, which I attended remotely from Boston.

  1. S. Varrette, H. Cartiaux, S. Peter, E. Kieffer, T. Valette, and A. Olloh, “Management of an Academic HPC & Research Computing Facility: The ULHPC Experience 2.0,” in Proc. of the 6th ACM High Performance Computing and Cluster Technologies Conf. (HPCCT 2022), Fuzhou, China, 2022.

   Management of an Academic HPC & Research Computing Facility: The ULHPC Experience 2.0

Abstract:

With the advent of the technological revolution and the digital transformation that has made all scientific disciplines computational, the need for High Performance Computing (HPC) has become a strategic and critical asset to leverage new research and business in all domains requiring computing and storage performance. Since 2007, the University of Luxembourg has operated a large academic HPC facility which remains the reference implementation within the country. This paper provides a general description of the current platform implementation as well as its operational management choices, which have been adapted to the integration of a new liquid-cooled supercomputer, named Aion, released in 2021. The administration of an HPC facility providing state-of-the-art computing systems, storage and software is indeed a complex and dynamic enterprise with the sole purpose of offering an enhanced user experience for intensive research computing and large-scale analytic workflows. Most design choices and feedback described in this work have been motivated by several years of experience in addressing, in a flexible and convenient way, the heterogeneous needs inherent to an academic environment aiming for research excellence. The different layers and stacks used within the operated facilities are reviewed, in particular with regard to user software management and the adaptation of the Slurm Resource and Job Management System (RJMS) configuration with novel incentive mechanisms. In practice, the described and implemented environment brought concrete and measurable improvements with regard to platform utilization (+12.64%), job efficiency (average Wall-time Request Accuracy improved by 110.81%), and management and funding (increased by 10%). A thorough performance evaluation of the facility is also presented in this paper through reference benchmarks such as HPL, HPCG, Graph500, IOR and IO500. It reveals sustainable and scalable performance comparable to the most powerful supercomputers in the world, including for energy-efficiency metrics (for instance, 5.19 GFlops/W (resp. 6.14 MTEPS/W) were demonstrated for full HPL (resp. Graph500) runs across all Aion nodes).
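
As a rough illustration of the kind of job-efficiency metric mentioned in the abstract, the sketch below computes a per-job wall-time request accuracy as the ratio of used to requested wall-time and averages it over a job set. The exact definition and the accounting data used in the paper are not reproduced here; the field names, the aggregation, and the sample values are assumptions for illustration only.

```python
# Illustrative sketch only: assumes Wall-time Request Accuracy is the ratio of
# the wall-time actually consumed to the wall-time requested at submission.
from dataclasses import dataclass

@dataclass
class Job:
    elapsed_s: int    # wall-time actually used, in seconds
    requested_s: int  # wall-time requested at submission, in seconds

def walltime_request_accuracy(job: Job) -> float:
    """Fraction of the requested wall-time actually consumed (1.0 = perfect estimate)."""
    return job.elapsed_s / job.requested_s if job.requested_s else 0.0

def average_accuracy(jobs: list[Job]) -> float:
    """Mean accuracy over a job set; higher values mean tighter wall-time requests."""
    return sum(walltime_request_accuracy(j) for j in jobs) / len(jobs) if jobs else 0.0

# Hypothetical example: one well-estimated job and one heavily over-requested job.
jobs = [Job(elapsed_s=3_300, requested_s=3_600), Job(elapsed_s=900, requested_s=14_400)]
print(f"Average wall-time request accuracy: {average_accuracy(jobs):.2%}")
```

Tighter wall-time requests matter to the scheduler because they let Slurm backfill short jobs into idle gaps, which is one reason such a metric is worth incentivizing.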