Quoting Wikipedia:

Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously.

Regarding High Performance Computing (HPC):

High Performance Computing (HPC) most generally refers to the practice of aggregating computing power in a way that delivers much higher performance than one could get out of a typical desktop computer or workstation, in order to solve large problems in science, engineering, or business.

The above picture was taken in one of the server rooms of the UL HPC platform.


Course Description: Today, parallel computing is omnipresent across a large spectrum of computing platforms. At the microscopic level, processor cores have used multiple functional units in concurrent and pipelined fashions for years, and multi-core chips are now commonplace, with a trend toward rapidly increasing numbers of cores per chip. At this level, GPUs are also entering the arena. At a more macroscopic level, one can now build clusters of hundreds to thousands of individual (multi-core) computers. Such distributed-memory systems have become mainstream and affordable in the form of commodity clusters. Furthermore, advances in network technology and infrastructure have made it possible to aggregate parallel computing platforms across wide-area networks in so-called grids. In parallel, the advent of the Cloud Computing paradigm opens new perspectives and challenges to tackle.

Efficient exploitation of parallel and distributed platforms requires a deep understanding of architecture, software, and infrastructure mechanisms, as well as of advanced algorithmic principles. The aim of this course is thus twofold:

  1. It introduces the main trends and principles in the area of high performance computing infrastructures, illustrated by examples of the current state of the art.
  2. It provides a rigorous yet accessible treatment of parallel algorithms, including theoretical models of parallel computation, parallel algorithm design for homogeneous and heterogeneous platforms, complexity and performance analysis, and fundamental notions of scheduling and work-stealing.
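To hint at what "performance analysis" involves, Amdahl's law (a standard result, though not spelled out in this description) bounds the speedup S(p) = 1 / ((1 - f) + f/p) of a program whose parallelizable fraction is f when run on p processors. A minimal sketch, with an arbitrary example value of f:

```python
def amdahl_speedup(f, p):
    """Upper bound on speedup with parallel fraction f on p processors."""
    return 1.0 / ((1.0 - f) + f / p)

# Even with 95% of the work parallelized, speedup saturates quickly:
# it can never exceed 1 / (1 - f) = 20, no matter how many processors.
for p in (2, 8, 64, 1024):
    print(p, amdahl_speedup(0.95, p))
```

This kind of back-of-the-envelope bound is what motivates the course's attention to algorithm design: the serial fraction, not the processor count, often dominates achievable performance.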