
High Performance Computing

High-performance computing (HPC) evolved to meet ever-increasing demands for processing speed. It brings together hardware and software to solve advanced problems efficiently, and centres on parallel algorithms that divide large problems into smaller pieces. Each piece is solved on a separate processor and the results are then combined.

The first language I learned was Pascal as an undergraduate. Since then I have toyed with MATLAB, Python, Julia and a little Assembly. If you are a student who would like to do some work with me, you can use any language you like, on Linux or Windows; it is entirely up to you.

My current interests in HPC include the numerical simulation of quantum confinement, soliton behaviour, earthquake modelling, neural networks, shockwaves, fluid dynamics, CUDA, HPC in the Cloud, the Fermi Pasta Ulam Problem, protein folding, the Ising model of magnetism, molecular dynamics and ballistics.


If you are interested in any of these problems, HPC, Scientific Computing, or just want a chat about Projects, take a look at the Video and Sample Reports sections of this website to learn more.


Then get in touch with me! I’m looking forward to hearing from you.

HPC hardware platforms available in the School of Electrical and Electronic Engineering

SEEE High Performance Computing Blade


This is a Dell PowerEdge R730 Blade. The machine is used for a wide variety of research activities in the School. I have run a number of Projects on this system in the areas of Protein Folding and Molecular Dynamics. Access is from your desktop via a Virtual Machine.

The system is managed by Mr. Andrew Dillon, HPC System Administrator. You can email him at andrew.dillon@tudublin.ie with queries or to apply for an Account.

 


CUDA Platform

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs.

In GPU-accelerated applications, the sequential part of the workload runs on the CPU, which is optimized for single-threaded performance, while the compute-intensive portion runs on thousands of GPU cores in parallel. When using CUDA, developers program in popular languages such as C, C++, Python or MATLAB® and express parallelism through extensions in the form of a few basic keywords.
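To give a flavour of the model, here is a minimal sketch in Python using Numba's CUDA support; the kernel name and array sizes are illustrative, and an NVIDIA GPU with the numba package installed is assumed:

```python
# Minimal CUDA sketch via Numba's cuda.jit (assumes an NVIDIA GPU and numba installed).
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)                  # absolute index of this GPU thread
    if i < x.size:                    # guard threads past the end of the array
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.ones(n, dtype=np.float32)
y = np.full(n, 2.0, dtype=np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](np.float32(2.0), x, y, out)   # launch on the GPU
```

The kernel body is ordinary scalar code; the parallelism comes entirely from launching it over thousands of threads, which is the "few basic keywords" idea described above.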

The system is managed by Mr. Andrew Dillon, HPC System Administrator. You can email him at andrew.dillon@tudublin.ie with queries or to apply for an Account.


Beowulf cluster


A Beowulf cluster is a system comprising multiple PCs or nodes connected in a Local Area Network (LAN) that shares processing power between the nodes. Our cluster, built in 2018 by James Lowe, an alumnus of the EU PRACE Summer of HPC Programme and a student in the School, consists of 8 nodes connected via an eight-port gigabit switch.

Each node runs Ubuntu Linux. Three further pieces of software are required for the Beowulf cluster to operate successfully.

Message Passing Interface (MPI) – MPI is the de facto standard in parallel computing. It gives the cluster the ability to run a program over several nodes simultaneously; a minimal example is sketched after this list.

Secure Shell (SSH) – SSH establishes connections from the master node to each of the slave nodes and is required for the nodes to communicate with each other.

Network File System (NFS) server – An NFS server provides a common shared folder across all of the nodes.
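As a flavour of MPI, here is a minimal sketch using the mpi4py Python bindings (the sum over ten million terms is an illustrative workload): each rank sums part of a range and the partial results are combined on the master.

```python
# Minimal MPI sketch using mpi4py. Each rank sums every `size`-th term of a
# range; rank 0 (the master) combines the partial sums.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # this process's id
size = comm.Get_size()        # total number of processes

local = sum(range(rank, 10_000_000, size))          # this rank's share of the work
total = comm.reduce(local, op=MPI.SUM, root=0)      # combine on the master

if rank == 0:
    print(f"total = {total}, computed across {size} ranks")
```

On the cluster this would be launched across the nodes with something like `mpirun -n 8 python sum_demo.py`, one process per node.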

Each node in the Beowulf cluster has 4 GB of Random Access Memory (RAM) and a quad-core Intel Xeon 2.33 GHz CPU, giving the cluster a total of 32 GB of RAM and 32 cores.

The LINPACK benchmark is the standard used for comparing the performance of HPC systems; the Top 500 list of the fastest supercomputers in the world is compiled using it. Performance is measured in flops: floating-point operations per second.
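As a back-of-envelope illustration of these units, the theoretical peak of the Beowulf cluster above is cores × clock rate × flops per cycle; the flops-per-cycle figure below is an assumption typical of Xeons of that era, not a measured value:

```python
# Rough theoretical-peak estimate for the 32-core Beowulf cluster described above.
cores = 32
clock_hz = 2.33e9            # 2.33 GHz per core
flops_per_cycle = 4          # ASSUMPTION: typical double-precision rate for SSE-era Xeons
peak_gflops = cores * clock_hz * flops_per_cycle / 1e9
print(f"theoretical peak ≈ {peak_gflops:.0f} Gflop/s")   # ≈ 298 Gflop/s
```

Measured LINPACK figures always come in below this theoretical peak.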

LINPACK results for a variety of Compute platforms, including our Beowulf Cluster, are shown below.

Irish Centre for High End Computing (ICHEC)

This is a national facility. Kay, its main supercomputer, comprises a number of components:

"Cluster" -  A cluster of 336 nodes where each node has 2x 20-core 2.4 GHz Intel Xeon Gold 6148 (Skylake) processors, 192 GiB of RAM, a 400 GiB local SSD for scratch space and a 100Gbit OmniPath network adaptor. This partition has a total of 13,440 cores and 63 TiB of distributed memory.
"GPU" - A partition of 16 nodes with the same specification as above, plus 2x NVIDIA Tesla V100 16GB PCIe (Volta architecture) GPUs on each node. Each GPU has 5,120 CUDA cores and 640 Tensor Cores.
"Phi" - A partition of 16 nodes, each containing 1x self-hosted Intel Xeon Phi Processor 7210 (Knights Landing or KNL architecture) with 64 cores @ 1.3 GHz, 192 GiB RAM and a 400 GiB local SSD for scratch space.
"High Memory" - A set of 6 nodes each containing 1.5 TiB of RAM, 2x 20-core 2.4 GHz Intel Xeon Gold 6148 (Skylake) processors and 1 TiB of dedicated local SSD for scratch storage.
"Service & Storage" - A set of service and administrative nodes to provide user login, batch scheduling, management, networking, etc. Storage is provided via Lustre filesystems on a high-performance DDN SFA14k system with 1 PiB of capacity.

A number of my students have successfully applied for time on ICHEC facilities.

Scientific Computing

If you are interested in Computational Physics, take a look at the book Computational Physics by Nicholas Giordano and Hisao Nakanishi, available in the library. I have written a FREE book, Computational Physics with MATLAB®, to accompany it. Below is some eye candy from the book!

Student Projects

Here are some examples of my students' work. Hopefully your work will appear here soon.

Check out the Sample Reports and Video sections of this website for more!

Ising model of magnetism

The Ising model is used to simulate the behaviour of ferromagnets. In the model, each spin can take only two values, up or down, and the spins sit on a regular lattice.

In iron at low temperatures, the spins point in the same direction and the sample is said to be ferromagnetic; the energy of the bonds is low. As the temperature is increased, some spins begin to flip until a critical temperature, the Curie temperature, is reached. Above this point the spin orientation appears random and the sample is paramagnetic.

The Ising model can be simulated for lattices of different sizes and dimensions, limited only by the amount of processing power available.
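A minimal Metropolis Monte Carlo sketch of the 2D model in Python is shown below; the lattice size, temperature and sweep count are illustrative choices:

```python
# Minimal Metropolis Monte Carlo sketch of the 2D Ising model (J = 1, no field).
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta):
    """One sweep: attempt n*n single-spin flips with periodic boundaries."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        neighbours = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                      + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2 * spins[i, j] * neighbours          # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1                      # accept the flip

n = 32
spins = rng.choice(np.array([-1, 1]), size=(n, n))
for _ in range(500):
    metropolis_sweep(spins, beta=1 / 2.0)          # T = 2.0, below T_c ≈ 2.27
print("magnetisation per spin:", spins.mean())
```

Below the Curie temperature the magnetisation per spin settles near ±1; above it, near zero.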

Protein folding

Proteins are complex macromolecules which find their shape (or conformation) within milliseconds. It is not fully understood how they find their native conformation so quickly and reliably. For this reason, protein folding simulations are used to investigate the kinetics of protein folding.

Square (2D) and cubic lattice (3D) models are investigated using Monte Carlo methods.
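A common first step in such lattice studies is generating chain conformations; below is a minimal Python sketch that grows self-avoiding chains on the 2D square lattice (chain length and sample count are illustrative, and attaching an HP-style contact energy to each conformation would be the next step):

```python
# Minimal sketch: grow self-avoiding chains on a 2D square lattice.
import numpy as np

rng = np.random.default_rng(1)
moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]     # steps to the 4 lattice neighbours

def self_avoiding_walk(n):
    """Try to grow an n-monomer chain; returns None if the walk traps itself."""
    chain = [(0, 0)]
    occupied = {(0, 0)}
    for _ in range(n - 1):
        x, y = chain[-1]
        options = [(x + dx, y + dy) for dx, dy in moves
                   if (x + dx, y + dy) not in occupied]
        if not options:                         # dead end: every neighbour occupied
            return None
        site = options[rng.integers(len(options))]
        chain.append(site)
        occupied.add(site)
    return chain

# sample 2D lattice conformations for a 20-monomer chain
conformations = [c for c in (self_avoiding_walk(20) for _ in range(1000)) if c]
print(f"{len(conformations)} valid conformations out of 1000 attempts")
```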

Molecular Dynamics

Molecular dynamics is the computer simulation of molecular motion. Generally, the Verlet method is used to solve the Newtonian equations of motion for each particle.
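Here is a minimal sketch of the velocity Verlet variant in Python, applied to a single particle in a harmonic potential rather than a full molecular system (the force law and time step are illustrative):

```python
# Minimal velocity Verlet sketch: one particle, harmonic force a(x) = -x (m = k = 1).
import numpy as np

def velocity_verlet(x, v, acc, dt, steps):
    """Integrate Newton's equations of motion with the velocity Verlet scheme."""
    a = acc(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt ** 2   # position update
        a_new = acc(x)                       # force at the new position
        v = v + 0.5 * (a + a_new) * dt       # velocity update with averaged force
        a = a_new
    return x, v

# one full period of the oscillator should return (x, v) close to (1, 0)
x, v = velocity_verlet(x=1.0, v=0.0, acc=lambda x: -x,
                       dt=0.01, steps=round(2 * np.pi / 0.01))
print(f"x = {x:.4f}, v = {v:.4f}")
```

In a real molecular dynamics run, acc would evaluate the inter-particle forces (Lennard-Jones, for example) over arrays of positions, but the integration loop is the same.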
