It reads like a paragraph from a Philip K. Dick sci-fi novel: high performance computing (HPC) can perform quadrillions of calculations per second. Quadrillions, a word we seldom hear or even fully comprehend. But here we are; HPC can achieve it, catapulting us into a world of groundbreaking inventions, innovations and complex calculations.
To place it into perspective, a laptop or desktop with a 3 GHz processor can perform around three billion calculations per second. While that is much faster than any human can achieve, it pales in comparison to an HPC solution.
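A rough back-of-the-envelope comparison makes the gap concrete. The figures below are illustrative assumptions (one calculation per clock cycle, and a one-petaFLOPS HPC system), not benchmarks of any particular machine:

```python
# Illustrative comparison of desktop vs. HPC throughput.
# Assumes one calculation per clock cycle for the desktop,
# and a 1 petaFLOPS (10^15 calculations/second) HPC system.
desktop_ops_per_sec = 3e9   # 3 GHz processor ≈ 3 billion calculations/second
hpc_ops_per_sec = 1e15      # 1 quadrillion calculations/second

speedup = hpc_ops_per_sec / desktop_ops_per_sec
print(f"The HPC system is roughly {speedup:,.0f}x faster")  # ~333,333x
```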
Supercomputers are probably the best known HPC solutions; they contain thousands of compute nodes that work together to complete one or more tasks. This is called parallel processing.
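As a minimal sketch of the idea, the snippet below splits one task across several worker processes on a single machine. Real supercomputers coordinate thousands of nodes with frameworks such as MPI, but the principle of dividing the work is the same:

```python
# Minimal single-machine illustration of parallel processing:
# one large task is split into chunks that run simultaneously.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one worker's share of the task."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]

    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))  # chunks run in parallel

    print(total)  # same answer as sum(range(n)), computed cooperatively
```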
HPC is crucial across various domains, from scientific research to financial modelling and game development. In the financial sector, for example, HPC is used to predict market trends, processing vast datasets to identify patterns and insights.
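One classic finance workload of this kind is Monte Carlo simulation, where huge numbers of randomised market scenarios are generated and aggregated. The sketch below is a deliberately simplified, single-node illustration (the drift and volatility figures are made-up assumptions); production systems distribute such simulations across many HPC nodes:

```python
# Simplified Monte Carlo sketch of simulating market scenarios.
# The drift/volatility figures are illustrative assumptions only.
import random

def simulate_final_price(price, drift, volatility, days):
    """Random-walk one price path and return its final value."""
    for _ in range(days):
        price *= 1 + random.gauss(drift, volatility)
    return price

paths = 10_000  # production runs use millions of paths across many nodes
finals = [simulate_final_price(100.0, 0.0003, 0.01, 252) for _ in range(paths)]
print(f"Mean simulated price after a year: {sum(finals) / paths:.2f}")
```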
In gaming, the demand for high-performance machines at home underscores the even greater need for robust HPC infrastructure for game development and rendering. The development of 4K and 8K content, whether for gaming or streaming services like Netflix, relies heavily on HPC to manage the enormous computational requirements.
A strong mind needs a body
Like Vision in Marvel’s Avengers saga, HPC needs a body, or rather a data centre, to function optimally. And building these data centres comes at quite the cost; it requires careful operational, financial and technical consideration.
The above also makes a case for organisations turning to hyperscale providers like Amazon and Microsoft, which offer HPC-as-a-service, allowing organisations to rent computational power on demand. This enables them to expand their HPC capabilities without significant upfront investment.
But for those who intend to go the HPC data centre route, the following should be carefully considered:
- Computing – this is the processing power required to execute complex calculations. It not only demands powerful processors but also efficient interconnectivity to ensure seamless communication between computing nodes.
- Storage – HPC applications generate and manipulate vast amounts of data. Storage solutions should therefore be capable of handling massive datasets and providing quick access to information.
- Network – the network infrastructure is the backbone of HPC, facilitating communication between various components of the system. High-speed, low-latency networks are crucial for ensuring data transfer efficiency and minimising bottlenecks.
- Cooling facilities – the intense computational activities in an HPC environment generate substantial heat, necessitating advanced solutions such as liquid cooling and precision air-conditioning. HPC data centres are power-intensive, often requiring triple the power of traditional data centres.
Liquid cooling, in particular, is gaining prominence for its ability to directly cool high-power components, such as processors and GPUs, reducing the overall thermal load on the system. This not only enhances energy efficiency but also allows for more densely packed computing clusters, which is ideal for HPC.
HPC and cooling in action
Schneider Electric, together with power and cooling expert Total Power Solutions, designed and delivered a new, high-efficiency cooling system to help reduce the PUE (power usage effectiveness) of University College Dublin’s (UCD) main production data centre.
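For context, PUE is the ratio of the total power a facility draws to the power delivered to the IT equipment itself, so a value closer to 1.0 means less energy is spent on overheads such as cooling. The numbers below are illustrative only, not UCD’s actual figures:

```python
# PUE = total facility power / IT equipment power.
# Illustrative numbers only -- not UCD's actual measurements.
def pue(total_facility_kw, it_load_kw):
    return total_facility_kw / it_load_kw

print(pue(1500, 1000))  # 1.5: half as much again goes to cooling, power losses, etc.
print(pue(1200, 1000))  # 1.2: a more efficient facility, closer to the ideal of 1.0
```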
UCD’s data centre was originally designed to accommodate HPC clusters and provides a platform for research across the university’s campus.
Total Power Solutions and Schneider Electric replaced the existing data centre cooling system with our Uniflair InRow Direct Expansion (DX) solution. Schneider Electric’s InRow DX cooling technology offers many benefits, such as modular design, more predictable cooling, and variable speed fans, which help to reduce energy consumption.
The solution at UCD includes 10 independent InRow DX cooling units, which are rightsized to the server load to optimise efficiency. The system is scalable to enable UCD to add further HPC clusters and accommodate future innovations in technology. This includes the introduction of increasingly powerful central processing units (CPUs) and graphics processing units (GPUs).
The InRow DX cooling units work in conjunction with UCD’s existing Schneider Electric EcoStruxure Row Data Centre System and provide a highly efficient, close-coupled design that is suited to high density loads.