SaaS could soon mean “Supercomputing-as-a-Service”

Supercomputers have since played an essential role in computational science. We use them for a wide range of compute-intensive tasks in fields such as quantum mechanics, weather forecasting, climate research, and physical simulations.

Supercomputing League Tables

Since 1993, the TOP500 project has ranked and profiled the 500 most powerful non-distributed computer systems in the world. The top 10 systems on this list are used primarily for research. So it is a matter of pride that here in Asia we have Dammam-7, ranked 10th and installed at Saudi Aramco in Saudi Arabia – the second commercial supercomputer in the current top 10.

PARAM was India’s first supercomputer, and we have made a lot of progress since. In 2015, India announced the National Supercomputing Mission (NSM) to install 73 indigenous supercomputers throughout the country by 2022. NSM is a 7-year program that aims to create a cluster of geographically distributed HPC centres linked over a high-speed network, the National Knowledge Network, connecting academic and research institutions across India. As of November 2020, three systems based in India appear on the TOP500 list – at the Centre for Development of Advanced Computing (No. 62), the Indian Institute of Tropical Meteorology (No. 77), and the National Centre for Medium-Range Weather Forecasting (No. 144).

HPC for the masses

Examples of supercomputers in fiction include HAL-9000, Multivac, WOPR, and Deep Thought — safe to say that outside fiction, supercomputing wasn’t an easily accessible technology

The high cost of entry has remained the barrier to applying HPC to a broader range of business applications. The power of supercomputers has largely been the preserve of government and medical researchers, academics, and innovative moviemakers. Owing to the mystery surrounding them, supercomputers became essential characters in science fiction, much of which deals with the relationship between humans and the computers they build, and with the possibility of conflict eventually developing between them. Examples of supercomputers in fiction include HAL-9000, Multivac, WOPR, and Deep Thought. It’s safe to say that outside fiction, supercomputing wasn’t an easily accessible technology.

COVID-19 has changed that forever. It required very high computing power to be available to the entire scientific and medical community – all at the same time. This demand for computing prowess led to the COVID-19 High Performance Computing (HPC) Consortium, a group made up of industry, academia, US federal agencies, and others that made the world’s most powerful computers available for free to researchers trying to battle the virus. More than 100 projects, drawing on 600 petaflops and 6.8 million CPU cores, already rely on it, with new researchers joining weekly. The consortium is a first step towards making HPC easily accessible and affordable for all.

An added kicker: swarm learning

An artificial intelligence methodology known as swarm learning is being applied to COVID-19 research data. Swarm learning combines the resources of dispersed HPC systems to produce better insights. If one hospital in the UK has data on hundreds of patients, that is not enough to train a machine learning algorithm. But if that data could be combined with data from hospitals in other countries, such as France, Spain, the USA, and India, there would be enough for machine learning purposes. Swarm learning enables this without sharing any patient data, thus overcoming privacy concerns.
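To make the idea concrete, here is a minimal sketch of the core mechanism: each site trains on its own private data, and only model parameters are merged. This is an illustration only – HPE’s swarm learning framework adds blockchain-based coordination that is omitted here, and the datasets, sizes, and the local_update helper below are all invented for the example.

```python
# Illustrative sketch of decentralised model averaging in the spirit of swarm
# learning: each "hospital" fits a model on its own private data, and only the
# learned weights (never the raw records) are merged.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=50):
    """Plain logistic-regression gradient steps on one site's private data."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ weights))
        weights -= lr * X.T @ (p - y) / len(y)
    return weights

# Three hypothetical sites with different amounts of local (synthetic) data.
sites = [(rng.normal(size=(n, 5)), rng.integers(0, 2, size=n)) for n in (80, 200, 50)]

global_w = np.zeros(5)
for _ in range(10):                            # a few "swarm" rounds
    local_ws = [local_update(global_w.copy(), X, y) for X, y in sites]
    sizes = np.array([len(y) for _, y in sites], dtype=float)
    # Merge step: weighted average of parameters only; patient data never moves.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print(np.round(global_w, 3))
```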

COVID-19 Research – not just the domain of medicine

Rickett, Maschhoff, and Sukumar were investigating potential therapies for COVID-19 when they discovered that individuals with COVID-19 who had previously been vaccinated for tetanus showed less severe symptoms. A recent study of pregnant women found that 88% of those who tested positive for the virus were asymptomatic, a rate roughly twice that of the general population. Could the TDaP vaccine, commonly administered to pregnant women, have offered an unexpected level of immunity? Their article detailing the research and proposing the theory has been accepted for publication in the journal Medical Hypotheses.

What’s interesting about these findings, aside from their sheer novelty, is that the authors are not medical researchers. They’re engineers at Cray, the supercomputing arm of Hewlett Packard Enterprise. Before COVID-19, they had no experience with medical research. Yet, earlier this year, they theorized that the capabilities of Cray’s parallel-processing graph database could be leveraged to investigate therapies for the emerging COVID-19 pandemic – and on a far more efficient scale than had ever been done before.

They came up with the idea of doing a protein sequence analysis: a comparison for similarities between one protein sequence (the COVID-19 spike protein) and every other known protein sequence. Their hypothesis was that if they could map that information back to something medicine already knew more about, experts could narrow down compounds that are more likely to be useful as treatments because they target a similar protein.
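The real analysis ran on Cray’s parallel graph database across every known protein sequence, which is not something to reproduce in a few lines. But a toy sketch of one common similarity measure – the fraction of short subsequences (k-mers) two proteins share – shows what “comparing one sequence against the rest” means in practice. The candidate sequences and names below are made up for illustration.

```python
# Toy illustration of protein sequence comparison via shared k-mers
# (Jaccard similarity). This is only the kind of pairwise score such a
# large-scale comparison is built on, not the Cray team's actual pipeline.

def kmers(seq: str, k: int = 3) -> set:
    """All overlapping length-k subsequences of an amino-acid string."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a: str, b: str, k: int = 3) -> float:
    """Fraction of k-mers the two sequences have in common."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

# Short illustrative fragments standing in for the spike protein and candidates.
spike_fragment = "MFVFLVLLPLVSSQCVNLT"
candidates = {
    "similar_candidate":  "MFVFLKLLPLVSAQCVNLA",
    "unrelated_protein":  "GGAAKKPRTWWHHEEDDSS",
}

for name, seq in candidates.items():
    print(f"{name}: {jaccard(spike_fragment, seq):.2f}")
```

A higher score means more shared subsequences, hinting that compounds targeting one protein might also bind the other – the intuition behind mapping the spike protein onto better-understood proteins.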


Medical research has become as much a big-data computational problem as a medical one. Today, you are as likely to see a supercomputer in the laboratory as racks of tissue samples.

HPC as a Service

According to Hyperion, 20% of all HPC jobs were executed through cloud services in 2019, and this market will grow at a CAGR of 16.8% between 2019 and 2024

As businesses discover the power of supercomputing, they are gravitating to the cloud for some of their workloads. Enterprises are finding cloud services handy, with benefits like rapid deployment, the ability to handle erratic spikes in demand, and a pay-per-use model. According to Hyperion, 20% of all HPC jobs were executed through cloud services in 2019, and it projects this market to grow at a CAGR of 16.8% between 2019 and 2024. Hyperion also anticipates that new HPC demand to combat COVID-19 may cause public cloud computing for HPC workloads to grow even faster.

The next generation of supercomputers will be Quantum

Unlike traditional computers, which work on bits and bytes, quantum computers use quantum bits (qubits). They use quantum mechanics concepts like superposition and entanglement to process information, leading to incredible processing speeds because they can account for uncertainty in their calculations rather than relying on purely binary choices. Under lab conditions, a quantum computer has even processed a computation that would typically take the world’s fastest machines nearly 10,000 years to solve. That particular ability might have minimal practical use, but it is a start.
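Superposition and entanglement are easier to picture with a small classical simulation. The sketch below is not a quantum computer and not any vendor’s toolkit – just a NumPy state-vector illustration of how two qubits can be placed in an entangled superposition, where measuring one instantly fixes the other.

```python
# Minimal state-vector sketch: n qubits are described by 2**n complex
# amplitudes. A Hadamard gate creates superposition; a CNOT then entangles
# two qubits into a "Bell" state.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                # flips qubit 2 when qubit 1 is |1>

state = np.zeros(4)
state[0] = 1.0                                 # start in |00>
state = CNOT @ np.kron(H, I) @ state           # H on qubit 1, then CNOT

# Amplitudes of ~0.707 sit on |00> and |11> only: measuring one qubit
# immediately determines the other, which is the entanglement described above.
print(np.round(state, 3))                      # [0.707 0.    0.    0.707]
```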

Accenture has documented over 150 use cases, focusing on identifying those that are most promising in various industries. It has also forged partnerships with the world’s leading quantum cloud vendors to encourage trials among its clients and help them embrace the coming quantum computing revolution.

Experts predict that the early 2020s will be a critical phase for developments in quantum computing. A report from Allied Market Research says that the global enterprise quantum computing market may grow by about 30% annually from 2018 to 2025, reaching about $5.9 billion by 2025.

Antonio Neri, CEO of HPE, talks about how its industry-leading GreenLake everything-as-a-service platform could become better known than HPE itself in the next five years

Putting computing power in the hands of end users and making it super easy for all to use points to a future where everyone consumes technology as a service. Antonio Neri, CEO of HPE, highlighted this trend at HPE Discover 2021 when he talked about how the company’s industry-leading GreenLake everything-as-a-service platform could become better known than HPE itself in the next five years. In the not-so-distant future, we could very well imagine a world where school kids run HPC workloads to do their homework while quantum machines take their place in labs.

What do you think?
