"We don't just procure a new computer"

The flagship supercomputer of the CSCS, "Piz Daint", needs to be replaced. Installation of the successor computer, "Alps", is taking place in three phases and will be completed in 2023. CSCS Director Thomas Schulthess explains in an interview why the new computer is so special.
Since 1 October 2008, Thomas Schulthess has been Director of the Swiss National Supercomputing Centre (CSCS) in Lugano. As CSCS Director, he is also Full Professor of Computational Physics at ETH Zurich. (Photograph: CSCS / A. Della Bella)

ETH News: Last year, the first phase of installing the new supercomputer "Alps", the successor to "Piz Daint", started. How has the work been going so far during the coronavirus crisis?
Thomas Schulthess: We had to change our approach, but we were more or less able to keep to the schedule, even though there were a few minor delays. During the first lockdown, it was not possible to bring the first four cabinets of the new computer from the USA to Switzerland as planned. But the manufacturing company HPE still managed to build the hardware, so we had access to our machines in the US in June and July and were able to work on them. Therefore, the acceptance of the computer cabinets in Lugano in autumn went off without any major problems.

The expansion of "Alps" will last until spring 2023. Why is it taking so long?
Because the product we want is not fully developed yet. The central element of the new computer is the Cray Shasta software stack, which we participated in developing together with other data centres. This software stack is now operational, but it will take another two years until the desired computer infrastructure is completely ready.

What is so special about the new computer infrastructure?
With the Cray Shasta software stack, we have opted for a software-defined infrastructure. For me, that is the decisive point: without this software stack, the new machine would lose much of its value. "Alps" would still be the best variant in high-performance computing available on the market in the foreseeable future, but we clearly have higher goals that we would not achieve without the improved infrastructure. That would be a big disappointment.

What does that mean exactly?
At CSCS, we primarily operate a research infrastructure, which we make available to researchers as a User Lab, among other things. That is our core mission. However, in contrast to other research infrastructures, such as the SwissFEL at PSI, we do practically no research of our own on our instruments. We therefore have to find a creative way to expand our own expertise in order to be able to further develop the research infrastructure. Hence we collaborate closely with researchers from Swiss universities.

So, the CSCS does not see itself primarily as a service provider?
It does, but not in the sense of an IT company that merely operates computers in order to offer computing time. For us, the computer is the means to an end, and the end is the research infrastructure, which we build and develop together with researchers with funding from the ETH Board; the operating costs are covered by contributions from ETH Zurich. We now want to further develop this research infrastructure with a program with the code name “Kathmandu”.

What exactly is this Kathmandu program about?
The Kathmandu program is an important part of the new procurement mentioned above. We are not simply procuring a new computer to be integrated into an otherwise unchanged computer centre; we are retrofitting the computer centre in several expansion phases. Today, we operate different computer systems for different needs at CSCS, but in the future there will be only one infrastructure. For MeteoSwiss, for example, we have operated a dedicated computer up to now. In the future, MeteoSwiss will compute on one or more partitions of this new infrastructure.

«We are retrofitting the whole computer centre in several expansion phases.»      Thomas Schulthess

What is the advantage of this solution?
At CSCS, we have always offered various services for researchers, but the system architecture was not service-oriented. Therefore, we always had to expend a lot of human resources to define these services according to the requirements of the users and the architecture of the machines. This process is now easier because we have a software-defined infrastructure.

What does a software-defined infrastructure mean?
If we do everything right together with HPE, our hardware will be very flexible. This means that, in the future, we will define which services we offer via the software, no longer via the hardware. To do this, we combine so-called microservices. In this way, we define the partitions for the various users, which we then make available to them via standardised interfaces. These can be virtual ad-hoc clusters for individual users, but also predefined clusters that research infrastructures such as the PSI put together with us and then operate themselves. We can also create data platforms with the microservices. For example, we are planning to develop a so-called domain platform for weather and climate simulations with various partners.
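The idea of defining partitions in software rather than hardware can be illustrated with a short sketch. This is a hypothetical illustration only: the class names, service names, and composition model below are assumptions made for this example and are not the actual Cray Shasta API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a software-defined infrastructure: a user-facing
# "partition" is not a fixed set of machines but a composition of microservices
# exposed through a standardised interface.

@dataclass(frozen=True)
class Microservice:
    name: str  # e.g. a scheduler, storage, or data-access service (illustrative names)

@dataclass
class Partition:
    owner: str
    services: tuple

def compose_partition(owner, service_names):
    """Define a partition purely in software by combining microservices."""
    return Partition(owner, tuple(Microservice(n) for n in service_names))

# Illustrative example: a dedicated partition for one user community
meteo = compose_partition("MeteoSwiss", ("scheduler", "object-storage", "forecast-api"))
print(meteo.owner, len(meteo.services))
```

The point of the sketch is that reconfiguring a partition means changing a software definition, not rewiring hardware; the same mechanism could describe an ad-hoc virtual cluster or a domain platform.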

You explicitly mention the field of weather and climate. Will "Alps" also be useful for other fields of science?
Yes, with "Alps" we are developing a "general purpose supercomputer". Our goals do not stop at climate simulations at all. However, these are an extremely good means to an end, since the problem is very clearly formulated in climate simulations. In addition, they represent all the requirements of a modern supercomputing and data infrastructure. We respond to these very specific requirements with an infrastructure that we can then also offer to other research areas.

What does this change for the users of the User Lab?
Previous users of "Piz Daint" will be able to use the new system without any adjustments. It should even be easier for them. We will also further develop the HPC platform for the User Lab as a virtual area within the computer system. This will make the resources more powerful, and they will cover larger parts of the workflow. The scientists will not only be able to carry out simulations, but also pre-process or post-process their data. This makes the whole workflow more efficient for them.

So not much will change for the users. What does the conversion mean for the staff at CSCS?
The new strategy will require fundamentally rethinking certain areas. The engineers providing system and user support will have to adjust, for example, because our previous computers that we operated in addition to "Piz Daint" will be virtual clusters in the future. Some staff will develop and maintain microservices, while others will combine these microservices into virtual clusters or applications that are then made available as services to researchers.

«The new strategy will require fundamentally rethinking certain areas.»      Thomas Schulthess

Great efforts are currently being made in Europe to further advance high-performance computing. This includes, in particular, the EU's Pre-Exascale Initiative. How is CSCS involved in these efforts?
CSCS is a member of the LUMI consortium, which is part of the Pre-Exascale Initiative. The acronym stands for "Large Unified Modern Infrastructure". This is a new pre-exascale supercomputer that will be located in Finland. The LUMI consortium has ten member states, including the Scandinavian countries, where the conditions for producing cheap, CO2-free electricity and cooling the computers are optimal.

Why is this aspect so important?
This can be explained using climate research as an example. Our goal is to develop climate models that can resolve convective clouds such as thunderclouds. "Alps", the successor to "Piz Daint", will have a connected load of 5 to 10 megawatts. However, a computer infrastructure that is to productively deliver the above-mentioned resolution for climate science must have 50 times the performance. Since we can no longer achieve performance gains through Moore's Law, we need a machine 50 times larger than "Alps", which will also increase energy consumption accordingly. It therefore makes sense to build such a computer infrastructure where the required energy can be generated cheaply and in an environmentally friendly way. We do not have such locations in Central Europe, but we do in Northern Europe.
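The scaling argument can be checked with simple arithmetic; the 5 to 10 megawatt load and the factor of 50 are the figures given in the interview:

```python
# Connected load of "Alps" as stated in the interview, in megawatts
alps_load_mw = (5, 10)

# Factor by which performance (and, absent Moore's Law gains, power) must grow
# to run convection-resolving climate simulations productively
scale_factor = 50

required_mw = tuple(load * scale_factor for load in alps_load_mw)
print(required_mw)  # (250, 500)
```

A 250 to 500 megawatt machine is far beyond what could reasonably be operated in Switzerland, which is the motivation for siting such infrastructure where cheap, CO2-free power is available.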

What is the timetable for LUMI?
The LUMI computer is also being built by HPE and is scheduled to reach the pre-exascale performance class in autumn 2021 before going into operation in spring 2022. Our "Alps" system will be installed one and a half years later, at the end of 2022, and will replace "Piz Daint" completely by April 2023. But our new services will already be available this spring and will be expanded later this year during the first expansion phases; and we will try to also integrate LUMI. We will then have a very strong overall infrastructure available, running on two sub-infrastructures, “Alps” and LUMI. We will move faster in this direction than others in Europe.

Does this mean that Swiss computing resources will be relocated abroad in the future?
No, but we must be realistic: we will never operate computers of 100 MW or more in Switzerland. We have to focus the local computing resources in Switzerland on innovative pilot projects and integrate them into a larger network for production. Our intention is to develop software platforms that run on both infrastructures, so that users practically don't notice whether their application is running in Finland or in Lugano.

The Swiss National Supercomputing Centre

The Swiss National Supercomputing Centre (CSCS) develops and provides the infrastructure and know-how in the field of high-performance computing (HPC) to solve important scientific and societal problems. It implements the national strategy for high-performance computing and networks (HPCN), which was passed by the Swiss parliament in 2009. Since 2011, CSCS has had a dedicated User Lab for supercomputing, and it is part of the Swiss Research Infrastructure Roadmap. Since 2020, it has also been a member of the European LUMI consortium, which is building a European supercomputer of the pre-exascale performance class.