Planned HPC shutdown and more (ended)
From: 24.02.2016 / 09:00
To: 18.03.2016 / 09:00
Dear HPC users,

please let me inform you about the following topics:

- complete shutdown of Venus and Taurus on March 14-16, with software updates,
- monthly accounting of CPU quota,
- new mount for the Lustre file system,
- ZIH talks: HPC overview and introduction.

Once a year, the power systems in the new building LZR are tested. For this, all unbuffered machines (e.g. the HPC servers) are powered off. We will shut down Taurus and Venus at about 3 p.m. on March 14. After the "Black Building Test", we will update the Linux software stack. This should fix the recently published glibc bug and the Lustre problems we have encountered. Additionally, we will update the CUDA drivers, so re-compiling of GPU code might be necessary; a quick version check is sketched below. To allow for additional maintenance tasks, like firmware updates on all components, we plan to bring the machines back up by March 16. As soon as they are operational, you will receive an email.
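If you want to check what your GPU code will run against once the machines are back, the following minimal Python sketch reports the driver and runtime versions. It assumes nvidia-smi is on the PATH and libcudart.so is loadable on the node; both are assumptions about the environment, and this is an illustration, not a supported ZIH tool.

    # Quick check of the NVIDIA driver and CUDA runtime versions after the update.
    import ctypes
    import subprocess

    # Driver version as reported by the NVIDIA management tool.
    driver = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"]
    ).decode().strip()
    print("NVIDIA driver version: " + driver)

    # Runtime version, encoded by CUDA as 1000*major + 10*minor.
    libcudart = ctypes.CDLL("libcudart.so")  # assumes the CUDA libraries are in the search path
    version = ctypes.c_int()
    libcudart.cudaRuntimeGetVersion(ctypes.byref(version))
    print("CUDA runtime version: %d.%d"
          % (version.value // 1000, (version.value % 1000) // 10))

If the runtime version no longer matches what your binaries were built with, recompile them against the new CUDA installation.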


Starting on March 1, we will change the accounting of CPU quotas from an annual to a monthly basis. Once your project quota is used up, submitted jobs run with low priority, but they can still run as long as they fit in without blocking other jobs. On the first of every month, all CPU quotas will be reset. With this, we hope to make Taurus easier to handle for users and supporters. (For running projects, we simply divide the annual CPUh of the project by twelve to get its monthly quota.)
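For illustration (the numbers are made up): a project granted 1,200,000 CPUh per year starts each month with a fresh quota of 1,200,000 / 12 = 100,000 CPUh. Jobs submitted after that quota is consumed still run, but at low priority, until the reset on the first of the next month.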


The current setup with the Lustre automounter has led to quite a few questions. We will therefore statically mount the large scratch file system: /scratch => /lustre/scratch2. Users who want to use our parallel SSD scratch file system can do so at /lustre/ssd. Please be aware that this file system is ideal for a high rate of I/O operations (not for streaming data!), but it is much smaller (23 TB) than /scratch (1.8 PB). That is why we urge you to delete your files after usage; a usage sketch follows below.
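As a minimal sketch of the intended usage pattern, the Python example below performs many small writes on the SSD scratch and removes everything afterwards. The per-user subdirectory layout under /lustre/ssd is an assumption for illustration; adjust it to your project's conventions.

    # Run a workload with many small I/O operations on the SSD scratch,
    # then delete the files to keep the 23 TB file system free.
    import getpass
    import os
    import shutil

    workdir = os.path.join("/lustre/ssd", getpass.getuser(), "iops_example")
    os.makedirs(workdir, exist_ok=True)
    try:
        # High rate of small writes: the case the SSD scratch is built for
        # (large streaming I/O belongs on /scratch instead).
        for i in range(1000):
            with open(os.path.join(workdir, "chunk_%d.dat" % i), "wb") as f:
                f.write(os.urandom(4096))
    finally:
        # Please delete your files after usage.
        shutil.rmtree(workdir)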


On March 17, we will give a brief overview of supercomputing at ZIH (9:00, WIL A317). This might be interesting for scientists who do not use HPC yet. The "traditional" and more technical "Introduction into HPC at ZIH" will be held on March 31. Please make sure that all HPC members of your project group have attended this introduction at least once. Please register here: http://web.tu-dresden.de/urzfp/Scripts/anmelden.asp?KN=Z01
