All systems are operational

About This Site

Status page for the Harvard FAS Research Computing cluster and other resources.

Please scroll down to see details on any incidents.

Past Incidents

21st November 2018

No incidents reported

20th November 2018

No incidents reported

19th November 2018

Regal Issues

UPDATE 10:45: Regal is back to normal.

The Regal scratch filesystem is experiencing performance issues, causing jobs to become stuck and nodes to be closed. We are actively working on this issue.

18th November 2018

No incidents reported

17th November 2018

No incidents reported

16th November 2018

Regal Issues

UPDATE 5:12 PM: The Lustre management tools have caught up and are reporting normally. Moving status to operational.

UPDATE 5:02 PM: The reboot is being reconsidered, as performance seems OK. We suspect the diagnostics are not reporting up-to-date information.

UPDATE 4:50 PM: A full reboot of Regal is necessary at this point. Jobs will be suspended; however, due to timeouts that have likely already occurred, some jobs reading from or writing to Regal may fail.

Regal scratch is experiencing performance issues again. We are actively working on the problem. Updates to follow.

15th November 2018

Regal Issues

UPDATE: Regal appears to be back to normal. We will continue to monitor for issues. NOTE: A plan to replace Regal is already under way. Thank you for your understanding and patience.

UPDATE: Regal is responding, but may become slow again as we restart various storage modules.

Regal scratch is experiencing performance issues. We are actively working on the problem.

14th November 2018

No incidents reported

13th November 2018

No incidents reported

For issues not shown here, please contact FASRC via
https://portal.rc.fas.harvard.edu or email rchelp@rc.fas.harvard.edu.