Some systems are experiencing issues

About This Site

Status page for the Harvard FAS Research Computing cluster and other resources.

Please scroll down to see details on any incidents or maintenance notices.

Scheduled Maintenance
Monthly Maintenance

Date: February 3rd, 2020
Time: 7:00 AM - 11:00 AM EST

Monthly cluster maintenance takes place on the first Monday of each month (unless it is a holiday) from 7am-11am.

This includes the scratchlfs02 90-day file retention cleanup (https://www.rc.fas.harvard.edu/policy-scratch/) and rebooting of the login nodes.

Details of all the tasks we will be performing can be found at https://www.rc.fas.harvard.edu/monthly-maintenance
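For reference, the retention cleanup removes scratch files that have gone unused for more than 90 days. Below is a minimal sketch of how a user might list their own at-risk files ahead of the cleanup; the scratch path is a hypothetical placeholder, and it uses modification time as the cutoff criterion, which may differ from the exact criterion the cleanup applies.

import os
import time

# Hypothetical scratch location; substitute your own directory on scratchlfs02.
SCRATCH_DIR = "/n/scratchlfs02/your_lab/your_user"
CUTOFF_SECONDS = 90 * 24 * 60 * 60  # 90 days

now = time.time()
for root, _dirs, files in os.walk(SCRATCH_DIR):
    for name in files:
        path = os.path.join(root, name)
        try:
            mtime = os.path.getmtime(path)
        except OSError:
            continue  # file disappeared or is unreadable; skip it
        if now - mtime > CUTOFF_SECONDS:
            # Not modified in over 90 days: a candidate for the retention cleanup.
            print(path)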

Past Incidents

29th January 2020

No incidents reported

28th January 2020

No incidents reported

27th January 2020

scratchlfs02

3:16pm: scratchlfs02 should be operational

3:06pm: looking into performance issues on scratchlfs02

holylfs cycling

1/28: The instability on holylfs is due to the new Cannon cluster outstripping holylfs's ability to keep up with jobs. We are actively working on a replacement for this system, but it will not be available until later this year. In the meantime, we ask that users not run production jobs on holylfs and use it only for long-term storage. Please use our scratch filesystems for production jobs instead (see the upcoming maintenance email for more information). If you need assistance with rearchitecting your workflow, please contact us.

Update 2:40pm: General high load on holylfs is causing instability. We are looking into this further.

11:46am: looking into performance issues on holylfs

10:11am: holylfs has been rebooted

9:52am: holylfs is being rebooted for the next 15 minutes due to stuck jobs

26th January 2020

No incidents reported

25th January 2020

No incidents reported

For issues not shown here, please contact FASRC via
https://portal.rc.fas.harvard.edu or email rchelp@rc.fas.harvard.edu