FAS Research Computing - Notice history

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu



Cannon Cluster - Operational

SLURM Scheduler - Cannon - Operational

Cannon Compute Cluster (Holyoke) - Operational

Boston Compute Nodes - Operational

GPU nodes (Holyoke) - Operational

seas_compute - Operational

FASSE Cluster - Operational

SLURM Scheduler - FASSE - Operational

FASSE Compute Cluster (Holyoke) - Operational

Kempner Cluster - Operational

Kempner Cluster CPU - Operational

Kempner Cluster GPU - Operational

Login Nodes - Operational

Login Nodes - Boston - Operational

Login Nodes - Holyoke - Operational

FASSE login nodes - Operational

Open OnDemand/VDI - Operational

Cannon Open OnDemand/VDI - Operational

FASSE Open OnDemand/VDI - Operational

Storage - Operational

Netscratch (Global Scratch) - Operational

Home Directory Storage - Boston - Operational

Holylabs - Operational

HolyLFS06 (Tier 0) - Operational

HolyLFS04 (Tier 0) - Operational

HolyLFS05 (Tier 0) - Operational

Holystore01 (Tier 0) - Operational

Isilon Storage Holyoke (Tier 1) - Operational

Holyoke Tier 2 NFS (new) - Operational

Uptime: 100% (Oct 2022: 100.0%, Nov 2022: 100.0%, Dec 2022: 100.0%)

Holyoke Specialty Storage - Operational

holECS - Operational

BosLFS02 (Tier 0) - Operational

Isilon Storage Boston (Tier 1) - Operational

Boston Specialty Storage - Operational

Boston Tier 2 NFS (new) - Operational

Uptime: 100% (Oct 2022: 100.0%, Nov 2022: 100.0%, Dec 2022: 100.0%)

CEPH Storage Boston (Tier 2) - Operational

bosECS - Operational

Tape (Tier 3) - Operational

Samba Cluster - Operational

Globus Data Transfer - Operational

Notice history

Dec 2022

Monthly maintenance Dec 5th 2022 7am-11am
  • Completed
    December 05, 2022 at 7:50 PM

    Maintenance has completed successfully.

  • In progress
    December 05, 2022 at 2:55 PM

    Apologies. The maintenance event on the status page did not start automatically. Maintenance is already underway and will complete at 11am.

  • Planned
    December 05, 2022 at 12:00 PM

    NOTICES

    GPU_TEST and REMOTEVIZ PARTITIONS
    Due to failed nodes, the gpu_test partition is down to 2 nodes and the (single-node) remoteviz partition is down at the moment. We are working with the vendor to replace hardware, but this is still unresolved and there is no ETA at this time. Updates and QoS changes will be posted on our status page when we have them: https://status.rc.fas.harvard.edu/cl8a94kcf17664hvoj8oksxanx

    GLOBUS PERSONAL CLIENT - UPDATE BY DEC 17
    If you are using the Globus Connect Personal client on your machine, please ensure you have updated and are running version 3.2 or greater by December 17th, 2022. You will not be able to use version 3.1 or below after that date. https://docs.globus.org/ca-update-2022/#globusconnectpersonal

    HOLIDAY NOTICES NOVEMBER:
    Office Hours will be held on 11/23 prior to the Thanksgiving break, but will run only from 12-2pm. FASRC staff will be unavailable on Nov. 16th from 12-3pm for a staff event. Thursday and Friday, Nov. 24th and 25th, are university holidays (Thanksgiving).

    HOLIDAY NOTICES DECEMBER:
    Office Hours will not be held on Dec. 21st and will resume Jan. 4th, 2023. Winter break runs Dec. 23 - Jan. 2nd. FASRC will monitor for emergencies during this time, but general questions/tickets will be held until we return on Jan. 3rd, 2023.

    SLURM SCHEDULER UPDATE NOTES:
    Due to a bug in the previous version of Slurm, this upgrade will create a situation where jobs launched on the previous version get stuck in the COMPLETING state until the node is rebooted (see: https://bugs.schedmd.com/show_bug.cgi?id=15078). This means that in the week(s) following the upgrade there will be rolling reboots of the nodes to clear these stuck jobs.

    Users should be aware that any jobs stuck in the COMPLETING state will remain so until the node the job lives on is rebooted, and any node labelled COMPLETING will not be able to receive jobs until it is rebooted. This is caused by a Slurm bug, not by anything in users' code or jobs, so users cannot do anything to clear this state faster. FASRC admins will reboot nodes as soon as they are clear of work to fix this issue.
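
    For reference, a minimal self-check sketch follows, assuming Python 3 and Slurm's standard squeue command are available on a login node; the flags used are generic squeue options, not anything FASRC-specific.

    #!/usr/bin/env python3
    """Sketch: list the current user's jobs stuck in the COMPLETING state."""
    import getpass
    import subprocess

    def completing_jobs(user):
        # -t COMPLETING restricts output to jobs in the COMPLETING state,
        # -h drops the header row, -o selects job ID, partition, and node list.
        result = subprocess.run(
            ["squeue", "-u", user, "-t", "COMPLETING", "-h", "-o", "%i %P %N"],
            capture_output=True, text=True, check=True,
        )
        return [line for line in result.stdout.splitlines() if line.strip()]

    if __name__ == "__main__":
        stuck = completing_jobs(getpass.getuser())
        if stuck:
            print("Jobs stuck in COMPLETING (these clear only when the node is rebooted):")
            print("\n".join(stuck))
        else:
            print("No jobs in COMPLETING state.")

    Affected nodes can likewise be listed with sinfo -t completing; in either case only a reboot by FASRC admins clears the state.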

    GENERAL MAINTENANCE

    • Slurm scheduler update (22.05.x) - See notes above
      -- Audience: All cluster job users
      -- Impact: See notes above. The scheduler and jobs will be paused during the upgrade.

    • Partition decommissioning
      -- Audience: narayan and holymeissner partition users
      -- Impact: These partitions will no longer be available

    • Domain controller DHCP updates
      -- Audience: All users
      -- Impact: No impact expected

    • Holyscratch01 firmware updates
      -- Audience: All users of scratch
      -- Impact: Scratch will be unavailable for short periods

    • Login node and VDI node reboots and firmware updates
      -- Audience: Anyone logged into a login node or VDI/OOD node
      -- Impact: Login and VDI/OOD nodes will be unavailable while updating and rebooting

    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
      -- Audience: Cluster users
      -- Impact: Files older than 90 days will be removed.
      -- Reminder: Scratch 90-day file retention purging runs occur regularly, not just during maintenance periods.
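
    As a rough self-check before the purge, the sketch below lists files under a given directory whose modification time is more than 90 days old. It assumes Python 3; the directory is passed as an argument because scratch paths vary by lab. This only approximates the policy at the link above, so treat it as a convenience rather than an authoritative list.

    #!/usr/bin/env python3
    """Sketch: list files not modified within the last 90 days (likely purge candidates)."""
    import os
    import sys
    import time

    def stale_files(root, days=90):
        cutoff = time.time() - days * 86400
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    if os.path.getmtime(path) < cutoff:
                        yield path
                except OSError:
                    pass  # file vanished or is unreadable; skip it

    if __name__ == "__main__":
        # Usage: python3 stale_files.py /path/to/your/scratch/directory
        if len(sys.argv) != 2:
            sys.exit("usage: stale_files.py <scratch directory>")
        for path in stale_files(sys.argv[1]):
            print(path)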

    Updates on our status page: https://status.rc.fas.harvard.edu

    Thanks!
    FAS Research Computing
    Department and Service Catalog: https://www.rc.fas.harvard.edu/
    Documentation: https://docs.rc.fas.harvard.edu/
    Status Page: https://status.rc.fas.harvard.edu/

Nov 2022

Monthly maintenance Nov 7th 2022 7am-11am
  • Completed
    November 16, 2022 at 5:12 PM

    Maintenance has completed successfully.

  • In progress
    November 07, 2022 at 12:00 PM

    NOTICES

    • GPU_TEST and REMOTEVIZ PARTITIONS - Due to failed nodes, the gpu_test partition is down to 2 nodes and the (single-node) remoteviz partition is down at the moment. We are working with the vendor to replace hardware, but this is still unresolved and there is no ETA at this time. Updates and QoS changes will be posted on our status page when we have them: https://status.rc.fas.harvard.edu/cl8a94kcf17664hvoj8oksxanx

    • GLOBUS PERSONAL CLIENT - UPDATE BY DEC 17: If you are using the Globus Connect Personal client on your machine, please ensure you have updated and are running version 3.2 or greater by December 17th, 2022. You will not be able to use version 3.1 or below after that date. https://docs.globus.org/ca-update-2022/#globusconnectpersonal

    • TRAINING: New training sessions, including monthly new user training, are available. You can find a list and links to sign up here: https://www.rc.fas.harvard.edu/upcoming-training/

    • NERC informs us that they will be performing maintenance on Nov 2nd from 8am-12pm. If you use NERC's services, you can find more information on their status page: https://nerc.instatus.com/

    FASRC REGULAR MAINTENANCE

    • Network maintenance - HIGH IMPACT POSSIBLE
      -- Audience: All users
      -- Impact: Network maintenance will take place during this time and is expected to run the full 4 hours. Work on the firewalls and fibre links will be involved, so impact may be felt across both data centers.

    • Samba admin node reboots
      -- Audience: All users of Samba (desktop) mounts
      -- Impact: Impact is expected to be transparent or minimal. Active Samba mounts could be affected, but only briefly, and should recover on their own.

    • Globus update
      -- Audience: All users of Globus
      -- Impact: Globus may be unavailable for short periods.

    • Domain Controller Updates
      -- Audience: All users
      -- Impact: Minimal impact is expected.

    • Login node and VDI node reboots
      -- Audience: Anyone logged into a login node or VDI/OOD node
      -- Impact: Login and VDI/OOD nodes will be unavailable while updating and rebooting

    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
      -- Audience: Cluster users
      -- Impact: Files older than 90 days will be removed.
      -- Reminder: Scratch 90-day file retention purging runs occur regularly, not just during maintenance periods.

    Thanks,
    FAS Research Computing
    Department and Service Catalog: https://www.rc.fas.harvard.edu/
    Documentation: https://docs.rc.fas.harvard.edu/
    Status Page: https://status.rc.fas.harvard.edu/

Oct 2022

FASRC Monthly maintenance Oct 3rd, 2022 7am-11am + Important Notices
  • Completed
    October 03, 2022 at 3:00 PM

    Maintenance has completed successfully

  • In progress
    October 03, 2022 at 11:00 AM

    Maintenance is now in progress

  • Planned
    October 03, 2022 at 11:00 AM

    FASRC regular monthly maintenance will occur on Monday, October 3rd, 2022 from 7am-11am ( https://www.rc.fas.harvard.edu/monthly-maintenance ). IMPORTANT: There are several important notices below.

    NETWORK MAINTENANCE TUES 9/27 6pm-8pm

    Network maintenance on the MGHPCC/Holyoke fibre links and VPN will take place Tuesday Sept. 27th from 6pm until 8 pm.

    • VPN updates will occur from 6pm-7pm. VPN connections may drop during this time.
    • Maintenance on the fibre links will take place 7pm-8pm. Brief disconnects may occur, especially with login and access nodes.
    • See maintenance event: https://status.rc.fas.harvard.edu/cl898lt5b25823hkojo3eslk1x

    NOTICES

    GPU_TEST and REMOTEVIZ PARTITIONS

    Due to failed nodes, the gpu_test partition is down to 2 nodes at the moment. We are working with the vendor to revive these nodes, but there is no ETA at this time. We are also investigating the root cause of the multiple failures.

    • We had already planned to modify the QoS (job limitations) on the gpu_test partition due to misuse of the partition, but the failures forced us to implement this yesterday. Going forward, gpu_test is limited to 1 job per user. That job is limited to a maximum of 16 cores, 90GB of memory, and 1 GPU (a request that fits these limits is sketched below).

    • Please note: gpu_test should not be used to avoid the GPU queues and scheduling. See Running Jobs (https://docs.rc.fas.harvard.edu/kb/running-jobs/#Slurmpartitions) for a list of available partitions, including the gpu partition(s).

    • The remoteviz node/partition is also down for the same reason. The remoteviz partition is only one node; its QoS remains the same.

    Updates on our status page:  https://status.rc.fas.harvard.edu/cl8a94kcf17664hvoj8oksxanx
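
    For illustration, a job request sitting exactly at these limits might look like the sketch below. It assumes Python 3 on a login node; the wrapped command (nvidia-smi) and the 30-minute walltime are placeholders, not FASRC policy.

    #!/usr/bin/env python3
    """Sketch: submit a short test job within the new gpu_test limits
    (1 job per user; at most 16 cores, 90GB of memory, and 1 GPU)."""
    import subprocess

    SBATCH_ARGS = [
        "sbatch",
        "--partition=gpu_test",
        "--cpus-per-task=16",   # at the 16-core cap
        "--mem=90G",            # at the 90GB memory cap
        "--gres=gpu:1",         # single GPU, per the new QoS
        "--time=0-00:30",       # placeholder walltime; gpu_test is for short tests only
        "--wrap=nvidia-smi",    # placeholder command; substitute your own test
    ]

    if __name__ == "__main__":
        result = subprocess.run(SBATCH_ARGS, capture_output=True, text=True)
        print(result.stdout or result.stderr)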

    GLOBUS PERSONAL CLIENT - UPDATE BY DEC 17

    • If you are using the Globus Connect Personal client on your machine, please ensure you have updated and are running version 3.2 or greater by December 17th, 2022. You will not be able to use version 3.1 or below after that date. https://docs.globus.org/ca-update-2022/#globusconnectpersonal

    TRAINING

    • New training sessions, including monthly new user training, are available. You can find a list and links to sign up here: https://www.rc.fas.harvard.edu/upcoming-training/

    REGULAR MAINTENANCE

    • Login node and VDI node reboots
      -- Audience: Anyone logged into a login node or VDI/OOD node
      -- Impact: Login and VDI/OOD nodes will be unavailable while updating and rebooting
       
    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
      -- Audience: Cluster users
      -- Impact: Files older than 90 days will be removed.
      -- Reminder: Scratch 90-day file retention purging runs occur regularly, not just during maintenance periods.

    Thanks!
    FAS Research Computing
    https://www.rc.fas.harvard.edu
    https://status.rc.fas.harvard.edu
