Experiencing partially degraded performance

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE | Academic


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu


The colors shown in the status bars below were chosen to increase visibility for color-blind visitors.
If the background is dark and the colors appear muted, switch to light mode at the bottom of this page for higher contrast.

Cannon Cluster

Operational

SLURM Scheduler - Cannon

Operational

Cannon Compute Cluster (Holyoke)

Operational

Boston Compute Nodes

Operational

GPU nodes (Holyoke)

Operational

SEAS compute partition

Operational

FASSE Cluster

Operational

SLURM Scheduler - FASSE

Operational

FASSE Compute Cluster (Holyoke)

Operational

Kempner Cluster

Operational

Kempner Cluster CPU

Operational

Kempner Cluster GPU

Operational

Login Nodes

Operational

Login Nodes - Boston

Operational

Login Nodes - Holyoke

Operational

FASSE login nodes

Operational

VDI/OpenOnDemand

Operational

Cannon VDI (Open OnDemand)

Operational

FASSE VDI (Open OnDemand)

Operational

Storage

Degraded performance

Holyscratch01 (Global Scratch)

Degraded performance

Home Directory Storage - Boston

Operational

HolyLFS03 (Tier 0)

Operational

HolyLFS04 (Tier 0)

Operational

HolyLFS05 (Tier 0)

Operational

Holystore01 (Tier 0)

Operational

Holylabs

Operational

BosLFS02 (Tier 0)

Operational

Isilon Storage Boston (Tier 1)

Operational

Isilon Storage Holyoke (Tier 1)

Operational

CEPH Storage Boston (Tier 2)

Operational

Tape - (Tier 3)

Operational

Boston Specialty Storage

Operational

Holyoke Specialty Storage

Degraded performance

Samba Cluster

Operational

Globus Data Transfer

Operational

bosECS

Operational

holECS

Operational

Notice history

Feb 2023

Monthly Maintenance Feb. 6th, 2023 7am-11am
  • Completed
    February 06, 2023 at 4:00 PM

    Maintenance has completed successfully

  • In progress
    February 06, 2023 at 12:00 PM

    Maintenance is now in progress

  • Planned
    February 06, 2023 at 12:00 PM

    NOTICES

    GPU PARTITIONS
    The gpu_test partition is back in service. Job limits are now 64 cores, 8 GPUs, and 750G of RAM. Users can run up to 2 jobs.
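
    Purely as an unofficial illustration, here is a minimal Python sketch that compares a hypothetical resource request against the limits quoted above; it treats the limits as applying to a single request, which is an interpretation on our part rather than something stated in this notice.

      # Unofficial sketch: compare a hypothetical request against the limits
      # quoted in this notice (64 cores, 8 GPUs, 750G RAM; up to 2 running jobs).
      GPU_TEST_LIMITS = {"cores": 64, "gpus": 8, "mem_gb": 750}

      def fits_gpu_test(cores, gpus, mem_gb):
          """Return True if the request stays within the quoted limits."""
          return (cores <= GPU_TEST_LIMITS["cores"]
                  and gpus <= GPU_TEST_LIMITS["gpus"]
                  and mem_gb <= GPU_TEST_LIMITS["mem_gb"])

      print(fits_gpu_test(cores=32, gpus=4, mem_gb=375))   # True: within limits
      print(fits_gpu_test(cores=128, gpus=4, mem_gb=375))  # False: exceeds 64 cores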

    HOLIDAY NOTICE
    February 20th is a university holiday (Presidents' Day)

    GENERAL MAINTENANCE

    • OnDemand Version upgrade to 2.0.29
      Audience: VDI/OpenOnDemand users
      Impact: VDI will be unavailable during this and the above Slurm upgrade

    • Domain controller updates
      Audience: All cluster users
      Impact: Could briefly impact some older systems, otherwise no impact expected

    • Login node and VDI node reboots and firmware updates
      Audience: Anyone logged into a login node or VDI/OOD node
      Impact: Login and VDI/OOD nodes will be unavailable while updating and rebooting

    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
      Audience: Cluster users
      Impact: Files older than 90 days will be removed.

    Reminder: Scratch 90-day file retention purging runs occur regularly, not just during maintenance periods.
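
    For users who want a rough preview of which of their files would fall under the 90-day policy, here is a minimal, unofficial Python sketch; the scratch path is a placeholder to replace with your own directory, and the policy page linked above is authoritative on how retention is actually determined.

      # Unofficial sketch: list files not modified in the last 90 days.
      # SCRATCH_DIR is a placeholder; the real policy may key on a different
      # timestamp than mtime, so treat this only as a rough preview.
      import os
      import time

      SCRATCH_DIR = "/path/to/your/scratch/directory"   # placeholder
      CUTOFF = time.time() - 90 * 24 * 3600              # 90 days ago

      for root, dirs, files in os.walk(SCRATCH_DIR):
          for name in files:
              path = os.path.join(root, name)
              try:
                  if os.path.getmtime(path) < CUTOFF:
                      print(path)
              except OSError:
                  pass  # file disappeared or is unreadable; skip it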

    Thanks!
    FAS Research Computing
    Department and Service Catalog: https://www.rc.fas.harvard.edu/
    Documentation: https://docs.rc.fas.harvard.edu/
    Status Page: https://status.rc.fas.harvard.edu/

Jan 2023

Monthly Maintenance Jan. 9th, 2023 7am-11am
  • Completed
    January 10, 2023 at 4:00 AM

    Maintenance has completed successfully

  • In progress
    January 10, 2023 at 12:00 AM

    Maintenance is now in progress

  • Planned
    January 10, 2023 at 12:00 AM

    NOTICES

    GPU PARTITIONS
    The gpu_test partition is back in service. Job limits are now 64 cores, 8 GPUs, and 750G of RAM. Users can run up to 2 jobs.

    GLOBUS PERSONAL CLIENT - 3.1 Client Deprecated
    If you are using the Globus Connect Personal client on your machine, please ensure you have updated and are running version 3.2 or greater. Version 3.1 and below are deprecated and will not work as of December 17th, 2022. https://docs.globus.org/ca-update-2022/#globusconnectpersonal

    HOLIDAY NOTICE
    January 16th is a university holiday (MLK Day)

    GENERAL MAINTENANCE

    • Slurm upgrade
      Audience: Cluster users
      Impact: Jobs will be paused during the upgrade

    • OnDemand Version upgrade to 2.0.29
      Audience: VDI/OpenOnDemand users
      Impact: VDI will be unavailable during this and the above Slurm upgrade

    • Domain controller updates
      Audience: All cluster users
      Impact: Could briefly impact some older systems, otherwise no impact expected

    • Login node and VDI node reboots and firmware updates
      Audience: Anyone logged into a login node or VDI/OOD node
      Impact: Login and VDI/OOD nodes will be unavailable while updating and rebooting

    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
      Audience: Cluster users
      Impact: Files older than 90 days will be removed.

    Reminder: Scratch 90-day file retention purging runs occur regularly, not just during maintenance periods.

    Thanks!
    FAS Research Computing
    Department and Service Catalog: https://www.rc.fas.harvard.edu/
    Documentation: https://docs.rc.fas.harvard.edu/
    Status Page: https://status.rc.fas.harvard.edu/

Dec 2022

Monthly maintenance Dec 5th 2022 7am-11am
  • Completed
    December 05, 2022 at 7:50 PM

    Maintenance has completed successfully.

  • In progress
    December 05, 2022 at 2:55 PM

    Apologies. The maintenance event on the status page did not start automatically. Maintenance is already underway and will complete at 11am.

  • Planned
    December 05, 2022 at 12:00 PM

    NOTICES

    GPUTEST and REMOTEVIZ PARTITIONS
    Due to failed nodes, the gputest partition is down to 2 nodes and the (single-node) remoteviz partition is down at the moment. We are working with the vendor to replace hardware, but this is still unresolved and there is no ETA at this time. Updates and QoS changes will be posted on our status page when we have them: https://status.rc.fas.harvard.edu/cl8a94kcf17664hvoj8oksxanx

    GLOBUS PERSONAL CLIENT - UPDATE BY DEC 17
    If you are using the Globus Connect Personal client on your machine, please ensure you have updated and are running version 3.2 or greater by December 17th, 2022. You will not be able to use version 3.1 or below after that date. https://docs.globus.org/ca-update-2022/#globusconnectpersonal

    HOLIDAY NOTICES NOVEMBER:
    Office Hours will be held on 11/23 prior to the Thanksgiving break, but will run only from 12-2pm. FASRC staff will be unavailable Nov. 16th from 12-3pm for a staff event. Thursday and Friday, Nov. 24th and 25th, are university holidays (Thanksgiving).

    HOLIDAY NOTICES DECEMBER:
    Office Hours will not be held on Dec. 21st and will resume Jan. 4th, 2023. Winter break runs Dec. 23 - Jan. 2nd. FASRC will monitor for emergencies during this time, but general questions/tickets will be held until we return on Jan. 3rd, 2023.

    SLURM SCHEDULER UPDATE NOTES:
    Due to a bug in previous versions of Slurm, this upgrade will create a situation where jobs launched on the previous version get stuck in the COMPLETING state until the node is rebooted (see: https://bugs.schedmd.com/show_bug.cgi?id=15078). This means that in the week(s) following the upgrade there will be rolling reboots of the nodes to clear these stuck jobs.

    Users should be aware that any jobs stuck in the COMPLETING state will remain so until the node the job lives on is rebooted, and any node that is labelled COMPLETING will not be able to receive jobs until it is rebooted. This is due to a Slurm bug and has nothing to do with users' code or jobs, so users cannot do anything to clear this state faster. FASRC admins will reboot nodes as soon as they are clear of work to fix this issue.
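
    As an unofficial aid for checking whether any of your own jobs are affected, the sketch below calls squeue (available on the login nodes) to list your jobs in the COMPLETING state and the nodes they are on; it is only a convenience wrapper around squeue, not an FASRC tool.

      # Unofficial sketch: list this user's jobs stuck in COMPLETING and their nodes.
      import getpass
      import subprocess

      user = getpass.getuser()
      result = subprocess.run(
          ["squeue", "-u", user, "-t", "COMPLETING", "-h", "-o", "%i %T %N"],
          capture_output=True, text=True, check=True,
      )
      for line in result.stdout.splitlines():
          parts = line.split(maxsplit=2)
          jobid, state = parts[0], parts[1]
          nodes = parts[2] if len(parts) > 2 else "(no nodes listed)"
          print(f"job {jobid} is {state} on {nodes}")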

    GENERAL MAINTENANCE

    • Slurm scheduler update (22.05.x) - See notes above
      -- Audience: All cluster job users
      -- Impact: See notes above. The scheduler and jobs will be paused during the upgrade.

    • Partition decommissioning
      -- Audience: narayan and holymeissner partition users
      -- Impact: These partitions will no longer be available

    • Domain controller DHCP updates
      -- Audience: All users
      -- Impact: No impact expected

    • Holyscratch01 firmware updates
      -- Audience: All users of scratch
      -- Impact: Scratch will be unavailable for short periods

    • Login node and VDI node reboots and firmware updates
      -- Audience: Anyone logged into a login node or VDI/OOD node
      -- Impact: Login and VDI/OOD nodes will be unavailable while updating and rebooting

    • Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
      -- Audience: Cluster users
      -- Impact: Files older than 90 days will be removed.

    Reminder: Scratch 90-day file retention purging runs occur regularly, not just during maintenance periods.

    Updates on our status page: https://status.rc.fas.harvard.edu

    Thanks!
    FAS Research Computing
    Department and Service Catalog: https://www.rc.fas.harvard.edu/
    Documentation: https://docs.rc.fas.harvard.edu/
    Status Page: https://status.rc.fas.harvard.edu/
