Holyoke Specialty Storage experiencing degraded performance

Status page for the Harvard FAS Research Computing cluster and other resources.

Cluster Utilization (VPN and FASRC login required): Cannon | FASSE | Academic


Please scroll down to see details on any Incidents or maintenance notices.
Monthly maintenance occurs on the first Monday of the month (except holidays).

GETTING HELP
https://docs.rc.fas.harvard.edu | https://portal.rc.fas.harvard.edu | Email: rchelp@rc.fas.harvard.edu


The colors shown in the bars below were chosen to increase visibility for color-blind visitors.
If the background is dark and the colors appear muted, switch to light mode at the bottom of this page for higher contrast.

Monthly Maintenance Jan. 9th, 2023 7am-11am

Completed
Scheduled for January 10, 2023 at 12:00 AM – 4:00 AM

Affects

Cannon Cluster
SLURM Scheduler - Cannon
Cannon Compute Cluster (Holyoke)
Login Nodes
Login Nodes - Boston
Login Nodes - Holyoke
Updates
  • Completed
    January 10, 2023 at 4:00 AM

    Maintenance has completed successfully

  • In progress
    January 10, 2023 at 12:00 AM

    Maintenance is now in progress

  • Planned
    January 10, 2023 at 12:00 AM

    NOTICES

    GPU PARTITIONS
    The gputest partition is back in service. Job limits are now 64 cores, 8 GPUs, and 750 GB of RAM. Users can run up to 2 jobs.
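
    For reference, a minimal sketch of a request that stays within those limits is shown below; the partition name and limits come from this notice, while the script name and the specific resource values are only illustrative. The sketch calls sbatch through Python's subprocess module.

        # Illustrative only: submit a hypothetical batch script to the gputest
        # partition within the limits stated above (64 cores, 8 GPUs, 750 GB RAM).
        import subprocess

        subprocess.run(
            [
                "sbatch",
                "--partition=gputest",
                "--gres=gpu:2",        # up to 8 GPUs per the notice
                "--cpus-per-task=16",  # up to 64 cores per the notice
                "--mem=128G",          # up to 750 GB per the notice
                "my_gpu_job.sh",       # hypothetical batch script
            ],
            check=True,
        )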

    GLOBUS PERSONAL CLIENT - 3.1 Client Deprecated
    If you are using the Globus Connect Personal client on your machine, please ensure you have updated and are running version 3.2 or greater. Versions 3.1 and below are deprecated and no longer work as of December 17th, 2022. https://docs.globus.org/ca-update-2022/#globus_connect_personal
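
    As a quick local check, something like the sketch below prints the installed version. It assumes the Linux globusconnectpersonal executable is on your PATH and accepts a -version flag; on other platforms, check the application's About dialog instead.

        # Assumption: the Linux 'globusconnectpersonal' binary is on PATH and
        # supports '-version'; adjust for your platform if needed.
        import subprocess

        result = subprocess.run(
            ["globusconnectpersonal", "-version"],
            capture_output=True,
            text=True,
        )
        # Version output may land on stdout or stderr depending on the build.
        print(result.stdout.strip() or result.stderr.strip())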

    HOLIDAY NOTICE
    January 16th is a university holiday (MLK Day).

    GENERAL MAINTENANCE

    * Slurm upgrade
    Audience: Cluster users
    Impact: Jobs will be paused during upgrade

    * OnDemand Version upgrade to 2.0.29
    Audience: VDI/OpenOnDemand users
    Impact: VDI will be unavailable during this and the above Slurm upgrade

    * Domain controller updates
    Audience: All cluster users
    Impact: Could briefly impact some older systems, otherwise no impact expected

    * Login node and VDI node reboots and firmware updates
    Audience: Anyone logged into a login node or VDI/OOD node
    Impact: Login and VDI/OOD nodes will be unavailable while updating and rebooting

    * Scratch cleanup ( https://docs.rc.fas.harvard.edu/kb/policy-scratch/ )
    Audience: Cluster users
    Impact: Files older than 90 days will be removed.

    Reminder: Scratch 90-day file retention purges run regularly, not just during maintenance periods; the sketch below shows one way to check for files that may be affected.
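
    To see which of your files would currently fall under the 90-day policy, a rough sketch like the one below walks a scratch directory and prints files whose modification time is more than 90 days old. The path is hypothetical (substitute your own lab's scratch location), and the actual purge may use different criteria than plain modification time.

        # Rough sketch: list files not modified in the last 90 days under a
        # hypothetical scratch path. The real retention policy is described at
        # https://docs.rc.fas.harvard.edu/kb/policy-scratch/ and may differ.
        import os
        import time

        SCRATCH = "/n/scratch/your_lab/your_user"  # hypothetical path
        cutoff = time.time() - 90 * 24 * 3600      # 90 days ago

        for root, _dirs, files in os.walk(SCRATCH):
            for name in files:
                path = os.path.join(root, name)
                try:
                    if os.path.getmtime(path) < cutoff:
                        print(path)
                except OSError:
                    pass  # file may have been removed while scanning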

    Thanks!
    FAS Research Computing
    Department and Service Catalog: https://www.rc.fas.harvard.edu/
    Documentation: https://docs.rc.fas.harvard.edu/
    Status Page: https://status.rc.fas.harvard.edu/