Resilimap

Multi-cloud reliability monitoring

Site generated: 2026-03-06 19:03:56 UTC

CircleCI

Last updated: 3/6/2026, 9:03:58 AM

✓ All Systems Operational

Active Incidents: 0
Resolved: 21
Scheduled Maint.: 0
Total Incidents: 21
Total Maint.: 4
Critical: 0
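
These counts can be reproduced from the provider's public status feed. Here is a minimal sketch, assuming status.circleci.com exposes the standard Statuspage v2 JSON API (the endpoint paths and field names below are the stock Statuspage schema, not Resilimap's actual pipeline):

```python
# Minimal sketch: derive the summary counts above from a public
# Statuspage v2 feed. Endpoints and field names follow the standard
# Statuspage schema; this is an assumption, not Resilimap's own code.
import requests

BASE = "https://status.circleci.com/api/v2"

incidents = requests.get(f"{BASE}/incidents.json", timeout=10).json()["incidents"]
maints = requests.get(f"{BASE}/scheduled-maintenances.json", timeout=10).json()[
    "scheduled_maintenances"
]

active = [i for i in incidents if i["status"] != "resolved"]
resolved = [i for i in incidents if i["status"] == "resolved"]
critical = [i for i in active if i["impact"] == "critical"]

print(f"Active Incidents: {len(active)}")
print(f"Resolved: {len(resolved)}")
print(f"Total Incidents: {len(incidents)}")
print(f"Total Maint.: {len(maints)}")
print(f"Critical: {len(critical)}")
```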

Recently Resolved Incidents

Mar 2, 18:21 UTC
Resolved - Looks like everything is good. Thank you for your patience.

Mar 2, 18:15 UTC
Monitoring - The fix has gone out. We will continue to monitor the situation. Sorry for any inconvenience.

Mar 2, 17:59 UTC
Identified - We're seeing issues with Docker on some convenience images (cimg:*), related to a Docker update we pushed this morning. We are working on a fix and will let you know when we have more information. Thank you for your patience.

AI Analysis
Impact: minor
Categories: docker, compute
Users: all-users
Root Cause: A Docker update caused issues with some convenience images (cimg:*)
Started: 2026-03-02 17:59 UTC Resolved: 2026-03-02 18:21 UTC Duration: 22m
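
The Started/Resolved/Duration lines in each analysis are taken from the incident's own timeline. A minimal sketch of that computation, assuming the standard Statuspage incident fields created_at and resolved_at (an assumption about the upstream schema):

```python
# Sketch: compute an incident's duration from Statuspage-style fields.
# "created_at" and "resolved_at" are assumed from the stock Statuspage
# schema; an open incident yields None instead of a running
# time-since-start figure.
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # Statuspage timestamps look like "2026-03-02T17:59:00.000Z"
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def duration_minutes(incident: dict) -> float | None:
    resolved = incident.get("resolved_at")
    if resolved is None:
        return None  # still open: report N/A rather than a growing number
    started = parse_ts(incident["created_at"])
    return (parse_ts(resolved) - started).total_seconds() / 60
```

For the incident above, this gives 22 minutes (17:59 to 18:21 UTC) rather than a figure measured against page-generation time.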

Feb 27, 00:17 UTC
Resolved - GitHub has acknowledged and resolved an incident affecting their platform.
https://www.githubstatus.com/incidents/vd3xqfq36rgm
Users should no longer experience intermittent checkout failures.

Feb 26, 22:57 UTC
Update - We are continuing to investigate this issue.

Feb 26, 22:56 UTC
Update - We are continuing to investigate this issue.

Feb 26, 22:50 UTC
Identified - Some users are currently experiencing inconsistent failures when checking out.
We believe this to be an issue with an upstream VCS provider and are investigating.

AI Analysis
Impact: minor
Categories: vcs, checkout
Users: all-users
Root Cause: An issue with an upstream VCS provider caused inconsistent failures when checking out
Started: 2026-02-26 22:50 UTC Resolved: 2026-02-27 00:17 UTC Duration: 1h 27m

Feb 22, 19:05 UTC
Resolved - Our authentication provider has confirmed the earlier service disruption has been fully mitigated: https://status.auth0.com/incidents/kknl0nbdzbvx

Email/password login functionality has been restored and is operating normally.

Feb 22, 18:53 UTC
Monitoring - Our downstream authentication provider has implemented mitigation actions and is reporting a reduction in errors. We are seeing corresponding improvement in email/password login success rates. We are continuing to monitor closely to ensure stability before marking this incident resolved.

Feb 22, 17:37 UTC
Identified - We’ve identified that the issue affecting email/password sign-ins is caused by a degradation in service from our downstream authentication provider, Auth0: https://status.auth0.com/incidents/kknl0nbdzbvx

Users should still be able to sign in using GitHub or Bitbucket OAuth, which remain operational. We continue to monitor and will update as we learn more.

Feb 22, 17:25 UTC
Investigating - We are currently investigating an issue affecting users attempting to log in with email and password credentials. Sign-in via GitHub or Bitbucket OAuth is not affected and remains fully operational.

This issue appears to be related to an outage with a downstream authentication provider: https://status.auth0.com/

AI Analysis
Impact: major
Categories: auth
Users: all-users
Root Cause: A service disruption with the downstream authentication provider Auth0 caused issues with email/password logins
Started: 2026-02-22 17:25 UTC Resolved: 2026-02-22 19:05 UTC Duration: 1h 40m
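
When an incident is attributed to an upstream provider, the provider's own status feed can be polled to correlate recovery. The Auth0 incident URLs above follow the hosted-Statuspage pattern, so the same v2 API is assumed to be available there as well; a minimal sketch:

```python
# Sketch: check an upstream provider's own status feed. The
# status.auth0.com links above match the hosted-Statuspage URL pattern,
# so the standard /api/v2/status.json endpoint is assumed to exist there.
import requests

resp = requests.get("https://status.auth0.com/api/v2/status.json", timeout=10)
overall = resp.json()["status"]
print(overall["indicator"], "-", overall["description"])  # e.g. "none - All Systems Operational"
```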

Feb 20, 17:44 UTC
Resolved - This incident has been resolved. Between approximately 16:00 UTC and 17:10 UTC, users may have experienced slow load times on the Pipelines page, errors when viewing pipeline and workflow history, and delays in GitHub commit status checks being delivered.

Jobs and builds were not impacted during this time.

We apologize for the disruption and thank you for your patience.

Feb 20, 17:31 UTC
Monitoring - We have deployed a fix to stabilize internal services that were experiencing elevated load. Affected components are recovering and we are closely monitoring to confirm full resolution.

Users may still experience some residual slow load times on the Pipelines page, errors when viewing pipeline and workflow history, and delays in GitHub commit status checks being delivered as systems continue to catch up and stabilize.

Jobs and builds remain unaffected and continue to run as expected.

We thank you for your patience while our engineers worked to stabilize the affected services. We will provide an update within 30 minutes or earlier.

Feb 20, 17:19 UTC
Identified - We have deployed a fix to stabilize internal services that were experiencing elevated load since approximately 16:00 UTC. Affected components appear to be recovering and we are closely monitoring the situation.

Users may still experience some slow load times on the Pipelines page, errors when viewing pipeline and workflow history, and delays in GitHub commit status checks being delivered as systems continue to stabilize.

Jobs and builds remain unaffected and continue to run as expected.

We will provide our next update within 30 minutes or sooner if things change. We thank you for your patience while we continue to work to resolve this issue.

Feb 20, 16:56 UTC
Update - At approximately 16:00 UTC, we began seeing elevated load on internal data systems that serve read operations across CircleCI. This has resulted in degraded performance on the Pipelines page, difficulty loading historical workflow and pipeline data, and delays in GitHub commit status checks being posted.

Write operations and job execution are unaffected — builds are queuing and running as expected. We are actively working to stabilize affected internal services and restore full read performance across the platform.

We will provide an update within 30 minutes. Thank you for your patience while we work to investigate this issue.

Feb 20, 16:25 UTC
Investigating - We are currently investigating two issues affecting CircleCI services. Users may experience slow load times on the Pipelines page and errors when attempting to view workflow details. Additionally, GitHub commit status checks may not be updating as expected for some users.

Builds and jobs are continuing to run and are not impacted at this time.

We will provide an update within 30 minutes. Thank you for your patience while we work to investigate this issue.

AI Analysis
Impact: minor
Categories: ui, vcs
Users: all-users
Root Cause: Elevated load on internal data systems caused degraded performance on the Pipelines page, difficulty loading historical data, and delays in GitHub commit status checks
Started: 2026-02-20 16:25 UTC Resolved: 2026-02-20 17:44 UTC Duration: 1h 19m

Feb 17, 10:30 UTC
Resolved - From 11:48 to 17:16 UTC, some Bitbucket users may have found their organization missing from the CircleCI UI. Additionally, some users may have encountered an issue when using `circleci config validate`. We identified the root cause and have since reverted the change. All systems for Bitbucket users are fully operational as of 17:16 UTC.

AI Analysis
Impact: minor
Categories: ui, integration
Users: all-users
Root Cause: A change was made that caused Bitbucket organizations to be unavailable in the CircleCI UI
Started: 2026-02-16 11:48 UTC Resolved: 2026-02-16 17:16 UTC Duration: 5h 28m

Delay in Jobs starting

Feb 10, 16:13 UTC
Resolved - This incident has been resolved.

Feb 10, 15:54 UTC
Monitoring - We're recovered and are clearing up the backlog.

Feb 10, 15:42 UTC
Update - The issue has been identified and we are pushing out a fix. Thank you for your patience.

Feb 10, 15:41 UTC
Identified - The issue has been identified and we are pushing out a fix. Thank you for your patience.

Feb 10, 15:32 UTC
Update - We are continuing to investigate this issue.

Feb 10, 15:26 UTC
Update - We are continuing to investigate this issue.

Feb 10, 15:21 UTC
Investigating - We are investigating delays in jobs starting.
We will continue to post updates as we investigate.

AI Analysis
Impact: major
Categories: compute
Users: all-users
Root Cause: Capacity constraints in the infrastructure provider caused delays in starting Linux and Remote Docker jobs
Started: 2026-02-10 15:21 UTC Resolved: 2026-02-10 16:13 UTC Duration: 52m

Feb 9, 20:25 UTC
Resolved - Following GitHub's service recovery, all CircleCI functionality has returned to normal operation. Jobs are processing as expected across all resource classes.

Feb 9, 20:13 UTC
Update - Customers using GitHub may continue to experience delays in Pipelines & UI.

Feb 9, 20:05 UTC
Monitoring - We are starting to see some recovery and jobs are beginning to be triggered by GitHub again.

Feb 9, 19:59 UTC
Update - We are currently impacted by an ongoing GitHub outage. We will update once we have more information.

Feb 9, 19:48 UTC
Investigating - We will update as soon as we have more information.

AI Analysis
Impact: major
Categories: integration, network
Users: github-users
Root Cause: An ongoing GitHub outage caused delays in Pipelines and the CircleCI UI
Started: 2026-02-09 19:48 UTC Resolved: 2026-02-09 20:25 UTC Duration: 37m

Feb 3, 18:58 UTC
Resolved - This incident has been resolved. Thank you for your patience.

Feb 3, 18:19 UTC
Monitoring - The capacity constraints affecting Linux and Remote Docker job execution have been mitigated. Jobs are now starting within expected timeframes. We continue to monitor the situation to ensure stability.

- What's impacted: Linux and Remote Docker job execution (now working within normal parameters)
- What's happening: Service levels have returned to normal after implementing mitigation measures

We will provide an update within 15 minutes or sooner if conditions change. Thank you for your patience while our engineers worked to resolve this issue.

Feb 3, 17:46 UTC
Update - We are continuing to work on a fix for this issue.

Feb 3, 17:46 UTC
Identified - We have identified delays affecting Linux and Remote Docker job execution. Customers are currently experiencing approximately 3-6 minute delays for these jobs to start due to capacity constraints in our infrastructure provider. All other compute resource classes are operating normally.

We are actively mitigating this issue by routing a portion of traffic to an alternate region and continue working to restore normal service levels.

- What's impacted: Linux and Remote Docker job execution
- What's happening: Jobs are experiencing 3-6 minute delays starting execution due to upstream capacity constraints

We will provide an update within 30 minutes. Thank you for your patience while we work to reduce these delays.

AI Analysis
Impact: major
Categories: compute
Users: all-users
Root Cause: Delays in job starting due to capacity constraints in the infrastructure provider
Started: 2026-02-03 17:46 UTC Resolved: 2026-02-03 18:58 UTC Duration: 1h 12m

Feb 2, 22:18 UTC
Resolved - The issue affecting email notifications has been resolved. Build completion emails and plan-related notifications are now being delivered normally. We apologize for any inconvenience this may have caused.

Feb 2, 22:05 UTC
Update - We are continuing to monitor for any further issues.

Feb 2, 22:05 UTC
Monitoring - Our upstream provider has resolved the issue affecting their system. We are currently monitoring email notification delivery to confirm full restoration. Build completion emails and plan-related notifications should begin flowing normally. All other notification types and build results through the CircleCI web interface and GitHub checks continue to function normally.

We will provide a final update within 15 minutes.

Feb 2, 21:48 UTC
Update - We are continuing to work with our upstream provider to restore email notification delivery. Build completion emails and plan-related notifications remain impacted. All other notification types and build results through the CircleCI web interface and GitHub checks continue to function normally.

We will provide an update within 30 minutes.

Feb 2, 21:12 UTC
Update - We continue to work with our vendor on restoring email notification delivery. Build completion emails and plan-related notifications remain impacted. All other notification types and build results through the CircleCI web interface and GitHub checks continue to function normally.

We will provide an update within 30 minutes.

Feb 2, 20:34 UTC
Identified - We have identified the issue affecting email notifications. Our notification delivery system is experiencing disruptions that are preventing build completion emails and plan-related notifications from being sent. We are actively working with our vendor to restore service. All other notification types, including Slack and webhook notifications, continue to function normally, and build results remain accessible through the CircleCI web interface and GitHub checks.

We will provide an update within 30 minutes.

Feb 2, 20:07 UTC
Investigating - We are currently experiencing issues with email notifications across CircleCI. Build completion emails and plan-related notifications are not being delivered as expected. All other notification types, including Slack and webhook notifications, continue to function normally. Build results remain accessible through the CircleCI web interface and GitHub checks.

We are actively investigating the issue with our notification delivery system and will provide an update within 30 minutes.

AI Analysis
Impact: minor
Categories: notifications
Users: all-users
Root Cause: Disruptions in the notification delivery system prevented build completion emails and plan-related notifications from being sent
Started: 2026-02-02 20:07 UTC Resolved: 2026-02-02 22:18 UTC Duration: 2h 11m

Jan 29, 19:37 UTC
Resolved - This incident has been resolved.

Jan 29, 19:35 UTC
Update - We are continuing to monitor for any further issues.

Jan 29, 19:35 UTC
Monitoring - A fix has been implemented and we are monitoring the results.

Jan 29, 19:29 UTC
Identified - We are experiencing capacity constraints affecting arm-medium, large, xlarge, 2xl, resulting in job start delays of up to 5 minutes.

Current Status:
- arm-medium, large, xlarge, 2xl: Experiencing delays up to 5 minutes due to capacity constraints
- Docker, Mac, Windows, and Android jobs: Operating normally without delays

Our engineering team is actively working to address these constraints and expand available capacity. All jobs will continue to run normally after the initial delay.

We appreciate your patience as we work to resolve this issue. Next Update: Within 30 minutes or as the situation changes.

AI Analysis
Impact: major
Categories: compute
Users: all-users
Root Cause: Capacity constraints affecting arm-medium, large, xlarge, 2xl resources, resulting in job start delays of up to 5 minutes
Started: 2026-01-29 19:29 UTC Resolved: 2026-01-29 19:37 UTC Duration: 8m

Showing 10 of 21 resolved incidents

Scheduled Maintenance

Feb 24, 06:00 UTC
Completed - The scheduled maintenance has been completed.

Feb 24, 02:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.

Feb 20, 20:38 UTC
Scheduled - Mac Cloud will be undergoing scheduled maintenance on Tuesday, February 24th from 2:00 AM to 6:00 AM UTC. During this window, m4pro.medium and m4pro.large resource classes may be affected. While we do not anticipate significant disruption, customers could experience job queuing, and any jobs running during the maintenance window may fail and be automatically retried.

We will post an update once maintenance is complete.

AI Analysis
Impact: minor
Categories: compute
Users: m4pro.medium and m4pro.large users
Root Cause: Scheduled maintenance on the Mac Cloud resource classes
Started: 2026-02-24 02:00 UTC Completed: 2026-02-24 06:00 UTC Duration: 4h
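
Because maintenance windows are announced in advance, a monitoring layer can surface them before they begin so that job schedules can route around them. A minimal sketch, again assuming the standard Statuspage v2 endpoints:

```python
# Sketch: list upcoming maintenance windows from a public Statuspage v2
# feed. Endpoint and field names are the stock schema (assumed), not
# verified against CircleCI's page specifically.
import requests

url = "https://status.circleci.com/api/v2/scheduled-maintenances/upcoming.json"
for m in requests.get(url, timeout=10).json()["scheduled_maintenances"]:
    print(f'{m["name"]}: {m["scheduled_for"]} to {m["scheduled_until"]}')
```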

Jan 27, 00:00 UTC
Completed - The scheduled maintenance has been completed.

Jan 26, 00:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.

Dec 11, 23:10 UTC
Scheduled - As part of our ongoing infrastructure improvements, we will be deprecating Mac M1 and M2 resource classes on February 16th, 2026.

Ahead of the deprecation date, we will be performing 24-hour brownouts from 00:00:01 to 23:59:59 UTC, during which these resources will be unavailable.

You can opt out of brownouts by disabling the setting in Organization Settings > Advanced > Enable resource class brownouts. However, please note that access to these resources will be entirely removed on February 16th, 2026, regardless of brownout settings.

Please plan to transition to M4 instances (m4pro.medium and m4pro.large) ahead of the deprecation date.

M4 resources will be made available to organizations on free plans starting November 10th, 2025.

AI Analysis
Impact: major
Categories: compute
Users: all-users
Root Cause: Deprecation of Mac M1 and M2 resource classes
Started: 2026-01-26 00:00 UTC Completed: 2026-01-27 00:00 UTC Duration: 24h

Jan 13, 00:00 UTC
Completed - The scheduled maintenance has been completed.

Jan 12, 00:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.

Dec 11, 23:07 UTC
Scheduled - As part of our ongoing infrastructure improvements, we will be deprecating Mac M1 and M2 resource classes on February 16th, 2026.

Ahead of the deprecation date, we will be performing 24-hour brownouts from 00:00:01 to 23:59:59 UTC, during which these resources will be unavailable.

You can opt out of brownouts by disabling the setting in Organization Settings > Advanced > Enable image brownouts. However, please note that access to these resources will be entirely removed on February 16th, 2026, regardless of brownout settings.

Please plan to transition to M4 instances (m4pro.medium and m4pro.large) ahead of the deprecation date.

M4 resources will be made available to organizations on free plans starting November 10th, 2025.

AI Analysis
Impact: major
Categories: compute
Users: all-users
Root Cause: Deprecation of Mac M1 and M2 resource classes
Started: 2026-01-12 00:00 UTC Completed: 2026-01-13 00:00 UTC Duration: 24h

Dec 16, 00:00 UTC
Completed - The scheduled maintenance has been completed.

Dec 15, 16:43 UTC
Update - During our maintenance, we identified an issue with the opt-out feature for the Mac M1/M2 brownouts. Organizations attempting to disable brownouts via Organization Settings > Advanced > Enable image brownouts may have found that the setting was non-functional.

A fix has been deployed and the opt-out feature should now be working as expected.

Dec 15, 00:00 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.

Dec 11, 23:01 UTC
Scheduled - As part of our ongoing infrastructure improvements, we will be deprecating Mac M1 and M2 resource classes on February 16th, 2026.

Ahead of the deprecation date, we will be performing 24-hour brownouts from 00:00:01 to 23:59:59 UTC, during which these resources will be unavailable.

You can opt out of brownouts by disabling the setting in Organization Settings > Advanced > Enable image brownouts. However, please note that access to these resources will be entirely removed on February 16th, 2026, regardless of brownout settings.

Please plan to transition to M4 instances (m4pro.medium and m4pro.large) ahead of the deprecation date.

M4 resources will be made available to organizations on free plans starting November 10th, 2025.

AI Analysis
Impact: major
Categories: compute
Users: all-users
Root Cause: Scheduled deprecation of Mac M1 and M2 resource classes
Started: 2025-12-15 00:00 UTC Completed: 2025-12-16 00:00 UTC Duration: 24h