Atlassian
Last updated: 3/6/2026, 9:03:10 AM
Recently Resolved Incidents
Issues with 403 user authentication errors across Atlassian products
Mar 4, 07:31 UTC
Resolved - An issue impacted user authentication to Atlassian services between 06:10 UTC and 06:35 UTC on Tuesday, 4 March.
Users who were already logged in were not affected.
A deployment suspected of causing the issue was rolled back, and the problem was subsequently resolved.
Slowness in Jira
Sep 10, 13:15 UTC
Resolved - We have identified the root cause: an internal infrastructure component that had been impacting the provisioning process in the Mumbai Region for several Atlassian Cloud products. All affected products are now back online, and no further impact has been observed.
Sep 2, 18:33 UTC
Update - We have identified the root cause of an issue with an internal infrastructure component that has been impacting the provisioning process for the Mumbai Region.
This issue has caused degraded performance and slowness in product provisioning.
We are working on a fix to resolve the issue and recovery is in progress.
Sep 2, 10:56 UTC
Monitoring - The slowness issue has been resolved, and we are actively monitoring the systems to ensure stability.
Sep 2, 09:23 UTC
Investigating - Our team is currently investigating the issue. We are working to identify the cause and will provide updates as soon as possible.
Sep 10, 12:43 UTC
Resolved - Systems have recovered, and the issue has been mitigated. All affected products are fully operational, with no further impact observed.
Sep 10, 10:26 UTC
Investigating - We are investigating issues with slow loading and long response times across all products (Jira, JSM, JSW, and Confluence) and will provide updates here soon.
High RDS CPU on multiple environments
Sep 9, 18:12 UTC
Resolved - Earlier today, we experienced RDS problems for Jira, Jira Service Management, and Jira Work Management. The issue has been resolved, and the services are operating normally.
Sep 9, 17:13 UTC
Monitoring - We identified the root cause of the RDS problems causing issues for Jira, Jira Service Management, and Jira Work Management. We mitigated the issue and are now monitoring the fix.
Sep 9, 16:19 UTC
Investigating - We are observing different RDS problems in different environments for Jira, Jira Service Management, and Jira Work Management. We are actively working on this and will provide more updates here soon.
Sep 9, 16:01 UTC
Identified - We are observing different RDS problems in different environments for Jira, Jira Service Management, and Jira Work Management. We are actively working on this and will provide more updates here soon.
Sep 9, 10:56 UTC
Monitoring - We’ve taken steps to address the issue by scaling up the RDS instances, and we will continue to monitor the systems to ensure everything remains stable.
Sep 9, 08:56 UTC
Investigating - We are investigating issues with latency and 5xx errors for Jira, Jira Service Management, and Jira Work Management due to high RDS CPU on several EU-Central environments, and will provide updates here soon.
Data residency migrations halted
Aug 23, 11:15 UTC
Resolved - The issue with scheduling App data residency migrations has been resolved and the service is operating normally.
Aug 20, 11:46 UTC
Investigating - Data residency migrations are currently not possible for Cloud customers. If you attempt a data residency migration, you will see the error message “We are unable to schedule your move” on the “Review and submit your request” page.
An update will be shared here as soon as this service resumes.
Some products are hard down
Jul 4, 03:54 UTC
Resolved - Between 20:08 UTC and 20:31 UTC on 3 July 2024, we experienced downtime for some products. Jira was impacted for issue creation and project creation, while Confluence was impacted for public link access by anonymous users and for the mission control page. The Confluence issue was fixed at 02:30 UTC on 4 July 2024. The issue has been resolved, and the services are operating normally.
Jul 3, 21:45 UTC
Monitoring - We have mitigated the problem and continue looking into the root cause.
The outage lasted from 20:08 to 20:31 UTC on 3 July.
We are now monitoring closely.
Jul 3, 20:51 UTC
Investigating - We are investigating an issue affecting some Atlassian products and will provide updates here soon.
Some of the www.atlassian.com pages are inaccessible
Jun 6, 04:30 UTC
Resolved - Between 05 June 23:30 UTC and 06 June 04:34 UTC, we experienced issues with some of the pages on www.atlassian.com. The issue has been resolved, and the pages are operating normally.
Jun 6, 03:12 UTC
Identified - We continue to work on resolving the issue with www.atlassian.com. We have identified the root cause and expect recovery shortly.
Jun 6, 01:30 UTC
Investigating - We are investigating reports of intermittent errors on some Atlassian, Opsgenie, Atlassian Access, and Compass-related www.atlassian.com web pages. Once we identify the root cause, we will provide more details.
Service Disruptions Affecting Atlassian Products
Feb 14, 23:32 UTC
Resolved - We experienced increased errors on Confluence, Jira Work Management, Jira Service Management, Jira Software, Opsgenie, Trello, Atlassian Bitbucket, Atlassian Access, Jira Align, Jira Product Discovery, Atlas, Compass, and Atlassian Analytics. The issue has been resolved and the services are operating normally.
Feb 14, 22:55 UTC
Monitoring - We have identified the root cause of the Service Disruptions affecting all Atlassian products and have mitigated the problem. We are now monitoring this closely.
Feb 14, 22:31 UTC
Identified - We have identified the root cause of the increased errors and have mitigated the problem. We continue to work on resolving the issue and monitoring this closely.
Feb 14, 21:57 UTC
Investigating - We are investigating reports of intermittent errors for all Cloud Customers across all Atlassian products. We will provide more details once we identify the root cause.
User searches failing
Feb 7, 17:44 UTC
Resolved - Between 15:40 and 15:57 UTC, customers experienced intermittent failures when searching for users in Atlassian cloud services: Confluence, Jira Work Management, Jira Service Management, Jira Software, Atlassian Bitbucket, Jira Product Discovery, and Compass. The issue has been resolved, and the service is operating normally.
Feb 7, 17:05 UTC
Monitoring - We have mitigated the problem. We are now monitoring closely.
Feb 7, 16:40 UTC
Investigating - We are investigating an issue with our user search service that is impacting Atlassian cloud services: Confluence, Jira Work Management, Jira Service Management, Jira Software, Atlassian Bitbucket, Jira Product Discovery, and Compass. We will provide more details within the next hour.
Degraded SCIM provisioning
Jan 12, 17:13 UTC
Resolved - We experienced degraded SCIM provisioning from external Identity Providers for Confluence, Jira Work Management, Jira Service Management, Jira Software, and Atlassian Access. The issue has been resolved and the service is operating normally.
Jan 12, 06:55 UTC
Update - A fix for the bottleneck identified in the Group synchronization process for SCIM Provisioning has been made. We are seeing processing start to return to normal levels and will be monitoring over the next few hours. The team is all hands on deck to keep improving the situation.
Jan 11, 22:36 UTC
Update - The bottleneck in the Group synchronization process for SCIM Provisioning has seen considerable improvement due to recent changes, but it is still under work. The team is all hands on deck to keep improving the situation.
Jan 11, 18:54 UTC
Update - The bottleneck in the Group synchronization process for SCIM Provisioning is still under work. The team is all hands on deck to improve the situation.
Jan 10, 19:56 UTC
Update - The bottleneck in the Group synchronization process for SCIM Provisioning is still under work. We are closely monitoring the service.
Jan 10, 03:17 UTC
Monitoring - We have identified a bottleneck in the Group synchronization process for SCIM Provisioning. We have increased the resources allocated to the process to mitigate the issue. We are now monitoring the service.
Jan 9, 22:03 UTC
Investigating - We are investigating cases of degraded performance when SCIM provisioning users/groups for Confluence, Jira Work Management, Jira Service Management, Jira Software, and Atlassian Access Cloud customers.
We will provide more details shortly.
Showing 10 of 20 resolved incidents
Scheduled Maintenance
MICROSCORE-4348 Micros API outage to upgrade RDSs from db.t4g.large to db.m8g.large
Apr 30, 06:43 UTC
Completed - The RDS instances have been upgraded as quickly as expected, and nothing related has broken in the time since, so we're in the clear.
Apr 30, 06:31 UTC
Update - From the AWS console for commercial production:
- April 30, 2025, 16:09 (UTC+10:00) Multi-AZ instance failover completed
- April 30, 2025, 16:09 (UTC+10:00) The RDS instance was modified by customer.
- April 30, 2025, 16:08 (UTC+10:00) DB instance restarted
- April 30, 2025, 16:08 (UTC+10:00) The parameter max_wal_senders was set to a value incompatible with replication. It has been adjusted from 20 to 65.
- April 30, 2025, 16:08 (UTC+10:00) Multi-AZ instance failover started.
- April 30, 2025, 16:01 (UTC+10:00) Applying modification to database instance class
So it looks like that took about 2 minutes from the RDS perspective, and we did see some 5XX responses from the Micros API during this time.
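For context, an instance-class change like the one logged above can be sketched with the AWS CLI. This is a hypothetical sketch: the instance identifier below is a placeholder (the real one lives behind the internal Bamboo links), and `--apply-immediately` is what triggers the Multi-AZ failover visible in the console timeline.

```shell
#!/bin/sh
# Hypothetical sketch of the instance-class upgrade described above.
# "micros-api-prod" is a placeholder, not the real instance identifier.
INSTANCE_ID="micros-api-prod"
NEW_CLASS="db.m8g.large"

# --apply-immediately applies the change now (triggering the Multi-AZ
# failover) instead of waiting for the next maintenance window.
CMD="aws rds modify-db-instance --db-instance-identifier $INSTANCE_ID --db-instance-class $NEW_CLASS --apply-immediately"
echo "$CMD"
```

For a Multi-AZ instance, RDS applies the new class to the standby first and then fails over, which is why the client-visible outage is only the failover window rather than the full modification time.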
Apr 30, 06:04 UTC
Update - We've kicked off the deployments:
- commercial production: https://deployment-bamboo.internal.atlassian.com/deploy/viewDeploymentResult.action?deploymentResultId=3456381285
- FedRAMP-moderate production: https://deployment-bamboo.internal.atlassian.com/deploy/viewDeploymentResult.action?deploymentResultId=3456381287
- It's hard to predict exactly when, but there should be two separate 2-3 minute outages, starting 10-15 minutes from now.
Apr 30, 05:32 UTC
Scheduled - Two separate outage windows, each expected to last 2-3 minutes:
- Micros API for commercial production: db.t4g.large to db.m8g.large
- Micros API for FedRAMP-moderate production: db.t4g.large to db.m8g.large
- https://hello.atlassian.net/wiki/spaces/MCORE/pages/5218781089/LDR+UA-13147+AWS+deadline+versus+micros-server+RDSs
- https://hello.jira.atlassian.cloud/browse/MICROSCORE-4348
Marketplace Maintenance Window
Dec 22, 09:02 UTC
Completed - Scheduled maintenance is now complete. Write operations to Marketplace were paused for 20 minutes; all operations are now back to normal.
Dec 22, 06:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Dec 19, 11:43 UTC
Scheduled - Marketplace will be undergoing scheduled maintenance during this window and will be in read-only mode.
confluence.atlassian.com Scheduled Maintenance
Dec 8, 21:17 UTC
Completed - The scheduled maintenance has been completed.
Dec 8, 21:10 UTC
Verifying - Verification is currently underway for the maintenance items.
Dec 8, 20:13 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Dec 8, 20:12 UTC
Scheduled - Atlassian Knowledge Base (confluence.atlassian.com) scheduled maintenance window:
- Starting at 8:00 PM UTC Sunday, December 8th
- Ending at 10:00 PM UTC Sunday, December 8th
Marketplace Maintenance Window
Jun 25, 05:30 UTC
Completed - The scheduled maintenance has been completed.
Jun 25, 03:30 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 24, 04:47 UTC
Scheduled - We are planning scheduled maintenance during this window. Marketplace webpages may experience downtime or reduced performance during this period.
Marketplace Maintenance Window
Jun 16, 09:32 UTC
Completed - The scheduled maintenance has been completed.
Jun 16, 07:40 UTC
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 16, 07:38 UTC
Scheduled - We will be undergoing scheduled maintenance during this window.