This page will be updated as frequently as possible. Last update: Thu, May 14, 9:20.
Some services provided by ARCHIBUS SOLUTION CENTER HOSTING SERVICES OÜ are temporarily unavailable due to an infrastructure incident that occurred at the data center of one of our Authorized Hosting Partners: IBM Cloud – Amsterdam 3 Data Center.
Below, you will find further information regarding the nature of the incident, the estimated timeframes for service restoration, and any potential personal data breach implications under applicable privacy legislation.
The majority of the affected projects have now been restored from our Frankfurt backup site into our Frankfurt failover site (once we received the client's official permission). Some features remain temporarily unavailable, and our team is actively working to restore the previous level of performance and functionality.
We sincerely apologize for the inconvenience caused and remain available to provide any further information or support through the communication channels indicated below.
Last news & updates
UPDATE #12 – AscHS Regained Access to AMS03 – MAY 13, 6:00 am
Good news: during the night we finally regained access to the Amsterdam data center, which means the IBM Cloud team's recovery is progressing.
We are testing the elements of our Archibus Hosting Services Private Cloud one by one, as they become accessible.
We estimate that all our services will be up and running by the end of tomorrow, Thursday 15 May 2026, with performance fine-tuning over the weekend.
Important remark: IBM did not provide any official ETA or ETR.
In any case, we depend on the progress of the IBM Cloud teams on site.
Work in progress, with good expectations.
UPDATE #11 – LATEST COMMUNICATION FROM IBM – MAY 12, 6:44 am
From: IBM Cloud <no-reply@cloud.ibm.com>
Sent: Tuesday, May 12, 2026 6:44:00 AM
To: ASC-HS Support <support@asc-hs.com>
Subject: Incident – Severity 1: IDENTIFIED: Provider Power Infrastructure – Catastrophic Power Loss – AMS03
IBM Cloud

!!!URGENT ACTION REQUIRED!!!
For customers with Cloud Services in the affected Amsterdam data center (AMS03): when we are ready to restore your environment, we plan to restore it to the state it was in prior to the outage (to the extent possible). If you do not wish to have your environment restored as part of the data center restoration and have not already notified us of this direction, please do so by May 12, 2026, 17:30 UTC. If we do not hear from you by then, we will restore your environment to its pre-outage state (to the extent possible) when we are ready in our restoration process. To engage IBM Cloud Support (Support center), please update the current case, or open a request via the Cloud Support Center with your preference.

What happened?
Starting at 06:33 UTC on May 7, 2026, a fire occurred at an Amsterdam data center facility (AMS03) operated by an independent provider (NorthC), hosting some of our IBM Cloud infrastructure. Due to the fire and the resulting power loss, all equipment and servers in this facility went offline, resulting in loss of access to IBM Cloud Services operating out of this facility. NorthC’s emergency response teams are working to restore power to the facility and essential services. In parallel, IBM is moving equipment into server rooms with restored access to power.

Common questions:
Q: Is this an IBM-owned data center?
A: No. This facility is owned by a third-party provider, NorthC. IBM is one of the tenants in this facility.
Q: When will my services be restored?
A: Due to the severe nature of the fire, restoration of services requires IBM to move equipment to alternate server rooms in the facility and reconfigure the control plane. Teams are working 24 hours a day on restoration, and we will provide updates as they become available. IBM Cloud expects a subset of services to be restored on or before May 14, 2026.
While currently in progress, additional time will be required to complete the migration of equipment from one room that is still without power to alternate rooms within the facility.
Q: Was the fire caused by malicious activities?
A: NorthC, the facility provider, has indicated the fire was not caused by malicious activities.
Q: Do we know what caused the fire?
A: The facility provider (NorthC) will undertake a full root cause analysis once restoration activities are complete.
Q: Where is the recovery status for Software offerings, specifically Planning Analytics and Controller, posted?
A: For further details on Software DR recovery activities covering Planning Analytics and Controller:
https://status.ai-apps-comms.ibm.com/planninganalytics
https://status.ai-apps-comms.ibm.com/controller

STATUS
– 2026-05-12 03:40 UTC – IDENTIFIED – See the !!!URGENT ACTION REQUIRED!!! note in the description above. Activities to restore customer access to resources and move equipment following a fire in NorthC’s Amsterdam data center, in which IBM is a tenant, continue to progress as anticipated. IBM continues to recommend that customers invoke or remain on their disaster recovery paths while repair actions are underway. Please contact IBM Cloud support if you require further assistance. For further details on Software DR recovery activities covering Planning Analytics and Controller:
https://status.ai-apps-comms.ibm.com/planninganalytics
https://status.ai-apps-comms.ibm.com/controller
– 2026-05-12 00:16 UTC – IDENTIFIED – See the !!!URGENT ACTION REQUIRED!!! note in the description above. Activities to restore customer access to resources and move equipment following a fire in NorthC’s Amsterdam data center, in which IBM is a tenant, continue to progress as anticipated. IBM continues to recommend that customers invoke or remain on their disaster recovery paths while repair actions are underway. Please contact IBM Cloud support if you require further assistance.
– 2026-05-11 19:50 UTC – IDENTIFIED – See the !!!URGENT ACTION REQUIRED!!! note in the description above. Activities to restore customer access to resources and move equipment following a fire in NorthC’s Amsterdam data center, in which IBM is a tenant, continue to progress as anticipated. IBM continues to recommend that customers invoke or remain on their disaster recovery paths while repair actions are underway. Please contact IBM Cloud support if you require further assistance.
© Copyright IBM Corporation 2014, 2026.
UPDATE#10 – LATEST NEWS FROM NorthC – Mon MAY 11 – 9:00
The temporary, redundant power supply at our location in Almere will be available by Wednesday at 12:00 at the latest. From that moment, customers will be able to switch their systems back on in a controlled manner.
This is later than the timeframe we communicated on Friday, May 8. The reason is a delay in the delivery of a critical component required for the redundant setup. The component is on its way from the European supplier and will arrive in Almere on Tuesday morning. Immediately upon arrival, our technicians will start the installation.
We are deliberately setting up the power supply redundantly from the start, so that the subsequent transition to the regular power grid can take place without any additional interruption.
UPDATE#9 – NEWS FROM AscHS – Mon MAY 11 – 10:30
Our IBM Cloud Frankfurt failover site is ready, and the majority (78%) of the main projects from IBM Cloud Amsterdam that had not been transferred to our failover site in AWS Frankfurt are now uploaded to IBM Cloud Frankfurt.
UPDATE #8 – LATEST NEWS FROM IBM
STATUS:
– 2026-05-10 15:00 UTC – IDENTIFIED – The activities to restore customer access to resources in Server Room 2 and Server Room 3, as well as the relocation of equipment from Server Room 1 to Server Room 3, continue to progress as planned. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions. IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-10 11:00 UTC – IDENTIFIED – The activities to restore customer access to resources in Server Room 2 and Server Room 3, as well as the movement of equipment from Server Room 1 to Server Room 3, continue to progress as planned. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions. IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-10 07:00 UTC – IDENTIFIED – The activities to restore customer access to resources in Server Room 2 and Server Room 3, as well as the movement of equipment from Server Room 1 to Server Room 3, continue to progress as planned. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions. IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-10 03:00 UTC – IDENTIFIED – The activities to restore customer access to resources in Server Room 2 and Server Room 3, as well as the movement of equipment from Server Room 1 to Server Room 3, continue to progress as planned. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions. IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-09 23:00 UTC – IDENTIFIED – The activities to restore customer access to resources in Server Room 2 and Server Room 3, as well as the movement of equipment from Server Room 1 to Server Room 3, continue to progress. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions. IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-09 19:00 UTC – IDENTIFIED – The activities to restore customer access to resources in Server Room 2 and Server Room 3 are progressing according to plan. In parallel, we are progressing in preparing to move equipment from Server Room 1 to Server Room 3. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions.
IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-09 15:00 UTC – IDENTIFIED – The recovery plan to restore customer access to resources in Server Room 2 and Server Room 3 is progressing according to expectations. In parallel, we are preparing to move equipment from Server Room 1 to Server Room 3. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions.
IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-09 11:00 UTC – IDENTIFIED – The recovery plan to restore customer access to resources in Server Room 2 and Server Room 3 is progressing according to expectations. In parallel, we are planning the migration of equipment and workloads from Server Room 1 to Server Room 3. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions.
IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-09 07:00 UTC – IDENTIFIED – The recovery plan to restore customer access to resources in Server Room 2 and Server Room 3, while planning the migration of equipment and workloads from Server Room 1 to Server Room 3, is progressing as expected. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions. IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-09 03:00 UTC – IDENTIFIED – After the initial IBM walkthrough and assessment, all three server rooms are intact, with no visible smoke or water damage.
– Server Room 1 remains without power due to non‑viable adjacent utility and cooling systems.
– The utility and cooling systems for Server Room 2 and Server Room 3 appear to be intact; however, additional control‑plane actions are required to restore customer access.
The recovery plan to restore customer access to resources in Server Room 2 and Server Room 3, while planning the migration of equipment and workloads from Server Room 1 to Server Room 3, is progressing as expected. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions.
IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
– 2026-05-08 23:00 UTC – INVESTIGATING – After the initial IBM walkthrough and assessment, all three server rooms are intact, with no visible smoke or water damage.
– Server Room 1 remains without power due to non‑viable adjacent utility and cooling systems.
– The utility and cooling systems for Server Room 2 and Server Room 3 appear to be intact; however, additional control‑plane actions are required to restore customer access.
The recovery plan to restore customer access to resources in Server Room 2 and Server Room 3, while planning the migration of equipment and workloads from Server Room 1 to Server Room 3, is progressing as expected. IBM is working 24 hours a day, and we expect these actions will take several days to complete. IBM remains committed to transparent communications and will provide updates as we make progress on these actions.
IBM continues to recommend that customers invoke or remain on their disaster recovery paths while assessment and potential repair actions are under review. Please contact IBM Cloud support if you require further assistance.
UPDATE #7 – LATE EVENING – Fri 8 MAY
Update from NorthC in their LinkedIn posts
“Based on the initial findings of the technical investigation, we can provide you with more clarity about the recovery time.
The work to install the power supply and restore power and cooling to the customer equipment in phases is expected to take up to 72 hours. Installing a large number of generators, UPS systems and distributors, as well as pulling more than a kilometre of cable, is a complex operation.
Our teams are working day and night to achieve this as quickly as possible.”
Our AscHS team, in collaboration with our clients (Archibus Business Partners), has fully restored services from our Frankfurt failover endpoint (IBM Cloud) to our alternative data center, AWS Frankfurt (Amazon Web Services). This applies only to projects whose clients accepted the transfer.
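A cutover like the one described above is typically executed at the DNS layer: the service's public name is repointed at the failover site with a short TTL so clients follow the change quickly. The sketch below is a minimal, provider-agnostic illustration; the hostnames, IP, and payload shape are hypothetical (real provider APIs such as Route 53 or Cloudflare each have their own request format):

```python
def dns_cutover_change(record_name, failover_ip, ttl=60):
    """Build a generic DNS record-change payload that repoints a service
    name at its failover address.

    The dict shape here is illustrative only -- each DNS provider's API
    expects its own structure, but the essential inputs are the same:
    name, record type, a short TTL, and the new target address.
    """
    return {
        "action": "UPSERT",  # create the record if absent, overwrite if present
        "record": {
            "name": record_name,
            "type": "A",
            "ttl": ttl,               # short TTL so resolvers pick up the change quickly
            "value": failover_ip,
        },
    }

# Example: repoint a hypothetical project host at a failover site address.
change = dns_cutover_change("project1.example-hosting.com", "203.0.113.10")
```

Keeping the TTL low during an incident is the key design choice: cached answers expire within about a minute, so end users reach the failover site shortly after the record is updated.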
UPDATE #6 – EARLY MORNING – Fri 8 MAY
NorthC has shared a new update on their LinkedIn page. Initial inspections during the night confirmed that smoke spread within the rooms was limited and the fire compartmentation held. Teams on site are now working on restoring power and connectivity, preparing external emergency power to safely bring customer equipment back online: https://www.linkedin.com/embed/feed/update/urn:li:share:7458430761025634304?collapsed=1
UPDATE #5 – EARLY MORNING – Fri 8 MAY
NorthC continues to make progress toward restoring temporary power capabilities to initiate basic services at the Amsterdam data center. Site preparations are ongoing to support potential access, with IBM Cloud clearance still pending.
UPDATE #4 – LATE EVENING – Thu 7 MAY
Preliminary access to the facility to be restored during the night; mitigation operations are ongoing.
The fire in the data center is under control, and preliminary access to the facility will be restored during the night.
The data center’s own technical team will then be able to assess the damage and let us have an ETA for restoring power and connectivity.
UPDATE #3 – EARLY MORNING – Thu 7 MAY
NorthC has posted a follow-up update on their LinkedIn page confirming that the facility is still on fire, but the incident has been downgraded to GRIP 1 and is now under control: https://www.linkedin.com/embed/feed/update/urn:li:share:7458190986964381697?collapsed=1
UPDATE #2 – EARLY MORNING – Thu 7 MAY
Our team is currently implementing our Disaster Recovery to External Datacenter procedure, a mitigating workaround to restore services as soon as possible.
Our AscHS team, in coordination with, and with the consent of, our clients, is moving services to our AWS Frankfurt data center.
UPDATE #1 – EARLY MORNING – Thu 7 MAY
NorthC, the owner of the data center where IBM Cloud and our AscHS projects in Central Europe are hosted, has posted updates on their LinkedIn page:
https://www.linkedin.com/embed/feed/update/urn:li:share:7458147072479903744?collapsed=1
Date and time the incident was discovered
On 07.05.2026 at approximately 9:00 a.m., eDisplay srl became aware of a malfunction of our services attributable to a major incident that occurred at the Amsterdam 03 Data Center (Netherlands), operated by NorthC and used by IBM Cloud — our data processor pursuant to Art. 28 GDPR — to host the infrastructure on which our services rely.
Location of the incident
NorthC Data Center – Amsterdam 03 – Netherlands (IBM incident ID: INC11282490)
Type of personal data breach pursuant to Reg. EU 679/2016 (GDPR)
This is a temporary loss of data availability. No data loss or theft has been recorded: the data is intact but temporarily inaccessible. The personal data affected is that connected to the following services: API, web platform, database, authentication. Each user of the services is in a position to know the volume of data involved. Specific assessments of the circumstances of the incident are underway, in particular regarding the likelihood and severity of the impact of the unavailability of personal data on the rights and freedoms of natural persons, in order to verify whether the incident requires specific notification to the supervisory authority.
Brief description of the nature of the incident
On 07.05.2026 at approximately 9:00 a.m., eDisplay srl became aware of a malfunction of its services, immediately traced to a major incident (fire) that occurred at the supplier hosting the infrastructure we rely on for service delivery. For this reason, our services are temporarily unavailable.
Cause of the incident
Infrastructure incident at the data center – fire.
Status of the incident
At 6:00 p.m. on 07.05.2026, the supplier reported that firefighters were continuing to work to manage the fire and that no estimate was available regarding the time required to restore access to the building.
IBM (our infrastructure supplier, hosted in the affected data center) informed us that their technical team will gain access to the data center at 5:00 UTC (Amsterdam time) and will then report their assessment regarding the damage and the recovery timeframes.
As a result of this event, our services are currently unavailable. Follow the situation in real time below.
Expected timeline
- Initial assessment: Completed
- Damage assessment: In progress
- Infrastructure recovery planning: In progress
- Service restoration: In progress
Brief description of actions taken upon discovery of the incident
Our technical team, in constant contact with IBM Cloud through our account manager in Poland, activated the emergency procedures without delay, in line with IBM’s formal recommendation to its clients to make use of alternative disaster recovery paths. In particular:
- AscHS triggered the internal Disaster Recovery to External Datacenter procedure;
- the rebuilding of services at an alternative data center in Frankfurt was initiated at 10:30;
- the integrity of the data and systems involved has been verified;
- an internal crisis committee has been established with continuous oversight to manage the incident and related communications;
- our entire team was asked to work, without hourly constraints, on the recovery of services and on communication with clients.
Activities are proceeding in the order suggested by each of our customers (Archibus Business Partners / Implementers – [ABP]).
Our affected ABPs have been asked to contact their own End Users affected by the service outage, to obtain permission to restore and, where necessary, transfer their projects to new locations.
Incident timeline (EEST – Eastern European Summer Time in Tallinn, Estonia – UTC + 3)
Thursday 7 May 10:20
Monitoring systems flagged an unresponsive state across all services hosted in Amsterdam. Incident formally declared and response procedures initiated.
Thursday 7 May 10:27
A support case was opened through our IBM Cloud support channel. Case no.: CS4496299.
Thursday 7 May 10:29
First clients (ABPs) reported inaccessibility affecting their End Clients.
Thursday 7 May 10:35
After internal organization, we tried to obtain information from IBM, to gain a clear understanding of the situation before triggering any protocol. We also requested information about rerouting traffic, in case the problem was a network or internet issue.
Thursday 7 May 11:46
We received the first message from IBM: “IBM is aware of an outage in our Amsterdam location. We are currently assessing the situation and additional details will be forthcoming”.
Thursday 7 May 15:09
We received the first confirmation of a fire in the data center from our Account Manager in Poland. No ETA (Estimated Time for Assessment) was provided.
Thursday 7 May 17:59
We received the first official Incident Communication from IBM Cloud. Several updates followed, roughly every two hours, but they contained no clear answers about solutions; this is fully understandable after reviewing in detail the incident and its impact on the operation of the data center.
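The detection step at 10:20 (monitoring flagging every Amsterdam-hosted service as unresponsive at once) can be sketched as a simple reachability probe plus a classification rule. The hostnames below are hypothetical placeholders, and the logic is a minimal illustration of the idea, not our actual monitoring stack:

```python
import socket

# Hypothetical service endpoints -- illustrative only.
SERVICES = {
    "web": ("app.example-hosting.com", 443),
    "api": ("api.example-hosting.com", 443),
}

def probe(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def classify(results):
    """Map per-service probe results (name -> bool) to an overall status.

    All probes failing at once is the signature of a site-wide event
    (power loss, network cut), as opposed to a single-service fault.
    """
    if all(results.values()):
        return "OK"
    if not any(results.values()):
        return "INCIDENT"   # every probe failed: declare a site-wide incident
    return "DEGRADED"

# In a monitoring loop one would run:
#   status = classify({name: probe(h, p) for name, (h, p) in SERVICES.items()})
# and escalate when status == "INCIDENT".
```

On May 7 this is exactly the pattern that appeared: every Amsterdam-hosted endpoint failed simultaneously, which triggered the formal incident declaration.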
Current situation
As described above, we are waiting for the restoration of service, but there is no official ETA or ETR (Estimated Time to Recovery).
Published News
https://www.techzine.eu/news/infrastructure/141131/fire-at-northc-data-center-all-personnel-evacuated-in-time/
https://www.datacenterdynamics.com/en/news/northc-data-center-outside-amsterdam-suffers-fire/
https://www.theregister.com/off-prem/2026/05/07/ibm-cloud-evaporates-as-datacenter-loses-power/5234835
https://www.thestack.technology/data-centre-fire-knocks-ibms-cloud-service-offline/
NorthC Datacenters: Posts | LinkedIn