Some eNom customers have experienced almost two days of downtime after a planned data center migration went titsup, leading to DNS failures hitting what users suspect must have been thousands of domains.
We’ve disabled DNS updates via the Enom Control Panel and API for all domains to ensure we don’t interfere with our ongoing efforts to restore DNS records for domains impacted by the DNS issue. New domain registrations and renewals are working as intended.
There is a good argument that rewiring a house with the mains still connected is not a good idea. I can excuse the disconnection of users’ ability to make changes in the middle of the mess.
But why was the migration target system not fully up and tested before they started?
Given that industry practice these days is to say nothing that might be used in any legal action, I don’t suppose we will ever know what actually happened.
But I suspect that this event will force a rethink on procedures and process. ISPs with affected users should take the opportunity to sell them a .uk complement as part of a standby/backup/safety net.
My reading of it is that they’re consolidating the DC infra, not the platforms.
I imagine they’ve brought up the eNom platform in their own DC and were then doing a DB transfer which took longer than expected, combined perhaps with a bad DNS cluster sync to the new hardware in their own DC. Thinking it was in sync, they then shut down the DNS nodes in the old DC, which took down the portion of the DNS records that hadn’t properly synced over.
But only a guess.
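If that guess is anywhere near right, the failure mode is decommissioning the old DNS nodes before confirming every record actually made it to the new cluster. A minimal sketch of the kind of pre-cutover sanity check that would catch it, assuming zone dumps from each cluster can be reduced to name/value mappings (all names, data, and functions here are hypothetical illustrations, not eNom's actual tooling):

```python
# Hypothetical pre-cutover check: diff zone data exported from the old and
# new DNS clusters, and only allow the old nodes to be shut down when
# nothing is missing or changed. Record data is illustrative only.

def diff_zones(old: dict, new: dict) -> dict:
    """Compare two {record_name: record_value} zone dumps."""
    missing = {k: v for k, v in old.items() if k not in new}
    changed = {k: (v, new[k]) for k, v in old.items()
               if k in new and new[k] != v}
    return {"missing": missing, "changed": changed}

def safe_to_decommission(old: dict, new: dict) -> bool:
    """True only when every old record exists, unchanged, in the new cluster."""
    d = diff_zones(old, new)
    return not d["missing"] and not d["changed"]

if __name__ == "__main__":
    old_cluster = {"example.com.": "192.0.2.1", "www.example.com.": "192.0.2.1"}
    new_cluster = {"example.com.": "192.0.2.1"}  # www record failed to sync
    print(safe_to_decommission(old_cluster, new_cluster))  # False: don't cut over yet
```

The point of the sketch is the gate itself: cutover is blocked on measured record parity, not on an assumption that the sync finished.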
(From what I know of both platforms I’m pretty sure it’d be impossible to import enom into the tucows backend)