Article
Raw resolver traces and status updates point to a genuine DNSSEC integrity failure in the .de namespace, not a zone transfer problem or a missing delegation. Queries toward the root and the parent showed that the trust chain from . to de remained intact: delegation NS sets and DS records were returned, but signature validation failed for a large portion of .de data. Commenters attributed the outage to a malformed or broken RRSIG over an NSEC3 record following a DENIC DNSSEC key rollover, with the problematic key in the chain referenced by tag 33834, while older signatures still verified on some authoritative instances. The result was intermittent SERVFAIL responses from validating resolvers: some query paths landed on nodes serving the bad signature while others reached pre-rollover or otherwise healthy state. Hence the paradox of some users and networks reaching sites normally while others saw broad breakage.

DENIC's status pages moved from partial to full service disruption, and public checks and third-party dashboards tracked widespread validation failures across prominent .de names. Public analysis pointed to cryptographic chain validation, not missing zone data, as the core failure mode. By the evening, traces and reports suggested partial recovery as authoritative and signer state converged again, though a formal root-cause investigation was still pending at the time of reporting.
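A minimal sketch of the kind of check described above, using Python with the third-party dnspython library (the query name and the second resolver address are illustrative assumptions): it sends the same query with the DNSSEC OK bit set and reports the response code and the AD (authenticated data) flag. During the incident, a validating path would return SERVFAIL for affected .de names while a non-validating path still answered.

    import dns.flags
    import dns.message
    import dns.query
    import dns.rcode

    def probe(resolver_ip, qname="denic.de"):
        # Build an A query with EDNS and the DO bit set, so a validating
        # resolver attempts DNSSEC validation and signals the outcome via
        # the rcode (SERVFAIL on failure) and the AD flag (on success).
        query = dns.message.make_query(qname, "A", want_dnssec=True)
        response = dns.query.udp(query, resolver_ip, timeout=3.0)
        return dns.rcode.to_text(response.rcode()), bool(response.flags & dns.flags.AD)

    # 192.0.2.53 is a documentation-range placeholder for a local,
    # non-validating forwarder; substitute your own resolvers.
    for ip in ("8.8.8.8", "192.0.2.53"):
        rcode, validated = probe(ip)
        print(f"{ip}: rcode={rcode} AD={validated}")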
Comments

Users quickly converged on DNSSEC as the cause, noting that non-validating lookups through services like 8.8.8.8 appeared to work while validating paths failed with malformed-signature errors. Several people reported that disabling validation for the de zone in local resolvers such as Unbound restored functionality (see the sketch below), and some observed that different resolvers gave different outcomes depending on anycast server state and cache behavior. Multiple comments echoed the concern that a single failed central signing operation at a country-code TLD can cause broad collateral damage across unrelated services, with comparisons to prior DNSSEC criticism and lists of major DNSSEC outages.

There was visible frustration over the downtime, especially from users debugging local infrastructure and from businesses watching major brands become unreachable, but also relief as some services recovered or were reachable via alternate paths. Other themes included skepticism about change management, calls for stronger disaster-recovery and cold-start planning, and criticism of centralized dependency chains in infrastructure operations. A few participants added humor and dark references, while others contributed practical evidence and links to analyzers, status pages, and historical outage trackers. Overall, the thread moves from immediate troubleshooting and confusion toward broader reliability lessons, with mixed expectations about how quickly this class of outage can be corrected and how often it should be expected to recur.
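The Unbound workaround commenters described corresponds to the server-level domain-insecure option, which tells the resolver to treat a zone as unsigned and skip chain-of-trust validation beneath it. A minimal sketch of such a temporary override (the option is real Unbound syntax; applying it trades DNSSEC protection for availability and should be reverted once the zone is healthy again):

    server:
        # Treat the de zone as insecure: ignore DNSSEC chain-of-trust
        # failures beneath it instead of answering SERVFAIL.
        domain-insecure: "de."

After editing unbound.conf, the change can be applied with a reload (for example via unbound-control reload).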