Introduction
This started as part of our broader research into internal domain name collisions. While reviewing TLD zone files (which contain the authoritative nameserver records for all domains under a top-level domain), we noticed that several domains had authoritative nameservers pointing to hostnames of the form aX-YZ.akam.ne, a clear typo of the expected aX-YZ.akam.net pattern commonly used by Akamai.
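Spotting these from zone data takes little more than a string match. Here is a minimal sketch of that kind of scan (the file name and token layout are assumptions; real TLD zone files vary slightly in format, so treat it as a filter rather than a full parser):

```python
# Minimal sketch: scan a TLD zone file for NS records whose target ends in
# the typo'ed ".akam.ne" suffix instead of the intended ".akam.net".
# "com.zone" and the column layout are assumptions; real zone files vary
# slightly (TTL/class columns, $ORIGIN directives).

def find_typoed_delegations(zone_file_path):
    hits = []
    with open(zone_file_path) as fh:
        for line in fh:
            tokens = line.split()
            if not tokens or tokens[0].startswith(";"):  # blank lines / comments
                continue
            upper = [t.upper() for t in tokens]
            if "NS" in upper and upper.index("NS") + 1 < len(tokens):
                target = tokens[upper.index("NS") + 1].rstrip(".").lower()
                if target.endswith(".akam.ne"):
                    hits.append((tokens[0].rstrip(".").lower(), target))
    return hits

if __name__ == "__main__":
    for domain, nameserver in find_typoed_delegations("com.zone"):
        print(f"{domain} -> {nameserver}")
```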
At first glance, it might seem harmless. But .ne is the country-code top-level domain (ccTLD) for Niger, and, as with any publicly delegated TLD, domains under .ne can be purchased by anyone.
Domain search result.
That typo (a missing “t”) turned out to be more than just a cosmetic error. We acquired the domain akam.ne for €280 to see if any traffic would reach us.
How did it start?
Akamai typically assigns authoritative nameservers like a9-67.akam.net and a20-66.akam.net to customers using its CDN, managed DNS, or security services. These are configured during domain onboarding, and customers are expected to update their domain registrar with the assigned NS records. This step is often manual: customers copy and paste values from Akamai’s control panel or API into their registrar. In many cases, it appeared that someone had mistakenly entered nsxx.akam.ne, likely due to a keyboard slip or a bad copy/paste.
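A simple delegation check at (or after) onboarding would catch this kind of slip. The sketch below uses the third-party dnspython package, and the domain name is just a placeholder:

```python
# Sketch of a post-onboarding sanity check: query a domain's published NS
# records and flag any that end in the typo'ed ".akam.ne" suffix.
# Requires the third-party "dnspython" package (pip install dnspython).
import dns.resolver

def check_delegation(domain):
    for rdata in dns.resolver.resolve(domain, "NS"):
        ns = str(rdata.target).rstrip(".").lower()
        if ns.endswith(".akam.ne"):
            print(f"TYPO:  {domain} delegates to {ns}")
        elif ns.endswith(".akam.net"):
            print(f"OK:    {domain} delegates to {ns}")
        else:
            print(f"OTHER: {domain} delegates to {ns}")

check_delegation("example.com")  # placeholder domain
```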
Initially, we assumed the impact would be limited to domains in the various TLD zone files whose NS records pointed directly to the typo’ed domain. But we quickly realized there were far more typos than we had anticipated. In reality, much of the risk came from delegated subdomains, which are not visible in public TLD zone files and are far harder to identify or audit.
What began as a typo turned into a gateway into legitimate DNS infrastructure that had, in some cases, been quietly misconfigured for years.
Digging deeper…
Once we registered the domain and took control of *.akam.ne, far more DNS queries than we expected started to flow in. Within the first few days, we recorded over 6.4 million queries, primarily for hostnames like a4-64.akam.ne, a9-67.akam.ne, and similar variants. Based on our analysis of the query patterns, we estimated this represented traffic from thousands of misconfigured domains across hundreds of organizations worldwide.
At first, it wasn’t clear what domains these queries were associated with. What we were seeing were lookups for nameserver hostnames, not direct queries for the misconfigured domains themselves. These aX-YZ.akam.ne patterns are typos of legitimate Akamai nameservers (e.g., a4-64.akam.net), suggesting that NS records had been misconfigured, but we didn’t yet know which domains were doing the delegating.
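Those hostnames were easy to pick out mechanically. A filter along these lines, where the regex is our own approximation of the aX-YZ naming scheme rather than anything official, separates the typo’ed nameserver lookups from other noise in a query log:

```python
import re

# Approximation of Akamai-style nameserver hostnames carrying the typo'ed
# ".akam.ne" suffix, e.g. "a4-64.akam.ne" or "a9-67.akam.ne".
TYPOED_NS = re.compile(r"^a\d+-\d+\.akam\.ne\.?$", re.IGNORECASE)

sample_queries = ["a4-64.akam.ne.", "a9-67.akam.ne.", "www.akam.ne."]
for qname in sample_queries:
    label = "typo'ed NS lookup" if TYPOED_NS.match(qname) else "other"
    print(f"{qname:<20} {label}")
```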
To trace the source, we pointed those aX-YZ.akam.ne records to a functioning nameserver under our control (we configured a wildcard record so that anything under *.akam.ne resolved to it). Within the first 5 minutes, we recorded over 300,000 DNS queries, a clear sign that the issue was both active and widespread.
Once the wildcard was active, full resolution requests began arriving, revealing the actual domains that were delegating to the invalid nameservers. The screenshot above includes domain lookups from organizations across the globe, from Australia (.com.au) to Argentina (.com.ar) and even .edu domains in the U.S.
Technical readers will also recognize indicators of internal service lookups leaking externally (e.g., _kerberos._tcp.* or _ldap._tcp.pdc._msdcs.*). These were attributed to a financial institution in Argentina and a university in the U.S. Several global companies and e-commerce websites (blurred for privacy) were also caught in the crossfire.
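For attribution, nothing more exotic than recording the incoming query names was needed. The rough sketch below shows the idea: a bare UDP socket that decodes only the question name from each DNS packet and logs it, without ever answering (a real deployment would of course sit behind, or be, a proper nameserver):

```python
# Passive query logging sketch: listen on UDP/53, decode the QNAME from the
# DNS question section, and print it alongside the source IP. It never sends
# a response, so it only observes. Binding to port 53 requires root (or
# CAP_NET_BIND_SERVICE) and, naturally, authority over the zone in question.
import socket

def decode_qname(packet):
    # The DNS header is 12 bytes; the question name follows as length-prefixed
    # labels terminated by a zero byte (question names are not compressed).
    labels, pos = [], 12
    while pos < len(packet) and packet[pos] != 0:
        length = packet[pos]
        labels.append(packet[pos + 1:pos + 1 + length].decode("ascii", "replace"))
        pos += 1 + length
    return ".".join(labels)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 53))
while True:
    data, (src_ip, _src_port) = sock.recvfrom(512)
    if len(data) > 12:  # ignore anything too short to carry a question
        print(src_ip, decode_qname(data))
```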
That’s when the scale of the issue became clear. This wasn’t just about collecting DNS logs. Having authoritative control over these NS records would allow a threat actor to:
- Respond to DNS queries authoritatively, potentially spoofing legitimate records
- Request and obtain domain-validated SSL certificates, depending on the CA’s validation process
- Redirect traffic to attacker-controlled infrastructure for impersonation, phishing, or man-in-the-middle (MITM) attacks
- Intercept sensitive communications, including authentication tokens, API requests, and payment flows
What began as a one-character typo had quietly turned into a wide-scale security risk that could have been silently exploited for years.
Of course, we didn’t keep the record active for long. Without a proper DNS zone behind them, these queries would fail, potentially causing resolution delays or outages. While DNS clients typically fall back to secondary nameservers when one doesn’t respond (which is what was happening before we configured the *.akam.ne wildcard), introducing a “responsive” but empty server can disrupt that failover behavior. So, in the interest of avoiding harm, we collected enough data for attribution and then disabled the wildcard.
The 1-out-of-X misconception
A common misconception is that if a domain lists multiple authoritative nameservers, having control over just one of them only gives you access to a small portion of the traffic (say 1 out of 4 servers equals 25% of queries). That assumption is based on an older DNS resolution model, where clients queried authoritative nameservers directly and distributed their queries evenly across the list. But that’s not how most DNS resolution works today.
These days, the vast majority of DNS clients no longer talk directly to authoritative servers. Instead, they rely on public recursive resolvers like Google (8.8.8.8), Cloudflare (1.1.1.1), or DNS resolvers provided by their ISP. These recursive resolvers handle the heavy lifting: they query the authoritative nameservers on behalf of clients, cache the responses, and serve cached results to anyone else asking for the same record.
For instance, if Google's resolver queries the typo’ed nameserver that we controlled for example.com and caches a malicious response with a 24-hour TTL, every user who queries Google DNS for example.com (which can be millions of users) over the next 24 hours will receive our spoofed answer.
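The caching half of this is easy to observe. The short sketch below, again using the third-party dnspython package with a placeholder query name, asks a public resolver for the same record twice and compares the TTLs: the second answer comes back with a lower TTL because it is served from the resolver’s cache, and that same cached answer is what every other client of that resolver receives until it expires.

```python
# Observe recursive-resolver caching: query the same name twice via 8.8.8.8
# and compare TTLs. A lower TTL on the second answer means it was served
# from cache rather than re-fetched from the authoritative servers.
# Requires the third-party "dnspython" package.
import time
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.nameservers = ["8.8.8.8"]            # Google Public DNS

first = resolver.resolve("example.com", "A")  # placeholder name
time.sleep(5)
second = resolver.resolve("example.com", "A")

print("first TTL: ", first.rrset.ttl)
print("second TTL:", second.rrset.ttl)        # typically ~5 seconds lower
```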
In practice, that means controlling just one authoritative server can give you full control over resolution, at least for a significant portion of users, depending on resolver cache timing and load distribution.
This caching behavior amplifies the impact of a single typo in ways many teams don’t anticipate. It also makes these issues especially dangerous because they don’t need persistent presence or active exploitation. Even a brief window where a recursive resolver queries and caches your response can redirect traffic at scale.
So no, controlling one server doesn’t just give you 1/X of the traffic. With the right timing, it could give you all of it.
Disclosure
One of the most difficult aspects of this kind of research is disclosure. When a single domain is affected, responsible disclosure is usually straightforward. But in this case, we were looking at thousands of domains, all misconfigured in similar ways and all sending traffic our way. Notifying each one individually was simply not realistic.
We initially focused on identifying a few larger, high-profile organizations. One of them was Mastercard, whose infrastructure was clearly affected. We sent an email to Vulnerability_Management@mastercard.com disclosing the issue. There was no reply. The misconfiguration was quietly fixed, without acknowledgment, clarification, or follow-up.
A few days later, we posted a brief LinkedIn update referencing the incident and framing it as a “classic case of how not to handle vulnerability disclosure.”
That post caught the attention of Bugcrowd, who reached out asking us to take it down on Mastercard’s behalf.
The message was a bit... odd. They referred to us as a “Bugcrowd researcher,” despite the fact that we’ve never submitted anything through their platform. To this day, we’re still not sure whether that label gets applied to anyone who has merely browsed their site, but it certainly felt like a stretch. Needless to say, while the tone was polite, the implication that we were acting unethically didn’t sit well with us…
Brian Krebs covered this story in detail (“Mastercard DNS Error Went Unnoticed for Years”) if you want to know more about it.
Several weeks later, our second disclosure attempt was with Fanatics, as one of their e-commerce websites was also affected.
With no obvious security contact available, we reached out directly to their CISO, Larry Dolan, on LinkedIn on November 27th. Larry responded on December 2nd, directing us to their disclosure portal at fanatics.responsibledisclosure.com.
Despite some concerns over their terms and conditions (see our LinkedIn post here), we submitted the issue the same day.
By December 17th, the issue was still unresolved. For what amounted to a nameserver typo, over two weeks seemed like an excessive SLA, especially considering the implications for PCI DSS compliance.
We followed up with a message expressing concern about the delay, highlighting the potential risk to users and referencing relevant PCI DSS requirements (6.6 and 8.4). We informed them that if not addressed, we would escalate to Cybersource and US-CERT. The issue was remediated shortly afterward.
These examples highlight the friction researchers still face when disclosing widespread or systemic misconfigurations, even with respectful, timely communication.
Months after our initial research, Brian Krebs' article brought wider attention, including from Akamai, who reached out to us directly. They asked if we’d transfer the akam.ne domain to them. We agreed, and Akamai reimbursed us €280 + tax for registration fees.
It was a refreshing example of how disclosure should work: clear, professional coordination to reduce risk and fix the problem.
Closing Thoughts
If there’s one lesson here, it’s this: typos in DNS configs aren’t just harmless mistakes. This research started with a one-character typo and led to the discovery of widespread DNS misconfigurations affecting major infrastructure.
This research also highlighted very real challenges around disclosure, not just technically, but logistically and culturally. While some vendors are responsive and professional, others stay silent or treat well-intentioned researchers as a thorn in their side…
All of this because someone forgot the “t” in .net.