The $40 supply chain attack: How internal domain collisions became a CI/CD nightmare

For $40 in domain registration fees, we gained the technical capability to inject malicious code into AbsorbLMS’s CI/CD pipeline. Over six months, their build systems sent 99,351 queries to domains we controlled, each request representing an opportunity to deliver compromised software packages that could have ultimately reached 34 million learners.

Internal domain collision vulnerabilities represent one of the most underestimated supply chain risks in modern infrastructure. During our ongoing research, we've identified this vulnerability pattern affecting thousands of organizations, from Fortune 500 companies to critical infrastructure providers. Unlike traditional supply chain attacks that target public repositories and trigger security alerts, these attacks exploit trust in internal domains, making them nearly impossible to detect with standard security controls. This case study is a technical deep dive into how this seemingly minor misconfiguration created exploitable access to supply chain infrastructure that could have silently affected millions of users.

TL;DR?

  • Cost of attack: $40 (domain registration)
  • Duration: 180 days(*) of continuous access
  • Traffic observed:
    • 2.15 million requests from 8,672 unique IPs
    • 99,351 requests for software packages
    • Clear-text credentials from 26 developer accounts
    • 10,152 JWT tokens
    • 171 YAML deployment specifications received
  • Impact: Complete CI/CD pipeline exposure - GitLab webhooks → Bamboo builds → Artifactory packages
  • Response: "It's just QA, behind a VPN, low risk"
  • Current status: Domains expired January 4, 2026. Following publication of this case study, Absorb requested we renew and transfer the domains back to them, which we did on January 23, 2026.

This case study isn’t about DNS leaks or WPAD attacks. It’s about what happens when automated build pipelines trust “internal” domains that someone else controls in the public namespace.

(*) A note on timeline: Our tooling automatically registered the domains defensively in January 2025 after discovering the potential collision. We began reporting the issue to AbsorbLMS in May 2025 and offered to transfer the domains at no cost. The 180-day observation period reflects the time between activating our sinkhole (to document the scope of exposure) and the domains’ eventual expiration, during which we repeatedly attempted coordinated disclosure and remediation.

A Note on This Publication

Before publishing, we provided AbsorbLMS with the opportunity to comment on our findings. Their responses appear throughout in pink-highlighted blocks, representing their internal investigation results and risk assessment. We've included these comments to present both perspectives on this case study.

This work was vulnerability research, not authorized penetration testing. We reason from observable conditions outward, considering how adversaries might exploit them. Absorb's assessment is based on assumptions about how their architecture and controls should function, assumptions we couldn't validate without authorization. Absorb reports commissioning an independent penetration test, but without access to scope, methodology, or findings, we cannot verify whether it tested the specific attack primitives we documented.

Who is AbsorbLMS?

AbsorbLMS is a cloud-based Learning Management System trusted by over 2,900 organizations and used by more than 34 million learners worldwide.

Their customers include Fortune 500 companies like Sony, Toyota, Johnson & Johnson, and Samsung. The platform handles employee training, customer education, compliance management, and certification programs, making it a high-value target with access to sensitive organizational data.

image

A brief refresher on internal domain name collisions

An internal domain name collision occurs when a hostname or service name intended to resolve only within a private network is inadvertently resolved through the public Domain Name System (DNS).

As an analogy: imagine calling out “John” in a small office where there’s only one John. Now imagine shouting “John” in a crowded airport. Chances are someone other than your “office John” will respond. The context that made the name unambiguous in the small office no longer applies in public.

To prevent this, best practice has long been to use non-routable, reserved suffixes such as .local for internal domains. Unfortunately, many organizations built their internal infrastructure years ago using domain names under top-level domains (TLDs) that didn't exist at the time or that they didn't know existed (like .cloud, .dev, or .ad). They assumed these domains were safe because they weren't routable. But as ICANN expanded the DNS root zone and introduced hundreds of new TLDs, those assumptions quietly broke. The problem has gotten worse with remote work. When employees worked on-premise, their devices always used internal DNS servers. Now, with hybrid work and VPNs that don't always force-tunnel all traffic, corporate devices increasingly use public DNS resolvers, causing internal queries to leak externally.

The critical risk most people miss is that it's not just about leaked credentials or proxy configurations. In modern DevOps environments, these collisions can compromise your entire software supply chain.

Initial discovery: Following the TLS certificates

As part of our broader research into internal domain name collisions, we developed several techniques and tools to identify internal namespaces that are leaking externally. One particularly effective approach involves the analysis of TLS certificates issued by internal Certificate Authorities (CAs).

Unlike public CAs, which require proof of domain ownership before issuing a certificate, internal CAs can issue certificates for any Fully Qualified Domain Name (FQDN) assumed to exist within the organization’s private namespace, regardless of whether that domain is actually registered or owned publicly.

By design, the internal domain name of the issuing CA, as well as internal hostnames and service names, appears in the certificate subject and/or in the Subject Alternative Name (SAN) extension. This makes these certificates a useful signal for identifying internal namespaces that may overlap with the public DNS.
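
To illustrate the approach (a simplified sketch, not our actual tooling), subject and SAN names can be extracted from such certificates and flagged when they fall under a publicly delegated TLD. This assumes Python with the third-party cryptography library; the suffix list below is an illustrative subset.

from cryptography import x509
from cryptography.x509.oid import NameOID

# Illustrative subset of suffixes that are real, publicly delegated TLDs.
PUBLIC_SUFFIXES = (".ad", ".cloud", ".dev")

def names_from_internal_cert(pem_bytes: bytes) -> list[str]:
    """Extract the subject CN and SAN DNS names from a certificate."""
    cert = x509.load_pem_x509_certificate(pem_bytes)
    names = [a.value for a in cert.subject.get_attributes_for_oid(NameOID.COMMON_NAME)]
    try:
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        names.extend(san.value.get_values_for_type(x509.DNSName))
    except x509.ExtensionNotFound:
        pass
    return sorted(set(names))

def possible_collision(name: str) -> bool:
    """Flag names whose suffix exists in the public DNS root."""
    return name.lower().rstrip(".").endswith(PUBLIC_SUFFIXES)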

During this analysis, we discovered certificates issued by an internal CA named AbsorbLMS CA for the domain absorb.ad. The SAN extension revealed additional internal hostnames, including references to a second internal namespace, blatantmedia.ad, alongside multiple subdomains (e.g., sandbox, integrator, artifactory).

image
image

At this point, two things immediately stood out:

  1. Both absorb.ad and blatantmedia.ad were valid, publicly routable Fully Qualified Domain Names under the .ad country-code TLD.
  2. Neither domain was publicly registered at the time.
image
image

This is the textbook definition of an internal domain collision: the organization using a domain name internally and the entity owning the same domain in the public namespace are different.

Defensive Registration: $40 and a DNS Server

To prevent third-party exploitation (i.e., malicious threat actors) and measure the real-world impact, we registered both domains for approximately $40 total ($20 per domain).

As with all the other domains registered during our research, our intent was explicit and consistent:

  • prevent threat actors from exploiting the collision,
  • document the scope of exposure,
  • responsibly report the issue to the affected organization,
  • and collaboratively transfer the domains back to their rightful owner.

We configured dedicated nameservers to passively log all incoming DNS queries. Given the volume we anticipated, we ingested all telemetry into Elasticsearch for analysis.
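
For illustration only, here is a minimal sketch of such a logging nameserver built on the third-party dnslib library, writing one JSON record per query (our actual setup shipped these records to Elasticsearch). The sinkhole address below is a placeholder, not a real collector.

import json, time
from dnslib import RR, QTYPE, A
from dnslib.server import DNSServer, BaseResolver

SINKHOLE_IP = "198.51.100.10"          # placeholder: address of the logging web server
LOG = open("dns-queries.jsonl", "a")

class SinkholeResolver(BaseResolver):
    def resolve(self, request, handler):
        # Log who asked for what, then answer with the sinkhole address.
        LOG.write(json.dumps({
            "ts": time.time(),
            "src": handler.client_address[0],
            "qname": str(request.q.qname),
            "qtype": QTYPE[request.q.qtype],
        }) + "\n")
        LOG.flush()
        reply = request.reply()
        reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A(SINKHOLE_IP), ttl=300))
        return reply

if __name__ == "__main__":
    DNSServer(SinkholeResolver(), port=53, address="0.0.0.0").start()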

image
image

Once the domains were registered and delegated to our nameservers, traffic started flowing immediately: not sporadic background noise, but sustained traffic characteristic of major internal infrastructure.

This included name resolution attempts typically associated with corporate environments, such as wpad lookups, as well as hostnames resembling internal CI/CD environments (bamboo.absorb.ad, artifactory.absorb.ad) and clearly structured production-style systems (euw1-prd-app-01.absorb.ad, can1-prd-app-01.absorb.ad, and similar patterns).

Initial disclosure attempts: A 90-day journey

Before diving into what we found, here's what happened when we tried to report it.

May 9, 2025 - Sent detailed report to security@absorblms.com

image

Result: Ticket creation failed.

Absorb comment: The failure occurred in the automated Jira ticket creation workflow, but the message itself was received.

May 12, 2025 - Reached out via LinkedIn to AbsorbLMS and AbsorbLMS's CTO

Result: No response

May 15, 2025 - Public LinkedIn post tagging Absorb Software

image

Result: CTO responded: "Please email security@absorblms.com"

image
image

May 15, 2025 - Sent detailed report to security@absorblms.com (again)

Result: Ticket creation failed (again).

image
Absorb comment: The GRC team escalated this report to IT on May 21, 2025

May 19, 2025 - Sent a short message via the website contact form

Result: Within 5 minutes, a phone call from Marcela... from sales. We explained the vulnerability. She thanked us and added us to the marketing mailing list.

image

June 3, 2025 - Sent a follow up message to AbsorbLMS’s CTO via LinkedIn

Result: No guaranteed response-time SLA… We will respond within 90 days.

image
Absorb comment: Directing the report to formal security intake was intended to authenticate the submission and ensure proper handling through established security processes, not to delay action.

June 4, 2025 - Received a generic acknowledgement email from Absorb’s security team

image

August 7, 2025 - Substantive response after 90 days

But here's the twist: AbsorbLMS framed our vulnerability disclosure as a "bug bounty submission" and offered $2,000 under their bug bounty program terms, terms we never agreed to and that would have restricted our ability to publish this research.

image
image
Absorb comment: The report was triaged through our bug bounty program as our standard mechanism for tracking vulnerability disclosures, which allows up to 90 days for initial response. The $2,000 offer was intended to acknowledge the report consistent with our established process, not to restrict independent research. Researchers can decline bug bounty participation and associated terms.

We clarified this was independent research, not a bug bounty submission. At that point, the conversation shifted from technical teams to legal leadership. And the narrative became "QA environment only," "behind a VPN," and "low risk."

Absorb comment: Information Security reports to Legal at Absorb, so legal coordination is standard procedure and doesn't indicate deprioritization. We conducted an internal review and commissioned an independent penetration test to assess CI/CD integrity and environment boundaries. We don't dispute the observed traffic logs, but note that external observations don't necessarily establish whether controls could be traversed to reach production systems.

Meanwhile, our logs told a different story.

image

Deeper traffic analysis: Not low risk

Given the lack of traction following our initial disclosure attempts, we proceeded with a deeper analysis of the traffic reaching the registered domains. The goal was not to speculate about impact, but to identify concrete, observable behaviors and determine whether the exposure resulted in meaningful attack primitives.

To do this, in addition to collecting DNS resolution attempts, the domains were configured to accept connections at the application layer, allowing us to passively observe how internal systems attempted to interact with what they believed were trusted internal services. DNS telemetry alone can show that a hostname is being resolved, but analyzing application-layer traffic reveals how that hostname is being used, whether by browsers, automated tooling, build systems, authentication mechanisms, or operating systems.
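
Conceptually, this is just a catch-all listener that answers every request and records its metadata. The sketch below (standard library only, plain HTTP) is illustrative rather than our exact implementation; in practice the sinkhole also needs valid TLS certificates for the collided hostnames, which we could obtain because we owned the public domains.

import json, time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

LOG = open("app-layer.jsonl", "a")

class CatchAllHandler(BaseHTTPRequestHandler):
    def _record(self):
        LOG.write(json.dumps({
            "ts": time.time(),
            "src": self.client_address[0],
            "method": self.command,
            "path": self.path,
            "host": self.headers.get("Host"),
            "user_agent": self.headers.get("User-Agent"),
            # Presence of credentials is logged; values would only be examined
            # to assess exposure, never replayed.
            "has_authorization": "Authorization" in self.headers,
        }) + "\n")
        LOG.flush()

    def _respond(self):
        self._record()
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    do_GET = do_POST = do_PUT = do_HEAD = _respond

if __name__ == "__main__":
    ThreadingHTTPServer(("0.0.0.0", 80), CatchAllHandler).serve_forever()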

Between July 8, 2025 and January 4, 2026 (180 days), we recorded:

  • 2,150,450 total requests
  • 11,624 requests per day average
  • 8,672 unique source IPs

We also learned more about AbsorbLMS's internal infrastructure:

  • bamboo.absorb.ad: 398,867 requests - Their CI/CD orchestration system
  • artifactory.absorb.ad: 222,460 requests - Their package manager repository
  • sonarqube.absorb.ad: 925,058 requests - Their code quality analysis
  • 111 GitLab webhook events - Every code push notified the absorb.ad domain

Every layer of the DevOps pipeline was exposed.

Absorb comment: The observed traffic was associated with our QA environment using dummy/test data. We commissioned an independent penetration test focused on CI/CD integrity and environment boundaries, which found no viable path to penetrate our systems, introduce malicious code into our builds, or pivot from QA to production. Observable requests to internal services don't necessarily establish that an external party could authenticate, issue commands, or deliver packages accepted downstream.

The real story: Supply chain takeover via package managers

Here's what most people miss about domain collisions: they're not just about intercepting DNS queries or stealing credentials via wpad. In modern CI/CD environments, they can become software supply chain attacks.

The CI/CD automation chain

Modern software development follows this pattern:

  1. Developer commits code to GitLab
  2. GitLab sends webhook to Bamboo CI (to trigger build)
  3. Bamboo CI checks out code and runs build
  4. Bamboo CI retrieves code dependencies from JFrog Artifactory
  5. JFrog Artifactory serves packages (npm, nuget, etc.)
  6. Bamboo CI compiles the code into deployable artifacts
  7. Bamboo CI deploys the artifacts based on the YAML deployment configs
  8. Artifacts deployed to PROD if QA works as intended

The problem: Steps 2, 4, and 5 were all hitting domains we controlled.

The diagram below illustrates how the domain collision inserted us into this CI/CD pipeline. Notice the red-highlighted path: this is where artifactory.absorb.ad queries gave us the ability to serve malicious packages directly into their build process:

image

Package manager exploitation

When we sinkholed artifactory.absorb.ad, we started receiving package manager queries:

GET /artifactory/api/nuget/v3/nuget/FindPackagesById()?id='Absorb.Common.Utilities'&semVerLevel=2.0.0

We recorded 1,585 unique NuGet packages requested. These weren't just public packages being mirrored; names like Absorb.Common.Utilities, Absorb.Core.Logging, and Absorb.Infrastructure.* were clearly proprietary internal libraries.

The canary experiment: proof of exploitation

Simply seeing package requests isn't enough; we needed to prove we could influence what was being built. Referring back to the CI/CD flow diagram above, steps 4-5 represent the moment where our controlled artifactory.absorb.ad domain could serve malicious responses. But we couldn't actually serve malicious packages (this was vulnerability research, not authorized penetration testing). So we designed a test…

When a NuGet client (like a build system) wants to download packages, it first requests a service index, a JSON metadata file that tells it where to find various package resources. This index contains URLs for different operations:

  • /flatcontainer/ - The package download endpoint where actual .nupkg files are retrieved
  • /registration/ - The package metadata endpoint for querying package information, versions, and dependencies

The client trusts this index and uses these URLs for all subsequent package operations. Here's what a legitimate service index looks like:
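
The example below is illustrative, reconstructed from the request paths we observed; the exact URLs and resource versions in a real Artifactory-hosted index may differ:

{
  "version": "3.0.0",
  "resources": [
    {
      "@id": "https://artifactory.absorb.ad/artifactory/api/nuget/v3/nuget/flatcontainer/",
      "@type": "PackageBaseAddress/3.0.0",
      "comment": "Where .nupkg package files are downloaded from"
    },
    {
      "@id": "https://artifactory.absorb.ad/artifactory/api/nuget/v3/nuget/registration/",
      "@type": "RegistrationsBaseUrl/3.6.0",
      "comment": "Where package metadata and version information is queried"
    }
  ]
}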

Our test was simple: modify the service index URLs slightly, then see if build systems requested packages using our modified URLs. If they did, it would prove they parsed and trusted our malicious index.

Step 1: Modify the service index

We changed the legitimate URLs to include canary markers:

  • /flatcontainer/ became /flatcontainer1/
  • /registration/ became /registration2/

These "canary URLs" were our proof mechanism. If we later saw requests to flatcontainer1 or registration2 in our logs, it would prove that build systems parsed our forged service index and trusted it.

Step 2: Serve the modified service index

Every time a build system requested the service index from artifactory.absorb.ad, we responded with our modified version.
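
For illustration, serving the canary-modified index requires nothing more than a small HTTP handler. The sketch below is simplified (plain HTTP, hardcoded index, base URL reconstructed from observed request paths) and is not our exact implementation.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

BASE = "https://artifactory.absorb.ad/artifactory/api/nuget/v3/nuget"

# Identical in shape to a legitimate index, except the endpoint URLs carry
# the canary markers.
CANARY_INDEX = {
    "version": "3.0.0",
    "resources": [
        {"@id": BASE + "/flatcontainer1/", "@type": "PackageBaseAddress/3.0.0"},
        {"@id": BASE + "/registration2/", "@type": "RegistrationsBaseUrl/3.6.0"},
    ],
}

class ServiceIndexHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the canary index; any later request hitting /flatcontainer1/ or
        # /registration2/ proves the client parsed and trusted it.
        body = json.dumps(CANARY_INDEX).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 80), ServiceIndexHandler).serve_forever()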

Step 3: Monitor for canary URL requests

Between July 8, 2025 and January 4, 2026, we recorded 99,351 requests for 1,585 different NuGet packages to the flatcontainer1 canary URL, the package download endpoint. Each of these requests represented a moment where a build system:

  • Received our malicious service index
  • Parsed it
  • Trusted the modified URLs
  • Attempted to download a package file from our specified location
  • Would have installed whatever package we served

This was no longer a theoretical risk. This was 99,351 real opportunities to inject malicious code into the software supply chain.

Absorb comment: While the canary methodology demonstrates that NuGet clients parsed and followed the malicious service index, translating this into production impact would require crossing multiple control boundaries: authenticated access to internal feeds, environment segmentation between QA and production, and release promotion gates. Our investigation found no evidence these boundaries could be bypassed.

Could this attack be detected?

Detecting malicious packages served through our controlled artifactory.absorb.ad domain would be extremely difficult, if not impossible, with standard security controls:

  • The package appears to be coming from the "internal" trusted Artifactory server
  • SSL certificates validate (since we own the public domain, we can obtain valid certificates from public CAs)
  • Build logs show "success"
  • Package names and versions match exactly what was requested
  • Package hash verification passes (since we control the service index that provides package metadata, we also control the hash values that clients use for verification).
  • No warnings, no errors, no anomalies in standard monitoring

Unlike traditional supply chain attacks that target public repositories (triggering security alerts, repository trust warnings, or package verification failures), domain collision attacks exploit infrastructure that organizations have already whitelisted and trust implicitly. Standard security controls won't flag packages from "internal" sources.

Absorb comment: While this class of issue may evade traditional detection signals, layered controls (build-agent egress restrictions, environment segmentation, access controls on artifact promotion, and authentication on package sources) can limit impact. In response, we tightened DNS/egress controls and enabled always-on VPN.

Beyond package injection: Authentication traffic

While analyzing the traffic patterns, we discovered that the domain collision wasn't just affecting package manager queries. Build systems and developers were actively authenticating to services on the domains we controlled, sending credentials and tokens that we passively captured. Between July 8, 2025 and January 4, 2026, we observed:

Developer Authentication Attempts

Build systems and developer workstations attempted to authenticate to Bamboo CI on our controlled domain:

  • 26 unique developer accounts sent authentication requests
  • 14 developers sent credentials via POST requests with os_username and os_password parameters
  • 12 developers sent credentials via HTTP Basic Authentication headers
image

JWT Token Collection

We captured 10,152 unique JSON Web Tokens (JWTs) from various services attempting to authenticate with systems on the domains we controlled:

  • 10,090 tokens issued by https://myabsorb.com/
  • 20 tokens from Communities Integration Service
  • 10 tokens from Recommendations Microservice
  • 6 tokens from Artifactory (JFrog) - build tools authenticating for package repository access
  • 2 tokens from BambooHR integration service
  • 1 token from AWS Cognito

Infrastructure Configuration Exposure

We received 171 complete Bamboo CI/CD specifications automatically uploaded to the domain bamboo.absorb.ad. These disclosed operational blueprints for AbsorbLMS's deployment pipelines.

  • 24 deployment configurations, some with explicit descriptions:
    - description: Starts EC2 Image Builder Pipeline which bakes AMI for UsProduction
    - description: Performs Major LMS Release (myabsorb.com)
    - description: AWS US-East (myabsorb.com / use1-prd-dms-01.absorb.ad)
    - description: AWS EU-West (myabsorb.eu / euw1-prd-app-01.absorb.ad)
  • 24 deployment permission specifications
  • 123 environment permission specifications

While many sensitive values (like database passwords and AWS secret keys) were properly stored as Bamboo variables rather than hardcoded, we found notable exceptions: the AWS Cognito OAuth client secrets for “Content Partner” platforms were hardcoded as literal values in the YAML files for production environments across all four global regions (US, CA, AU, EU).

Absorb comment: The observed authentication traffic was investigated as part of our internal review and penetration testing. We maintain that captured credentials and tokens don't automatically establish actionable access, as utility depends on network segmentation, token validation controls, environment isolation, and promotion gates. Our testing concluded our controls prevented escalation to production.

The response disconnect: "It's just QA"

After we clarified this wasn't a bug bounty submission and detailed the risks, AbsorbLMS's response focused on environment scoping and risk classification:

"[We] confirmed that the data in question was limited to a QA testing environment, with no exposure of client data, production systems, or other sensitive information, making it a low-risk scenario overall. To address it fully, we've now enforced always-on VPN for all employees."

Reality check

⇒ "QA only"

We observed:

  • 62,256 requests to use1-prd-dms-01.absorb.ad (prd = production??)
  • Packages named Absorb.Core.*, Absorb.Common.*, Absorb.Infrastructure.* (likely used in production code)
  • 171 deployment YAML files showing production environment configurations
  • 10,090 JWT tokens issued by https://myabsorb.com/
  • 99,351 package installation attempts over 180 days
Absorb comment: We maintain the affected systems were QA-only using dummy/test data. QA pipelines commonly reference production-like naming conventions and build production code without having privileges or network access to modify production systems. Our commissioned penetration test could not pivot from QA to production or introduce malicious code into production releases.

⇒ "Always-on VPN"

We observed:

  • Traffic continued at the same volume post-VPN implementation
  • 11,624 requests per day for 6 months
  • 8,672 unique source IPs
  • Sustained Bamboo CI authentication attempts
  • Ongoing package manager queries
Absorb comment: Always-on VPN was implemented to route traffic through protected network controls. Continued traffic post-implementation can reflect cached configurations, automated retries, and systems not in scope. Our risk assessment was based on segmentation and access controls, not VPN alone.

⇒ "Low risk"

We observed:

  • 99,351 opportunities to inject malicious packages
  • 26 developer credentials actively being sent to the domain we controlled
  • 10,152 authentication tokens (JWT)
  • Control over CI/CD orchestration (Gitlab → Bamboo → Artifactory)
  • 171 deployment configurations revealing their entire infrastructure
Absorb comment: Our "low risk" determination was based on: QA scope with test data, environment segmentation preventing QA-to-production pivots, no evidence of successful compromise, and no verified exposure of client data or production systems. We acknowledge the logs demonstrate request volume and breadth, but maintain this doesn't prove an external party could traverse our internal trust boundaries to alter production deliverables.

The CNAME Chain: Production Impact

We also discovered that production-facing domains depended on the internal namespace:

image

Control over absorb.ad meant control over a production subdomain of absorblms.com.

Absorb comment: CNAME dependencies don't automatically grant production control. Actual impact depends on whether the hostname routes end-user traffic externally (vs internal-only resolution), controls at higher layers (CDN/WAF, certificate validation, authentication), and whether the dependency serves critical application traffic versus auxiliary services.

Why this matters beyond AbsorbLMS

This vulnerability pattern isn't unique to AbsorbLMS. During our broader research, we've identified similar issues affecting:

  • Major European cities
  • U.S. law enforcement agencies
  • Critical infrastructure companies
  • Fortune 500 enterprises

The contributing factors are systemic:

1. Legacy infrastructure meets new TLDs

Organizations built internal naming conventions before TLD expansion. Domains like company.cloud or internal.dev seemed safe when those TLDs didn't exist.

2. DevOps automation amplifies risk

Manual processes might catch a DNS issue. Automated CI/CD pipelines don't. They trust, they build, they deploy at machine speed, thousands of times per day.

3. Polyglot environments = larger attack surface

Modern applications use multiple languages and package managers (NuGet, npm, pip, Maven, etc.). Each is another vector for supply chain injection.

4. Remote work exposes internal traffic

VPNs don't always force-tunnel all traffic. Split-tunnel VPNs, home networks, and coffee shop WiFi all create scenarios where internal DNS queries leak externally.

5. "It's internal, so It's trusted"

The most dangerous assumption: internal domains are inherently trustworthy. Package managers, build systems, and CI/CD tools often whitelist "internal" artifact repositories without additional validation.

Current status: unresolved risk

Despite repeated attempts to articulate the technical risks and underlying mechanics of this exposure, we were unable to reach alignment on the necessity of taking ownership of the affected domains as a mitigation step.

Our objective throughout this process was never financial. As with many other internal domains registered during the course of this research, our intent was to transfer the domains back to the affected organization at no cost, once appropriate technical ownership and remediation were in place. The offer of a bug bounty payout, which we declined, was not the obstacle, though. The challenge was the absence of collaboration with AbsorbLMS's technical team and of a shared understanding of the risk posed by publicly routable internal namespaces.

This highlights a broader issue in vulnerability disclosure: organizations must be willing to trust empirical evidence over architectural assumptions. Systems behave as they are actually configured, not as we intend them to be. When researchers present logs, traffic captures, and measurable indicators of exposure, the response should prioritize investigating "why is this happening?" over explaining "why this shouldn't be possible.” We document this case not to assign blame, but to emphasize the importance of treating researcher-provided evidence as a starting point for investigation rather than a claim to be refuted.

Unfortunately, we cannot maintain defensive registration indefinitely, so as of January 4, 2026, both absorb.ad and blatantmedia.ad have expired and entered the registrar grace period.

This outcome is unsatisfying. Not because a vulnerability went unacknowledged but because an opportunity to fully eliminate a systemic risk was missed.

Update (January 2026): Following publication discussions and the domain expiration on January 4, 2026, AbsorbLMS requested we renew and transfer the domains back to them. We agreed immediately.

On January 15, 2026, we restored both absorb.ad and blatantmedia.ad from the registrar's grace period at our own expense and completed the transfer process. The domains are now under Absorb's control, eliminating any risk of malicious third-party acquisition.

This outcome represents what we sought from the beginning: cooperative transfer of the domains to their rightful owner, conducted outside of bug bounty terms that were never agreed to for this research.

A constructive next step: As part of the resolution, we kindly requested inclusion in Absorb's next penetration testing RFP, an opportunity to validate the security controls and segmentation boundaries they described in their responses through authorized adversarial testing. The attack primitives we documented (99,351 package installation attempts, canary service index acceptance, credential and token exposure) represent exactly the kind of conditions that benefit from independent verification through comprehensive penetration testing.

Lessons learned

For Organizations

✓ Audit your internal FQDNs

  • List all internal domain names in use
  • Check if they use valid public TLDs (.cloud, .dev, .ad, etc.)
  • Verify that you own the public registrations (a quick check is sketched below)
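
A starting point for that audit (a sketch using the third-party dnspython library; the domain names are examples):

import dns.resolver  # dnspython

def registration_status(domain: str) -> str:
    """Classify whether an 'internal' domain already exists in the public DNS."""
    try:
        dns.resolver.resolve(domain, "NS")
        return "delegated publicly - verify that the registrant is you"
    except dns.resolver.NXDOMAIN:
        # Either the TLD is not delegated or the name is unregistered;
        # check whether someone else could register it.
        return "not in the public DNS - confirm whether it is registrable"
    except (dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return "exists but no usable delegation - review ownership"

for zone in ("absorb.ad", "blatantmedia.ad", "corp.example.internal"):
    print(zone, "->", registration_status(zone))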

✓ Don't rely on environment labels for security scoping

  • "It's just QA" doesn't mean it's low risk
  • QA systems often have production credentials or data
  • Supply chain attacks don't care about environment names

✓ Monitor DNS for unexpected external resolution

  • Alert when internal hostnames resolve to unexpected IPs
  • Log package manager activity
  • Track CI/CD webhook destinations

✓ Review CI/CD pipelines for internal domain dependencies

  • Artifactory, Nexus, custom package feeds
  • Webhook endpoints for GitLab, Jenkins, Bamboo
  • Service discovery mechanisms

✓ Use reserved TLDs for internal naming

  • Prefer reserved suffixes such as .internal, .test, or .local (see RFC 6761 and RFC 6762)
  • If a public FQDN is in use internally, ensure you own that domain's public registration

✓ Validate controls through adversarial testing

  • Architectural assumptions about segmentation and access controls should be tested
  • Independent penetration testing can validate whether observed conditions are exploitable
  • When researchers document attack primitives, use authorized testing to prove controls work

✓ Accept unsolicited vulnerability reports gracefully

  • Make sure you have a security@ email address to receive vulnerability reports
  • Respond to vulnerability researchers
  • Engage technically before legally
  • Investigate impact independently
  • Thank researchers for their work

✓ Trust evidence over assumptions

  • When researchers present traffic logs, credentials, or configuration data, investigate first
  • System behavior reflects actual configuration, not intended architecture
  • Verify researcher claims empirically rather than dismissing them based on design assumptions
  • Internal teams may have blind spots; researchers provide an external perspective

For Security Researchers

✓ Document every disclosure attempt meticulously

  • Timestamps, channels, recipients
  • Responses (or lack thereof)
  • Shifts in narrative

✓ Don’t agree to retroactive bug bounty terms

  • Vulnerability disclosure ≠ bug bounty submission
  • Terms must be accepted before research, not after
  • Protect your ability to publish findings

✓ Understand the legal landscape

  • Defensive registration isn't hacking, but document intent
  • Consider coordinated disclosure timeline
  • Be prepared for legal teams

✓ Focus on impact, not just vulnerability

  • "DNS leak" sounds minor
  • "Supply chain takeover" communicates real risk
  • Use concrete numbers and scenarios