r/threatintel

Sharing infrastructure-pivot Cypher patterns we use during investigations (46B-node graph, free tier)

We've been running a graph of public internet infrastructure as a research tool for the last ~3 years. 46B data points and 39B edges spanning DNS resolution, BGP routing, WHOIS registration, hosting, and GeoIP, plus 39 threat-intel feeds wired in. Today we opened it as an MCP server so analysts can query it from Claude, Cursor, or any MCP-compatible client.

What it does: ask infrastructure questions in plain English (or Cypher) and get traversal-grade answers in one round trip. Pivot from a suspicious hostname to its IPs, ASN, prefix, co-tenants, and registration history in a single agent turn. Audit per-edge evidence behind any threat score. Track BGP route changes within 5 seconds of them happening.

30-day free trial, no credit card, no query limits during the trial, full graph access. The trial is meant to be real working time, not a teaser.

The pivot I use most often: from a suspicious hostname to every other hostname that has ever shared an IP with it. In a traditional REST stack that's resolve, pull passive DNS, fan out, dedupe, score. Five calls minimum, and the agent's context window gets shredded by call three. In Cypher it's one round trip:

// Anchor on the target hostname and the IPs it resolves to
MATCH (start:HOSTNAME {name: "your-target.com"})-[:RESOLVES_TO]->(ip:IPV4)
// Find every other hostname that resolves to any of those same IPs
MATCH (sibling:HOSTNAME)-[:RESOLVES_TO]->(ip)
WHERE sibling <> start
RETURN sibling.name, ip.name
LIMIT 25

Tested live against six domains: 140-275 ms across the full graph.

Two caveats worth naming before you try it:

  1. The pivot returns infrastructure-shared hostnames, not behavioural-similarity ones. A CDN edge IP (CloudFront, Fastly) returns hundreds of co-tenants that aren't related. Filter on ASN, prefix age, or threat-feed presence to extract signal from noise.

  2. Targets that own their infrastructure (large enterprises with their own ASN) return zero co-tenants. Absence is itself a signal; the graph makes it legible. We just ran news.ycombinator.com against the same pattern and it returned one IP on M5HOSTING (AS21581). Boutique-hoster signature.
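The noise-filtering step from caveat 1 can be sketched as a post-filter over the pivot rows. The field names, the ASN list, and the feed representation below are illustrative assumptions, not the actual result schema:

```python
# Post-filter co-tenant pivot results to cut CDN noise.
# Row fields (hostname, asn, feeds) are illustrative, not the real schema.

CDN_ASNS = {16509, 54113}  # e.g. Amazon (CloudFront), Fastly; extend as needed

def filter_cotenants(rows, min_feed_hits=1):
    """Keep siblings that are either flagged by a threat feed or
    not sitting behind a known CDN ASN."""
    keep = []
    for row in rows:
        if len(row["feeds"]) >= min_feed_hits:
            keep.append(row)          # feed presence is signal, keep regardless
        elif row["asn"] not in CDN_ASNS:
            keep.append(row)          # genuine infrastructure share, not CDN co-tenancy
    return keep

rows = [
    {"hostname": "innocent-shop.com", "asn": 54113, "feeds": []},
    {"hostname": "evil-panel.net",    "asn": 54113, "feeds": ["feedA"]},
    {"hostname": "sibling-c2.org",    "asn": 9009,  "feeds": []},
]
print([r["hostname"] for r in filter_cotenants(rows)])
# -> ['evil-panel.net', 'sibling-c2.org']
```

The same logic can live in the Cypher WHERE clause once you know the graph's property names; doing it client-side just keeps the query template generic.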

Other patterns that have been useful:

- whisper.explain(identifier) returns the per-edge evidence chain behind any threat score: which feed, which signal, which timestamp. Not a composite ML number. Lets you audit the score before pivoting on it.
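As a sketch of why per-edge evidence is auditable at all: the chain is just a list of (feed, signal, timestamp) entries you can filter before trusting the score. The field names below are assumptions for illustration, not the actual shape whisper.explain() returns:

```python
# Hypothetical shape of a per-edge evidence chain; field names are
# assumptions for illustration, not the actual API response.
from datetime import datetime, timedelta, timezone

evidence = [
    {"feed": "feedA", "signal": "c2-beacon", "timestamp": "2025-11-02T09:15:00+00:00"},
    {"feed": "feedB", "signal": "phish-kit", "timestamp": "2023-01-10T12:00:00+00:00"},
]

def audit(chain, now, max_age_days=365):
    """Split an evidence chain into entries fresh enough to pivot on
    and stale ones that should lower your trust in the score."""
    fresh, stale = [], []
    for entry in chain:
        ts = datetime.fromisoformat(entry["timestamp"])
        (fresh if now - ts <= timedelta(days=max_age_days) else stale).append(entry)
    return fresh, stale

fresh, stale = audit(evidence, now=datetime(2026, 2, 1, tzinfo=timezone.utc))
print([e["feed"] for e in fresh], [e["feed"] for e in stale])  # ['feedA'] ['feedB']
```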

- BGP feed aggregated from ~1200 peers (RIPE RIS, RouteViews, plus our own sessions). Route changes propagate into the graph in under 5 seconds. Useful for tracking infrastructure rotation during an investigation in real time, not the next-morning snapshot.

- The MCP wrapper means agents can chain pivots: this domain to its IPs to ASN reputation to other prefixes from that ASN to fresh registrations on those prefixes runs in a single agent turn instead of dozens of API calls.

Background, since this sub fairly asks for it. I'm Kaveh Ranjbar, ex-ICANN Board, ran K-root, 15 years at RIPE NCC. My co-founder Soroush and I built this because we got tired of stitching DNS to BGP to WHOIS to GeoIP across multiple sources during real investigations.

Known limits worth knowing:

- Multi-hop queries land in 150 ms to 400 ms, not microseconds. Single-anchor lookups are much faster.

- WHOIS coverage is partial in some ccTLDs.

- Threat scoring exposes per-edge evidence; no composite black-box score.

Install instructions and the two-minute MCP setup are in the first comment below.

Curious what infrastructure-pivot patterns folks here use that aren't well-served by existing tools. We're building Cypher templates from real analyst workflows, so weird or specific pivots are the most useful feedback.


Built a PE Malware Analysis Pipeline to Learn Why Most Detection Tools Suck at Correlation

I've been doing reverse engineering and malware analysis for some time now, and I noticed something frustrating: every detection tool flags isolated signals separately. One tool screams "entropy is high!" Another yells "found injection APIs!" A third matches a YARA rule. But nobody tells you if these signals actually mean your binary is malicious or just legitimate software doing normal things.

So I built Binary Atlas, a static PE analysis engine that runs 14 detectors and scores confidence instead of just screaming alerts.

Why This Matters:

Most tools have insane false positive rates on legitimate Windows utilities

Single signals (high entropy, API imports, YARA matches) are meaningless in isolation

Correlation > Isolation

How It Works (5 Steps):

Check if Windows trusts it (valid Authenticode signature) → LOW risk

Parse PE headers, sections, imports, strings, hashes

Run 14 detectors (packing, anti-analysis, persistence, shellcode, etc.)

Unified classifier deduplicates findings and weights signals

Score confidence (HIGH/MEDIUM/LOW) + generate detailed reports

What Makes It Different:

Instead of: "Found CreateRemoteThread—FLAGGED!"

Binary Atlas does:

CreateRemoteThread detected ✓ (confidence: MEDIUM—debuggers use this)

WriteProcessMemory detected ✓ (confidence: MEDIUM—could be legitimate)

Registry persistence APIs detected ✓ (confidence: MEDIUM)

Anti-debug checks in strings ✓ (confidence: MEDIUM)

Unified result: "All 4 signals pointing toward injection + persistence = HIGH confidence malware"
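The correlation step above can be sketched in a few lines: one detector family alone stays low-confidence, agreement across independent families is what raises it. The API names, families, and thresholds below are illustrative, not Binary Atlas's actual weights:

```python
# Minimal sketch of signal correlation vs. isolated alerting.
# Detector families and thresholds are illustrative, not the real weights.

INJECTION = {"CreateRemoteThread", "WriteProcessMemory"}
PERSISTENCE = {"RegSetValueExA", "RegCreateKeyExA"}

def classify(signals):
    """Count how many independent detector families fired; each signal
    alone is ambiguous, but agreement across families is rare in benign
    software."""
    families = 0
    families += any(s in INJECTION for s in signals)
    families += any(s in PERSISTENCE for s in signals)
    families += "anti_debug_strings" in signals
    if families >= 3:
        return "HIGH"
    if families == 2:
        return "MEDIUM"
    return "LOW"

print(classify({"CreateRemoteThread"}))  # LOW: one family, could be a debugger
print(classify({"CreateRemoteThread", "WriteProcessMemory",
                "RegSetValueExA", "anti_debug_strings"}))  # HIGH
```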

The 14 Detectors:

Packing analysis | Anti-analysis detection | Persistence mechanisms | DLL/COM hijacking | Shellcode patterns | Import anomalies | Resource analysis | Mutex signatures | Overlay detection | String entropy | YARA scanning | Compiler identification | Threat classification | Security headers

Current Limitations:

Static analysis only (to be honest, sandboxing the file is what confirms everything)

High false positives on some legitimate software

Looking for feedback on:

How to reduce false positives further?

Which detection modules would be most useful?

Any malware researchers want to contribute better YARA rules?

Check out the GitHub repo: https://github.com/bilal0x0002-sketch/Binary-Atlas/

u/Ok_Performer1647 — 1 day ago

Would you treat this subdomain takeover path as critical exposure?

Trying to sanity-check the scenario below.

Say an org has an old subdomain with a CNAME pointing to a cloud resource that no longer exists. Pretty standard dangling DNS issue.

Attacker claims the abandoned cloud alias, gets a valid cert for the real subdomain, and hosts a tiny remote resource there.

Now a targeted employee opens an email that loads that resource from the hijacked subdomain. If cookies are scoped broadly to the parent domain, the browser/mail client may send session cookies automatically to the attacker-controlled subdomain.
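The cookie-scoping point is the crux: a cookie set with Domain=.example.com is sent to every subdomain, hijacked or not, while a host-only cookie is not. A simplified sketch of the RFC 6265 domain-match rule (hypothetical hostnames):

```python
# Why parent-domain cookies leak: a cookie scoped with Domain=.example.com
# is sent to every subdomain, including a hijacked one.

def cookie_sent_to(host: str, cookie_domain: str) -> bool:
    """Simplified RFC 6265 domain-match: the host equals the cookie
    domain, or is a subdomain of it."""
    d = cookie_domain.lstrip(".")
    return host == d or host.endswith("." + d)

# Host-only cookie (no Domain attribute): scoped to the exact host only.
print(cookie_sent_to("old-sub.example.com", "www.example.com"))  # False
# Domain cookie scoped to the parent: every subdomain receives it.
print(cookie_sent_to("old-sub.example.com", ".example.com"))     # True
```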

So the path is basically:

dangling CNAME → claimed cloud alias → valid cert on real subdomain → remote resource loads → parent-domain cookies leak → possible access to internal apps like HR, finance, CRM, support/admin consoles
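The first link in that chain is also the easiest to hunt for proactively. A triage sketch over a DNS inventory, assuming you already have (subdomain, CNAME target, does-it-resolve) tuples; the suffix list is a small illustrative sample, not exhaustive:

```python
# Dangling-CNAME triage: flag records whose CNAME target sits on a
# claimable cloud alias namespace and no longer resolves.
# The suffix list is a small illustrative sample, not exhaustive.

CLAIMABLE_SUFFIXES = (
    ".s3.amazonaws.com",
    ".azurewebsites.net",
    ".herokuapp.com",
    ".github.io",
)

def triage(records):
    findings = []
    for sub, target, resolves in records:
        if not resolves and target.endswith(CLAIMABLE_SUFFIXES):
            findings.append(sub)   # dangling + claimable = takeover candidate
    return findings

records = [
    ("old.corp.example",  "legacy-app.azurewebsites.net", False),  # dangling
    ("www.corp.example",  "prod.azurewebsites.net",       True),   # healthy
    ("mail.corp.example", "mx.internal.example",          False),  # dangling, not claimable
]
print(triage(records))  # ['old.corp.example']
```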

My question: would you treat this as a critical pre-attack exposure, or just attack-surface hygiene until there is evidence of abuse?

Also curious who usually owns this in your org.

u/Straight-Common-3937 — 2 days ago

https://www.lelibrepenseur.org/samuel-hassine-patron-de-filigran-ecarte-du-voyage-de-macron-pour-pedopornographie/

New elite pedophilia scandal: a brutal fall. Samuel Hassine, founder and CEO of Filigran, a leading French cybersecurity company, was urgently removed from the delegation accompanying Macron to Asia. Aged 39, he is suspected of having purchased child pornography images and videos on the Darknet using cryptocurrencies.

Filigran, founded in 2022, develops cyber threat intelligence and attack simulation tools. The company is used by over 6,000 organizations worldwide, including the FBI, the European Commission, and several US agencies. Hassine, a former ANSSI employee, had raised tens of millions of euros and positioned his startup as a flagship of the French Tech scene.

According to reports, he is among the twenty buyers identified in France in a vast European investigation into a clandestine Darknet platform. Investigators have arrested several suspects for possessing and acquiring particularly serious child pornography. The Élysée Palace reacted swiftly by excluding him from the presidential trip to Japan and South Korea.

This scandal has once again tarnished the reputations of the French Tech scene and those in power. A rising figure in cybersecurity, with close ties to government institutions, finds himself at the heart of a sordid criminal case. The investigation is ongoing. 

Additional links:
https://www.leparisien.fr/faits-divers/pedopornographie-un-patron-de-la-french-tech-prevu-dans-la-delegation-demmanuel-macron-en-asie-mis-en-cause-apres-un-vaste-coup-de-filet-03-04-2026-CULCDDQMQNFB5NQQ4WXV2UHEPQ.php (paywalled)
https://x.com/BastionMediaFR/status/2040111909546938799
https://www.instagram.com/p/DWth0BhiGhS/
https://geopolintel.fr/article4521.html

u/SnooEpiphanies6878 — 8 days ago

Hey everyone!

I’ve been building out a distributed honeypot network to track exploitation trends, and the data coming in has been pretty awesome. Over the past two weeks alone, the sensors have logged 3 million records, and this is climbing as sensors are being added!

The goal is to turn this into a collaborative intelligence hub. We’ve already had a few early users successfully track an ADB Mirai botnet before it hit the THN headlines, and we are currently seeing active exploitation attempts for several fresh router-based CVEs that haven’t been widely documented yet.

How it works: I’m opening up the platform for others to explore the data. To keep the network growing and the intel high-quality, it’s a "give-to-get" model:

  • Contribute: Host a sensor/node to feed the network.
  • Access: Once you’re contributing, you get full access to the entire global dataset to run your own queries and research.

If you’re interested in threat intelligence, malware behavior, or just want to see what’s hitting the sensors in real-time, come help us map the data.

Check it out here: boarnet.io

I’m still working through a lot of the data, so I’d love to see what findings you all dig up. Happy to answer any questions about the stack or the sensor deployment in the comments!

u/ZestycloseAirport405 — 6 days ago

I built a small tool that classifies cybersecurity news against the MITRE ATT&CK framework

Hey everyone, not sure if this is the right place to post this, so apologies in advance if it isn't. Mods feel free to remove.

I've been doing threat intelligence work for a while and kept running into the same problem: there's an enormous volume of cybersecurity news every day and figuring out which stories are actually relevant to the techniques you care about is slow and manual.

So, I trained a DistilBERT model to classify text from news articles directly against MITRE ATT&CK tactics and techniques. It chunks each article, runs it through the model, and surfaces the technique tags with a confidence score. I then built a small site around it, TTPwire, that aggregates RSS feeds from most of the major cybersecurity publications, classifies everything automatically, and lets you subscribe to a daily email digest filtered to just the techniques you follow.
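The chunk-then-classify flow described above can be sketched as: split the article into overlapping word windows, classify each window, then keep the max confidence per technique across windows. The classifier below is a stub standing in for the DistilBERT model, and the technique outputs are illustrative:

```python
# Sketch of chunk-then-classify: overlapping word chunks, per-chunk tags,
# max-pooled confidence per ATT&CK technique. classify_chunk is a stub
# standing in for the real DistilBERT model.

def chunk_words(text, size=200, overlap=50):
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size])
            for i in range(0, max(len(words) - overlap, 1), step)]

def classify_chunk(chunk):
    # Stub: the real model returns (technique_id, confidence) pairs.
    tags = []
    if "phishing" in chunk.lower():
        tags.append(("T1566", 0.91))
    if "powershell" in chunk.lower():
        tags.append(("T1059.001", 0.84))
    return tags

def tag_article(text):
    scores = {}
    for chunk in chunk_words(text):
        for tid, conf in classify_chunk(chunk):
            scores[tid] = max(scores.get(tid, 0.0), conf)  # max-pool over chunks
    return scores

print(tag_article("The campaign began with a phishing email that ran PowerShell."))
# {'T1566': 0.91, 'T1059.001': 0.84}
```

Max-pooling over chunks means one strongly-matching paragraph is enough to tag the article, which matches how analysts read: a single concrete TTP mention is the signal.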

It's genuinely been useful in my own workflow when building threat intelligence reports: instead of manually trawling through 50 articles, I get a focused digest of the stories that map to the techniques I'm tracking that day.

It's free, no ads, and I'm not doing anything with your email beyond the digest. Still early days and the model isn't perfect, which is why I built inline feedback directly into the article view. Corrections feed back into the next training round.

Would genuinely love feedback from people who do TI work day to day, especially on whether the technique tagging is actually useful or whether I'm solving the wrong problem entirely.

ttpwire.com
u/DemmSec — 10 days ago

Hi r/threatintel,

I recently received mod approval to share a project I’ve been building called DysruptionHub: https://dysruptionhub.com/

DysruptionHub is a cyber incident tracking and reporting site focused on the United States and its territories. The site has been active since 2024 and focuses on publicly reported cyberattacks and technology disruptions where there may be public-interest, operational or community impact.

The site tracks incidents across six broad categories and displays them on a public incident map: https://dysruptionhub.com/us-map/

  • Critical infrastructure
  • Healthcare
  • Public services
  • Government
  • Education
  • Private sector

DysruptionHub is not a ransomware claim tracking site, and it is not just a scraped incident feed. The site has an inclusion taxonomy for what gets tracked: https://dysruptionhub.com/taxonomy/

The bottom line is that there must be strong signals of a cybersecurity incident and some impact to operations or services. That can include confirmed cyberattacks, suspected cyber-related outages, public-service disruptions, ransomware events, vendor incidents affecting downstream organizations, or other incidents where available public evidence supports tracking.

One of the goals of the project is to connect operational outages to cyber incidents that might otherwise go unreported or underreported. Local governments, schools, utilities, health care providers and other public-facing organizations often disclose “network issues,” “technical difficulties” or service outages without clearly saying whether a cyber incident is involved. DysruptionHub tries to document those cases carefully, connect public evidence where it exists, and improve transparency without overstating what is known.

DysruptionHub combines OSINT collection with human-written investigative reporting. The site uses public notices, local reporting, government updates, social media posts, breach notices, agenda packets, internal documents when available, and direct outreach to document U.S. cyber incidents and suspected cyber-related disruptions.

As an example of the kind of original reporting DysruptionHub does, our most recent original story looked at network issues and a production halt at Foxconn’s Wisconsin operation: https://dysruptionhub.com/foxconn-wisconsin-cyber-outage/

The focus is on operational impact, including what services were disrupted, who was affected, how long recovery took, and what public sources support those conclusions. Articles are human-written and source-reviewed, with an emphasis on attribution and clearly separating confirmed facts from unresolved indicators.

We’re especially interested in incidents that may not receive national attention but still affect services people rely on, such as utility billing, court records, public transit scheduling, library networks, school systems, health care operations, local government services or public safety-adjacent communications.

The core reporting is not paywalled. Articles are free to read, the site is ad-free, and there is also a free weekly summary email of tracked incidents.

For anyone who wants to support the project, optional paid support is available. One tier adds instant alerts, and a higher tier adds additional features, including a watchlist for outages or disruptions that do not yet have confirmed cyber signals. I’m mentioning that for transparency, but the main purpose of this post is to introduce the tracker.

Thanks to the mods for allowing me to share it here. I hope DysruptionHub is useful to others doing threat intelligence, incident tracking, OSINT, or public-sector situational awareness.

u/DysruptionHub — 7 days ago

Key Details:

  • Researchers used valid credentials blocked by Conditional Access policies to initiate the attack
  • Exploited the Device Registration Service (DRS) endpoint using device code authentication flow
  • Created a "phantom device" registered with a signed Azure AD certificate and private key
  • Registered the device as a Windows machine despite it being Linux, leveraging MITRE ATT&CK technique T1098.005 (Account Manipulation: Device Registration)
  • Obtained a Primary Refresh Token (PRT) with false device claims that bypassed CA device compliance requirements
  • Successfully accessed production tenant containing over 16,000 users without malware or endpoint interaction
  • Bypassed Intune compliance requirements by claiming hybrid domain-join status
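One hunting angle the details above suggest: device-code sign-ins that lead to a device registration whose claimed OS disagrees with the client user agent. A minimal sketch, with an illustrative record schema you would need to map onto your actual Entra ID sign-in log fields:

```python
# Hunting sketch for the phantom-device pattern: flag device-code flow
# sessions that register a device whose claimed OS disagrees with the
# user agent. Record fields are illustrative, not the real log schema.

def suspicious(events):
    hits = []
    for e in events:
        if (e["auth_flow"] == "deviceCode"
                and e["action"] == "deviceRegistration"
                and e["claimed_os"].lower() not in e["user_agent"].lower()):
            hits.append(e["user"])   # OS claim mismatch on a registration
    return hits

events = [
    {"user": "svc-build", "auth_flow": "deviceCode", "action": "deviceRegistration",
     "claimed_os": "Windows", "user_agent": "python-requests/2.31 (Linux)"},
    {"user": "alice", "auth_flow": "interactive", "action": "signIn",
     "claimed_os": "Windows", "user_agent": "Mozilla/5.0 (Windows NT 10.0)"},
]
print(suspicious(events))  # ['svc-build']
```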
u/forestexplr — 8 days ago

I recently discovered Hister, an open source local search engine that indexes the pages you visit. It has captured my attention because it can become a local-first knowledge base and an accurate RAG-like system if you use the integrated search MCP.

This is indeed an awesome project by the creator of searx (the privacy-focused metasearch engine, circa 2014).

Here's my contribution to the tool's blog.

I would like to thank Adam Tauber u/asciimoo who trusted me enough to let me publish on his blog.

hister.org
u/stan_frbd — 12 days ago