r/Infosec

▲ 18 r/Infosec+6 crossposts

AI Tools Are Helping Mediocre North Korean Hackers Steal Millions - One group of hackers used AI for everything from vibe coding their malware to creating fake company websites—and stole as much as $12 million in three months.

u/EchoOfOppenheimer — 2 hours ago

AI data governance platforms for insider threats - detection tool or expensive monitoring layer

u/gosricom — 1 day ago
🔥 Hot ▲ 58 r/Infosec+6 crossposts

America wakes up to AI’s dangerous power - After Mythos, a laissez-faire approach is no longer politically tenable or strategically wise

u/Confident_Salt_8108 — 2 days ago
▲ 3 r/Infosec+2 crossposts

I think many security breaches today don't come from "hackers"… but from something much simpler

u/devseglinux — 2 days ago
▲ 3 r/Infosec+1 crosspost

Technical Breakdown: Enterprise Security Architecture with Defense-in-Depth (WAF, ESA, Sandboxing, and AAA)

u/Born-Winter3050 — 1 day ago

[Deep Dive] The second-order effects of Hardware-Backed Attestation and why standard root detection on Android is functionally obsolete.

u/thezoro66 — 1 day ago

AI data governance for insider threats - actually useful or just expensive monitoring

u/buykafchand — 7 days ago

Limitations of contract audits and the technical effectiveness of open bounty programs

u/webpagemaker — 7 days ago

UEBA feature bloat fixing alert fatigue or just making it worse

u/tingnossu — 2 days ago

AI insider threat detection - genuinely useful or just expensive noise

Been going back and forth on this for a while. The UEBA side of things has genuinely improved: behavioral baselines and dynamic risk scoring are meaningfully better than pure rules-based alerting, and the triage time reduction is real. False positive rates are down significantly on the platforms worth using. But every time I push a vendor on what happens after the alert, the story gets thin fast. No auto-containment, no clean integration with existing response workflows. Just a better alert sitting in a queue.

The thing that keeps nagging me is the governance overhead. You get better signal, but now you need cross-functional buy-in from HR and legal just to act on it, and most orgs I talk to still aren't set up for that. Detection improves, the response pipeline stays a mess. That gap doesn't close just because the model got smarter.

The "AI countering AI" angle is also starting to feel less theoretical. Insiders using LLMs for low-noise obfuscation, subtle session abuse, behavior that stays just inside the baseline: that's a real pattern now. Agentic AI makes the attack chains faster and harder to fingerprint. I'm not convinced most platforms have caught up to that yet, and the vendors who claim they have usually can't show me the evidence.

Curious if anyone's actually seen the prevention side mature, or if it's still mostly a detection layer you bolt onto a response process that was already broken.
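For anyone unfamiliar with what "behavioral baseline + dynamic risk scoring" actually means under the hood, here's a deliberately toy single-feature sketch (hypothetical metric and thresholds, not any vendor's implementation): keep a per-user rolling baseline of an activity metric and flag observations that deviate by more than a z-score threshold. Real UEBA engines model many correlated features, but the core idea is this.

```python
from collections import deque
from statistics import mean, stdev

class UserBaseline:
    """Toy per-user rolling baseline over one activity metric
    (e.g. MB uploaded per day). Purely illustrative."""

    def __init__(self, window=30, threshold=3.0):
        self.history = deque(maxlen=window)  # rolling baseline window
        self.threshold = threshold           # z-score alert threshold

    def score(self, value):
        """Return (z_score, alert) for today's observation, then fold
        the observation into the baseline."""
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            z = 0.0 if sigma == 0 else (value - mu) / sigma
        else:
            z = 0.0  # not enough history to judge yet
        self.history.append(value)
        return z, z > self.threshold

baseline = UserBaseline()
for day in [10, 12, 11, 9, 10, 11, 10, 12]:  # a normal-looking week+
    baseline.score(day)
z, alert = baseline.score(500)               # sudden exfil-sized spike
print(f"z={z:.1f} alert={alert}")            # huge z-score, alert=True
```

Note the spike itself gets folded into the baseline after scoring, which is exactly the "stays just inside the baseline" attack surface: an insider who ramps up slowly drags the baseline with them and never crosses the threshold.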

u/ryoumaskuy — 8 hours ago

Survivorship filtering in community-posted tipster ROI data and the resulting credibility problem

u/kembrelstudio — 1 day ago