r/AskNetsec

What was used to hack corporation employees between 2015-2023 to scapegoat them?

In 2026 I understood all the details about this malware except one thing, and it's the only mystery I couldn't solve but would love to, as an IT lover...

The corporations use malicious software to hack employees chosen as scapegoats for certain projects, to disable them into mental institutions and scapegoat them **only proofs such as a medical certificate matter to the justice system for scapegoating an employee, if you are unfamiliar with the corporate world**. Here are the things I understood about it:

It hijacks the connection, then transfers the victim's phone into Indian clouds in call centers.

Formatting phones or computers won't make a difference; the problem is at the internet-connection level, which is unusual for IT personnel used to solving every problem by formatting.

The call centers are far from the city, with security guards and Indian graphic-design graduates modifying videos and pictures to make the victims go manic; simple scripts are also used to alter webpages, for example to insert your name into them.

The same software is used in third-world countries for economic reasons or for criminal activities such as stealing rich individuals' inheritances. It's also used to monitor Indian workers in the Middle East and Eastern Europe in case they want to go to Western Europe.

What I didn't understand, but would love to, is how in the world they could access the Facebook and Instagram applications and modify things inside them, taking even IT engineers by surprise. I came up with every scheme a piece of malicious software could use and couldn't find the answer. Those apps belong to Meta, and this happened before the surge of TikTok, so I can't know whether they could do the same with it.

Also, why is it not known in the IT world? Even Pegasus is removed by a format.

As proof of the good will and honesty of my questions, I would add that I was personally working with many corporations on many projects in an Eastern European country, then was put in the cloud/hacked. I also saw other victims and was forced to move to France temporarily.

In the hope of finding answers about it someday.

reddit.com
u/nasz2020 — 10 hours ago

Our cloud environment spans 3 providers, 40+ SaaS tools, and hundreds of APIs. The attack surface extends way beyond what we own. How do you get visibility?

Trying to map our actual attack surface, and it's overwhelming. We run workloads across AWS, Azure, and GCP. We integrate with 40+ SaaS tools. Hundreds of APIs connect everything. Most of those SaaS vendors now have embedded AI that we never approved.

Our security tools cover what we directly own and operate. That's maybe 60% of the actual surface. The other 40%, third-party APIs, vendor integrations, embedded AI in SaaS, open-source dependencies, is basically invisible to us.

Last month a vulnerability in a third-party API we integrate with would've given an attacker a path into our production environment; we only found it during an unrelated review. Our tooling never flagged it because it doesn't see beyond our own infrastructure.

What's working for you to get visibility across multi-cloud, SaaS integrations, and third-party risk? Would really make my life simpler if there were one tool that handled it all.

reddit.com
u/CortexVortex1 — 11 hours ago

Is it wise to use Chinese Domestic Market routers?

Hi, I’m a newbie to the community and would like to know whether it’s wise to use TP-Link routers designed for the Chinese market (in my case, for a homelab). I struck a good deal and purchased one second-hand without realising it was entirely in Chinese and requires connecting through tplogin.cn. Without getting into politics, I’ve read articles about security concerns surrounding TP-Link routers, and AI assistants suggested I revise my firewall settings if I were to connect it to my network.

Is this actually a good idea, or should I avoid using it altogether? Any advice would be much appreciated!

reddit.com
u/DDAL77 — 14 hours ago

how do you scope an inventory from zero?

Our org is a mid-size financial services company, hybrid environment, mix of on-prem file servers (NetApp NAS), SharePoint Online, and a handful of AWS S3 buckets that different teams have spun up over the years. We're heading into a PCI DSS audit in about 4 months, and the auditors want evidence of a formal sensitive data inventory, not just a network diagram and a promise.

The problem we ran into: we don't actually know where all the cardholder data is. We assumed it was contained to three known systems. Turns out, after a spot check, there are Excel files with PANs sitting in SharePoint libraries that haven't been touched since 2021, and at least two S3 buckets where nobody's sure what's in them anymore. Classic sprawl situation.

We tried to scope this manually first. Two people, three weeks, partial coverage of maybe 30% of the file shares. Not sustainable, and it still left the cloud storage completely unaddressed.
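
For context, "manual" here meant quick-and-dirty sweeps along these lines, a minimal sketch (paths and extensions are placeholders): regex candidate PANs, then Luhn-check to cut false positives. Which is exactly why coverage stayed so thin:

```python
import re
from pathlib import Path

# Candidate PANs: 13-19 digits, optionally separated by spaces or dashes.
PAN_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum to weed out random digit runs."""
    total = 0
    for i, ch in enumerate(reversed(digits)):
        d = int(ch)
        if i % 2 == 1:  # double every second digit from the right
            d = d * 2 - 9 if d > 4 else d * 2
        total += d
    return total % 10 == 0

for path in Path(r"\\nas01\finance").rglob("*.csv"):  # placeholder share
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for m in PAN_RE.finditer(text):
        digits = re.sub(r"[ -]", "", m.group())
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            print(path, digits[:6] + "...")  # record location and BIN only, never the full PAN
```

That finds hits, but it obviously doesn't open xlsx containers or touch SharePoint/S3, which is where the real sprawl was.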

We ended up running Netwrix Data Discovery & Classification across the environment, which handled the hybrid scope really well: it covered the NAS and M365 in the same pass rather than needing separate tools, and the incremental indexing meant we weren't hammering the file servers every time we needed a fresh scan. Took about two weeks to get a full picture, and it surfaced PAN data in locations we hadn't expected, including some Teams channel files. The fact that it ties discovery directly into risk reduction and audit evidence made it a lot easier to build the case internally for doing this properly rather than just winging it.

Here's the specific question: once you have a classification run complete and you've identified where the regulated data actually sits, what's your process for deciding what to remediate vs. what to just document and accept? We're debating whether to delete/move the stale SharePoint files outright or just apply tighter access controls and log it as a finding with compensating controls. The auditors haven't given clear guidance on which approach satisfies the intent of requirement 3.2 in this context. Has anyone navigated this with a QSA and gotten a definitive answer on what's acceptable?

reddit.com
u/gosricom — 11 hours ago

OpenAI Download Data Request not initiated by me, but all of my accounts are secured?

I received an email about 2 hours ago indicating a request to download the archive of my ChatGPT account was received and is being processed. I did not request this. The headers on the email indicate the email is real.

I logged into the OpenAI privacy portal by typing https://privacy.openai.com into the browser address bar. I validated the endpoint was legitimate through the shield icon in the browser.

I clicked where it said "2 Active Requests" in the top right of the page. Here's the problem:

  1. I had an active request in process as of two hours ago (approximately). I received the email notification of this.
  2. An active request was processed and completed three days ago (on April 17th), but I did not receive an email notification it was requested or ready.

I validated I could download the archive from the earlier request on the 17th. This was a valid archive. I cancelled the request from today.

From the security perspective,

  1. I have validated that I have MFA enabled on my OpenAI account (through my mobile authenticator only).
  2. I have validated that all available OpenAI auth methods (Google, Apple, and Microsoft) themselves have MFA enabled via my mobile authenticator (I never used OpenAI or ChatGPT with a traditional username and password).
  3. I have validated that my Google, Apple, and Microsoft accounts do not have any unknown active sessions or unknown logged in devices.
  4. I have validated there are no security events that have been recorded across any of these services related to my account over the last 28 days (or approximate period totaling 28 days per their account security review options).
  5. I have validated that all devices I use which access the above services (iPhone, Samsung phone, iPad, MacBooks, Windows Desktops) have the latest security updates. All devices also have some variation of BitDefender installed.
  6. All of my passwords are randomly generated and unique. Each password is stored in a password manager and updated at a minimum every 12 months.
  7. I have enabled MFA on any service which offers MFA. All of my MFA is centralized on a mobile authenticator, from one of the prior named services, on my iPhone only. My iPhone has never left my possession.
  8. I am very much a "nobody", and not someone who would or should be any kind of person of interest.
  9. I have received no emails from OpenAI between March 27th and today (excepting the email today).
  10. The completed export of the account on the 17th (3 days ago) generated no emails; none on the request and none on the completion of the request.

I'm at a loss at this point. I've sent an email to privacy@openai.com, but I'm just trying to brainstorm the who/what/where/when/how and looking for any suggestions anyone might have. The best brainstorming suggestion I've landed on is that OpenAI's compliance team initiated an export of my account internally, given that the previous request appears to have bypassed the portal's normal email verification and notification pipeline entirely.

reddit.com
u/HollowedVoicesFading — 9 hours ago

Too many AI tools across the org, how are you getting visibility?

I did a quick audit recently and found 40+ different AI tools being expensed across our org. Some are approved, many aren’t, and IT doesn’t have clear visibility into a lot of them. I’m not trying to shut usage down, but right now I can’t tell which tools are actually being used in real workflows, where there’s overlap, or whether any of this raises data or compliance risks. For those dealing with this, how are you approaching it? Is this more of a policy issue, a tooling gap, or both?

reddit.com
u/med_mavol — 1 day ago

VPN misconfigs are an AD problem

The Zscaler ThreatLabz VPN Risk Report made me pause this week. The part that stuck with me wasn't the VPN stats themselves; it was the note that AI is collapsing the response window for security teams to hours, not days, and that it's accelerating VPN exploitation in ways that are hard to keep up with.

Our environment is hybrid, about 4,000 users, mix of on-prem AD and Entra ID. We've patched the obvious VPN CVEs and we do periodic AD health checks using built-in tools plus some PowerShell scripts we've accumulated over the years. The problem is those checks are point-in-time. Something drifts, a service account gets over-permissioned, a GPO gets modified, and we don't know until the next scheduled review or until something breaks.

I've been looking at tooling that can give continuous visibility into AD posture specifically, not just event log aggregation. Tried Netwrix's AD security posture tools for a few weeks and they do surface misconfiguration severity in a way that's easier to prioritize than raw audit logs, though I'm still evaluating whether it fits our workflow long-term.

My actual question: for teams that have mapped out the VPN-to-AD lateral movement path in their own environments, what specific AD misconfigurations are you treating as highest priority to close first? Kerberoastable accounts, unconstrained delegation, something else? And are you validating that posture continuously or still doing it on a schedule?
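
For reference, this is roughly what our scheduled point-in-time check covers for those two, sketched here with Python's ldap3 since it's easier to share than our PowerShell (hostnames, base DN, and credentials are placeholders):

```python
from ldap3 import Connection, NTLM, Server, SUBTREE

# Placeholders for your environment.
server = Server("dc01.corp.example")
conn = Connection(server, user="CORP\\auditreader", password="...",
                  authentication=NTLM, auto_bind=True)
BASE = "DC=corp,DC=example"

# Kerberoastable: user objects carrying a servicePrincipalName.
conn.search(BASE, "(&(objectCategory=person)(servicePrincipalName=*))",
            SUBTREE, attributes=["sAMAccountName", "servicePrincipalName"])
kerberoastable = [e.sAMAccountName.value for e in conn.entries]

# Unconstrained delegation: userAccountControl bit 0x80000
# (TRUSTED_FOR_DELEGATION), via the LDAP bitwise-AND matching rule.
conn.search(BASE, "(userAccountControl:1.2.840.113556.1.4.803:=524288)",
            SUBTREE, attributes=["sAMAccountName"])
unconstrained = [e.sAMAccountName.value for e in conn.entries]

print("kerberoastable accounts:", kerberoastable)
print("unconstrained delegation:", unconstrained)
```

Point-in-time is exactly the problem, though: this tells us nothing about what drifts between runs.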

reddit.com
u/ballkali — 1 day ago

Master key access in a JWT-authenticated API

My file storage API uses the classic two-JWT approach to authentication. The initial login requires a username and a password. Each user also has a master key (MK) used for file encryption. The MK is stored encrypted with the user's password (through a KDF). The MK never leaves the server, but requests need the unencrypted MK to access files, while only having the access and refresh tokens as a starting point and no original password.
How do you keep access to MK in subsequent requests, if only JWTs are available?
Maybe the JWT approach is overall bad for this type of API and I should try something else?
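
To make the question concrete, the pattern I keep circling is: unwrap the MK once at login while the password is in hand, then cache it server-side keyed by the token's jti with a TTL. A minimal sketch with the cryptography package (names and KDF parameters are illustrative, not something I've shipped):

```python
import time

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.scrypt import Scrypt

# jti -> (mk, expiry); process-local memory only, never persisted.
_mk_cache: dict[str, tuple[bytes, float]] = {}

def unwrap_mk_at_login(password: str, salt: bytes, nonce: bytes, wrapped_mk: bytes) -> bytes:
    """Derive the key-encryption key from the password and decrypt the stored MK."""
    kek = Scrypt(salt=salt, length=32, n=2**14, r=8, p=1).derive(password.encode())
    return AESGCM(kek).decrypt(nonce, wrapped_mk, None)

def cache_mk(jti: str, mk: bytes, ttl: float = 900.0) -> None:
    """Called at login; binds the unwrapped MK to this token's lifetime."""
    _mk_cache[jti] = (mk, time.time() + ttl)

def mk_for_request(jti: str) -> bytes:
    """Called on each authenticated request, using only the JWT's jti claim."""
    mk, expiry = _mk_cache.get(jti, (b"", 0.0))
    if not mk or time.time() > expiry:
        _mk_cache.pop(jti, None)
        raise PermissionError("re-authenticate with the password to unwrap the MK")
    return mk
```

The obvious trade-off is that the MK sits in server memory for the session, and a token refresh has to re-key or carry the cache entry over. Is that acceptable, or is there a cleaner construction?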

reddit.com
u/SnooBeans5461 — 2 days ago

Has anyone actually encountered AI voice cloning fraud in their company or in general?

I'm building a live AI voice detector designed to catch synthetic voices in real time, and I'm currently researching whether there's actual demand for such a tool. Which leads me to the question:

Is AI voice cloning fraud a genuine threat in the real world?

In your organizations or in general, are you seeing an increase in synthetic voice fraud, or have you encountered this at all? If you have, what would you say is the biggest risk factor?

reddit.com
u/Upper_Dragonfruit617 — 3 days ago

BLE auditing workflow: what are you using to inspect IoT devices in the field?

Doing some BLE security work on commodity IoT devices (smart locks, fitness wearables, industrial sensors) and I'm trying to sharpen my workflow. Pen testing writeups usually focus on the reverse-engineering side (Ghidra, Frida, the protocol break) but gloss over the reconnaissance step, which is where I spend most of my time.

What I'm currently doing (scripted sketch after the list):

  1. Enumerate nearby devices, grab advertisement data, identify the target by MAC prefix or name pattern.

  2. Connect, walk the GATT tree, flag anything without Encryption or Authentication required on characteristic permissions.

  3. Track RSSI over time to confirm which device is which when there are multiple of the same product nearby.

  4. Export everything to CSV for the report.
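
On the desktop side, the scripted version of steps 1, 2, and 4 looks roughly like this with Python's bleak (the name prefix is a placeholder; note bleak exposes characteristic properties, not ATT permissions, so "encryption required" has to be probed by attempting reads and watching for auth errors):

```python
import asyncio
import csv

from bleak import BleakClient, BleakScanner

TARGET_PREFIX = "SmartLock"  # placeholder name pattern for the target

async def main() -> None:
    # Step 1: enumerate advertisements (RSSI kept for step 3's disambiguation).
    found = await BleakScanner.discover(timeout=10.0, return_adv=True)
    rows = []
    for dev, adv in found.values():
        if not (adv.local_name or "").startswith(TARGET_PREFIX):
            continue
        # Step 2: connect and walk the GATT tree.
        async with BleakClient(dev) as client:
            for service in client.services:
                for char in service.characteristics:
                    rows.append([dev.address, adv.rssi, service.uuid,
                                 char.uuid, "|".join(char.properties)])
    # Step 4: CSV export for the report.
    with open("gatt_inventory.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["address", "rssi", "service", "characteristic", "properties"])
        w.writerows(rows)

asyncio.run(main())
```

That works fine from a laptop; the gap is exactly the mobile/iOS side I'm asking about.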

Curious what others are using for steps 1 to 4 specifically, especially on mobile. nRF Connect on Android is the default but it's painful on iOS-only engagements. Any iOS tools that don't hide the good stuff behind paid tiers? Also interested in workflows for detecting devices that rotate MAC addresses every few minutes.

reddit.com
u/BigBalli — 2 days ago

Realistically, what would happen if a hacker actually tried to ransom the U.S. government for something like the Epstein files?

I’m curious about the actual protocols. Would the government ever actually pay a ransom in BTC if the information was sensitive enough, or is their "we don't negotiate" policy absolute regardless of the content? Also, how would they even track someone who was using a totally anonymous setup? Just curious about the logistics of how a high-stakes situation like that would end in real life.

reddit.com
u/LeviAlmeidaGrativol_ — 3 days ago

How do you actually convince leadership that security training is not optional spending?

Five years in security, two different orgs. Both times the same pattern. Security incident happens, training budget gets approved, six months later everything is fine and the training budget gets quietly redirected to something else. Repeat.

I'm trying to build a real business case for ongoing training investment and I'm running into the usual wall. Leadership understands tooling spend because there's a vendor, a contract, a renewal. Training is harder to point to. The ROI is in what doesn't happen, which is a genuinely difficult thing to quantify in a budget meeting.

The data I've been pulling together is pretty stark though. IANS Research surveyed 587 CISOs for their 2025 Security Budget Benchmark Report and found that only 11% believe their security teams are adequately staffed. 53% reported being somewhat or severely understaffed. Security budget as a percentage of IT spend actually dropped from 11.9% in 2024 to 10.9% in 2025, the first reversal in a five-year trend. The money is going to AI infrastructure and cloud modernization instead.

ISC2's 2025 Workforce Study surveyed 16,029 cybersecurity professionals and found 59% of organizations reporting critical or significant skills shortages, up sharply from 44% in 2024. 33% said their organizations don't have resources to adequately staff their teams. 29% said they cannot afford to hire staff with the skills they actually need.

The gap between the threat environment and the investment in the people defending against it has been widening consistently. And the places cutting hardest seem to be exactly where it matters most. CISA lost roughly 1,000 people in 2025 alone, nearly a third of its workforce, while threat actor activity continued to escalate.

What gets me is that the conversation always frames training as a cost. Nobody frames the absence of training as a cost, even though the data is pretty clear on what skills gaps lead to. IBM's 2025 Cost of a Data Breach report puts the average breach cost at $4.88 million. Organizations with mature security programs and trained staff consistently show lower breach costs and faster remediation times.

How are other people in this sub actually making this case internally? Looking for arguments that have worked in real budget conversations, not just the theory of it.

Sources for the stats:

IANS Research 2025 Security Budget Benchmark Report, 587 CISOs surveyed, 11% believe teams are adequately staffed, security budget share dropped from 11.9% to 10.9%

ISC2 2025 Cybersecurity Workforce Study, 16,029 professionals surveyed, 59% report critical skills shortages, up from 44% in 2024

SOCRadar, CISA Budget Cuts and the US Cyber Defense Gap in 2026, roughly 1,000 departures representing nearly a third of the workforce

IBM Cost of a Data Breach Report 2025, average breach cost $4.88 million

Axis Intelligence Cybersecurity Statistics 2026, skills shortage trends and workforce data

reddit.com
u/HonkaROO — 3 days ago

Can someone explain why accounts still get hacked even with strong passwords?

I always thought using a long, complex password was enough to stay safe.

But recently I’ve been seeing more cases where accounts still get compromised even when the password itself wasn’t weak.

That’s the part I don’t fully understand.

Is it mostly because of data breaches and reused passwords? Or are there other ways attackers get in without actually “guessing” the password?

Also, how big of a difference does something like multi-factor authentication actually make in real situations?

Trying to understand where the real risk is coming from, because it seems like just having a strong password isn’t solving the problem anymore.

reddit.com
u/HotMasterpiece9117 — 4 days ago

Two scanners gave us different CVE counts for the same image digest. How do you standardize when the tools can't agree?

Ran trivy and grype on the exact same image digest. Trivy says 247 CVEs, grype says 198. Same image, and for some reason we got different numbers.

How are y'all handling this?
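
One thing we've started doing is diffing the actual CVE IDs instead of arguing about counts. A rough sketch, assuming JSON output saved from both tools (field names match recent trivy/grype schemas, but worth re-checking against your versions):

```python
# Produce the JSON first, e.g.:
#   trivy image --format json --output trivy.json <image@digest>
#   grype <image@digest> -o json > grype.json
import json

with open("trivy.json") as f:
    trivy = {
        v["VulnerabilityID"]
        for result in json.load(f).get("Results", [])
        for v in result.get("Vulnerabilities") or []
    }

with open("grype.json") as f:
    grype = {m["vulnerability"]["id"] for m in json.load(f)["matches"]}

print(f"both: {len(trivy & grype)}")
print("trivy only:", sorted(trivy - grype))
print("grype only:", sorted(grype - trivy))
```

A chunk of the delta often turns out to be ID aliasing (GHSA vs. CVE for language packages) and different upstream feeds, which at least makes the conversation concrete.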

reddit.com
u/Affectionate-End9885 — 3 days ago

SOC team told they aren’t allowed to have response permissions from a cloud detection and response platform?!

Long story short: the company bought a CDR tool, and our lead IR analyst was in the process of transitioning our manual playbooks to include automated and semi-automated response actions, such as making a storage account private or isolating a VM. Then someone from the architecture team shut it down and said we aren't allowed to have response permissions because they are too powerful.

Our entire team is in shock. We've been wanting to speed up our response times for the common investigations we see, and it's discouraging that one person can just shut everything down.

How would you guys handle this type of situation? We want to escalate to leadership immediately.

reddit.com
u/bankster24 — 3 days ago

Best router and router OS for security?

I've heard of OpenWrt. I haven't looked much into the hardware side of things.

reddit.com
u/dwmlsd — 3 days ago

AI governance software recommendations for a 1000 person org?

Hi, I'm trying to get a handle on AI usage across our company (roughly 1k employees, Google Workspace, Slack, Azure AD, mix of Mac and Windows) and I'm drowning in vendor pages that all claim to solve this problem. Half of them didn't exist 18 months ago, which doesn't inspire confidence.

Our situation: people are using ChatGPT, Claude, Gemini, Copilot, and probably some other software/tools I haven't discovered yet. We had an incident last month where someone pasted a customer contract into an AI tool, and that's when leadership decided we need to "do something about this", which apparently means I need to figure it out.

I'm not trying to ban AI usage. People are getting real work done with these tools. But we need some visibility into what's happening and some guardrails around sensitive data.

Do you have any recommendations on what to check first? Would really appreciate it, thanks!

reddit.com
u/AdOrdinary5426 — 4 days ago

Challenge: How to extract a 50k x 250 DataFrame from an air-gapped server using only screen output

Hi everyone. I'm a medical researcher working on an authorized project inside an air-gapped server (no internet, no USB, no file export allowed).

The constraints:

I can paste Python code into the server via terminal.

I cannot copy/paste text out of the server.

I can download new Python libraries to this server.

My only way to extract data is by taking photos of the monitor with my phone or using Print Screen.

The data:

A Pandas DataFrame with 50,000 rows and 250 columns. Most of the columns (about 230) are sparse binary data (0/1 for medications/diagnoses). The rest are ages and IDs.

What I've tried:

Run-Length Encoding (RLE) / Sparse Matrix coordinates printed as text: Generates way too much text. OCR errors make it impossible to reconstruct reliably.

Generating QR codes / Data Matrices via Matplotlib: Using gzip and base64, the data is still tens of megabytes. Python says it will generate over 30,000 QR code images, which is impossible to photograph manually.

I need to run a script locally on my machine for specific machine learning tuning. Has anyone ever solved a similar "Optical Covert Channel" extraction for this size of data? Any insanely aggressive compression tricks for sparse binary matrices before turning them into QR codes? Or a completely different out-of-the-box idea?
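
For the sparse 0/1 block specifically, here's the back-of-the-envelope I keep coming back to, as a sketch (column names are placeholders): bit-pack before compressing, and size the QR chunking from the compressed blob rather than from base64 text.

```python
import math
import zlib

import numpy as np
import pandas as pd

def qr_budget(df: pd.DataFrame, binary_cols: list[str]) -> None:
    bits = df[binary_cols].to_numpy(dtype=np.uint8)  # 50,000 x ~230 of 0/1
    packed = np.packbits(bits)  # 8 cells per byte; record df shape to unpack later
    blob = zlib.compress(packed.tobytes(), level=9)
    # A version-40, error-correction-L QR holds up to 2953 bytes of raw binary;
    # leave margin for per-chunk sequence headers.
    chunks = math.ceil(len(blob) / 2900)
    print(f"packed: {packed.nbytes:,} B  compressed: {len(blob):,} B  ~{chunks} QR codes")
```

50,000 x 230 bits is only ~1.44 MB before compression, so if I'm seeing tens of megabytes, the blow-up presumably comes from encoding text (CSV plus base64) instead of packed bits; with sparse data the compressed blob should land well under that, putting the QR count in the hundreds at worst.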

Thanks!

reddit.com
u/sholopinho — 6 days ago

Do ransomware victims actually have a duty to disclose, or is silence the smarter play?

Been thinking about this after seeing a few incidents in the finance space over the past year where companies clearly paid quietly and moved on. From a purely operational standpoint I get it. Public disclosure tanks the stock price, invites lawsuits, and signals to every other ransomware crew that you're a soft target. The class action surge in 2025 made that calculus even worse.

But then you've got FinCEN basically asking firms to file SARs with full IOCs so that threat intel actually gets shared across the sector, and when companies go dark that whole feedback loop breaks down.

I work mostly on the prevention side, AD hardening, microsegmentation, identity posture, so by the time ransomware hits, something has already gone pretty wrong. Still, the post-incident decisions matter a lot for everyone else's defenses. The stats I've seen suggest only around 18% of hit firms are actually paying now, which is way down from a few years ago, and median payments dropped too, so the no-pay trend seems real.

But I'm less sure about the disclosure piece. There's a difference between reporting to law enforcement quietly vs. full public transparency, and I feel like a lot of the debate conflates those two things. Has anyone here worked through an incident response where the disclosure decision was genuinely contested internally, and did the outcome change how you'd approach it next time?

reddit.com
u/stepavskin — 4 days ago