r/AzureSentinel

▲ 76

SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType == 0
| where AuthenticationRequirement == "multiFactorAuthentication"
| where RiskLevelDuringSignIn in ("high", "medium")
| extend DeviceId = tostring(DeviceDetail.deviceId)
| summarize
    SigninCount = count(),
    IPs = make_set(IPAddress),
    RiskDetails = make_set(RiskDetail),
    Apps = make_set(AppDisplayName),
    DeviceId = take_any(DeviceId),
    TimeGenerated = max(TimeGenerated)
    by CorrelationId, UserPrincipalName, RiskLevelDuringSignIn
| where array_length(IPs) > 1
or isempty(DeviceId)
| project TimeGenerated, UserPrincipalName, IPs, Apps, RiskLevelDuringSignIn, RiskDetails, CorrelationId, DeviceId, SigninCount
| order by RiskLevelDuringSignIn desc, SigninCount desc

This surfaces successful MFA sign-ins that Entra ID still flags as medium/high risk — the exact pattern many default analytics rules miss because “MFA passed = safe.” If it returns results, investigate immediately.
High risk + MFA satisfied + proxy indicators (multiple IPs on the same CorrelationId or an empty DeviceId) is a classic AiTM phishing signal.

Save it. Run it daily. You’ll catch stuff your alerts don’t.

u/ridgelinecyber — 8 days ago

Identify which MFA methods your users actually use.

A simple KQL query against Sign-in logs gives you visibility into the MFA methods users are actually using:

SigninLogs
| where TimeGenerated > ago(90d)
| where ResultType == 0
| mv-expand AuthDetails = todynamic(AuthenticationDetails)
| extend AuthMethod = tostring(AuthDetails.authenticationMethod)
| where isnotempty(AuthMethod)
| where AuthMethod !in ("Previously satisfied")
| summarize AuthEvents = count(), Users = dcount(UserPrincipalName) by AuthMethod
| order by AuthEvents desc

https://preview.redd.it/nk9rrwqozj0h1.png?width=2664&format=png&auto=webp&s=7b6fab415cec249205902a39a05dd13f8c96e7fe

u/EduardsGrebezs — 2 days ago
▲ 32

Detecting BEC Persistence with KQL

The detection rule that catches most BEC persistence (and that most teams still miss):

OfficeActivity
| where TimeGenerated > ago(1h)
| where Operation in ("New-InboxRule", "Set-InboxRule", "UpdateInboxRules", "Set-Mailbox")
| extend Parsed = parse_json(Parameters)
| mv-expand Parsed
| extend ParamName = tostring(Parsed.Name), ParamValue = tostring(Parsed.Value)
| where ParamName in ("ForwardTo", "RedirectTo", "ForwardAsAttachmentTo", "ForwardingSmtpAddress", "DeleteMessage", "MarkAsRead", "MoveToFolder", "Name")
| summarize
    RuleActions = make_set(ParamName),
    ForwardDest = make_set_if(ParamValue, ParamName in ("ForwardTo", "RedirectTo", "ForwardAsAttachmentTo", "ForwardingSmtpAddress")),
    RuleName = maxif(ParamValue, ParamName == "Name"),
    ClientIP = max(ClientIP)
    by TimeGenerated, UserId, Operation
| where RuleActions has_any ("ForwardTo", "RedirectTo", "ForwardAsAttachmentTo", "ForwardingSmtpAddress")
   and (RuleActions has_any ("DeleteMessage", "MarkAsRead", "MoveToFolder") or array_length(ForwardDest) > 0)
// Optional: add your internal domains filter here to eliminate noise
// | where not(ForwardDest has_any ("@example.com", "@yourdomain.com", ...))
| project TimeGenerated, UserId, Operation, RuleName, ForwardDest, RuleActions, ClientIP
| order by TimeGenerated desc

Deploy this as a Sentinel analytics rule.

Run every 15 minutes. Alert on every hit.

This catches end-user inbox rules that forward to external addresses + hide/delete messages — the #1 BEC persistence trick.

(Pro tip: add your internal domains to kill false positives.)

This single rule would have caught the persistence mechanism in the majority of BEC cases we investigated last year.

There are other ways to address this, but the focus here is on detection.

u/ridgelinecyber — 7 days ago

Hi all. The Sentinel bill is getting harder to defend and I am trying to be smart about tiers: Analytics, Basic, Auxiliary, or... just dropping tables? (For me that is not a real option, but others keep suggesting it.)

Right now everything goes into Analytics: SigninLogs, AADNonInteractiveUserSignInLogs, OfficeActivity, SecurityEvent, the MDE tables, plus network and firewall. NonInteractive is almost half of the volume and I don't know how much real detection value we actually get from it.

Thinking of moving AADNonInteractiveUserSignInLogs to Auxiliary. If you did this, what detections did you lose? Was it worth it? Anyone using summary rules at scale? Are they reliable or buggy? How aggressive do you go with DCR transformations? And is ADX for retention only, or do you actually run detections on it?
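
For anyone weighing the summary-rules option: a summary rule is just a scheduled KQL aggregation written into a custom table, so the raw NonInteractive volume can sit in a cheaper tier while a rollup stays queryable for detections. A minimal sketch of what the rule body could look like (the chosen columns and the 1-hour bin are assumptions for illustration, not a recommendation):

```kusto
// Hypothetical summary-rule body: hourly rollup of non-interactive sign-in
// failures. The destination table and run frequency are configured on the
// summary rule itself, not in this query.
AADNonInteractiveUserSignInLogs
| where ResultType != 0
| summarize
    FailureCount = count(),
    DistinctIPs = dcount(IPAddress)
    by UserPrincipalName, AppDisplayName, ResultType, bin(TimeGenerated, 1h)
```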

Please, not looking for "turn it off" advice. Thanks.

u/wenttoibiza — 9 days ago
▲ 10

After an attacker gets initial access — phishing, AiTM, whatever — the next 30 minutes are your best detection opportunity. Here's why most SOCs miss it.

The attacker has to do discovery. They have no choice. They just landed on a machine they've never seen before and need to answer basic questions: who am I, where am I, what can I reach, and who has admin. That means running commands. In sequence. Fast.

The discovery pattern looks like this (from real campaign telemetry):

    T+0s    whoami /all
    T+3s    hostname
    T+8s    ipconfig /all
    T+15s   systeminfo
    T+45s   net user /domain
    T+52s   net group "Domain Admins" /domain
    T+60s   nltest /dclist:contoso.com
    T+68s   net share
    T+75s   tasklist /svc
    T+82s   netstat -ano

10 commands in 82 seconds. Every one of these commands is legitimate on its own. Your L1 analyst sees "net user /domain" and closes it — admin doing admin things.

But the sequence is the signal. No human admin runs whoami → hostname → ipconfig → systeminfo → net user → net group in that order at that speed. That's either a scripted sequence or a human operator working through a checklist. Both are attacker behaviour.

The detection that catches this isn't complicated:

DeviceProcessEvents
| where Timestamp > ago(1h)
| where ProcessCommandLine has_any ("whoami", "ipconfig", "systeminfo",
    "net user", "net group", "nltest", "netstat", "hostname", "tasklist", "net share")
| summarize
   CommandCount = count(),
   Commands = make_set(ProcessCommandLine)
   by DeviceName, InitiatingProcessId, bin(Timestamp, 5m)
| where CommandCount >= 5

5+ recon commands from the same parent process within a 5-minute window. That's the detection. It's behavioural — it catches the pattern regardless of which specific commands the attacker runs.

Why most SOCs miss it:

  1. Each command is triaged independently. The alert for "net group Domain Admins" fires. L1 sees a user account, sees the command is legitimate, and closes it. They never see the other 9 commands that ran in the same 82-second window.

  2. No campaign-level correlation. The SOC's detection rules fire per-event. Nothing ties the PowerShell execution from 2 hours earlier (the phishing payload) to the discovery sequence happening now. They're separate alerts in separate queues.

  3. The window closes fast. After discovery, the attacker has what they need. They move to credential harvesting (Kerberoasting, LSASS dump, token theft) and lateral movement. The noisy phase is over. From here, they blend with legitimate traffic.

The fix: detect the sequence, not the individual command. Aggregate by parent process and time window. If you see a burst of reconnaissance commands from a single process in under 5 minutes, that's your alert — and it should be high severity, not informational.
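
The aggregation logic is simple enough to sanity-check outside Sentinel too, e.g. against exported process events. A minimal Python sketch of the same idea (the field names, 5-minute window, and threshold of 5 are assumptions mirroring the query above):

```python
from collections import defaultdict

# Recon command substrings to match (illustrative list, same spirit as the KQL).
RECON_TERMS = ("whoami", "ipconfig", "systeminfo", "net user", "net group",
               "nltest", "netstat", "hostname", "tasklist", "net share")

def find_recon_bursts(events, window_secs=300, threshold=5):
    """Flag (device, parent_pid, window) groups with >= threshold recon commands.

    `events` is an iterable of dicts with hypothetical keys: device,
    parent_pid, timestamp (epoch seconds), cmdline.
    """
    buckets = defaultdict(list)
    for e in events:
        if any(term in e["cmdline"].lower() for term in RECON_TERMS):
            window = int(e["timestamp"]) // window_secs  # 5-minute time bin
            buckets[(e["device"], e["parent_pid"], window)].append(e["cmdline"])
    # Keep only bursts that cross the threshold within one bin.
    return {key: cmds for key, cmds in buckets.items() if len(cmds) >= threshold}
```

Same shape as the KQL: group by device and parent process in a fixed time bin, count matches, alert on the burst rather than any single command.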

This is from a course I built on offensive security for defenders (Ridgeline Cyber). The free modules (M0-M1) walk through campaign-level thinking and lab setup if you want to dig deeper: training.ridgelinecyber.com/courses/offensive-security-for-defenders/

Happy to answer questions on the detection logic or the campaign patterns behind it.

Full disclosure: I built the Offensive Operations course at Ridgeline Cyber

u/ridgelinecyber — 14 days ago