u/ridgelinecyber

▲ 1 r/AzureSentinel+1 crossposts

Service Principal Sign-Ins: A blind spot a lot of teams are missing

SOC analysts — when was the last time you checked service principal sign-ins?

Most teams never do, because the logs aren’t even enabled by default.

AADServicePrincipalSignInLogs is a completely separate table from normal user SigninLogs. Service principals log in independently:
• No MFA
• No Conditional Access (unless you explicitly enabled workload identity policies)
• Invisible in standard sign-in dashboards

An attacker who creates or compromises a service principal gets silent, persistent access that:
→ Doesn’t appear in user logs
→ Bypasses all user-based detections
→ Survives password resets and offboarding
→ Authenticates on its own schedule

Quick start to close this gap:

  1. Entra ID → Monitoring & health → Diagnostic settings
  2. Enable the ServicePrincipalSignInLogs category and send it to your Log Analytics workspace

Then run this KQL:

let CorporateIPs = dynamic(["your-corporate-range-1", "your-corporate-range-2"]); // CIDR ranges
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(30d)
| where isnotempty(IPAddress) and not(ipv4_is_in_any_range(IPAddress, CorporateIPs))
| summarize 
    TotalSignIns = count(),
    SuccessCount = countif(ResultType == "0"),
    FailureCount = countif(ResultType != "0")
    by ServicePrincipalName, AppId, IPAddress, Location
| extend FailureRate = round(toreal(FailureCount) / TotalSignIns * 100, 2)
| order by TotalSignIns desc
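If you want to sanity-check what the summarize stage surfaces, here's a minimal Python sketch of the same aggregation. Field names and sample records are illustrative, not the real table schema, and it uses an exact-match IP set for simplicity:

```python
from collections import defaultdict

def summarize_signins(records, corporate_ips):
    """Mirror the KQL summarize: per (service principal, IP), count total,
    successful, and failed sign-ins, skipping empty and corporate IPs."""
    stats = defaultdict(lambda: {"total": 0, "success": 0, "failure": 0})
    for r in records:
        if not r["ip"] or r["ip"] in corporate_ips:
            continue
        s = stats[(r["sp_name"], r["ip"])]
        s["total"] += 1
        s["success" if r["result"] == "0" else "failure"] += 1
    rows = [
        {"sp": sp, "ip": ip, **s,
         "failure_rate": round(s["failure"] / s["total"] * 100, 2)}
        for (sp, ip), s in stats.items()
    ]
    # Mirror "order by TotalSignIns desc"
    return sorted(rows, key=lambda r: r["total"], reverse=True)

records = [
    {"sp_name": "backup-app", "ip": "198.51.100.7", "result": "0"},
    {"sp_name": "backup-app", "ip": "198.51.100.7", "result": "50126"},
    {"sp_name": "ci-runner",  "ip": "10.0.0.5",     "result": "0"},  # corporate, skipped
]
rows = summarize_signins(records, {"10.0.0.5"})
print(rows)  # one row for backup-app with a 50.0% failure rate
```

A high failure rate from an unfamiliar IP is the pattern worth chasing first.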
u/ridgelinecyber — 22 hours ago
▲ 7 r/iam+1 crossposts

Owning a service principal equals owning its permissions.

Silverfort published research two weeks ago showing that the Agent ID Administrator role could be used to take over any service principal in a tenant. Microsoft patched the specific flaw. But the underlying primitive is unchanged: if you own a service principal, you own its permissions.

The attack is simple. Gain ownership of a service principal that holds a directory role. Add a client secret. Authenticate as that service principal. Inherit every permission it holds. If the target holds Global Administrator, that's a full tenant takeover.

99% of tenants have at least one privileged service principal. Most organizations don't audit who owns them.

Here's what most environments look like:

→ Service principals created by developers who left 12+ months ago

→ Ownership assigned at creation time, never reviewed

→ Credentials that haven't been rotated since the application was registered

→ Application-level permissions that bypass every user-scoped control

→ No alert when someone changes ownership or adds credentials
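That last bullet is the detection surface: ownership changes and credential additions. A toy Python sketch of that triage logic (the operation names and event layout here are illustrative, not the exact Entra audit log schema):

```python
# Operations worth alerting on; real Entra audit OperationName values differ in detail.
SUSPICIOUS_OPS = {
    "Add owner to service principal",
    "Add service principal credentials",
}

def flag_sp_changes(audit_events, privileged_sps):
    """Flag ownership or credential changes that target a service principal
    known to hold a privileged role."""
    hits = []
    for e in audit_events:
        if e["operation"] in SUSPICIOUS_OPS and e["target_sp"] in privileged_sps:
            hits.append((e["actor"], e["operation"], e["target_sp"]))
    return hits

events = [
    {"actor": "dev@contoso.com", "operation": "Add service principal credentials",
     "target_sp": "billing-sync"},
    {"actor": "dev@contoso.com", "operation": "Update application",
     "target_sp": "billing-sync"},
]
hits = flag_sp_changes(events, {"billing-sync"})
print(hits)  # only the credential addition is flagged
```

The point: you need a maintained list of privileged service principals before either event means anything.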

We wrote a post covering:

1. The attack chain — how ownership becomes takeover in four steps

2. Where to check in the Entra admin center — the portal paths most admins never open

3. Three PowerShell audit queries you can run in 30 minutes

4. Two KQL detection rules for Sentinel — ownership changes and credential additions

5. The consolidated audit script you can hand to your security lead

The organizations that get compromised through service principal abuse aren't the ones that failed to patch a specific vulnerability. They're the ones that never governed the primitive.

Full post with all queries and detection rules: https://training.ridgelinecyber.com/blog/service-principal-ownership-attack-path/

u/ridgelinecyber — 1 day ago
▲ 32 r/AzureSentinel+1 crossposts

Detecting BEC Persistence with KQL

The detection rule that catches most BEC persistence (and that many teams still miss):

OfficeActivity
| where TimeGenerated > ago(1h)
| where Operation in ("New-InboxRule", "Set-InboxRule", "UpdateInboxRules", "Set-Mailbox")
| extend Parsed = parse_json(Parameters)
| mv-expand Parsed
| extend ParamName = tostring(Parsed.Name), ParamValue = tostring(Parsed.Value)
| where ParamName in ("ForwardTo", "RedirectTo", "ForwardAsAttachmentTo", "ForwardingSmtpAddress", "DeleteMessage", "MarkAsRead", "MoveToFolder", "Name")
| summarize 
    RuleActions = make_set(ParamName),
    ForwardDest = make_set_if(ParamValue, ParamName in ("ForwardTo", "RedirectTo", "ForwardAsAttachmentTo", "ForwardingSmtpAddress")),
    RuleName = maxif(ParamValue, ParamName == "Name"),
    ClientIP = max(ClientIP)
    by TimeGenerated, UserId, Operation
| where RuleActions has_any ("ForwardTo", "RedirectTo", "ForwardAsAttachmentTo", "ForwardingSmtpAddress")
   and (RuleActions has_any ("DeleteMessage", "MarkAsRead", "MoveToFolder") or array_length(ForwardDest) > 0)
// Optional: add your internal domains filter here to eliminate noise
// | where not(ForwardDest has_any ("@example.com", "@yourdomain.com", ...))
| project TimeGenerated, UserId, Operation, RuleName, ForwardDest, RuleActions, ClientIP
| order by TimeGenerated desc
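The core condition is "forwards externally AND hides evidence." Here's that logic as a standalone Python sketch you can reason about outside Sentinel; the parameter names come from the rule operations, while the function shape and simplified AND condition are illustrative:

```python
def is_bec_persistence(rule_params, internal_domains=()):
    """Flag an inbox rule that forwards mail to an external address AND
    hides the evidence (delete / mark read / move to folder)."""
    forward_params = {"ForwardTo", "RedirectTo", "ForwardAsAttachmentTo",
                      "ForwardingSmtpAddress"}
    hide_params = {"DeleteMessage", "MarkAsRead", "MoveToFolder"}
    dests = [v for k, v in rule_params.items() if k in forward_params]
    # Noise filter: ignore forwards that stay inside internal domains
    external = [d for d in dests
                if not any(d.endswith(dom) for dom in internal_domains)]
    hides = bool(hide_params & rule_params.keys())
    return bool(external) and hides

# Classic BEC persistence: forward externally, then delete the message
bec_rule = {"Name": "Invoices", "ForwardTo": "attacker@evil.example",
            "DeleteMessage": "True"}
# Benign: same actions, but the forward stays internal
benign_rule = {"Name": "FYI", "ForwardTo": "boss@contoso.com",
               "DeleteMessage": "True"}
print(is_bec_persistence(bec_rule, internal_domains=("@contoso.com",)))     # True
print(is_bec_persistence(benign_rule, internal_domains=("@contoso.com",)))  # False
```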

Deploy this as a Sentinel analytics rule.

Run every 15 minutes. Alert on every hit.

This catches end-user inbox rules that forward to external addresses + hide/delete messages — the #1 BEC persistence trick.

(Pro tip: add your internal domains to kill false positives.)

This single rule would have caught the persistence mechanism in the majority of BEC cases we investigated last year.

There are other ways to address this, but the focus here is detection.

u/ridgelinecyber — 7 days ago
▲ 76 r/AzureSentinel+1 crossposts

SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType == 0
| where AuthenticationRequirement == "multiFactorAuthentication"
| where RiskLevelDuringSignIn in ("high", "medium")
| extend DeviceId = tostring(DeviceDetail.deviceId)
| summarize
    SigninCount = count(),
    IPs = make_set(IPAddress),
    RiskDetails = make_set(RiskDetail),
    Apps = make_set(AppDisplayName),
    DeviceId = any(DeviceId),
    TimeGenerated = max(TimeGenerated)
    by CorrelationId, UserPrincipalName, RiskLevelDuringSignIn
| where array_length(IPs) > 1
or isempty(DeviceId)
| project TimeGenerated, UserPrincipalName, IPs, Apps, RiskLevelDuringSignIn, RiskDetails, CorrelationId, DeviceId, SigninCount
| order by RiskLevelDuringSignIn desc, SigninCount desc

This surfaces successful MFA sign-ins that Entra ID still flags as medium/high risk — the exact pattern many default analytics rules miss because “MFA passed = safe.” If it returns results, investigate immediately.
High risk + MFA satisfied + proxy indicators (multiple IPs on the same CorrelationId or an empty DeviceId) is a classic AiTM phishing signal.
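The session-level check is easy to verify offline. A minimal Python sketch of the same grouping (field names are illustrative, not the SigninLogs schema):

```python
from collections import defaultdict

def flag_aitm_candidates(signins):
    """Group risky-but-MFA-satisfied sign-ins by (correlation ID, user) and
    flag sessions with multiple source IPs or a missing device ID -- the two
    proxy indicators the query keys on."""
    sessions = defaultdict(list)
    for s in signins:
        sessions[(s["correlation_id"], s["user"])].append(s)
    flagged = []
    for (cid, user), events in sessions.items():
        ips = {e["ip"] for e in events}
        device_ids = {e.get("device_id", "") for e in events}
        if len(ips) > 1 or "" in device_ids:
            flagged.append({"user": user, "correlation_id": cid,
                            "ips": sorted(ips)})
    return flagged

# One logical session, two source IPs, no registered device: classic AiTM shape
signins = [
    {"correlation_id": "abc", "user": "a@contoso.com",
     "ip": "203.0.113.5", "device_id": ""},
    {"correlation_id": "abc", "user": "a@contoso.com",
     "ip": "198.51.100.9", "device_id": ""},
]
flagged = flag_aitm_candidates(signins)
print(flagged)  # one flagged session with both IPs
```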

Save it. Run it daily. You’ll catch stuff your alerts don’t.

u/ridgelinecyber — 8 days ago

We just open-sourced VanGuard — a self-contained IR toolkit that bundles Velociraptor, Hayabusa, Chainsaw, Loki, and YARA into a single binary with a terminal UI.

Built it because we were tired of the 45-minute tooling setup at the start of every engagement: download KAPE, remember the flags, set up Velociraptor, manually hash evidence, track chain of custody in a spreadsheet.

What it does:

  • Quick triage (20+ Windows, 15+ Linux artifact categories using native commands)
  • Velociraptor server lifecycle + agent deployment from the TUI
  • Threat hunting with Hayabusa, Chainsaw, Loki, YARA + live anomaly detection
  • Memory capture + Volatility 3 analysis
  • 28 pre-built use cases (ransomware, BEC, credential theft, lateral movement, rootkits) with MITRE ATT&CK mapping
  • Evidence dual-hashed (MD5 + SHA256), HMAC chain of custody
  • Runs from USB, works fully offline
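The dual-hash plus HMAC chain-of-custody idea, sketched in minimal Python (this shows the concept, not VanGuard's actual implementation):

```python
import hashlib
import hmac

def hash_evidence(data: bytes, prev_mac: bytes, key: bytes) -> dict:
    """Dual-hash one evidence item and chain it to the previous record with
    an HMAC, so tampering with any item breaks every subsequent link."""
    md5 = hashlib.md5(data).hexdigest()
    sha256 = hashlib.sha256(data).hexdigest()
    # Chain: MAC over (previous MAC || this item's SHA-256)
    mac = hmac.new(key, prev_mac + sha256.encode(), hashlib.sha256).digest()
    return {"md5": md5, "sha256": sha256, "mac": mac}

key = b"per-case secret"  # kept off the evidence media
rec1 = hash_evidence(b"artifact one", b"", key)
rec2 = hash_evidence(b"artifact two", rec1["mac"], key)
# A verifier recomputes the chain; an unchanged item reproduces the same MAC
assert hash_evidence(b"artifact one", b"", key)["mac"] == rec1["mac"]
```

The HMAC key is what makes the chain evidence rather than just a checksum list: without it, an attacker who can rewrite the evidence can rewrite the hashes too.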

Cross-platform (Windows + Linux), Apache 2.0, no dependencies.

GitHub: https://github.com/ridgelinecyberdefence/vanguard

It's provided as-is — every environment is different, especially with remote ops (WinRM/SSH auth varies by config). Test in a lab first. Issues and suggestions welcome on GitHub.

u/ridgelinecyber — 10 days ago
▲ 10 r/AzureSentinel+1 crossposts

After an attacker gets initial access — phishing, AiTM, whatever — the next 30 minutes are your best detection opportunity. Here's why most SOCs miss it.

The attacker has to do discovery. They have no choice. They just landed on a machine they've never seen before and need to answer basic questions: who am I, where am I, what can I reach, and who has admin. That means running commands. In sequence. Fast.

The discovery pattern looks like this (from real campaign telemetry):

    T+0s    whoami /all
    T+3s    hostname
    T+8s    ipconfig /all
    T+15s   systeminfo
    T+45s   net user /domain
    T+52s   net group "Domain Admins" /domain
    T+60s   nltest /dclist:contoso.com
    T+68s   net share
    T+75s   tasklist /svc
    T+82s   netstat -ano

10 commands in 82 seconds. Every one of these commands is legitimate on its own. Your L1 analyst sees "net user /domain" and closes it — admin doing admin things.

But the sequence is the signal. No human admin runs whoami → hostname → ipconfig → systeminfo → net user → net group in that order at that speed. That's either a scripted sequence or a human operator working through a checklist. Both are attacker behaviour.

The detection that catches this isn't complicated:

DeviceProcessEvents
| where Timestamp > ago(1h)
| where ProcessCommandLine has_any ("whoami", "ipconfig", "systeminfo",
    "net user", "net group", "nltest", "netstat", "hostname", "tasklist", "net share")
| summarize
    CommandCount = count(),
    Commands = make_set(ProcessCommandLine)
    by DeviceName, InitiatingProcessId, bin(Timestamp, 5m)
| where CommandCount >= 5

5+ recon commands from the same parent process within a 5-minute window. That's the detection. It's behavioural — it catches the pattern regardless of which specific commands the attacker runs.
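The aggregation is simple enough to prototype outside KQL. A Python sketch of the same windowed count (the event shape is illustrative; the recon tokens match the query):

```python
from collections import defaultdict

RECON_TOKENS = ("whoami", "hostname", "ipconfig", "systeminfo", "net user",
                "net group", "nltest", "net share", "tasklist", "netstat")

def detect_recon_bursts(events, window=300, threshold=5):
    """Bucket process events into fixed time windows per (device, parent PID)
    and flag any bucket with >= threshold recon commands -- the same
    aggregation the KQL does with bin() and summarize."""
    buckets = defaultdict(int)
    for e in events:
        if any(tok in e["cmdline"] for tok in RECON_TOKENS):
            key = (e["device"], e["parent_pid"], e["ts"] // window)
            buckets[key] += 1
    return [key for key, n in buckets.items() if n >= threshold]

# The discovery sequence from the post, all under one parent process,
# one command every 10 seconds
cmds = ["whoami /all", "hostname", "ipconfig /all", "systeminfo",
        "net user /domain", 'net group "Domain Admins" /domain']
events = [{"device": "WS01", "parent_pid": 4321, "ts": 1000 + i * 10,
           "cmdline": c} for i, c in enumerate(cmds)]
print(detect_recon_bursts(events))  # [('WS01', 4321, 3)]
```

Fixed bins can split a burst across a boundary; a sliding window is tighter, but the binned version matches what summarize/bin() gives you for free.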

Why most SOCs miss it:

  1. Each command is triaged independently. The alert for "net group Domain Admins" fires. L1 sees a user account, sees the command is legitimate, and closes it. They never see the other 9 commands that ran in the same 82-second window.

  2. No campaign-level correlation. The SOC's detection rules fire per-event. Nothing ties the PowerShell execution from 2 hours earlier (the phishing payload) to the discovery sequence happening now. They're separate alerts in separate queues.

  3. The window closes fast. After discovery, the attacker has what they need. They move to credential harvesting (Kerberoasting, LSASS dump, token theft) and lateral movement. The noisy phase is over. From here, they blend with legitimate traffic.

The fix: detect the sequence, not the individual command. Aggregate by parent process and time window. If you see a burst of reconnaissance commands from a single process in under 5 minutes, that's your alert — and it should be high severity, not informational.

This is from a course I built on offensive security for defenders (Ridgeline Cyber). The free modules (M0-M1) walk through campaign-level thinking and lab setup if you want to dig deeper: training.ridgelinecyber.com/courses/offensive-security-for-defenders/

Happy to answer questions on the detection logic or the campaign patterns behind it.

Full disclosure: I built the Offensive Operations course at Ridgeline Cyber.

u/ridgelinecyber — 14 days ago
▲ 1 r/AzureSentinel+1 crossposts

The easiest way to diagnose whether you're running security operations or compliance operations:

Ask what causes your team to change something.

Compliance-driven triggers: audit findings, contract renewals, framework updates, and regulatory changes. The team acts when an external authority requires it.

Threat-driven triggers: an incident revealed a gap; a purple-team exercise showed a rule didn't fire; threat intel identified a new technique; and a coverage assessment found an empty ATT&CK tactic. The team acts because the adversary's behaviour demands it.

If your program changes primarily in response to audit cycles, you're running a compliance operation. That's a diagnostic, not a judgement — and it's fixable.

Full post: https://ridgelinecyber.com/blog/security-operation-or-compliance-operation/

u/ridgelinecyber — 15 days ago