u/prerna_varyani

▲ 2 r/u_prerna_varyani+2 crossposts

Why are cybersecurity teams quietly moving security closer to the edge?

The old model was: traffic enters first, security checks later. That model is dying.

For years, the dominant security playbook looked like this: route everything through a central hub, inspect traffic there, then forward it along. Clean. Logical. Absolutely not built for 2026 infrastructure.

Today, traffic originates from 90 countries, hits microservices hosted across three clouds, and half of it isn't even human. The centralized inspection model adds latency at every hop, misses regional threat context, and collapses under distributed attack patterns.

1. The latency problem is real. When you're backhauling traffic from Singapore to a security stack in Virginia to check if a request is legit, then forwarding it to your origin in Frankfurt, you've already failed your users. Edge-based filtering means the bad packet dies in Singapore before it ever crosses an ocean.

2. Bot traffic is the quiet DDoS nobody talks about. A huge chunk of what hits your APIs isn't a legitimate user - it's scrapers, credential stuffers, inventory bots, and synthetic crawlers. These patterns look "normal" to central inspection but are obvious when you have regional behavioral baselines.

3. API attacks are the new SQL injection. Broken object-level auth, mass assignment, excessive data exposure - these don't need a massive payload. They're subtle, low-volume, and specifically designed to slip past perimeter tools. Catching them requires context-aware filtering at the point of ingress, not a hop away.
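To make the broken object-level auth point concrete, here's a toy sketch (the function, ownership table, and IDs are all made up for illustration) of the check these attacks exploit when it's missing. Note there's nothing in the request itself for a perimeter tool to flag - it's a valid token asking for a valid-looking ID:

```python
# Hypothetical sketch: why broken object-level auth (BOLA) evades a perimeter WAF.
# The attacking request is syntactically clean -- no payload signature to match.
# Only the service that knows who owns which object can reject it.

OWNERS = {"order-1001": "alice", "order-1002": "bob"}  # toy ownership table

def fetch_order(order_id: str, authenticated_user: str):
    """Object-level check: the caller must own the object it asks for."""
    owner = OWNERS.get(order_id)
    if owner is None:
        return ("404", None)
    if owner != authenticated_user:
        # This is the check BOLA exploits when it's missing:
        # a perfectly valid token for *bob* asking for *alice's* order.
        return ("403", None)
    return ("200", {"order_id": order_id, "owner": owner})

# bob swaps the ID in an otherwise legitimate, well-formed request:
status, _ = fetch_order("order-1001", authenticated_user="bob")
print(status)  # "403" only because the check exists; without it, a data leak
```

The whole attack is one changed path parameter, which is why it has to be caught by something that holds the auth context, not by pattern matching a hop away.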

4. Regional threat filtering is criminally underrated. Geo-specific threat actors use regional infrastructure. If you're seeing a coordinated attack originating from specific ASNs in Eastern Europe, the fastest mitigation is at the edge node closest to it - not after it's traversed your backbone.

The shift isn't "replace your SIEM and go home." It's architectural. Enforce at the edge, verify in the middle, audit at the center. Push policy enforcement as close to ingress as possible. Let your centralized tooling focus on correlation and response, not first-line filtering.

The teams that get this right are the ones running WAF rules at the CDN layer, enforcing rate limits per endpoint at the API gateway, and feeding regional anomaly signals back to the SIEM, not the ones still routing everything through a hub-and-spoke security model designed in 2012.
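For the gateway piece, "rate limits per endpoint" just means the bucket key includes the endpoint, so /login can be throttled far harder than /static. A toy in-memory token bucket sketch (not any vendor's implementation; real gateways do this in shared state like Redis):

```python
import time
from collections import defaultdict

class EndpointRateLimiter:
    """Token bucket keyed by (client, endpoint) -- a sketch of per-endpoint
    limiting at an API gateway. In-memory and single-process, illustration only."""

    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec    # tokens added per second
        self.burst = burst          # bucket capacity
        self.buckets = defaultdict(lambda: (float(burst), time.monotonic()))

    def allow(self, client: str, endpoint: str) -> bool:
        key = (client, endpoint)
        tokens, last = self.buckets[key]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1:
            self.buckets[key] = (tokens - 1, now)
            return True
        self.buckets[key] = (tokens, now)
        return False

limiter = EndpointRateLimiter(rate_per_sec=1, burst=3)
results = [limiter.allow("10.0.0.5", "/login") for _ in range(5)]
print(results)  # first 3 pass (burst), then denials until tokens refill
```

Because the key is (client, endpoint), a client blocked on /login is still free to hit other routes - that separation is the point.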

Would love to hear from anyone running Akamai EdgeWorkers, Cloudflare Workers, or Fastly Compute for security enforcement at the edge.

reddit.com
u/prerna_varyani — 1 day ago
▲ 5 r/SysAdminBlogs+2 crossposts

Is your cloud actually secure, or just assumed secure? Here's what Akamai can do for enterprises

We just published a breakdown of how Akamai security services work in practice for enterprises. Honestly, the number that stuck with me is that 31% of cloud breaches occur during or after migration, often due to misconfigured security settings that nobody catches in time.

The article covers the four main Akamai tools worth knowing about:

  • Prolexic for DDoS protection
  • Kona Site Defender
  • Zero Trust Network Access
  • Bot Manager

If you're in IT or security and your org is on Akamai or considering it, worth a read: evolvous.com/akamai-security-services-for-enterprises

What the article really focuses on, though, is implementation, because having the right tools means nothing if they're misconfigured. It walks through how different industries (healthcare, financial services, government, e-commerce) approach this differently based on their compliance needs and threat profiles.

Happy to answer questions if anyone's curious about specific use cases.

u/prerna_varyani — 3 days ago
▲ 4 r/u_prerna_varyani+3 crossposts

The Internet Got Faster. DDoS Attacks Got Worse.

After 5 years in IT consulting, one thing still surprises me.

DDoS attacks are everywhere. And companies still underestimate them.

A lot of businesses think basic security is enough until their website goes down. Then suddenly everyone starts talking about protection.

That’s why services like Akamai, Cloudflare, Imperva, and AWS Shield keep growing. DDoS protection is no longer optional for businesses handling real traffic.

The scary part is how cheap attacks have become while downtime keeps getting more expensive.

Most IT teams don’t ignore DDoS anymore. Management usually does.

Anyway. Happy Monday.

tl;dr: DDoS isn't solved, it's just expensive to manage and we collectively pretend it's fine.

reddit.com
u/prerna_varyani — 4 days ago
▲ 2 r/u_prerna_varyani+2 crossposts

I'm an IT consultant at Evolvous, where we do Akamai consulting. Posting from what I've been seeing lately.

Past few months, almost every enterprise client we've audited has had legitimate traffic getting blocked and had no idea. Checkout flows, partner API calls, internal tools. All WAF false positives, all sitting there quietly doing damage.

Default rulesets on a complex app with zero tuning is almost always the culprit. Nobody owns the policy long term, exceptions pile up, and suddenly conversions drop and everyone looks in the wrong direction.

We've helped a few teams clean this up recently at Evolvous if anyone's dealing with it and needs a second pair of eyes.

reddit.com
u/prerna_varyani — 7 days ago
▲ 1 r/it

Mid-sized e-commerce company, ~$2M/year in AWS spend. They noticed their bills kept climbing but conversions weren't. Something was off.

We pulled their traffic logs and found that roughly 40% of all requests hitting their origin servers were non-human. Scrapers, credential stuffers, inventory hoarding bots. None of it was being stopped at the edge.

What was happening:

  • Bots were bypassing their basic rate limiting by rotating IPs across residential proxies
  • Their WAF rules were outdated, flagging maybe 15% of actual bot traffic
  • Every bot request was hitting origin, burning compute and bandwidth budget

What we did (via Akamai):

  • Enabled Bot Manager with behavioral fingerprinting, not just IP-based blocking. This alone caught ~60% of the bot traffic within the first week
  • Set up Client Reputation scores to automatically challenge IPs with known bad history
  • Moved static asset delivery fully to the CDN edge so even bot hits on those weren't touching origin
  • Added JavaScript challenges on checkout and login flows specifically, which killed the credential stuffing attempts almost completely

Results after 90 days:

  • Origin traffic dropped by 38%
  • Infrastructure bill down by roughly $60K/quarter
  • Checkout page bot traffic went from ~22% to under 3%
  • Zero false positives on real customers (tracked via session analytics)

The biggest mistake they had made was treating bot management as a firewall problem. It's not. Modern bots mimic real user behavior well enough that static rules miss most of them. You need behavioral detection running at the edge.
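To make "behavioral detection" less hand-wavy, here's a deliberately tiny toy scorer of my own - real products like Bot Manager use far richer signals (TLS fingerprints, header ordering, client telemetry), but the shape of the idea is the same: machines are metronomic and narrow, humans are jittery and exploratory.

```python
import statistics

def bot_score(request_times, paths) -> float:
    """Toy behavioral score (0 = human-like, 1 = bot-like). Illustrative only.

    Two signals:
      * near-constant inter-request intervals (machines are metronomic)
      * hammering a single path vs. browsing several
    """
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    if len(gaps) < 2:
        return 0.0  # not enough history to judge
    mean = statistics.mean(gaps)
    # Relative jitter: humans are irregular, so stdev near zero is suspicious.
    jitter = statistics.stdev(gaps) / mean if mean > 0 else 0.0
    timing_signal = 1.0 if jitter < 0.1 else 0.0
    path_signal = 1.0 if len(set(paths)) == 1 else 0.0
    return 0.6 * timing_signal + 0.4 * path_signal

# Scraper: one URL, every 2.0s exactly.  Human: varied pages, varied gaps.
scraper = bot_score([0, 2, 4, 6, 8], ["/api/products"] * 5)
human = bot_score([0, 3, 11, 14, 30], ["/", "/search", "/item/7", "/cart", "/checkout"])
print(scraper, human)  # 1.0 0.0
```

Even this crude version catches the metronomic scraper that an IP blocklist misses once the bot rotates through residential proxies.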

Happy to answer questions if anyone's dealing with similar issues.

reddit.com
u/prerna_varyani — 8 days ago
▲ 1 r/it

Most WAF setups I've audited give teams a false sense of security. You pay for Cloudflare or Imperva or Akamai, flip it on, and everyone feels protected. Meanwhile the actual attack surface hasn't changed.

Here are 3 bypass techniques I keep seeing, and what actually helps.

  1. Direct-to-origin attacks (the most embarrassing one)

This one is painfully common. Companies spend big on a WAF, but the real server IP is exposed somewhere - old DNS records, certificate transparency logs, or a forgotten test subdomain. The attacker finds it, hits the server directly. WAF sees none of that traffic.

I've seen this at fintechs that had Cloudflare set up correctly on paper. But they'd exposed their origin IP 8 months earlier and never rotated it.

Fix: Configure your server's firewall to only accept traffic coming from your WAF provider. Both Akamai and Imperva publish their IP ranges. Also check CT logs for your domain - attackers do this routinely.
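The allowlist logic itself is trivial - the real work is keeping the provider's published ranges fresh. A toy Python sketch with made-up CIDRs (in production you'd enforce this in the host firewall or cloud security group, not application code, and you'd pull the current ranges from the provider on a schedule since they change):

```python
import ipaddress

# Hypothetical CIDRs for illustration -- substitute your WAF/CDN provider's
# actual published ranges (Akamai, Cloudflare, and Imperva all publish them).
WAF_RANGES = [ipaddress.ip_network(c) for c in ("198.51.100.0/24", "203.0.113.0/24")]

def from_waf(source_ip: str) -> bool:
    """Origin-side check: accept only traffic relayed through the WAF's edge."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in WAF_RANGES)

print(from_waf("198.51.100.7"))  # True  -- came via the WAF
print(from_waf("192.0.2.55"))    # False -- direct-to-origin, drop it
```

If a direct-to-origin request can ever return your app's content, the exposed IP in an old DNS record or CT log is a full WAF bypass.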

  2. Sneaking past WAF rules with encoding tricks

WAFs look for known attack patterns. What they often miss is the same attack written differently - using encoding, special characters, or split across multiple requests.

A simple example: a payload that looks harmless to the WAF but gets decoded into something malicious by the app. I saw this used against a retail client running default Imperva rules with zero custom tuning. The WAF flagged nothing.

Fix: Turn on normalisation in your WAF settings. It's supported by most enterprise WAFs but off by default. Also run basic bypass tests against your own WAF before someone else does.
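Here's a minimal illustration of why normalisation matters, using a toy two-pattern "WAF" of my own (nothing like a real ruleset) against a double-URL-encoded payload:

```python
from urllib.parse import unquote

BAD_PATTERNS = ("<script", "union select")  # toy signature list, not a real ruleset

def naive_waf(payload: str) -> bool:
    """Matches raw text only -- the kind of rule encoding tricks walk past."""
    return any(p in payload.lower() for p in BAD_PATTERNS)

def normalising_waf(payload: str, max_rounds: int = 3) -> bool:
    """URL-decode repeatedly (bounded!) before matching, so %3Cscript and the
    double-encoded %253Cscript both collapse to the raw pattern."""
    seen = payload
    for _ in range(max_rounds):
        decoded = unquote(seen)
        if decoded == seen:
            break  # reached a fixed point, nothing left to decode
        seen = decoded
    return naive_waf(seen)

attack = "%253Cscript%253Ealert(1)%253C/script%253E"  # double URL-encoded XSS
print(naive_waf(attack), normalising_waf(attack))  # False True
```

The raw string never contains `<script`, so the naive matcher passes it straight through to the app, which happily decodes it. Decoding to a fixed point (with a bound) before matching closes that gap.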

  3. Rate limiting that doesn't actually work

"We have rate limiting" - okay, but how? Most configs just track requests per IP. Attackers rotate IPs constantly. Spreading traffic over time makes it invisible to basic rules.

Akamai's Bot Manager handles this better because it looks at behavior, not just where traffic comes from. But most teams are running simple IP-based rules and calling it done.

Fix: Don't rely on one layer. Add rate limiting at the application level too, tied to user sessions. Trigger a challenge for suspicious behavior. IP-only blocking is easy to work around.
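A sketch of the session-keyed version (toy in-memory sliding window, my own illustration) - the point being that rotating source IPs does nothing here, because the key is the session, not the address:

```python
from collections import defaultdict, deque

class SessionRateLimiter:
    """Sliding-window limiter keyed by *session*, not source IP -- a sketch of
    the app-layer fix. In-memory and single-process, illustration only."""

    def __init__(self, limit: int, window_sec: float):
        self.limit = limit
        self.window = window_sec
        self.hits = defaultdict(deque)  # session_id -> timestamps of recent hits

    def allow(self, session_id: str, now: float) -> bool:
        q = self.hits[session_id]
        while q and now - q[0] >= self.window:  # evict hits older than the window
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

rl = SessionRateLimiter(limit=3, window_sec=60)
# Same stolen session cookie arriving from three different proxy IPs --
# the IPs are irrelevant, the session key absorbs them all:
allowed = [rl.allow("sess-abc123", now=t) for t in (0, 1, 2, 3)]
print(allowed)  # [True, True, True, False]
```

Pair this with a challenge on the deny path rather than a hard block, and legitimate users who trip it get a CAPTCHA instead of a 403.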

My honest opinion: a WAF is useful, but it's not something you buy and forget. The teams that actually catch attacks run the WAF in logging mode first, tune rules to real traffic, and test their own setup regularly.

If you're on a managed WAF, go check what exceptions your provider made during onboarding. That list is almost always longer than it should be.

Happy to go deeper on any of these if anyone is interested.

reddit.com
u/prerna_varyani — 9 days ago
▲ 9 r/u_prerna_varyani+3 crossposts

Been working in IT/security for 5 years, and this is one issue I see again and again with Akamai setups, especially for sites targeting users in the USA & Canada.

Your Akamai WAF (Web Application Firewall) is protecting your site… but also blocking real users.

What’s actually happening:

  • Real customers getting blocked (checkout issues, login failures)
  • Increase in 403 errors on important pages
  • Drop in website conversions/sales
  • APIs or forms not working for some users

Signs you have Akamai WAF false positives:

  • Sudden spike in 403 errors
  • Users are complaining “site not working”
  • Traffic is the same, but conversions drop
  • Issues after the new security rules are applied

Common reasons:

  • Default security rules turned on without tuning
  • No allowlist for trusted users or APIs
  • Important requests flagged as attacks
  • No proper monitoring before blocking users

Simple fixes:

  • Review and adjust security rules
  • Allow trusted traffic (APIs, partners, internal users)
  • Monitor traffic before blocking
  • Test changes on real user behavior

Akamai WAF is powerful, but it needs proper setup. If not, it can hurt your user experience, SEO rankings, and revenue, especially for high-value traffic from the USA & Canada.

If you’re trying to fix this: We’ve been working on Akamai consultancy services at Evolvous, focused on:

  • Fixing Akamai WAF false positives
  • Reducing 403 errors and blocked requests
  • Improving website performance + security
  • Optimizing Akamai for better traffic and conversions

Not a hard sell - just sharing since this is a very common issue.

Anyone else facing this with Akamai WAF? How are you fixing it?

reddit.com
u/prerna_varyani — 10 days ago
▲ 5 r/u_prerna_varyani+3 crossposts

At what point does a CDN stop being "nice to have" and become mandatory? After setting these up for years, here's my honest take.

When it's overkill

If your users are in one country and you're under 10k monthly users, a CDN is not your problem right now. I've seen founders spend days on CloudFront when a slow database query was the actual issue. Fix the basics first.

When you actually need it

  • Users across multiple countries - latency becomes real fast.
  • Traffic spikes - a CDN absorbs sudden load. Without it you're either overpaying for compute or hoping nothing goes viral.
  • Security - DDoS protection and bot filtering matter once you're handling real customer data. Non-negotiable at that point.

Which tool for what

  • Cloudflare - Start here. Free tier covers most small SaaS needs.
  • Akamai - Enterprise pricing for enterprise problems. Skip it unless you're at scale.
  • CloudFront - Good if you're already on AWS. Watch the costs.
  • Fastly - More control, more complexity.

If your team has never done this before, a consultancy saves real time. Evolvous is good for broader cloud strategy. Maxima suits security-heavy or compliance-driven setups.

reddit.com
u/prerna_varyani — 11 days ago
▲ 1 r/akamai+2 crossposts

At what point does a CDN stop being "nice to have" and become mandatory? After setting these up for years, here's my honest take.

When it's overkill -

If your users are in one country and you're under 10k monthly users, a CDN is not your problem right now. I've seen founders spend days on CloudFront when a slow database query was the actual issue. Fix the basics first.

When you actually need it -

  • Users across multiple countries - latency becomes real fast.
  • Traffic spikes - a CDN absorbs sudden load. Without it you're either overpaying for compute or hoping nothing goes viral.
  • Security - DDoS protection and bot filtering matter once you're handling real customer data. Non-negotiable at that point.

Which tool for what

  • Cloudflare - Start here. Free tier covers most small SaaS needs.
  • Akamai - Enterprise pricing for enterprise problems.
  • CloudFront - Good if you're already on AWS. Watch the costs.
  • Fastly - More control, more complexity.

If your team has never done this before, a consultancy saves real time. Evolvous is good for broader cloud strategy & edge security. Maxima suits security-heavy or compliance-driven setups.

reddit.com
u/prerna_varyani — 11 days ago
▲ 5 r/u_prerna_varyani+3 crossposts

We’ve been testing different setups recently (mainly Akamai, Cloudflare, and Fastly), and we've also looked at how teams on IBM infrastructure or working with partners like Evolvous Software Consultancy fare, but the results aren’t as clear or consistent as the marketing suggests.

On paper, all of them claim “massive success”, but real-world spikes (product launches, viral traffic, etc.) hit very differently depending on config, caching, and routing.

I need opinions on what others have experienced:

  • Which CDN held up best during sudden traffic surges?
  • Any unexpected failures or bottlenecks?
  • Did the cost spike along with traffic?

Happy to share what we’ve seen so far or take a look at setups if anyone’s dealing with this right now.

reddit.com
u/prerna_varyani — 16 days ago
▲ 21 r/SysAdminBlogs+3 crossposts

I have been evaluating both for a mid-to-large scale setup and want honest opinions before committing.

The usual debate: Akamai looks like the best choice for enterprise, but Cloudflare's developer experience and pace of innovation are hard to ignore.

Curious about your take on:

  • WAF quality and false positive rates
  • CDN performance (especially in Asia-Pacific markets)
  • Configuration complexity (Akamai Property Manager vs. the Cloudflare dashboard)
  • Whether Akamai's support actually justifies the cost

Also, has anyone worked with consultants like Evolvous Consultation Services to handle onboarding and setup for either platform? Worth it or unnecessary overhead?

reddit.com
u/prerna_varyani — 22 hours ago