u/Select_Plane_1073

Cancellation issue question

Hello,

https://preview.redd.it/ikbw67w3egug1.png?width=1343&format=png&auto=webp&s=d890aaadf3fe4ecbf5c8819192d069135b2d1aad

I cancelled my membership a long time ago, but I keep receiving messages that the payment did not go through.

How do I stop it?

https://preview.redd.it/7rz2sy2aegug1.png?width=1374&format=png&auto=webp&s=8f5f2b4e12bef3c2cdf35f0fe53ca462af823a89

https://preview.redd.it/u7o12elgegug1.png?width=1442&format=png&auto=webp&s=0e15cfde5fda164406df4b3877551861bdbb62e4

Also, if I did not play in March, can I get a refund?

The website's AI support is useless; I've already tried it.

Thank you.

u/Select_Plane_1073 — 21 hours ago, r/hackthebox

Is it still worth getting into red teaming / pentesting in 2026 when AI agents are already this good?

I'm worried, to be honest. I've been thinking a lot lately about where our industry is heading, and I wanted to throw this out for discussion.

We now have:

  • Claude Project Red - Anthropic’s red teaming experiments where Claude 3.5 Sonnet (and later models) was given real offensive security tasks and performed shockingly well.
  • Multiple agentic frameworks (CrewAI, Auto-GPT forks, LangGraph agents, etc.) that can do full end-to-end penetration testing cycles: recon → exploitation → post-exploitation → reporting.
  • AI agents that are competing in HTB AI CTFs and performing at the level of top human teams.
  • Specialized agents that do excellent malware reversing, binary analysis, and even write custom exploits.
  • Blue team agents (Security Copilot, various SOC AI platforms) that autonomously triage alerts, write detection rules, and hunt threats better than many junior analysts.

So the question that keeps me up at night is:

Is it still worth starting (or continuing) a career in offensive security / red teaming / pentesting when AI agents are already doing the full cycle at a very high level, especially when the field is already full of professionals with all the certs and years of experience under their belts?

On one hand:

  • AI is insanely fast at enumeration, payload generation, and even basic chaining.
  • Corporate pentesting budgets might shift toward “AI-augmented” engagements instead of pure human ones.
  • Entry-level red team jobs could shrink dramatically over the next 3–5 years, if they aren't shrinking already.

On the other hand:

  • AI still hallucinates, lacks true creativity in novel/zero-day scenarios, and has zero business context or political awareness, but I suspect that will be fixed within 1–2 years, if not faster.
  • Real red teaming is about more than technical execution. It's about understanding the client, their risk appetite, and delivering real business impact. So it might look like AI agents orchestrated by one principal team member plus one principal engineer instead of a whole team.
  • Blue team reality is also changing. Many SOCs are becoming “AI + human oversight” teams, which might actually create demand for people who deeply understand both AI and tradecraft.
  • Regulations, ethics, and legal sign-off still require humans in the loop for a long time.

I'm genuinely curious what you all think.

  • If you’re just starting out in 2026, would you still go all-in on red teaming/pentesting?
  • Are we heading toward a world where the best pentesters are the ones who build and direct AI agent swarms rather than doing manual work?
  • Will "AI whisperer + elite operator" become the new top-tier skill set? It's starting to look that way.

Would love to hear brutally honest opinions from both sides: experienced red teamers, blue team leads, people just breaking into the industry, and anyone who's already using agentic AI daily.

u/Select_Plane_1073 — 4 days ago