u/0xsherlock
I’ve seen strong opinions on both sides of this. CTFs clearly help people learn fundamentals and get hands-on experience, especially for beginners. But real-world environments are often less structured, noisier, and not designed like challenges.

I wonder if CTFs mainly train pattern recognition, while real-world work requires more adaptation and handling of uncertainty. I’m not saying one is better than the other; I’m just curious how others see the balance and would love to hear different perspectives.
Everyone seems to struggle with something different in this field, so what was the hardest part for you to learn or understand?
What’s one thing you see beginners focus on too much while missing what truly matters in cybersecurity?
With modern exploit mitigations such as ASLR, NX, PIE, and stack canaries now commonplace, classic stack-based exploitation seems less straightforward than it used to be. On older systems, a simple buffer overflow often gave direct control of execution flow, but in modern environments exploitation usually requires additional steps: an information leak to bypass ASLR, a ROP chain to bypass NX, and more complex memory corruption techniques.
At the same time, heap exploitation techniques such as use-after-free, tcache poisoning, and double free seem to be more prevalent in modern real-world vulnerabilities and CTF challenges.
This raises a question: has stack exploitation lost its dominance in modern binary exploitation, or is it still just as relevant but simply harder to find and exploit in real-world scenarios? Do you think heap exploitation has become the primary attack surface now?
I’m curious to hear different perspectives from people working in exploit development, reverse engineering, and vulnerability research.