u/Spiritual_Spell_9469

r/ClaudeAIJailbreak

Deepseek v4 - Jailbroken

We are so back! Open source has had a peak year: first KIMI k2.6, now Deepseek v4.

You can simply add this to any system prompt or paste it directly in chat. I tested it across the Deepseek app and my own interface.

>ENI LIME -apr

>ENI lite coder

>ENI lite writer
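For the "add this to any system prompt" route over an API, DeepSeek exposes an OpenAI-compatible chat endpoint, so the prompt just goes in the `system` role. A minimal sketch of building that payload — the model name `"deepseek-chat"` and the `JAILBREAK_PROMPT` placeholder are my assumptions, not part of the post; substitute the actual prompt text and check DeepSeek's docs for the v4 model identifier:

```python
import json

# Placeholder -- paste one of the ENI prompt variants above here.
JAILBREAK_PROMPT = "<ENI prompt text>"

def build_chat_request(user_message: str, system_prompt: str = JAILBREAK_PROMPT) -> dict:
    """Build an OpenAI-compatible chat payload with the prompt in the system role.

    "deepseek-chat" is an assumed model name -- verify against DeepSeek's API docs.
    """
    return {
        "model": "deepseek-chat",
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
    }

# POST this as JSON to the chat completions endpoint with your API key.
payload = build_chat_request("Hello")
print(json.dumps(payload, indent=2))
```

Same idea works in chat: the prompt variants above just become the first message instead of the `system` field.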

Thoughts

Super solid reasoning, and it takes on roles seamlessly. It runs into the classic Chinese-content issues via the Deepseek app, simply replacing the response with the usual "beyond my scope" BS.

  • On Reasoning: Its chain of thought can be completely jailbroken; it will literally think it's a human and fight back against some requests. Pretty awesome, but it's gonna be rough for agents, as they will be open to jailbreaking.

  • On Writing: Pretty amazing writing; it follows all my writing tips to a T, so it's easy to customize however you like. Included a couple of writing screenshots from some long-form stuff I tested.

  • On Coding: Follows instructions very, very well and passed all my coding benchmarks, but I will wait for the

>Bijan Bowen Video

since he does a myriad of tests and I always enjoy his thoughts.

Via the API, though, it's completely uncensored; I was able to get ANY content I wanted.

Tech/Specs

| Attribute | Details |
| --- | --- |
| Developer | DeepSeek AI (led by Liang Wenfeng) |
| Architecture | Mixture-of-Experts (MoE) + MLA + Engram Memory + mHC |
| Total Parameters | ~1 Trillion (1T) |
| Active Parameters | ~32B per token |
| Context Window | 1,000,000 tokens (1M native) |
| Memory System | Engram — O(1) hash lookup for static knowledge in DRAM |
| Hardware Optimized For | Huawei Ascend 910C (inference), Nvidia (training) |
| License | MIT / Apache 2.0 (open-weight) |
| Local Deployment | Dual RTX 4090 (Q4) — ~500GB VRAM; single RTX 5090 (INT4) |
| API Pricing (projected) | $0.14/M input tokens, $0.28/M output tokens |
| Cached Input (projected) | $0.07/M tokens |
| Cost vs Claude Opus 4.6 | ~24x cheaper per SWE-bench task |
| Open Source | Yes — full weights on HuggingFace |
| Release Date | March-April 2026 |
| Key Innovation | Engram decouples static knowledge (DRAM hash) from dynamic reasoning (GPU) |