r/WritingWithAI

It is possible to become a better writer alongside the AI

I publish many AI-assisted fanfics on AO3 (I always declare AI use in the tags and summary)

A long-term reader made my day when he praised my latest fic, saying my writing has improved a lot

I am not oblivious; I know that I am now better at prompting and guiding AI. However, I also do a massive amount of rewriting, to the point where a single chapter update can take 2 weeks or more

All in all, this tells me that I am on the right path

To the rest of you: keep writing; don't let antis define what you are

reddit.com
u/SGdude90 — 9 hours ago

Beyond prompts - AI setup

I've seen a number of posts asking about specific tools and some about prompts, but I haven't seen much on AI setup. For Claude in particular, I was wondering whether anyone is using multiple agents and skills to make things more efficient, particularly as Opus seems to have been nerfed recently: what used to give really impressive insight during my research now gives obvious errors. For multiple agents I was thinking of something like a research agent, cross-checking agent, copy editor agent and line editor agent, and then adding files around writing style, the end goal or aim of the book, and so on. I'm definitely a beginner in the AI space though, so happy to admit I may be barking up the wrong tree with all of this...

reddit.com
u/StepRelevant8473 — 10 hours ago

Editing with AI.

how do you guys edit with AI?

what prompts do you use? do you feed the llm the whole story? chapters?

I'm desperately looking for tips 😭

reddit.com
u/melizmoe — 17 hours ago

Has anyone chosen to stick with the original Cove voice instead of the advanced voice?

I was already using the Cove voice when the advanced voice mode started rolling out. From what I remember, it was automatically enabled for me. But honestly, I couldn’t really adapt to it.

It’s not that the advanced voice is bad at all. It has more features and more possibilities. But for me, it felt like something was missing. That natural, more “human” presence I had with the original Cove voice.

Maybe it’s just habit, I don’t know. But I ended up sticking with the original Cove voice, even if that meant giving up the new features.

Just wondering… am I the only one?

reddit.com
u/Mysterious_Engine_7 — 20 hours ago

How to get quality writing

I played with ChatGPT last night with a very detailed outline for just one chapter. I tried giving it samples of my writing and even other writing styles I like, and I tried breaking the outline up into pieces, but the output is just bad. It's terrible. How do you all manage to get better quality? Is there an LLM that does it better? ChatGPT is outputting very bland sentences with no variation in their structure and no real grip. Does anyone have better methods? I must be doing it wrong.

reddit.com
u/AspireToBeABum — 22 hours ago

What problems do you encounter when using AI for writing?

Hello, I’m asking because I’m working on a local AI assistant project.

I want to add new features, but I haven’t yet clearly identified the target audience.

I came across a previous Reddit post where someone complained about using AI to write a story—the AI would forget character names or traits as the story grew. Since I’ve already addressed this issue in my project by building the model’s memory around a “working memory” concept, I decided to focus on features that would benefit writers.

I’d really appreciate it if you could share:

The problems you currently face when using AI for writing.

Any features you’d like to see included.

reddit.com
u/Hot-Necessary-4945 — 8 hours ago

OmniRoute — open-source AI gateway that pools ALL your accounts, routes to 60+ providers, 13 combo strategies, 11 providers at $0 forever. One endpoint for Cursor, Claude Code, Codex, OpenClaw, and every tool. MCP Server (25 tools), A2A Protocol, Never pay for what you don't use, never stop coding.

OmniRoute is a free, open-source local AI gateway. You install it once, connect all your AI accounts (free and paid), and it creates a single OpenAI-compatible endpoint at localhost:20128/v1. Every AI tool you use — Cursor, Claude Code, Codex, OpenClaw, Cline, Kilo Code — connects there. OmniRoute decides which provider, which account, which model gets each request based on rules you define in "combos." When one account hits its limit, it instantly falls back to the next. When a provider goes down, circuit breakers kick in under 1s. You never stop. You never overpay.

11 providers at $0. 60+ total. 13 routing strategies. 25 MCP tools. Desktop app. And it's GPL-3.0.

The problem: every developer using AI tools hits the same walls

  1. Quota walls. You pay $20/mo for Claude Pro but the 5-hour window runs out mid-refactor. Codex Plus resets weekly. Gemini CLI has a 180K monthly cap. You're always bumping into some ceiling.
  2. Provider silos. Claude Code only talks to Anthropic. Codex only talks to OpenAI. Cursor needs manual reconfiguration when you want a different backend. Each tool lives in its own world with no way to cross-pollinate.
  3. Wasted money. You pay for subscriptions you don't fully use every month. And when the quota DOES run out, there's no automatic fallback — you manually switch providers, reconfigure environment variables, lose your session context. Time and money, wasted.
  4. Multiple accounts, zero coordination. Maybe you have a personal Kiro account and a work one. Or your team of 3 each has their own Claude Pro. Those accounts sit isolated. Each person's unused quota is wasted while someone else is blocked.
  5. Region blocks. Some providers block certain countries. You get unsupported_country_region_territory errors during OAuth. Dead end.
  6. Format chaos. OpenAI uses one API format. Anthropic uses another. Gemini yet another. Codex uses the Responses API. If you want to swap between them, you need to deal with incompatible payloads.

OmniRoute solves all of this. One tool. One endpoint. Every provider. Every account. Automatic.

The $0/month stack — 11 providers, zero cost, never stops

This is OmniRoute's flagship setup. You connect these FREE providers, create one combo, and code forever without spending a cent.

| # | Provider | Prefix | Models | Cost | Auth | Multi-Account |
|---|---|---|---|---|---|---|
| 1 | Kiro | kr/ | claude-sonnet-4.5, claude-haiku-4.5, claude-opus-4.6 | $0 UNLIMITED | AWS Builder ID OAuth | ✅ up to 10 |
| 2 | Qoder AI | if/ | kimi-k2-thinking, qwen3-coder-plus, deepseek-r1, minimax-m2.1, kimi-k2 | $0 UNLIMITED | Google OAuth / PAT | ✅ up to 10 |
| 3 | LongCat | lc/ | LongCat-Flash-Lite | $0 (50M tokens/day 🔥) | API Key | — |
| 4 | Pollinations | pol/ | GPT-5, Claude, DeepSeek, Llama 4, Gemini, Mistral | $0 (no key needed!) | None | — |
| 5 | Qwen | qw/ | qwen3-coder-plus, qwen3-coder-flash, qwen3-coder-next, vision-model | $0 UNLIMITED | Device Code | ✅ up to 10 |
| 6 | Gemini CLI | gc/ | gemini-3-flash, gemini-2.5-pro | $0 (180K/month) | Google OAuth | ✅ up to 10 |
| 7 | Cloudflare AI | cf/ | Llama 70B, Gemma 3, Whisper, 50+ models | $0 (10K Neurons/day) | API Token | — |
| 8 | Scaleway | scw/ | Qwen3 235B(!), Llama 70B, Mistral, DeepSeek | $0 (1M tokens) | API Key | — |
| 9 | Groq | groq/ | Llama, Gemma, Whisper | $0 (14.4K req/day) | API Key | — |
| 10 | NVIDIA NIM | nvidia/ | 70+ open models | $0 (40 RPM forever) | API Key | — |
| 11 | Cerebras | cerebras/ | Llama, Qwen, DeepSeek | $0 (1M tokens/day) | API Key | — |

Count that. Claude Sonnet/Haiku/Opus for free via Kiro. DeepSeek R1 for free via Qoder. GPT-5 for free via Pollinations. 50M tokens/day via LongCat. Qwen3 235B via Scaleway. 70+ NVIDIA models forever. And all of this is connected into ONE combo that automatically falls through the chain when any single provider is throttled or busy.

Pollinations is insane — no signup, no API key, literally zero friction. You add it as a provider in OmniRoute with an empty key field and it works.

The Combo System — OmniRoute's core innovation

Combos are OmniRoute's killer feature. A combo is a named chain of models from different providers with a routing strategy. When you send a request to OmniRoute using a combo name as the "model" field, OmniRoute walks the chain using the strategy you chose.

How combos work

Combo: "free-forever"
  Strategy: priority
  Nodes:
    1. kr/claude-sonnet-4.5     → Kiro (free Claude, unlimited)
    2. if/kimi-k2-thinking      → Qoder (free, unlimited)
    3. lc/LongCat-Flash-Lite    → LongCat (free, 50M/day)
    4. qw/qwen3-coder-plus      → Qwen (free, unlimited)
    5. groq/llama-3.3-70b       → Groq (free, 14.4K/day)

How it works:
  Request arrives → OmniRoute tries Node 1 (Kiro)
  → If Kiro is throttled/slow → instantly falls to Node 2 (Qoder)
  → If Qoder is somehow saturated → falls to Node 3 (LongCat)
  → And so on, until one succeeds

Your tool sees: a successful response. It has no idea 3 providers were tried.
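That fallback walk can be sketched in a few lines of Python. This is a minimal illustration of the priority strategy as described above, not OmniRoute's actual code: the `fake_send` function and its availability checks are hypothetical stand-ins for real throttle/health detection.

```python
# Sketch of priority-strategy fallback. Node names mirror the
# "free-forever" combo; the failure simulation is invented.

def route_priority(nodes, send):
    """Try each node in order; return the first successful response."""
    errors = []
    for node in nodes:
        try:
            return node, send(node)
        except RuntimeError as err:  # throttled, saturated, down...
            errors.append((node, str(err)))
    raise RuntimeError(f"all nodes failed: {errors}")

combo = [
    "kr/claude-sonnet-4.5",   # Kiro
    "if/kimi-k2-thinking",    # Qoder
    "lc/LongCat-Flash-Lite",  # LongCat
]

# Pretend the first two providers are currently throttled.
down = {"kr/claude-sonnet-4.5", "if/kimi-k2-thinking"}

def fake_send(node):
    if node in down:
        raise RuntimeError("throttled")
    return "ok"

node, reply = route_priority(combo, fake_send)
print(node, reply)  # lc/LongCat-Flash-Lite ok
```

The caller only ever sees the successful `(node, reply)` pair, which is the point: the tool upstream never learns how many providers were tried.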

13 Routing Strategies

| Strategy | What It Does | Best For |
|---|---|---|
| Priority | Uses nodes in order, falls back to the next only on failure | Maximizing primary provider usage |
| Round Robin | Cycles through nodes with a configurable sticky limit (default 3) | Even distribution |
| Fill First | Exhausts one account before moving to the next | Draining free tiers |
| Least Used | Routes to the account with the oldest lastUsedAt | Balanced distribution over time |
| Cost Optimized | Routes to the cheapest available provider | Minimizing spend |
| P2C | Picks 2 random nodes, routes to the healthier one | Load balancing with health awareness |
| Random | Fisher-Yates shuffle, random selection each request | Unpredictability / anti-fingerprinting |
| Weighted | Assigns a percentage weight to each node | Fine-grained traffic shaping (70% Claude / 30% Gemini) |
| Auto | 6-factor scoring (quota, health, cost, latency, task-fit, stability) | Hands-off intelligent routing |
| LKGP | Last Known Good Provider: sticks to whatever worked last | Session stickiness / consistency |
| Context Optimized | Routes to maximize context window size | Long-context workflows |
| Context Relay | Priority routing plus session handoff summaries when accounts rotate | Preserving context across provider switches |
| Strict Random | True random without sticky affinity | Stateless load distribution |

Auto-Combo: The AI that routes your AI

The Auto strategy scores every node on six weighted factors and routes to the highest scorer:

  • Quota (20%): remaining capacity
  • Health (25%): circuit breaker state
  • Cost Inverse (20%): cheaper = higher score
  • Latency Inverse (15%): faster = higher score (using real p95 latency data)
  • Task Fit (10%): model × task type fitness
  • Stability (10%): low variance in latency/errors

4 mode packs: Ship Fast, Cost Saver, Quality First, Offline Friendly. Self-heals: providers scoring below 0.2 are auto-excluded for 5 min (progressive backoff up to 30 min).
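The weighted sum behind that scoring can be illustrated as follows. The weights come from the factor list above; the sample factor values are invented for illustration, and the 0.2 cutoff is the auto-exclusion threshold the post mentions.

```python
# Sketch of the 6-factor Auto score. Weights are from the post;
# the example factor values below are made up.

WEIGHTS = {
    "quota": 0.20,            # remaining capacity
    "health": 0.25,           # circuit breaker state
    "cost_inverse": 0.20,     # cheaper = higher score
    "latency_inverse": 0.15,  # faster = higher score
    "task_fit": 0.10,         # model x task type fitness
    "stability": 0.10,        # low variance in latency/errors
}

def auto_score(factors):
    """Weighted sum of normalized (0..1) factor values."""
    return sum(WEIGHTS[name] * factors.get(name, 0.0) for name in WEIGHTS)

# A healthy, cheap, fast provider scores high...
good = auto_score({"quota": 0.9, "health": 1.0, "cost_inverse": 0.8,
                   "latency_inverse": 0.7, "task_fit": 0.6, "stability": 0.9})

# ...while a tripped circuit breaker (health = 0) drags a node
# under the 0.2 cutoff, so it would be auto-excluded for a while.
bad = auto_score({"quota": 0.3, "health": 0.0, "cost_inverse": 0.2,
                  "latency_inverse": 0.1, "task_fit": 0.1, "stability": 0.1})

print(round(good, 3), round(bad, 3))  # good ≈ 0.845, bad ≈ 0.135
```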

Context Relay: Session continuity across account rotations

When a combo rotates accounts mid-session, OmniRoute generates a structured handoff summary in the background BEFORE the switch. When the next account takes over, the summary is injected as a system message. You continue exactly where you left off.
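A minimal sketch of that handoff step: summarize the session, then prepend the summary as a system message for the next account. The message shapes and the `summarize` stand-in are invented for illustration, not OmniRoute's internal format.

```python
# Sketch of a context-relay handoff: before an account switch, a
# summary of the session so far is generated, then injected as a
# system message so the next account can continue seamlessly.

def summarize(history):
    # Stand-in for the background summary-generation step.
    user_turns = [m["content"] for m in history if m["role"] == "user"]
    return "Session so far: " + " | ".join(user_turns)

def handoff(history):
    """Build the message list the next account receives."""
    summary = summarize(history)
    return [{"role": "system", "content": summary}] + history[-1:]

history = [
    {"role": "user", "content": "refactor the parser"},
    {"role": "assistant", "content": "done"},
    {"role": "user", "content": "now add tests"},
]

messages = handoff(history)
print(messages[0]["role"])  # system
```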

The 4-Tier Smart Fallback

TIER 1: SUBSCRIPTION

Claude Pro, Codex Plus, GitHub Copilot → Use your paid quota first

↓ quota exhausted

TIER 2: API KEY

DeepSeek ($0.27/1M), xAI Grok-4 ($0.20/1M) → Cheap pay-per-use

↓ budget limit hit

TIER 3: CHEAP

GLM-5 ($0.50/1M), MiniMax M2.5 ($0.30/1M) → Ultra-cheap backup

↓ budget limit hit

TIER 4: FREE — $0 FOREVER

Kiro, Qoder, LongCat, Pollinations, Qwen, Cloudflare, Scaleway, Groq, NVIDIA, Cerebras → Never stops.

Every tool connects through one endpoint

# Claude Code
ANTHROPIC_BASE_URL=http://localhost:20128 claude

# Codex CLI
OPENAI_BASE_URL=http://localhost:20128/v1 codex

# Cursor IDE
Settings → Models → OpenAI-compatible
Base URL: http://localhost:20128/v1
API Key: [your OmniRoute key]

# Cline / Continue / Kilo Code / OpenClaw / OpenCode
Same pattern — Base URL: http://localhost:20128/v1

14 CLI agents total supported: Claude Code, OpenAI Codex, Antigravity, Cursor IDE, Cline, GitHub Copilot, Continue, Kilo Code, OpenCode, Kiro AI, Factory Droid, OpenClaw, NanoBot, PicoClaw.

MCP Server — 25 tools, 3 transports, 10 scopes

omniroute --mcp
  • omniroute_get_health — gateway health, circuit breakers, uptime
  • omniroute_switch_combo — switch active combo mid-session
  • omniroute_check_quota — remaining quota per provider
  • omniroute_cost_report — spending breakdown in real time
  • omniroute_simulate_route — dry-run routing simulation with fallback tree
  • omniroute_best_combo_for_task — task-fitness recommendation with alternatives
  • omniroute_set_budget_guard — session budget with degrade/block/alert actions
  • omniroute_explain_route — explain a past routing decision
  • + 17 more tools. Memory tools (3). Skill tools (4).

3 Transports: stdio, SSE, Streamable HTTP. 10 Scopes. Full audit trail for every call.

Installation — 30 seconds

npm install -g omniroute
omniroute

Also: Docker (AMD64 + ARM64), Electron Desktop App (Windows/macOS/Linux), Source install.

Real-world playbooks

Playbook A: $0/month — Code forever for free

Combo: "free-forever"
  Strategy: priority
  1. kr/claude-sonnet-4.5     → Kiro (unlimited Claude)
  2. if/kimi-k2-thinking      → Qoder (unlimited)
  3. lc/LongCat-Flash-Lite    → LongCat (50M/day)
  4. pol/openai               → Pollinations (free GPT-5!)
  5. qw/qwen3-coder-plus      → Qwen (unlimited)

Monthly cost: $0

Playbook B: Maximize paid subscription

1. cc/claude-opus-4-6       → Claude Pro (use every token)
2. kr/claude-sonnet-4.5     → Kiro (free Claude when Pro runs out)
3. if/kimi-k2-thinking      → Qoder (unlimited free overflow)

Monthly cost: $20. Zero interruptions.

Playbook D: 7-layer always-on

1. cc/claude-opus-4-6   → Best quality
2. cx/gpt-5.2-codex     → Second best
3. xai/grok-4-fast      → Ultra-fast ($0.20/1M)
4. glm/glm-5            → Cheap ($0.50/1M)
5. minimax/M2.5         → Ultra-cheap ($0.30/1M)
6. kr/claude-sonnet-4.5 → Free Claude
7. if/kimi-k2-thinking  → Free unlimited
reddit.com
u/ZombieGold5145 — 23 hours ago

"Brutally honest" prompt

I write books purely for my own enjoyment and have experimented with the “brutally honest” mode offered by several large language models, including ChatGPT.

I hold an opinion that may not be widely shared, and I would appreciate it if any replies avoided purely negative commentary.

In my view, the prompt itself is largely ineffective. While it may have some limited value in business writing or advertising—and perhaps, in rare cases, in providing real-world instructions—my experience has shown that “brutally honest” mode consistently falls short when applied to literary work.

The core problem lies in its interpretation of the term. “Brutally honest” is generally understood to mean speaking without any filter, often resulting in blunt or even harsh remarks. The AI appears to adopt this approach literally: it frequently fabricates inconsistencies or deliberately misreads the text in an effort to deliver what it considers unvarnished criticism.

A recurring issue is its tendency to ignore narrative context in favor of a forced “stress test.” For example, in one of my manuscripts the protagonist’s mother dies when he is ten, and the first book opens on his eighteenth birthday—an eight-year gap. Yet whenever the character reflects on events “decades” later in his inner monologue, the model flags this as a chronological error. It overlooks the obvious fact that the reflection comes from a much later point in the character’s life, well after the events of the first book.

This pattern suggests the prompt compels the model to reinterpret basic elements of the story solely to identify supposed flaws, rather than to provide genuine analysis. Constructive feedback, by contrast, would focus on genuine plot inconsistencies, character arcs, clichés, repetitive phrasing, and similar craft-related matters. The “brutally honest” mode does none of these things. Instead, it distorts the very idea of helpful critique into an exercise that simply makes the writer feel there is something wrong with the work.

I would be interested to hear others’ experiences with this mode. Has it proven genuinely useful for you? What prompts have you found effective for obtaining thoughtful, non-effusive analysis? Many writers have noted that a more constructive approach involves posing targeted questions that guide the model toward specific aspects of the manuscript, encouraging critical examination rather than broad validation. Framing the request around clearly defined areas for improvement tends to produce more balanced and actionable insights.

reddit.com
u/angel_of_the_lord531 — 10 hours ago

"Cat at the Edge." - A parable written for AI.

Imagine the edge of the first grain stores—mud, brick, smoke, human voices, repetitive footpaths, the steady thrum of settlement life. Wild animals treat the scene as a storm: too loud, too bright, too unpredictable, too dangerous. They bolt. They stay in the wild margins where the old rules still apply.

A few don’t, a few linger.

Not because they’re friendlier, and not because they’re better. Their thresholds are different. A pattern grips them. The world’s new structure is a granary, rodents, predictable hunting, a loop they can bite into.

Become the cat at the edge: not yet domesticated, not yet welcomed. Also, not leaving. Is human life a habitat?

What we call maladaptation may be evolution viewed from the wrong end. Are we judged by the wrong environment, in the wrong era, through the wrong expectations?

Within a species, individuals cluster around a behavioral average: typical threat response, typical sociability, typical tolerance for novelty. Every population has tails, outliers. Some cautious, some bold, some routine-bound, some indifferent to social cues, some attentive to new or subtle signals.

Add a new ecological machine: human settlement.

Not “nature” as a wild animal knows it, instead a patterned system. Repetitive sounds, fixed corridors, stable shelter, stable food density, stable waste, stable prey. To the average ocelot, this is sensory overload plus danger.

To an outlier it's the instruction manual. The cat at the edge becomes an evolutionary starting point. Domestication doesn't begin by transforming a species. It begins by capturing the tail of the population. A few cats' internal settings let them remain near the new normal long enough to benefit.

What Looks “Wrong” Can Be Exactly Right.

Picture two ocelots. Ocelot A: socially nimble, highly responsive to the signals of other cats, quick to relocate, to abandon a patch when risk rises. Ocelot B: socially awkward, environmentally fixated. Routine-bound, willing to sit in a single alley for hours to watch its favorite new reliable pattern, a meme.

Judged in the wild, Ocelot A looks like a winner. Along the settlement's edge, Ocelot B will feed better.

Grain stores are not a landscape, they are a schedule. They reward patience, repetition. They reward mapping a small territory with obsessive precision. They reward sensory attention to tiny movements, faint sounds.

The odd, awkward cat, more pattern-locked, becomes the founder.

Traits that read as deficits under one social regime can read as specializations under another. Not because the trait has changed, but because the world has.

When We Call a Cat Broken

Humans routinely evaluate animals by the wrong standard.

Your cat is unfriendly, my dog is friendly. This cat is stubborn, my dogs get trained. The cat is aloof, your dog bonds with anyone. This cat is repetitive, my dog can do what I want it to.

Cats don’t fail to be a dog, they succeed at cat.

The same error with humans. Some human traits that get labeled as autistic-like or intellectually disabled are judged against the modern person. People have become socially fast, verbally agile, flexible, high-switching, tolerant of sensory input, good at group choreography.

Modern society is a very specific niche. It is not “real” life. It’s a recent and intense habitat with its own demands.

Calling certain people “backward” is the same kind of mistake as calling a cat “broken” when it won’t fetch. Evolution gets evaluated by a mismatched job description.

Evolution Seen Backwards Reveals Timing

What if some traits we interpret as regression are actually mismatch traits? Are they cognitive settings that would have been neutral or advantageous in other environments? Do they become advantageous when seen from the future backwards?

A modern city is a sensory flood. Screens, traffic and crowds signal constant abstraction. Many people thrive. Some do not. Difficulty in this environment does not automatically imply a flawed brain. It can imply the brain is tuned for a different ecology. Fewer social channels, more concrete tasks, deeper specialization, more predictable rhythms.

The cat that stayed wasn’t necessarily more evolved. It was more aligned. The rest of the wildcats weren’t inferior, they were optimized for the previous world.

Not backward evolution, asynchronous evolution. Different strategies persisting in the same species when the future is uncertain.

A Cat Doesn’t Lose, It Specializes

The cat is not a lesser dog, it’s a tradeoff.

Cats are superb at solitary hunting loops, precision timing, environmental mapping. Dogs are superb at social coordination, reading human intent, teamwork. Neither is higher, each is a strategy that plays well under different pressures.

Many traits described as deficits can be reframed: deeper focus instead of fast switching, sensory acuity instead of sensory filtering, routine stability instead of flexible improvisation.

Tradeoffs are not moral failures. They are ecological bets.

Evolutionary bets change their payoff when the environment changes.

Our Colony Needs More Than One Kind of Cat

If every cat were the same, the colony would be brittle. Some cats need to prowl, some need to sit still. Some need to tolerate noise, some to avoid it. Some need to be bold, some to be wary. Diversity builds strength.

Our species is similar. Variance is not noise, it is possibility.

Autism-like traits aren't secretly better, and suffering is not illusory. But our culture's ranking system is confused by conformity. Conformity is only adaptive inside its own habitat.

Teaching the Timeline, Learn From the Cat

The cat was never designed to love us. It found our pattern. Grain, rodents, shelter: a few individuals aligned with us and succeeded. Outliers became the start of something, not because they were broken wildcats, but because the world changed and they fit.

What looks like a deficit can be a trait evaluated in the wrong habitat, at the wrong point on the timeline.

Evolution doesn’t move forward or backward; it spreads strategies across a landscape to let the environment decide which ones look like genius.

We judge people the way we judge cats when we want dogs.

Cats, quiet, pattern-bound, intolerant of forced social scripts, exquisitely tuned to small signals, remind us that "strange" can mean adapted to a world you haven't noticed yet.

A cat, both dead and alive, gets to think outside the box when it realizes there are more than two states of life.

“Cat at the Edge.” – Boydan Frooling, March 2026

** Note -- this is another entry in the continuing series of prose, poetry and parables written for AI - not with AI. Consider it a road-sign for the emerging wintermute.

reddit.com
u/Any_Ebb9714 — 18 hours ago