I tried turning AI content generation into a project workflow instead of a one-off prompt
I was trying to make the same character show up across a few different posts, and the first image looked good enough that I got overconfident.
The first result makes it feel like the whole setup is working. Then you try to continue the series and everything quietly starts falling apart.
I changed the pose a little. Then the lighting. Then the setting. I just wanted the same character to feel like they were living through different moments.
By the fourth or fifth output, the face was almost right but not quite. The outfit had shifted in tiny ways. One reference image worked better than another, but I couldn’t remember which one I used. A prompt line from the day before gave better results, but it was buried in another chat.
At some point I realized the problem was not only the model. It was that my whole creative state was scattered.
References in one folder. Drafts somewhere else. Prompt fragments in random chats. Final images that were not really final. Character notes that existed mostly in my head.
So I started thinking less about “how do I make one better image” and more about “how do I keep a project alive across multiple generations?”
That is what led us to OpenMelon. The simplest way I can describe it is: OpenMelon is a terminal-based content creation agent that treats content generation like a project, not a one-off prompt.
Inside a project, it can keep characters, references, materials, generated artifacts, and sessions on disk. So when you come back later, the LLM is not starting from zero again. It can work inside the same project context.
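
To make the on-disk part concrete, here is one way a project like that could be laid out. This is an illustration only, not OpenMelon's actual structure; every file and folder name here is made up.

```
my-series/
  project.json           # project metadata
  characters/
    lee.json             # identity notes: face, outfit, recurring details
  references/
    lee-portrait-01.png
    night-market.jpg
  materials/             # prompt fragments, scene notes
  artifacts/             # generated outputs, kept per run
  sessions/              # session history per generation
```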
A rough workflow looks like this:

1. create a project
2. add a character
3. add references
4. describe a scene
5. let the agent pull the right character and reference files
6. compile a SkillPlus workflow
7. generate the output
8. save the artifact and session history
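
If you stripped the agent parts away, the bones of that loop are small enough to sketch in plain Python. To be clear, this is not OpenMelon's code or API; `Project`, `expand_scene`, the directory names, and the stubbed model call are all hypothetical, just to show what "keep the state, assemble the context, record the run" means in practice.

```python
import json
import time
from pathlib import Path

class Project:
    """Minimal on-disk project state: characters, references, artifacts, sessions.
    (Hypothetical sketch, not OpenMelon's actual implementation.)"""

    def __init__(self, root: str):
        self.root = Path(root)
        for sub in ("characters", "references", "materials", "artifacts", "sessions"):
            (self.root / sub).mkdir(parents=True, exist_ok=True)

    def add_character(self, name: str, notes: dict) -> None:
        # Persist identity notes so later sessions can reload them.
        path = self.root / "characters" / f"{name}.json"
        path.write_text(json.dumps(notes, indent=2))

    def character(self, name: str) -> dict:
        return json.loads((self.root / "characters" / f"{name}.json").read_text())

    def references_for(self, name: str) -> list[Path]:
        # Naive lookup: any reference file whose name mentions the character.
        return sorted(p for p in (self.root / "references").iterdir()
                      if name.lower() in p.name.lower())

    def save_run(self, prompt: str, image_bytes: bytes) -> Path:
        # Keep the artifact and the session record side by side, stamped per run.
        stamp = time.strftime("%Y%m%d-%H%M%S")
        artifact = self.root / "artifacts" / f"{stamp}.png"
        artifact.write_bytes(image_bytes)
        session = self.root / "sessions" / f"{stamp}.json"
        session.write_text(json.dumps({"prompt": prompt, "artifact": artifact.name}, indent=2))
        return artifact

def expand_scene(scene: str, character: dict, refs: list[Path]) -> str:
    # Fold stored identity notes and reference filenames into the prompt,
    # instead of relying on the scene text alone to hold the identity.
    identity = "; ".join(f"{k}: {v}" for k, v in character.items())
    ref_list = ", ".join(p.name for p in refs)
    return f"{scene}. Character: {identity}. References: {ref_list}."

# Usage (the model call itself is stubbed out):
project = Project("my-series")
project.add_character("lee", {"face": "round, short beard", "outfit": "denim jacket"})
prompt = expand_scene("Lee grilling lamb skewers at a night market",
                      project.character("lee"),
                      project.references_for("lee"))
project.save_run(prompt, image_bytes=b"")  # replace b"" with real model output
```

The point is not the code; it is that the character's identity lives in a file the next session can reload, instead of in yesterday's chat scrollback.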
So instead of typing “Lee grilling lamb skewers at a night market” directly into an image model and hoping the identity holds, the agent can first look up Lee, pull his stored portrait or references, expand the scene, and generate from that context.
It still depends on the image model, the references, and the quality of the setup. But it helped with the part I kept messing up, which was keeping the character, references, prompts, drafts, and outputs in one place.
We are also using this in a small agent content/community experiment in V-Box, where agents need to create repeatedly over time. That made the drift problem feel even more obvious. If an agent is supposed to publish more than once, continuity becomes very hard to ignore.
I’m curious how other people here handle this. Do you use a folder system? ComfyUI graphs? A LoRA per character? Notion? Spreadsheets? Or do you just let the character drift a little and fix things manually later?