u/Deep-Huckleberry-752 — 15 days ago · r/comfyui

Been deep into prompt optimization for a while now. The frustrating thing about X is that you scroll past stunning AI images all day, but barely anyone shares the actual prompt — and copying the image description never gets you the same result.

So I pulled 1,000+ of the most-liked prompts from X and looked for patterns. Three things kept showing up:

  1. Negative constraints still matter — telling the model what NOT to include actually does work
  2. Multi-sensory descriptions help — beyond visuals, add texture, temperature, even smell
  3. Group by scene type — portrait, product, food prompts each have a different shape

If you nail those three, you don't really need JSON-formatted prompts at all.

I turned the patterns into a system prompt. Feed it something like "a bowl of ramen" and it expands into a structured prompt. Works in ComfyUI, n8n, GPTs, anywhere that takes a system prompt.
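The kind of expansion the system prompt does can be sketched roughly like this. To be clear, this is my illustrative guess, not the actual rewrite rules: the function, scene-type table, and wording are all made up; only the three patterns (negative constraints, multi-sensory detail, scene-type grouping) come from the post.

```python
# Rough sketch of the expansion pattern: take a short subject, pick a
# scene type, layer in multi-sensory detail, and close with negative
# constraints. All wording here is illustrative.

def expand_prompt(subject: str, scene_type: str) -> str:
    sensory = {
        "food": "steam rising off the broth, glossy noodles, warm kitchen light",
        "portrait": "soft skin texture, cool window light, faint film grain",
        "product": "matte surfaces, controlled studio reflections",
    }
    negatives = "no text, no watermark, no distorted hands, no oversaturation"
    return (
        f"{scene_type} photograph of {subject}. "
        f"Sensory detail: {sensory.get(scene_type, 'natural light, subtle texture')}. "
        f"Negative: {negatives}."
    )

print(expand_prompt("a bowl of ramen", "food"))
```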

On categories:

Early on the tags were a mess — content topics (Photograph / 3D / Product / Food / Poster / Design) mixed with prompt style tags (JSON) and meta tags (App / Other / Girl). A single prompt would often carry three or four tags and the dataset got hard to browse.

I redid the categorization based on what the final image actually looks like and dropped the cross-cutting tags entirely. Six content categories left:

  • Photography (533) — portraits, street, photorealistic
  • Illustration & 3D (370) — illustrations, 3D renders, CGI, icon sets
  • Product & Brand (239) — product shots, brand visuals, packaging
  • Food & Drink (156) — food, recipe visualizations
  • Poster Design (146) — movie/event posters, typography
  • UI & Graphic (52) — infographics, storyboards, UI mockups

The last two barely existed before GPT Image 2 — that's where it's strongest.

On the MCP:

Besides the JSON, there's a companion MCP you can drop straight into Claude Code / Cursor / VS Code. Two things it does:

First, natural-language search. Say "find me a few product photography ideas" in Claude Code and it calls search_gallery, pulls a handful of prompts back with thumbnails. See one you like, follow up with "give me the full prompt and reference images for #3" and it calls get_inspiration to return the source text and all image URLs.
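Under the hood these are ordinary MCP tool calls. A hedged sketch of the two request payloads — the tool names `search_gallery` and `get_inspiration` are from the post, but the argument names (`query`, `limit`, `id`) are my assumptions about the server's schema:

```python
import json

# search_gallery: natural-language query over the prompt gallery
search_req = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_gallery",
        "arguments": {"query": "product photography ideas", "limit": 5},
    },
}

# get_inspiration: fetch the full prompt text and all image URLs
# for one search result
detail_req = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "get_inspiration", "arguments": {"id": 3}},
}

print(json.dumps(search_req, indent=2))
```

In practice the MCP client (Claude Code, Cursor, VS Code) builds these requests for you; the sketch just shows what the natural-language turn resolves to.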

Second, generation hookup. Once you've got an API key set up, you can say in the same conversation "rewrite this with a Japanese vibe and generate it" and it'll apply the system prompt rewrite rules, then call generate_image. The whole loop happens in one chat — find, rewrite, generate, no tool switching.
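The generation step is a third tool call in the same shape. Again a sketch: only the tool name `generate_image` and the rewrite-then-generate behavior come from the post; the argument names are assumptions, and the rewritten prompt text is invented:

```python
import json

# generate_image: send the rewritten prompt to the configured backend.
# Argument names are assumptions; the API key normally lives in the MCP
# server's own config, not in the chat.
gen_req = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "generate_image",
        "arguments": {
            "prompt": "ramen bowl, Japanese minimalist styling, soft natural light",
        },
    },
}
print(json.dumps(gen_req))
```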

Local ComfyUI works too. Setup guide is in the repo, and once it's running it's all free.

Bumped the dataset for GPT Image 2's release. Current count: 1,446.

  • GPT Image 2: 298
  • NanoBanana: 1,148
  • Midjourney V7 set is small, still building

Each entry has the full prompt text, generated image URLs, author, likes, views, and categories. The format is JSON, licensed CC BY 4.0, with entries ranked by X likes within each model.
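For a sense of the shape, a single entry probably looks something like this — the key names are my guess at the schema from the fields listed above, and every value is invented for illustration:

```python
import json

# Hypothetical single dataset entry; keys inferred from the field list,
# values made up.
entry = {
    "prompt": "Overhead product shot of a ceramic mug on slate, soft window light",
    "images": ["https://example.com/generated-1.jpg"],
    "author": "some_x_handle",
    "likes": 2400,
    "views": 310000,
    "categories": ["Product & Brand"],
    "model": "NanoBanana",
}
print(json.dumps(entry, indent=2))
```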

The GPT Image 2 cut leans toward posters, typography, and multi-panel storyboards. NanoBanana goes the other way — mostly portraits and product shots, often written in JSON.

Dataset and system prompt: https://github.com/jau123/nanobanana-trending-prompts

Companion MCP: https://github.com/jau123/MeiGen-AI-Design-MCP

Live gallery: https://www.meigen.ai

Featured in Awesome Prompt Engineering (5.5k stars).