r/ChatGPTCoding

[Community Showcase] I stopped letting Claude design my UI. Now I start from a Framer template and build features on top. Here's the workflow.

Every side project I shipped last year had the same tell: the "vibe-coded app" look. Rounded cards, gradient buttons, Inter font, a hero with a centered H1. You know the one. Claude Code ships features fast — but left to its own taste, every app it builds looks like every other app it builds.

The fix wasn't a better prompt. It was a better starting point.

What I do now:

  1. Browse framer.com/templates or any public Framer site whose design I actually like. Designers ship ridiculous work there — real typography, real layout thinking, real motion.
  2. Export the site into a clean HTML/CSS/JS folder (I built a tool for this — link at the bottom, not the point of the post).
  3. Drop the folder into a fresh repo. Open Claude Code.
  4. Prompt: "This is the design system and page structure. Keep all styles, typography, and layout. Wire up auth, a Postgres schema for X, a /dashboard route, and replace the pricing section with Stripe checkout."
  5. Claude now builds features on top of a designed system instead of inventing one from scratch. It respects the spacing, the type scale, the component patterns that are already there.
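A minimal sketch of the kind of CLAUDE.md guidance that enforces step 5 (the contents and paths here are illustrative, not the exact starter file):

```markdown
# CLAUDE.md (illustrative starter)

## Design rules
- The exported HTML/CSS in /site is the source of truth for all styling.
- Reuse the existing classes, spacing scale, and type scale; do not
  introduce new utility classes, inline styles, or new fonts.
- New components must copy the markup patterns of existing sections.

## Feature work
- Put new routes and server code under /app; leave the exported pages
  in /site untouched unless a task explicitly says otherwise.
- Ask before modifying anything in /site/assets.
```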

Why it works with AI coding specifically:

  • Claude is great at modifying existing structure, bad at inventing taste. You're playing to its strength.
  • The HTML/CSS becomes ambient context. It stops suggesting bg-blue-500 rounded-lg and starts matching what's already there.
  • You skip the 3-hour "make it not look generic" loop that never fully works anyway.

What it's not:

  • Not for ripping off someone's live production site. Use your own Framer drafts, the free community templates, or buy a template. Framer's template marketplace is cheap and the licensing is clear.
  • Not a replacement for a real designer if you're shipping a serious product. But for MVPs, internal tools, landing pages, side projects? It collapses the design-to-code gap to about 5 minutes.

The tool: letaiworkforme.com - paste a public Framer URL, get a clean offline folder. Free preview. I built it because I was doing this workflow manually and it was tedious.

Happy to share my CLAUDE.md starter and the exact prompt I use for the "wire features onto this design" step if anyone wants it.

u/BaCaDaEa — 4 hours ago

Sanity check: using git to make LLM-assisted work accumulate over time

I’m not trying to promote anything here... just looking for honest feedback on a pattern I’ve been using to make LLM-assisted work accumulate value over time.

This is not a memory system, a RAG pipeline or an agent framework.

It’s a repo-based, tool-agnostic workflow for turning individual tasks into reusable, durable knowledge.

The core loop

Instead of "do task" -> "move on" -> "lose context" I’ve been structuring work like this:

Plan
- define approach, constraints, expectations
- store the plan in the repo
Execute
- LLM-assisted, messy, exploratory work
- code changes / working artifacts
Task closeout (use task-closeout skill)
- what actually happened vs. the plan
- store temporary session outputs
Distill (use distill-learning skill)
- extract only what is reusable
- update playbooks, repo guidance, lessons learned
Commit
- cleanup, inspect and revise
- future tasks start from better context
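One possible way these artifacts could be laid out in a repo (directory names are illustrative, not prescribed by the workflow):

```
repo/
  plans/          # one plan file per task, written before starting
  tasks/          # raw session outputs and closeout notes (gitignored)
  playbooks/      # distilled, reusable guidance
  LESSONS.md      # cross-cutting lessons learned
  src/            # the actual project
```

Only `plans/`, `playbooks/`, and `LESSONS.md` get committed, so the durable knowledge goes through normal PR review while the messy per-task artifacts stay local.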

Repo-based and Tool-agnostic

This isn’t tied to any specific tool, framework, or agent setup.

I’ve used this same loop across different coding assistants, LLM tools and environments. When I follow the loop, I often mix tools across steps: planning, execution + closeout, distillation. The value isn’t in the tool, it’s in the structure of the workflow and the artifacts it produces.

Everything lives in a normal repo: plans, task artifacts (gitignored), and distilled knowledge. That gives me: versioning, PR review and diffs. So instead of hidden chat history or opaque memory, it’s all inspectable, reviewable and revertible.

What this looks like in practice

I’m mostly using this for coding projects, but it’s not limited to that.

Without this, I (and the LLM) end up re-learning the same things repeatedly or overloading prompts with too much context. With this loop: write a plan, do the task, close it out, distill only the important parts, commit that as reusable guidance. Future tasks start from that distilled context instead of starting cold.

Where I’m unsure

Would really appreciate pushback here:

  1. Is this actually different from just keeping good notes and examples in a repo?
  2. Is anyone else using a repo-based workflow like this?
  3. At scale, does this improve context over time, or just create another layer that eventually becomes noise?

The bottom line question

Does this plan -> closeout -> distill loop feel like a meaningful pattern, or just a more structured version of things people already do? Where would you expect it to break?

reddit.com
u/Hypercubed — 16 hours ago

has anyone here actually used AI to write code for a website or app specifically so other AI systems can read and parse it properly?

I am asking because of something I kept running into with client work last year.

I was making changes to web apps and kept noticing that ChatGPT and Claude were giving completely different answers when someone asked them about the same product.

same website. same content. different AI. completely different understanding of what the product actually does. at first I thought it was just model behaviour differences. then I started looking more carefully at why.

turns out different AI systems parse the same page differently. Claude tends to weight dense contextual paragraphs. ChatGPT pulls more from structured consistent information spread across multiple sources. Perplexity behaves differently again.

so a page that reads perfectly to one model is ambiguous or incomplete to another.

I ended up writing the structural changes manually. actual content architecture decisions. how information is organised. where key descriptions live.

I deliberately did not use AI to write this part. felt like the irony would be too much using ChatGPT to write code that tricks ChatGPT into reading it better.

after those changes the way each AI described the product became noticeably more accurate and more consistent across models.
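a generic illustration of the kind of structural change being described (not the poster's actual edits): embedding machine-readable metadata such as schema.org JSON-LD gives every model one canonical product description to anchor on. every product detail below is a hypothetical placeholder.

```html
<!-- One canonical, machine-readable description of the product.
     All names and values here are hypothetical placeholders. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "ExampleApp",
  "applicationCategory": "BusinessApplication",
  "description": "ExampleApp schedules invoices and sends payment reminders for freelancers.",
  "offers": { "@type": "Offer", "price": "12.00", "priceCurrency": "USD" }
}
</script>
```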

what I am genuinely curious about now.

has anyone here actually tried using AI coding tools to write this kind of architecture from the start. like prompting Claude or ChatGPT to build a web app specifically optimised for how AI agents parse and recommend content.

or is everyone still ignoring this layer completely because the tools we use to build do not think about it at all.


What does generative AI code look like? (Non coder here)

I'm making an art show piece on generative AI and I'd love to include some lines of code from generative AI. I could just use any old code and assume the average person wouldn't know the difference, but I'd much rather be authentic, otherwise what's the point really? So if anyone could show me what some generative AI code looks like or where I can see something like that, that'd be awesome.

u/bizkit_disc — 1 day ago

Self Promotion Thread

Feel free to share your projects below! If you want to be included in our Project Roundup or get a chance to have a post of your own pinned to the top of the sub as a Community Showcase, send us modmail with:

1 - Your project name

2 - A link to it

3 - A brief, 1-2 sentence summary of it.

Project Roundup:

BantamAI ( https://apps.apple.com/us/app/bantam-ai/id6759182483 ) lets you use top AI models right in iMessage. Generate text, create images, add captions, and share your results without leaving the conversation.

Tailtest ( https://github.com/avansaber/tailtest for Claude users, https://github.com/avansaber/tailtest-codex for Codex users ) Stop shipping broken code. Tailtest runs inside Claude Code or Codex and auto-tests every file the agent touches -- so when it fixes one thing and breaks another, you catch it before users do. Zero prompts, zero setup, just install and go.

Property Peace (https://propertypeace.io) is a property management app built for independent landlords who want a simpler alternative to spreadsheets and bloated property software. It helps owners manage properties, tenants, rent collection, maintenance requests, and communication in one place, with a focus on saving time and making small-scale landlording easier.

Hamster Wheel ( https://github.com/jmpdevelopment/hamster-wheel ) Self-hosted desktop app that polls job boards and uses an LLM (OpenAI or a local Llama via Ollama) to score listings against your CV. No cloud backend, no account, no telemetry.

CodeLore (https://marketplace.visualstudio.com/items?itemName=jmpdevelopment.codelore https://github.com/jmpdevelopment/codelore) VSCode extension that captures what AI agents and humans learn about a codebase — decisions, gotchas, business rules — as structured YAML alongside your source, and feeds it back to Claude Code, Cursor, and Copilot before they touch the code.

The Last Code Bender ( https://thelastcodebender.com ) is an open-source developer legacy platform where each rank can be claimed by only one developer forever, earned by contributing a custom-built profile to the codebase.

Agntx ( agntx.app ) is an MCP server that syncs shared project context across your team so every Claude Code session starts informed -- no more re-explaining your stack, decisions, or gotchas. Four commands: /status to load context, /save to capture what happened, /diff to see changes, /resolve for conflicts.

CheckMyVibeCode ( checkmyvibecode.com ) Vibe coders finally have their own place. CheckMyVibeCode is where AI-built projects live permanently — with the full story behind them and real community feedback from people who actually understand what you built. A marketplace to buy and sell projects is coming soon.

Tripsil App ( https://invites.tripsil.com/i/app ) Planning group trips gets messy across WhatsApp, Splitwise, and Docs; Tripsil brings everything into one simple app for planning, expenses, chat, and memories, with unlimited trips and expenses for free.

u/BaCaDaEa — 4 hours ago

We just did an "AI layoff" due to rising costs

Turns out AI is getting way too expensive. We just canceled 5 of our AI subscriptions and hired 2 mid-level devs instead.

We tested them with that famous car wash prompt, and their response was literally: "Bro, you don't walk to a car wash, don't be ridiculous. You'll get tired on the way back, just drive the car."

Hey, at least they don't hallucinate. The only downside is their coffee compute costs are a bit high right now, but we're planning to fine-tune that in the next sprint.

10/10 recommended.

Edit: They answered every single question we threw at them today without hitting us with a "7.5x token usage" warning. Plus, they actually crack jokes and liven up the office. Honestly, their price-to-performance ratio is off the charts.

u/Complete-Sea6655 — 4 days ago

Tired of juggling like... 4 different API subs and dashboards. Anyone else dealing with this?

Idk if it's just me but managing all these keys and billing pages is getting really annoying. like every week I end up opening a bunch of tabs just to check usage and it's always scattered everywhere. feels kinda messy tbh.

My setup right now is also a bit all over the place. I send the heavier stuff to one model, then the more repetitive / cheap tasks to a few others... and yeah it works I guess, but keeping track of everything is just... not fun. and I keep feeling like I'm missing something or overspending somewhere.

Also the switching between dashboards all the time is lowkey exhausting. maybe I'm just overcomplicating this but it doesn't feel very smooth right now.

Is there some kind of all-in-one thing for this? like one place to just handle everything? or is everyone just dealing with this same mess and I should get used to it lol.

Curious what people are actually using day to day, not like polished setups, just real workflows.

u/zoro____x — 18 hours ago

An old designer’s perspective on Claude design.

I started designing websites in 1999, back when there was no Figma and no component libraries; it was just you, a bunch of code, and a variety of hacks to make Adobe tools built for print work for the web. Over the past two decades I’ve worked on internal teams for big corporates, at large agencies, and now head an agency of my own. Along the way the field has changed and matured to an incredible degree: design systems, UX standards, and atomic design principles have formalized design and codified it into rules and patterns.

When I see Claude Code or Google Stitch, I too see that its initial output is slop, and that the high-definition nature of the output hides how generic and insubstantial it really is.

But that’s not the point.

The point is that we have turned the bulk of design work into pattern reproduction. I’m not talking about the part where we understand users’ needs, or wrangle with conflicting business requirements. I’m talking about the unpopular truth that, from an economic perspective, the vast majority of UX and visual design is maintaining design systems: cobbling together functionality from pre-existing functionality with very little variation. Small, often inconsequential variations on color palettes or margins. Nobody wants to say this on LinkedIn or at a conference, but as an industry, only 5% of us are actually developing brands from scratch or shifting the product design paradigm. The rest are just reading tickets and assembling components.

And the thing about components, atomic design, and patterns is that they’re structured, logical, formalized, repetitive. Consistency and adherence are the point. This work was designed to be automated. It’s simply training data waiting for AI to come along, and now it’s here. The fact that the output doesn’t look like much right now doesn’t negate the fact that it is going to be very, very good at this.

Everyone who works on a big product team knows that 90% of the work is patterns and systems. Will there be work for designers next to AI? Sure, for 10% of the current workforce - the ones who were doing the client/stakeholder wrangling bit anyway. But if you’re in the other 90%, it might as well be that design as a discipline has ceased to exist.

u/Complete-Sea6655 — 3 days ago

I built a tool that turns AI design screenshots into reusable web assets

I’ve been building a small AI-powered tool for turning finished design screenshots into reusable web assets.

It takes a full screenshot, suggests fragments that could be extracted, lets you approve them or select different areas manually, then cuts each piece out and uses AI (currently Nano Banana) to generate the final asset. After that, it also creates multiple size variants optimized for web use.

The idea is to make it easier to turn AI-generated designs/mockups into actual transparent PNG graphics, icons, and illustrations instead of recreating everything by hand.

I made a short demo video of the workflow.

Would anyone here be interested in trying something like this if I release it?

u/mlody991 — 2 days ago

Looking for an AI tool to design my UI that has human and LLM readable exports.

I’m trying to find a web-based AI UI/mockup tool for a Flutter app, and I’m having trouble finding one that fits what I actually want.

What I want is something that can generate app screens mostly from prompts, with minimal manual design work, and then let me export the design as a plain text file that an LLM can read easily. I do not want front-end code export, and I do not want to rely on MCP, Figma integrations, or just screenshots/images. Ideally it would export something like Markdown, JSON, YAML, HTML or some other text-based layout/spec description of the UI.

Does anyone know a tool that actually does this well? I tried Google Stitch and it only exports to proprietary formats.

I like to have intimate control of my app development process, so having my visual design prompts output straight as code is no good for me.
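For concreteness, the kind of plain-text export being asked for might look something like this (a hypothetical format and field names, not the output of any existing tool):

```yaml
# Hypothetical text-based spec for one screen of the app
screen: login
layout: column
children:
  - type: text
    role: heading
    value: "Welcome back"
  - type: input
    label: "Email"
    keyboard: email
  - type: input
    label: "Password"
    obscured: true
  - type: button
    label: "Sign in"
    action: submit
```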

u/Previous-Display-593 — 3 days ago

Specification: the most overloaded term in software development

Andrew Ng just launched a course on spec-driven development. Kiro, spec-kit, Tessl - everybody's building around specs now. Nobody defines what they mean by "spec."

The word means at least 13 different things in software. An RFC is a spec. A Kubernetes YAML has a literal field called "spec." An RSpec file is a spec. A CLAUDE.md is a spec. A PRD is a spec.

When someone says "write a spec before you prompt," what do they actually mean?

I've been doing SDD for a while and it took me way too long to figure this out. Most SDD approaches use markdown documents - structured requirements, architecture notes, implementation plans. Basically a detailed prompt. They tell the agent what to do. They don't verify it did it correctly.

BDD specs do both. The same artifact that defines the requirement also verifies the implementation. The spec IS the test. It passes or it doesn't.

If you want the agent to verify its own work, you want executable specs. That's the piece most SDD tooling skips.
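A toy example of the distinction, in Python with pytest: the same artifact states the requirement and verifies the implementation. `apply_discount` is a hypothetical function under test, not part of any SDD tool.

```python
# An executable spec: requirement and verification live in one artifact.
# `apply_discount` stands in for whatever the agent was asked to build.

def apply_discount(price: float, percent: float) -> float:
    """Return `price` reduced by `percent`, floored at zero."""
    return max(price * (1 - percent / 100), 0.0)

# Spec: a 25% discount on $100 yields $75.
def test_discount_reduces_price():
    assert apply_discount(100.0, 25.0) == 75.0

# Spec: a discount can never push the price below zero.
def test_discount_never_negative():
    assert apply_discount(10.0, 150.0) == 0.0
```

Run `pytest` and the spec either passes or it doesn't; a markdown plan can't tell you that.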

What does "spec" actually mean in your setup?

u/johns10davenport — 4 days ago

is there an open source AI assistant that genuinely doesn't need coding to set up

"No coding required." Then there's a docker-compose file. Then a config.yaml with 40 fields. Then a section in the readme that says "for production use, configure the following..."

Every option either demands real technical setup or strips out enough capability to make it pointless for actual work. Nobody's figured out how to ship both in the same product. What are non-developers supposed to do here?

u/Puzzled_Fix8887 — 4 days ago

Best coding agents if you only have like 30 mins a day?

I've been trying to get back into coding but realistically I've got maybe 20-30 mins a day. Most tools either take forever to set up or feel like you need hours to get anything done

Been looking into AI coding agents but not sure what actually works if you're jumping in and out like that

Curious what people recommend if you're basically coding on the go

u/Flat-Description-484 — 5 days ago

The quality of GPT-5.4 is infuriatingly POOR

I got a Codex membership when GPT-5.4 launched and was getting by well enough for a while. Then I started using Claude and GLM 5.1, and my production quality improved significantly. Now that I’ve hit the limits on both, I’m forced to go back to GPT-5.4, and honestly, it’s infuriating. I have no idea how I put up with this for a month. It constantly breaks one thing while trying to fix another. It never delivers results that make you say 'great'. It’s always just 'mediocre' at best. And that’s if you’re lucky. And the debugging process is a total disaster. It breaks something, and then you can never get it to fix what it broke. I’m never, ever considering paying for Codex again. Just look at the Chinese OSS models built with 1/1000th of the investment. It makes GPT's performance look like a total joke.

u/GnosticMagician — 4 days ago

Me when Codex wrote 3k lines of code and I notice an error in my prompt

"Not quite my tempo, Codex.."

"Tell me, Codex, were you rushing or dragging?"

😂 Does this only happen to me?

Got the meme from ijustvibecodedthis.com (the big free ai newsletter)

u/Complete-Sea6655 — 5 days ago

Aider and Claude Code

The last time I looked into it, some people said that Aider minimized token usage compared to Cline. How does it compare to Claude Code? Do you still recommend Aider?

What about for running agents with Claude? Would I just use Claude Code if I'm comfortable with CLI tools?

u/dca12345 — 4 days ago

Running gpt and glm-5.1 side by side. Honestly can’t tell the difference

So I have been running gpt and glm-5.1 side by side lately and tbh the gap is way smaller than what im paying for

On SWE-Bench Pro glm-5.1 actually took the top spot globally, beat gpt-5.4 and opus 4.6. overall coding score is like 55 vs gpt5.4 at 58. didnt expect that from an open source model ngl

Switching between them during the day I honestly can't tell which one did what half the time. debugging, refactoring, multi-file stuff, both just handle it

GPT still has that edge when things get really complex tho, like deep system design stuff where you need the model to actually think hard. thats where i notice the difference

For the regular grind tho it's hard to care about a 3 point gap when my tokens last way longer lol. and glm's replies come back stupid fast compared to gpt's 'Thinking' delays, which is the part that gets me

u/Jazzlike_Cap9605 — 7 days ago

Why context matters more than model quality for enterprise coding and what we learned switching tools

We’ve been managing AI coding tool adoption at a 300-dev org for a little over a year now. I wanted to share something that changed how I think about these tools, because the conversation always focuses on which model is smartest and I think that misses the point for teams.

We ran Copilot for about 10 months and the devs liked it. Acceptance rate hovered around 28%. The problem wasn't the model; it was that the suggestions didn't match our codebase. Valid C# that compiled fine but ignored our architecture, our internal libraries, our naming patterns. Devs spent as much time fixing suggestions as they would have spent writing the code themselves, so we looked for alternatives and switched to Tabnine about 4 months ago, mostly because of their context engine. The idea is that it indexes your repos and documentation and builds a persistent understanding of how your org writes code, not just the language in general. Their base model is arguably weaker than what Copilot runs, but our acceptance rate went up to around 41% because the suggestions actually fit our codebase. A less capable model that understands your codebase outperforms a more capable model that doesn't, at least for enterprise work, where the hard part isn't writing valid code, it's writing code that fits your existing patterns.

The other thing we noticed was that per-request token usage dropped significantly because the model doesn't need as much raw context sent with every call. It already has the organizational understanding. That changed our cost trajectory in a way that made finance happy.

Where it's weaker: the chat isn't as good as Copilot Chat. For explaining code or generating something from scratch, Copilot is still better. The initial setup takes a week or two before the context is fully built. And it's a different value prop entirely: it's not trying to be the flashiest AI, it's trying to be the most relevant one for your specific codebase.

My recommendation is if you're a small team or solo developer, the AI model matters more because you don't have complex organizational context. Use Cursor or Copilot. If you're an enterprise with hundreds of developers, established patterns, and an existing codebase, the context layer is what matters. And right now Tabnine's context engine is the most mature implementation of that concept.

u/AccountEngineer — 5 days ago

Self Promotion Thread - Get Your Project Pinned

After 3 years on Reddit, we want to expand to other platforms. So to help do that, every week we hold a raffle - the winner gets to have a post of their choosing pinned to the top of the subreddit. If you want a chance at being chosen, follow us on our Instagram and message us "RAFFLE". We'll message you back if you win!

Alternatively, if you want to be included in our Project Roundup like the one below, modmail us or leave a reply to this post including:

Project Roundup:

1 - Slately.art ( https://slately.art/generate ) Unrestricted access to premium AI. Tired of the $20/month AI tax? I built a unified studio for Sora 2, Flux, Kling and more with zero subscriptions; just pay for the renders you actually use.

2 - Saventify ( https://saventify.com ) Beautiful digital wedding invitations in seconds: because you have a wedding to plan, not envelopes to lick.

3 - Restnvest ( https://restnvest.com ) Most investors buy stocks they don't understand, and end up with regret. restnvest fixes that - the solution isn't more information, it's a sensible process. Know what you own, articulate your thesis, and become the investor you know you ought to be.

4 - LightShow Studio ( https://lightshowstud.io ) A simple editor to create Tesla light shows. Sync lights to music and export .fseq files ready for your car.

5 - Synta ( synta.io ) Synta turns any AI client into an n8n workflow expert, backed by 800+ verified node schemas and real production templates so your AI never hallucinates a config. Connect Synta MCP and your AI can build, edit, and self-heal n8n workflows autonomously, triggering executions, reading live node inputs, outputs, and logs, and fixing errors until every test passes.

u/BaCaDaEa — 6 days ago