u/LucianoMGuido

I’m thinking of building an alternative pre-launch platform after Product Hunt removed Ship

Product Hunt used to have a pre-launch feature called Ship, where makers could create a waitlist, validate an idea, collect early interest, and build momentum before launch.

That part of the ecosystem feels missing now.

I’m thinking about building a platform focused entirely on pre-launch products for makers, indie hackers, and startups.

The idea is:

•	create a product pre-launch page

•	add screenshots, description, category, and launch date

•	collect waitlist emails

•	allow upvotes and comments

•	let people discover upcoming products before they officially launch

•	optionally feature products in a newsletter or curated list

One thing I also want to add is launch-readiness auditing powered by Conservatory by Symphony.

Before a product goes public, makers could audit their landing page for SEO, AI SEO/AEO, accessibility, metadata, schema, semantic structure, and overall AI-readiness.

The idea is that the platform wouldn’t only help founders collect early interest, but also help them improve discoverability and presentation before launch day.

Especially now that more traffic and discovery are starting to happen through AI systems like ChatGPT, Perplexity, Gemini, and Claude — not only traditional search engines.

The goal is not to replace Product Hunt, but to recreate some of the early validation and momentum-building experience that disappeared after Ship was removed.

Would you use something like this today? Or do you think pre-launch discovery has already moved completely to X, Reddit, LinkedIn, Discord, and private communities?

reddit.com
u/LucianoMGuido — 14 hours ago

I traced what OpenAI web search actually opens on two sites. The gap between 99/100 and 50/100 comes down to 3 things

Most LLM readiness discussions focus on content quality. I wanted to look at the structural layer: what makes a page actually get opened and cited by OpenAI web search.

I built a CLI tool called Prelude by Symphony (open source, MIT, runs via npx) that uses the OpenAI Responses API with web_search_preview to trace which URLs the model opens for a query: not just the results it searched, but the pages it actually read.
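For context, the trace step is smaller than it sounds. Here's a rough TypeScript sketch of the idea (not the actual Prelude source): ask the Responses API to answer a query with the web_search_preview tool, then pull the url_citation annotations out of the final message to see which pages ended up cited in the answer.

```
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function traceCitedUrls(query: string): Promise<string[]> {
  const response = await client.responses.create({
    model: "gpt-4o",
    tools: [{ type: "web_search_preview" }],
    input: query,
  });

  // Walk the output items and collect url_citation annotations,
  // i.e. the pages the model leaned on for its answer.
  const cited = new Set<string>();
  for (const item of response.output) {
    if (item.type !== "message") continue;
    for (const part of item.content) {
      if (part.type !== "output_text") continue;
      for (const ann of part.annotations ?? []) {
        if (ann.type === "url_citation") cited.add(ann.url);
      }
    }
  }
  return [...cited];
}

traceCitedUrls("open source LLM readiness audit CLI").then(console.log);
```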

I ran it on two sites. Results:

Site A — 99/100, Grade A:

  • Schema types: Answer, FAQPage, ImageObject, Organization, SoftwareApplication, WebSite
  • 29 valid headings, H1: 1 ✓
  • Chunking quality: excellent (8 viable of 61 paragraphs)
  • GPTBot: allowed / ClaudeBot: allowed
  • Issues found: 1 (low — missing BreadcrumbList)

Site B — 50/100, Grade D:

  • Schema types: none
  • Headings: 1 total, H1: 0 — broken
  • Chunking quality: poor (0 viable paragraphs)
  • Robots.txt: not found
  • Issues found: 9

Site B had real content. The problem wasn't what it said — it was structurally invisible to LLMs.

The 3 things that explain the gap:

  1. Valid H1 hierarchy — LLMs use headings to understand page structure before reading content
  2. Structured schema (JSON-LD) — without it, the model can't identify what type of entity the page is
  3. Content chunking — paragraphs need to be independently meaningful to be citation-ready
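If you want a feel for how those three signals show up in markup, here's a rough hand-rolled check in TypeScript using cheerio. The thresholds are made up for illustration and are not how Prelude actually scores anything.

```
import * as cheerio from "cheerio";

// Rough snapshot of the three structural signals above, given a page's HTML.
function structuralSnapshot(html: string) {
  const $ = cheerio.load(html);

  // 1. Heading hierarchy: exactly one H1, with some structure beneath it.
  const h1Count = $("h1").length;
  const headingCount = $("h1, h2, h3, h4, h5, h6").length;

  // 2. JSON-LD schema: collect the @type of every ld+json block on the page.
  const schemaTypes: string[] = [];
  $('script[type="application/ld+json"]').each((_, el) => {
    try {
      const data = JSON.parse($(el).text());
      for (const node of Array.isArray(data) ? data : [data]) {
        if (node["@type"]) schemaTypes.push(String(node["@type"]));
      }
    } catch {
      // malformed JSON-LD is itself a finding
    }
  });

  // 3. Chunking proxy: paragraphs long enough to stand alone as a citation
  //    (150 chars is an arbitrary cutoff for this sketch).
  const viableParagraphs = $("p")
    .toArray()
    .map((p) => $(p).text().trim())
    .filter((t) => t.length > 150).length;

  return { h1Count, headingCount, schemaTypes, viableParagraphs };
}
```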

If you want to check your own site, search for "symphony-prelude" on npm or GitHub — the audit command is free and doesn't require an API key. The trace command uses your own OpenAI key.

Happy to discuss methodology or run a comparison on anyone's site in the comments.

reddit.com
u/LucianoMGuido — 4 days ago
▲ 1 r/SaaS

We’ve all gotten a bit spoiled: ChatGPT, Claude, Copilot, etc. gave us access to super powerful models for almost nothing, or literally free in some tiers. That was never “normal”; it was aggressively subsidized growth, and that phase is clearly ending.

If you run a SaaS that leans on AI, this doesn’t just touch your feature set; it hits your margins. Many SaaS pricing models quietly assume:

  • Token costs are tiny and will stay that way
  • New models will be better and cheaper
  • OpenAI & friends will keep pushing prices down over time

What we’re actually seeing instead:

  • More paid tiers, less useful stuff in the cheap ones
  • Tighter rate limits and quotas
  • The really valuable features getting locked behind “pro/enterprise”

In SaaS terms, that means:

  • Your COGS can spike with one provider announcement
  • Your “unlimited” or “all you can eat” plans might stop making sense
  • Your negotiation power is weak if you’re married to a single vendor

Personally, I’m building a SaaS that depends a lot on AI and I’m now assuming costs will rise, not fall. So I’m focusing on:

  • Being very deliberate about where we use top models vs. smaller/cheaper ones
  • Designing flows that can downgrade to cheaper models if needed
  • Keeping the architecture flexible enough to switch or mix providers
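To make the last two bullets concrete, here's the shape of the downgrade path I mean, as a small TypeScript sketch. The tier list and the completeWith callback are placeholders, not any real SDK.

```
// Try the preferred model first, fall back to cheaper tiers on rate limits,
// price changes, or outages. Model names here are just examples.
type ModelTier = { provider: string; model: string };

const tiers: ModelTier[] = [
  { provider: "openai", model: "gpt-4o" },
  { provider: "openai", model: "gpt-4o-mini" },
  { provider: "anthropic", model: "claude-3-5-haiku" },
];

async function completeWithFallback(
  prompt: string,
  completeWith: (tier: ModelTier, prompt: string) => Promise<string> // your own provider adapter
): Promise<string> {
  let lastError: unknown;
  for (const tier of tiers) {
    try {
      return await completeWith(tier, prompt);
    } catch (err) {
      lastError = err; // rate limit, quota, provider outage: move down a tier
    }
  }
  throw lastError;
}
```

The point isn't the ten lines of code; it's that the routing decision lives in one place, so a provider announcement becomes a config change instead of a rewrite.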

I’d love to hear from other SaaS founders:

  • Are you baking realistic AI costs into your unit economics, or just hand-waving them?
  • Do you have a contingency plan if your AI provider doubles prices?
  • Has anyone already had to raise their SaaS prices because of AI costs, and how did that conversation go with customers?

Looking for concrete experiences here, not theory: what’s actually working for you?

reddit.com
u/LucianoMGuido — 8 days ago
▲ 104 r/SaaS

Let’s be honest for a second.

You spend months building. Nights, weekends, probably some mental health too. You’re a programmer, a founder, a designer, a marketer, all in one. You’re paying for Vercel, OpenAI, Claude, Supabase, Figma, Linear, Stripe, some AI writing tool, maybe a landing page builder. Month after month.

And your revenue? $0. (Maybe $100 if you’re lucky).

Meanwhile, every single tool you “need” to build your dream is billing you on autopilot. AWS, Google Cloud, Anthropic, Notion, etc. They’re not betting on you. They don’t care if you make it. They already won. You are the product. Your ambition is the subscription.

The narrative is brilliant, honestly. “Anyone can build a SaaS now.” Yeah, and anyone can open a restaurant. Most close in year one, but the food suppliers always eat.

We work 60-hour weeks, bootstrap with credit cards or savings, and call ourselves founders while we’re actually just unpaid employees of a dozen Silicon Valley companies.

I’m not saying don’t build. I’m building too. But let’s stop pretending the ecosystem is rooting for us. It’s extracting from us.

Casino mode: the house always wins. We’re just the most motivated customers they’ve ever had.

reddit.com
u/LucianoMGuido — 9 days ago
▲ 2 r/SaaS

The original architecture: agent detects an issue → generates the fix → opens a PR → auto-merges if tests pass. Full autonomy. Clean pipeline.

I scrapped it before shipping.

Codebases carry intent that agents can’t infer from static analysis. A “redundant” attribute might exist for a specific edge case. An agent that auto-merges doesn’t just make technical mistakes; it makes product mistakes, silently, and you find out three deploys later.

What I rebuilt: the agent handles the hard part (analysis + fix generation), you review the diff, then it opens a draft PR. It never merges without explicit approval.
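Concretely, the workflow now looks something like this. Names and types are illustrative, not the platform's actual API; the only point is that nothing leaves the review step without a human verdict.

```
type ProposedFix = { file: string; diff: string; rationale: string };

type Verdict = "approved" | "rejected";

async function runFixWorkflow(
  issue: string,
  generateFix: (issue: string) => Promise<ProposedFix>,   // the hard part: analysis + fix generation
  requestApproval: (fix: ProposedFix) => Promise<Verdict>, // a human reads the diff and the rationale here
  openDraftPr: (fix: ProposedFix) => Promise<void>         // draft PR only; merging stays a human decision
): Promise<void> {
  const fix = await generateFix(issue);
  const verdict = await requestApproval(fix);
  if (verdict !== "approved") return; // rejected fixes never leave the review step
  await openDraftPr(fix);
}
```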

I’m building a platform with several specialized agents working in coordination, and this keeps coming up as the core design question: where does the agent’s responsibility end and the founder’s begin?

The SaaS teams adopting this fastest aren’t chasing full automation; they want to eliminate the tedious while keeping ownership of the consequential.

That window of “getting this right before everyone ships it wrong” feels like it’s closing fast.

reddit.com
u/LucianoMGuido — 15 days ago

The original plan was full autonomy. Analyze → detect → fix → deploy. No friction, no approvals.

We redesigned it. Every agent action goes through a review-first workflow. The agent prepares the work, you approve it, then it executes. It never acts silently.

The reason wasn’t technical. It was trust. Websites carry context an agent can’t see: a “wrong” heading might be intentional copy, a missing tag might be decorative. Full autonomy doesn’t just make technical mistakes; it makes product mistakes.

I’m building a platform with several specialized agents working in coordination. The teams adopting this fastest aren’t the ones removing humans from the loop — they’re the ones who figured out exactly where humans should be in it.

That’s the playbook nobody’s talking about yet, and the window to get it right is closing fast.

reddit.com
u/LucianoMGuido — 15 days ago

I’ve been working on my own product site and everything looked solid from a traditional SEO perspective. Good scores, structured content, decent traffic.

But I started wondering something different:

→ how do LLMs actually interpret a page?

Not crawl it like a search engine, but read it as input.

So I ran some experiments and found a few issues that typical SEO audits don’t catch:

• paragraphs that look fine visually but break when chunked

• weak semantic boundaries between sections

• missing or ambiguous entities

• content that lacks clear “units of meaning”

In other words, the site was optimized for humans (and maybe Google), but not for AI systems.

So I hacked together a small CLI to test this: it parses the page, runs chunking simulations, and scores semantic clarity across sections.
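The chunking part is the least obvious piece, so here's a toy version of what I mean in TypeScript. The heuristics and thresholds are made up for illustration; they're not what the CLI actually scores.

```
// Treat each paragraph as a candidate chunk and flag the ones that
// probably can't stand alone once they're separated from their neighbors.
const DANGLING_OPENER = /^(it|this|that|these|those|they)\b/i;

type ChunkReport = { preview: string; viable: boolean; issues: string[] };

function simulateChunking(paragraphs: string[]): ChunkReport[] {
  return paragraphs.map((raw) => {
    const text = raw.trim();
    const issues: string[] = [];
    if (text.length < 150) issues.push("too short to be independently meaningful");
    if (DANGLING_OPENER.test(text)) issues.push("opens with a pronoun whose referent lives in a previous paragraph");
    if (!/[.!?]$/.test(text)) issues.push("doesn't end on a complete sentence");
    return { preview: text.slice(0, 60), viable: issues.length === 0, issues };
  });
}

const sample = [
  "It also supports exports.", // dangling pronoun, too short
  "The audit step parses the rendered HTML, checks the heading hierarchy and JSON-LD blocks, and reports which paragraphs survive chunking as standalone units of meaning.",
];
console.log(simulateChunking(sample));
```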

First run on my own site:

→ 91/100

→ biggest issue: 0 usable chunks out of 40+ paragraphs

After restructuring content and fixing chunking:

→ 99/100

What surprised me is how different this layer is from traditional SEO. It’s less about keywords and more about whether your content can actually be parsed and understood reliably by an AI system.

What metrics or signals are you using to evaluate LLM interpretability? Curious whether others have run into similar issues.

reddit.com
u/LucianoMGuido — 17 days ago