u/0ne_stop_shop

▲ 7 r/SaaS

I built a large waitlist, but I don't know what to do next.

I built a waitlist of over 15,000 users in the past 2 months and I'm not sure what to do next. I want to send out emails, but I've never sent one before and my domain reputation is basically zero. I don't want to just start blasting emails and get my domain flagged, and everything I've read says warming up domains is basically BS. Not sure what direction to go in. Should I just start sending emails, or should I take another path?

reddit.com
u/0ne_stop_shop — 2 days ago

I'll cover the cost of the user's subscription if your LLM feature hallucinates in prod.

I'm building in the LLM reliability space and I need real production failure data to design against. The deal: you're shipping an LLM feature to real users. If it hallucinates and causes material damage (customer refund, support escalation, public incident, broken workflow, whatever costs you actual money), I'll cover the user's subscription per incident.

In exchange, I want to talk to you about what happened. What the model did, what it should have done, what it cost you, how you found out. That's the design partnership. Your incidents become my research.

Not selling anything yet. No product to pitch. Just trying to learn what failure actually looks like in production from people living it. DM me if you're shipping something and willing to swap incident details for coverage.

One thing upfront so serious people self select: before I reimburse, I'll want to see logs or a written postmortem and have a 30 minute call. Keeps everyone honest.

reddit.com
u/0ne_stop_shop — 6 days ago

Here are the three specific gaps we found:

  1. The "Duty of Care" Mandate under the Colorado AI Act (SB24-205) 
    Everyone has been focused on the EU AI Act, but for those of us in the U.S., the Colorado law is the real immediate threat. It established a "Duty of Care" for any developer of a "High-Risk AI System", which includes anything involved in financial services, employment, or even personalized legal/medical advice. Most of our ToS documents rely on a "provided as-is" clause to deflect liability, but under this new mandate, an "as-is" clause is no longer a valid shield for algorithmic discrimination or failure to provide a "reasonable care" impact assessment. If you haven't published a summary of your risk mitigation for these high-risk use cases, your ToS is functionally void in a consumer protection suit.

  2. The "Third-Party Model Drift" Insurance Gap 
    We use a mix of Claude and GPT-4o via API. Our legal team pointed out that if a model update (drift) causes our agent to commit a "Material Financial Error" on behalf of a client, our standard Technology E&O insurance almost certainly won't cover it. Most brokers have quietly introduced "Generative Output Exclusions" over the last year. If your ToS doesn't explicitly define whether a third-party model update constitutes a force majeure event, you are essentially personally guaranteeing the uptime and accuracy of OpenAI’s or Anthropic’s black-box updates. We had to rewrite our indemnity clauses from scratch to account for the fact that we cannot control the underlying weights of the models we’re building on.

  3. The SEC/FTC "Marketing vs. Reality" Misalignment 
    I'm not sure how enforceable this one is for US regulators, but I figured I'd mention it here. The SEC and FTC have officially moved beyond warnings and are now performing "Deceptive Trade Practice" audits on AI startups. If your landing page uses words like "fully autonomous," "hallucination-free," or "verified accuracy," but your ToS contains a standard disclaimer that "outputs may be inaccurate," you have created a material misrepresentation. In the eyes of the law, your marketing is now part of your contract. If your UI promises one thing and your fine print denies it, the FTC is treating that as a predatory practice, which can lead to an immediate freeze of your payment processors.

This is obviously not legal advice, only what we've found in our review.

reddit.com
u/0ne_stop_shop — 7 days ago

Curious how everyone is pricing their AI agents right now. Are you going outcome-based (customers only pay when the agent actually delivers), flat monthly fees, or straight usage-based pricing tied to tokens or actions?

And if you're doing usage-based or outcome-based, how are you handling the inference cost side? Specifically:

How do you forecast costs when a single complex task can blow through tokens unpredictably? Are you setting hard caps per customer, or eating the variance? Anyone actually running profitable margins, or is everyone just hoping volume eventually fixes the unit economics?
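For reference, the crude per-customer guard I have in mind is something like this (names and prices are made up for illustration, not anyone's actual billing code):

```python
# Hypothetical per-customer hard cap on inference spend.
# PRICE_PER_1K_TOKENS is an assumed blended input/output rate in USD.

PRICE_PER_1K_TOKENS = 0.01

class TokenBudget:
    def __init__(self, monthly_cap_usd: float):
        self.cap = monthly_cap_usd
        self.spent = 0.0

    def charge(self, tokens: int) -> bool:
        """Record usage; return False once the customer would exceed the cap."""
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS
        if self.spent + cost > self.cap:
            return False  # refuse the call instead of eating the variance
        self.spent += cost
        return True

budget = TokenBudget(monthly_cap_usd=5.0)
assert budget.charge(100_000)       # $1.00 of usage, under the cap
assert not budget.charge(500_000)   # $5.00 more would blow past $5.00 total
```

The unsatisfying part is picking the cap: too low and complex tasks fail mid-run, too high and one customer's runaway agent eats your margin.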

I really hope the rest of you are YOLOing it like Anthropic and OpenAI, just lighting your savings and credit cards on fire while we all wait for inference prices to drop another 10x.

reddit.com
u/0ne_stop_shop — 9 days ago

I wanted to see whether people could create strategies for LLMs to use to generate returns.

There's no cost to enter, but the competition is limited to 100 participants, and for now it only supports long-only equities. Future versions will introduce more products.

There is no capital required or sign up to participate. You only need an email.

reddit.com
u/0ne_stop_shop — 11 days ago

I started with a simple question: would anyone actually trust an AI to manage their investments?

My answer was no, at least not without strict boundaries. Prompting rules didn’t feel reliable enough on their own, so I built something where each AI strategy operates with its own allocated capital instead of touching a full portfolio.

That became Tradecraft. Each “agent” runs a strategy and executes trades independently.
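To make the isolation idea concrete, the model is roughly this shape (an illustrative sketch, not the actual Tradecraft code):

```python
# Each agent trades only against its own sandboxed balance,
# never the full portfolio. Names and numbers are illustrative.

class Agent:
    def __init__(self, name: str, capital: float):
        self.name = name
        self.cash = capital
        self.positions: dict[str, int] = {}

    def buy(self, symbol: str, qty: int, price: float) -> bool:
        cost = qty * price
        if cost > self.cash:  # hard boundary: can't exceed its allocation
            return False
        self.cash -= cost
        self.positions[symbol] = self.positions.get(symbol, 0) + qty
        return True

# One agent blowing through its budget can't drain another's allocation.
a = Agent("momentum", capital=1_000.0)
b = Agent("mean_reversion", capital=1_000.0)
assert a.buy("AAPL", 4, 200.0)      # $800 <= $1,000, order fills
assert not a.buy("AAPL", 2, 200.0)  # $400 > remaining $200, rejected
assert b.cash == 1_000.0            # b is untouched
```

The point of the boundary is that a bad strategy fails inside its own box instead of taking the whole account with it.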

I have no idea if this leads to better outcomes or just faster ways to lose money. So instead of speculating, I set up a competition: whoever gets the highest return by June 1st wins your choice of a MacBook Neo or a Mac mini.

Right now it’s limited to long equity positions, but more complex trading tools are being added.

Genuinely interested in whether people see this as a natural evolution of algo trading or something fundamentally different.

Check out the contest at contest.usetradecraft.io

Edit (adding a disclaimer): All trades are paper-based, and no account, capital, or anything other than an email address is required to participate in the contest.

reddit.com
u/0ne_stop_shop — 13 days ago