u/Excellent_Poetry_718

how do you decide what features should stay out of an MVP?

I have been thinking a lot about MVP scope while working on small SaaS ideas. the hardest part is not usually building the feature, it is deciding whether the feature should exist in the first version at all. sometimes a feature feels useful in your head, but it adds more UI, more edge cases, more testing, and more support later. i am trying to follow a simple rule now: if the feature does not help the user reach the main outcome faster, it probably should wait. for SaaS developers here, how do you decide what to cut from the first version? do you use any checklist, user feedback, analytics, or just your own judgement in the beginning?

u/Excellent_Poetry_718 — 15 hours ago

building a tamil grammar tool after learning from our hindi grammar tool

i have been working on a small tamil grammar and spelling correction tool, mainly because we already built something similar for hindi and started noticing the same problem in other indian languages too. english has so many grammar tools, but regional languages still feel underserved, especially for students, writers, teachers, publishers and small content teams. the hard part is not just spelling correction. tamil writing changes a lot based on context, style and region, so the tool can easily overcorrect or miss the actual meaning. right now my plan is simple: collect real examples, test corrections manually, speak with local publishers and understand whether this is useful enough before adding too many features. for people here who built micro saas products, would you validate this first with publishers/teachers, or launch a simple public tool and learn from usage?

u/Excellent_Poetry_718 — 15 hours ago

We almost lost a client in week 6. Here's the dumb thing we did.

6 weeks into a project. client goes quiet on the friday demo. then says 'this isn't what i asked for.' We built the wrong thing. not because the brief was unclear. because we assumed instead of asked. 48 hours to fix it. worked all weekend. showed him monday. he stayed. referred us three months later. still think about it.

u/Excellent_Poetry_718 — 2 days ago

The deployment checklist item we always skipped until something broke in production

every deployment we did had the same gap. we checked the obvious stuff, tests passing, environment variables set, database migrations run, health check endpoint responding. what we kept skipping was rollback validation. we had a rollback plan on paper but we never actually tested whether it worked until we needed it in production. the first time we had to roll back a bad deploy the process was slower and messier than expected because some of the assumptions in the rollback plan were wrong. started actually running rollback drills after that. deploy to staging, verify it works, then roll it back and verify the previous version is fully functional again. adds maybe 20 minutes to the staging process but means the rollback procedure is tested and familiar before you ever need it under pressure. sounds obvious but i've talked to a lot of teams who have never actually executed a rollback in a non crisis situation. anyone else building rollback testing into their regular deployment process or only finding out it's broken when something goes wrong?
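
if you want to automate the drill itself, a minimal sketch of the idea; deploy.sh, rollback.sh and the /health version payload are placeholders for whatever your own pipeline uses:

```typescript
// rollback-drill.ts: exercises the rollback path in staging before it's
// ever needed in production. deploy/rollback commands and the health
// payload are placeholders, swap in your own tooling.
import { execSync } from "node:child_process";

const HEALTH = "https://staging.example.com/health"; // hypothetical endpoint

async function assertHealthy(expectedVersion: string): Promise<void> {
  const res = await fetch(HEALTH); // built-in fetch, Node 18+
  if (!res.ok) throw new Error(`health check failed: ${res.status}`);
  const body = (await res.json()) as { version?: string };
  if (body.version !== expectedVersion) {
    throw new Error(`expected ${expectedVersion}, got ${body.version}`);
  }
}

async function drill(current: string, next: string): Promise<void> {
  execSync(`./deploy.sh staging ${next}`, { stdio: "inherit" });
  await assertHealthy(next); // new version actually serving traffic
  execSync(`./rollback.sh staging ${current}`, { stdio: "inherit" });
  await assertHealthy(current); // previous version fully functional again
  console.log("rollback drill passed");
}

drill(process.env.CURRENT_VERSION!, process.env.NEXT_VERSION!).catch((e) => {
  console.error(e);
  process.exit(1);
});
```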

u/Excellent_Poetry_718 — 2 days ago

We stopped estimating project timelines the old way. Here's what we do now.

used to estimate projects by breaking down tasks and adding up hours. always wrong. either we were too optimistic about the easy parts or we forgot to account for the parts that only show up mid project. the number that came out of the estimate had no relationship to reality. what actually works for us now, we estimate based on decisions not tasks. how many unknown things are in this project. how many places where we'll need the client to clarify something before we can proceed. how many integrations we've never done before. high unknowns means longer timeline regardless of how simple the feature list looks. we pad aggressively for anything touching a third party api we haven't used before. we pad even more for anything where the client hasn't fully decided what they want yet. the projects that run over are almost always the ones where we underestimated the number of open questions at the start not the ones where we underestimated the coding time. has anyone else shifted how they think about estimates, away from hours and toward something else?

u/Excellent_Poetry_718 — 2 days ago

What if small businesses could rent a pre built AI agent for one month before committing?

been thinking about this after watching how most small business owners approach AI. they've heard about it, they're curious, but they don't want to pay for a custom build without knowing if it'll actually work for their specific situation. the barrier isn't the price. it's the uncertainty. what if instead of selling a custom build you offered a pre built agent for a specific vertical on a one month rental. a tailoring shop gets the order management bot. a service business gets the invoice follow-up agent. a clinic gets the appointment reminder system. they run it for 30 days on their actual data with their actual customers. if it works they keep it or upgrade to something custom. if it doesn't they walk away with no commitment. the interesting thing is you've already built most of these anyway as custom projects. the productisation layer is the only new work. does this exist anywhere or is everyone still doing full custom builds for small business AI?

u/Excellent_Poetry_718 — 2 days ago

Built a micro WhatsApp agent for a small business, it solved a problem we didn't know existed

Been building small focused tools alongside agency work. This one started simple, a whatsapp bot that lets you set reminders in plain english. "remind me to call the supplier friday at 10" and it handles the rest. took about a week to build. shipped it for a client and forgot about it. few weeks later she messaged saying it was the most useful tool in her business. we assumed she meant the reminders. turns out she runs a tailoring shop and gave the number to her customers. they started messaging it to check order status, reschedule fittings, confirm pickups. she built a customer communication system out of a reminder tool without us suggesting it. no app, no portal, no login. just whatsapp which every one of her customers already used. the interesting thing, she never asked us for a CRM. she just found the closest available tool and repurposed it. the micro tools that find unexpected uses are always the ones built to solve one thing really well. has anyone else had a tool get repurposed in a way that revealed a bigger problem than the one you originally solved?

u/Excellent_Poetry_718 — 2 days ago

How do you structure repositories when multiple clients share the same core codebase?

been dealing with this on client projects where we build similar systems for different businesses, property management, invoice automation, that kind of thing. the core logic is mostly the same but each client needs customisation. started with separate repos per client which got messy fast, bug fixes had to be applied to every repo manually. moved to a monorepo with client specific config files which is cleaner but branch management gets complicated when one client wants a feature others don't. now leaning toward a core library repo that each client repo depends on as a package. updates to core flow downstream, client customisation lives in their own repo. works better but versioning the core package adds overhead. curious how others handle this, separate repos, monorepo, shared package, or something else entirely?
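
to make the shared-package option concrete, a rough sketch; the package name, its API and the version pin are all hypothetical:

```typescript
// client-repo/src/index.ts: sketch only. "@agency/property-core" is a
// hypothetical private package pinned in this repo's package.json,
// e.g. "@agency/property-core": "~2.3.0"
import { createRentEngine, type CoreConfig } from "@agency/property-core";

// client-specific behaviour lives here instead of in a branch of core
const config: CoreConfig = {
  currency: "AED",
  lateFeeGraceDays: 5,
  reminderChannel: "whatsapp",
};

export const engine = createRentEngine(config);
```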

u/Excellent_Poetry_718 — 3 days ago

What I learned running LangChain agents in production for real clients, the parts nobody talks about

been using langchain in production across a few different client projects, invoice automation, whatsapp reminders, financial reporting. the framework is great for prototyping but there are a few things that only show up when real users touch it that i didn't see covered well anywhere. context window bloat on long running tasks is the biggest one. the agent works perfectly in testing and silently degrades in production when the context fills up. no error thrown, just progressively worse output. we now do periodic summarisation checkpoints during long tasks, compress completed sections and carry a summary forward instead of appending everything. tool call failures without exit conditions is the second one. agent hits an error, retries, hits the same error, retries again forever. hard exit limit plus a flag for human review after two failures fixed this for us. state persistence across sessions is the third, langgraph helps here but the learning curve is steeper than the docs suggest. happy to go deeper on any of these if useful.
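
for the second one, the exit condition is framework agnostic and small. roughly this shape (names are ours, not a langchain API):

```typescript
// hard exit limit around a tool call, framework agnostic, all names ours
const MAX_ATTEMPTS = 2;

async function callToolWithExit<T>(
  tool: string,
  fn: () => Promise<T>,
  flagForHumanReview: (tool: string, err: unknown) => Promise<void>,
): Promise<T | null> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err; // same error twice usually means retrying won't help
    }
  }
  await flagForHumanReview(tool, lastErr); // break the infinite retry loop
  return null; // caller decides how the agent degrades from here
}
```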

u/Excellent_Poetry_718 — 3 days ago

I built a chatbot for my dad's tailoring shop. His customers started using it in a way I never expected.

my dad runs a small tailoring shop. he was spending a chunk of every day answering the same whatsapp messages, is my order ready, can i reschedule my fitting, what time do you close. nothing complicated but it added up to a lot of interrupted time. i built him a simple whatsapp bot. customers message it in plain english and it handles the standard stuff automatically. order status, appointment rescheduling, shop hours, pickup confirmations. took about a week to build. what i didn't expect, customers started trusting it more than calling directly. they'd message the bot at 11pm to check their order status instead of waiting to call the next morning. the shop started getting fewer interruptions during work hours and customers felt like they were getting faster responses. he now has a log of every customer interaction which he never had before. didn't plan for that. just happened. the whole thing cost less to build than one month of a part time assistant. if you run a small business and you're still answering the same whatsapp messages every day, this is a solved problem. happy to answer questions about how it works.
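
for anyone curious about the shape of it, a rough sketch of the inbound webhook using Twilio for WhatsApp and Claude for intent parsing; the prompts and replies are simplified and the model id is a placeholder:

```typescript
import express from "express";
import Anthropic from "@anthropic-ai/sdk";

const app = express();
app.use(express.urlencoded({ extended: false })); // Twilio posts form-encoded
const claude = new Anthropic(); // reads ANTHROPIC_API_KEY from env

app.post("/whatsapp", async (req, res) => {
  const text = String(req.body.Body ?? ""); // the customer's message
  const result = await claude.messages.create({
    model: "claude-sonnet-4-5", // placeholder model id
    max_tokens: 100,
    messages: [{
      role: "user",
      content: `Classify this tailoring-shop WhatsApp message as one of:
order_status, reschedule, hours, pickup, other. Reply with only the label.

Message: ${text}`,
    }],
  });
  const block = result.content[0];
  const intent = block.type === "text" ? block.text.trim() : "other";

  // real handlers look things up in postgres; this is just the skeleton
  const reply = intent === "hours"
    ? "We're open 10am to 8pm, Monday to Saturday."
    : "One moment, checking that for you.";
  res.type("text/xml").send(`<Response><Message>${reply}</Message></Response>`);
});

app.listen(3000);
```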

u/Excellent_Poetry_718 — 3 days ago

hit the wall with Zapier on a client project, ended up writing a small custom script, worth it?

been helping a client automate their invoice follow-ups using zapier and make. worked fine for the basic stuff but kept hitting limits when we needed conditional logic based on payment history and custom timing per client. ended up writing a small node script to handle just that part and keeping zapier for the simpler triggers. feels like a hybrid approach but it works. curious if others are doing the same, mixing no code for the easy bits and dropping into code only when you genuinely hit a wall. or do most people just push no-code tools until they break completely?
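
the custom part really is small. a sketch of the shape (thresholds and field names are made up, the real rules are client specific):

```typescript
// followup.ts: the part zapier couldn't express, follow-up timing based on
// payment history. zapier/make still own the triggers and call this via webhook.
interface Invoice {
  id: string;
  daysOverdue: number;
  latePaymentsLastYear: number;
}

function followUpDelayDays(inv: Invoice): number {
  if (inv.latePaymentsLastYear >= 3) return 2; // habitual late payer: chase sooner
  if (inv.daysOverdue > 14) return 1;          // seriously overdue: chase daily
  return 7;                                    // otherwise a weekly nudge
}
```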

u/Excellent_Poetry_718 — 3 days ago

How do you handle stripe webhook duplicate events in production?

been dealing with this on a payment integration project. stripe sends duplicate webhook events more often than the docs suggest, especially around subscription state changes and failed payment retries. if you process them twice you end up with double entries in financial records which is a real problem for accounting. currently using idempotency keys tied to the stripe event ID and checking against a processed events table before doing anything. works but adds a database lookup to every webhook handler. curious if others have a cleaner approach, are you using the event ID check, relying on stripe's built in deduplication, or something else entirely?
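
for what it's worth, the check can at least be collapsed into the insert itself, so it's one atomic query instead of check-then-write. a minimal sketch, assuming a unique constraint on event_id (table and env names are ours):

```typescript
import Stripe from "stripe";
import express from "express";
import { Pool } from "pg";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const pool = new Pool();
const app = express();

// raw body is required for signature verification
app.post("/stripe/webhook", express.raw({ type: "application/json" }), async (req, res) => {
  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      req.body,
      req.headers["stripe-signature"] as string,
      process.env.STRIPE_WEBHOOK_SECRET!,
    );
  } catch {
    return res.status(400).send("bad signature");
  }

  // the insert itself is the dedup check: with a unique constraint on
  // event_id, rowCount === 0 means we've already processed this event
  const { rowCount } = await pool.query(
    "INSERT INTO processed_events (event_id) VALUES ($1) ON CONFLICT DO NOTHING",
    [event.id],
  );
  if (rowCount === 0) return res.status(200).send("duplicate, skipped");

  // ...handle event.type here...
  res.status(200).send("ok");
});

app.listen(3000);
```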

u/Excellent_Poetry_718 — 3 days ago

We built an AI financial reporting tool for a small accounting firm, here is what actually took the longest

Disclosure: we build software at Teckas, this is from one of our recent projects.

A small financial services client had their accountant spending two full days every month pulling data from different systems and formatting reports manually. Same task, every month, zero variation. We built an agent that connects to their accounting software, pulls the data automatically and generates GAAP compliant reports in the right format. Two days became twenty minutes.

The integration part was straightforward. What actually took the longest was the QuickBooks API pagination behavior: it's inconsistent across date ranges, and if you don't handle it carefully you silently get incomplete data with no error. Took several iterations to get reliable extraction every time.
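
A stripped-down sketch of the defensive loop we converged on, with a hypothetical qboQuery() wrapper around the /query endpoint; the response shapes are illustrative:

```typescript
// qboQuery() is a hypothetical wrapper around the QuickBooks Online /query
// endpoint; the response shapes shown here are illustrative
async function fetchAllInvoices(
  qboQuery: (q: string) => Promise<any>,
  start: string,
  end: string,
): Promise<any[]> {
  const where = `WHERE TxnDate >= '${start}' AND TxnDate <= '${end}'`;
  const pageSize = 100;
  const rows: any[] = [];

  for (let pos = 1; ; pos += pageSize) {
    const res = await qboQuery(
      `SELECT * FROM Invoice ${where} STARTPOSITION ${pos} MAXRESULTS ${pageSize}`,
    );
    const page = res?.QueryResponse?.Invoice ?? [];
    rows.push(...page);
    if (page.length < pageSize) break; // short page = last page, in theory
  }

  // the cross-check that catches the silent short reads
  const count = await qboQuery(`SELECT COUNT(*) FROM Invoice ${where}`);
  const expected = count?.QueryResponse?.totalCount ?? rows.length;
  if (rows.length !== expected) {
    throw new Error(`pagination gap: fetched ${rows.length}, expected ${expected}`);
  }
  return rows;
}
```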

The second hard part was prompt engineering for consistent GAAP formatting. Claude handles the generation well but maintaining formatting consistency across different report types needed a validation layer before anything goes out.
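
The validation layer itself is mostly unglamorous structural checks before anything goes out. A stripped-down sketch (the required sections are examples, not accounting guidance):

```typescript
// cheap structural checks on a generated report before it ships; real
// validation is much richer, this is only the shape of it
const REQUIRED_SECTIONS = ["Balance Sheet", "Income Statement", "Cash Flow"];

function validateReport(text: string): string[] {
  const problems: string[] = [];
  for (const section of REQUIRED_SECTIONS) {
    if (!text.includes(section)) problems.push(`missing section: ${section}`);
  }
  if (/\d+\.\d{3,}/.test(text)) problems.push("unrounded amount found");
  return problems; // non-empty -> regenerate or route to a human
}
```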

For anyone building in the fintech automation space, the data extraction reliability problem is underestimated. The AI part is fast. Getting clean consistent data in is where most of the project time goes.

Curious what others are building in financial automation and what integration problems you've run into.

u/Excellent_Poetry_718 — 3 days ago

[For Hire] Built a WhatsApp AI agent a client turned into a full customer CRM, available for similar projects

Hi, we are Teckas Technologies, a software development team from India. Sharing a specific project in case anyone needs something similar built. We built a WhatsApp AI agent for a client that was originally designed for internal task reminders. The user messages in plain English, "remind me to follow up with John tomorrow at 3", and it handles scheduling and follow-ups automatically. A few weeks after deployment the client gave the WhatsApp number to her customers. They started using it to check order status, reschedule appointments and confirm pickups. She turned a reminder tool into a full customer communication system without us suggesting it. Built on Twilio for WhatsApp routing, Claude API for natural language parsing, Node.js for the logic layer and PostgreSQL for storage. If you're building something in the AI automation or business communication space we're available immediately. Hourly from $15/hr, small tasks from $150, full projects from $800. DM with your project scope and budget range.

u/Excellent_Poetry_718 — 3 days ago

I used to think fixed price projects were the smart move. Then week 6 happened.

honestly still think about this one. we were building a property management system, six weeks in, everything felt fine. friday demo, client goes quiet. then says "this isn't what I asked for." turned out we built the rent collection flow completely backwards from how his team actually worked. not because the brief was wrong. because we filled the gaps with our own logic instead of asking. he wanted out. we asked for 48 hours. rebuilt the core over the weekend. showed him monday. he stayed. paid in full. referred us three months later. but the thing that still sits with me, nothing in the brief said we were wrong. we just assumed. every assumption in a fixed price project is a debt. sometimes you pay it in week 2. sometimes week 6. now we spend the entire first call just asking questions. no solutions discussed. just questions. has anyone else had a fixed price project nearly blow up because of something that wasn't even in the brief?

u/Excellent_Poetry_718 — 3 days ago

The client who nearly walked in week 6 ended up referring us three months later

Six weeks into building a property management system, rent collection, maintenance tracking, financial reporting, and everything felt fine on our end. Week 6 demo, client goes quiet, then says "this isn't what I asked for." We had built something technically solid but the rent collection flow was completely backwards from how his team actually operated. Six weeks, wrong direction. He wanted to pull the contract and had every right to. We asked for 48 hours. Got the whole team on a call that night, went back through the original brief line by line, found six places where we had assumed instead of asked, spent the weekend rebuilding the core flow and showed him Monday. He stayed, paid in full, and referred us to another client three months later. The thing that still sits with me is that nothing in the brief said we were wrong. We just filled gaps with our own logic instead of asking. Every assumption you make during scoping is a debt you pay later, sometimes small, sometimes week 6. Has anyone else nearly lost something big because of a miscommunication early on?

u/Excellent_Poetry_718 — 3 days ago

The one monitoring mistake we kept making on client deployments

Took us longer than it should have to figure this out. Every time we deployed something for a client we set up monitoring for the obvious stuff, uptime, error rates, response times. Standard stuff. What we kept missing was business logic monitoring. The app was technically healthy but something in the actual workflow was silently broken. An invoice not sending. A webhook not firing. A scheduled job running but producing wrong output. None of that shows up in your standard infra dashboard. We started adding a simple layer on top, key business events logged explicitly, alerts if expected events stop happening within a time window. Invoice agent hasn't sent anything in 2 hours? Alert. Reminder scheduler ran but zero reminders fired? Alert. Sounds obvious but it took a few painful production incidents to actually build it into our standard setup. Anyone else separating business logic monitoring from infra monitoring or treating them as the same thing?
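
The layer itself is simpler than it sounds, a scheduled job comparing last-seen timestamps against expectations. A rough sketch, with table and event names made up:

```typescript
import { Pool } from "pg";

const pool = new Pool();

// business event -> max silence tolerated, in minutes (names are ours)
const EXPECTATIONS: Record<string, number> = {
  "invoice.sent": 120,   // invoice agent quiet for 2h? alert
  "reminder.fired": 60,
};

async function checkBusinessEvents(alert: (msg: string) => Promise<void>) {
  for (const [event, maxMinutes] of Object.entries(EXPECTATIONS)) {
    const { rows } = await pool.query(
      "SELECT MAX(created_at) AS last FROM business_events WHERE name = $1",
      [event],
    );
    const last = rows[0]?.last ? new Date(rows[0].last).getTime() : 0;
    if (Date.now() - last > maxMinutes * 60_000) {
      await alert(`${event}: nothing logged in the last ${maxMinutes} minutes`);
    }
  }
}
```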

u/Excellent_Poetry_718 — 3 days ago

We shipped something nobody asked for and it became our most used tool

Built an internal task tracker for our team about a year ago. Nothing fancy, just a way to log what everyone was working on and flag blockers. Took maybe three days. We used it daily and forgot it even existed after a while. Showed it to a client during a call once just to explain how we track sprints and they immediately asked if they could use it for their team. Then another client asked. Then a third. We never marketed it, never wrote a landing page, never even gave it a proper name. It just got used because it solved a real problem in a simple way. The tools we've built for ourselves have found users faster than anything we deliberately set out to productise. Not sure what the lesson is exactly but there's definitely something in building for your own pain first and seeing what happens after. Has anyone else had something they built for internal use quietly become something people actually wanted?

u/Excellent_Poetry_718 — 3 days ago

What if the best way to validate a startup idea is to just build it for one person first?

Been thinking about this after watching how most of our best projects actually started. Not from a big market research exercise, from one specific person with one specific problem. The invoicing agent started because one client complained about chasing late payments. The WhatsApp bot started because one business owner wanted task reminders. The financial reporting tool started because one accountant was drowning in manual data entry. None of them started as startup ideas. They started as solutions for one real person. The interesting thing is the ones that started this way are also the ones that actually got used. The ones we tried to build for a broad market from day one always struggled to find real users. One person with a genuinely painful problem is a better starting point than any market size calculation. Build it for them first. If it works find ten more like them. If those ten use it you might have something. Does this exist as a formal methodology anywhere or is it just what people figure out the hard way?

u/Excellent_Poetry_718 — 3 days ago

Built a micro tool that turns podcast episodes into social posts, here's the one problem that took longest to solve

Been building small focused tools alongside agency work. This one came from watching podcast creators spend hours every week writing social posts for episodes they'd already recorded. So I built PodToPosts, takes the audio, transcribes it, generates posts in 10+ formats automatically. The technical part was honestly pretty straightforward. What I didn't expect was the voice problem. Every creator sounds different, their phrasing, their rhythm, the words they use. Generic AI output sounds like every other AI post on LinkedIn and creators noticed immediately. Spent weeks on this. Fed the model examples of each creator's past content, built a style extraction step before generation, kept iterating until the output actually felt like them and not like a tool. 12 paying customers now, tiny but real. For content tools the voice matching is the actual product, everything else is plumbing. Has anyone else built something where the AI output quality was the core challenge rather than the integration?
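
The style extraction step, heavily simplified; the model id and prompt are placeholders, and the real pipeline iterates on a lot more than one profile string:

```typescript
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from env

// distil a creator's past posts into a reusable style guide that gets
// prepended to every generation prompt afterwards
async function extractStyleProfile(pastPosts: string[]): Promise<string> {
  const res = await client.messages.create({
    model: "claude-sonnet-4-5", // placeholder model id
    max_tokens: 500,
    messages: [{
      role: "user",
      content: "Describe this creator's voice (phrasing, rhythm, vocabulary, "
        + "typical sentence length) as a short style guide:\n\n"
        + pastPosts.join("\n---\n"),
    }],
  });
  const block = res.content[0];
  return block.type === "text" ? block.text : "";
}
```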

u/Excellent_Poetry_718 — 3 days ago