u/createvalue-dontspam

Why does bedtime always turn into doomscrolling?

Most sleep apps are just scoreboards.

“Congrats, you slept terribly.” Thanks. Super helpful. 😂

The real problem was never tracking sleep after it happens.

It’s actually falling asleep in the first place.

But instead of fixing that, the industry gave us:

  • more charts
  • more notifications
  • more reasons to check our phones at 1 AM

Late-night doomscrolling, stress, racing thoughts, bad routines, noisy rooms: that’s what actually ruins sleep.

So we built Naptick AI.

An AI sleep companion designed to help before sleep begins.

It:

  • runs adaptive sound + light routines
  • reduces phone distractions
  • monitors room conditions
  • includes an AI sleep coach
  • learns what helps you sleep better over time

Because maybe the worst place for a sleep app… is on the device keeping you awake.

What’s your unpopular opinion about sleep apps?

We launched today on Product Hunt and would genuinely love feedback.

Please show your support and share your feedback on PH → https://www.producthunt.com/posts/naptick-ai

u/createvalue-dontspam — 3 hours ago

every application looks the same now and I'm not sure the resume is worth anything anymore

I have noticed something weird lately. The quality of applications from candidates has improved on paper, but in practice it's substandard.

Their cover letters are flawless: no typos, strong structure, all the relevant keywords for the job they're applying to. But when we get on a call and ask them to walk through their experience, they can't explain half of what they've written with anything like that clarity.

The volumes of applications don't help either. We posted for a mid-level role and had over 500 applications in 2 days. They all looked similar, following the same template.

To curb this, we made a few changes that have helped, not solved it, but helped. We added a short skill assessment before scheduling any interviews and saw the volume drop sharply. We also introduced async video screening early on, which gave us a much better understanding of whether someone had actually understood the role. We quietly dropped the cover letter requirement altogether.

It’s still not a perfect system. We're still making mistakes. But the candidates we're getting to the final stages are noticeably more qualified.

Curious what others are doing: Are you changing how applications come in, tightening your assessments, using specific tools? Would genuinely like to know what's actually working at your end.

u/createvalue-dontspam — 19 hours ago

What if every business had an AI COO running operations 24/7?

Most businesses today run on fragmented systems.

One tool for CRM.

Another for support.

Another for calling.

Another for follow-ups.

And somehow, teams are expected to hold everything together manually.

We kept asking ourselves:

What if one AI system could operate the business for you?

So we built Frontdesk AI.

An AI COO that can:

  • call, text & email customers 24/7
  • generate AI voice agents from your website
  • manage CRM and ticketing workflows
  • and automate customer operations end-to-end

Instead of stitching together dozens of SaaS tools, businesses get one operational AI layer that handles communication and workflows continuously.

The goal isn’t just automation.

It’s helping smaller teams operate with the efficiency of much larger companies.

We launched today and would genuinely love feedback:

What business workflow would you trust AI to fully handle first?

Please show your support on PH → https://www.producthunt.com/posts/frontdesk-ai-2

Launching today on Product Hunt - ngram, an AI video agent for product and marketing teams

Hey r/GrowthHacking,

We're launching ngram on Product Hunt today and the mods kindly offered to pin this so the community knows what we're up to.

Quick context on what we built. Most product and marketing teams sit on a backlog of things they should make videos for. Launches, feature updates, sales demos, onboarding walkthroughs, support tutorials, social cutdowns. The bottleneck is rarely ideas. It's production. Editing is slow, agencies are expensive, and template tools still leave you doing the hardest parts.

ngram is built for that. You bring what you already have, like release notes, a rough screen recording, screenshots, a doc, or a URL, and ngram plans the message, writes the script, builds the storyboard, and produces a polished, on-brand video. You can edit through chat, regenerate scenes, create variants for different audiences and channels, and translate into 100+ languages.

For growth teams specifically, the unlock is producing the right video per channel and audience without the production overhead. A LinkedIn cut, a 9:16 social version, an ad variant, an onboarding walkthrough, a support tutorial, all from the same source material.

We've set up a 25% off code for the PH community, PH25, valid on Plus and Pro plans until May 31st.

PH page: https://www.producthunt.com/products/ngram?launch=ngram

Product: ngram.com

Happy to answer anything in the comments.

What if software could redesign itself as your needs change?

Most software today is static.

You buy a tool.

Adapt your workflow around it.

Wait months for feature requests.

And eventually stack more SaaS tools on top.

We kept asking ourselves:

What if software could evolve with the user instead?

So we built CraftBot with Living UI.

It’s a proactive AI agent that can:

  • build apps and dashboards from scratch
  • import existing GitHub projects
  • modify interfaces through conversation
  • and directly operate the UI it creates

The interesting part is that the UI is “living.”

It’s not a finished dashboard.

The agent stays context-aware of the interface and continuously updates it as your workflow changes.

Need:

  • a custom CRM?
  • an AI-powered Kanban board?
  • a dashboard connected to multiple tools?

CraftBot can generate it, operate it, and evolve it over time.

We launched today and would genuinely love feedback:

What workflows do you wish software adapted to automatically?

Please support on PH →

https://www.producthunt.com/posts/craftbot-with-living-ui

5 things we changed when we stopped treating the resume as the whole story

We had a rough patch last year where three hires in a row didn't work out the way we expected. They were great candidates, but not for us.

When we looked back at the process, we realized we'd been filtering based on how someone described themselves rather than what they could actually do.

A few things we changed that made a difference:

  1. Stopped letting the ATS do the thinking for us. We hadn't reviewed our keyword logic in over a year. Once we did, we realized how many people we'd been quietly rejecting before anyone on the team had a real look at them.
  2. Rewrote job descriptions from scratch. "5+ years required" for a role that barely existed 3 years ago. Every line we cut from the wishlist made the applicant pool a little more honest.
  3. Shortened the interview funnel. We were running four rounds for roles that didn't need it. Good candidates were dropping off. That was the hardest part: starting again.
  4. Actually tested for the job. This was the one that changed things the most. Replacing the resume screen with a short skill assessment meant we stopped letting formatting and school names do the work.
  5. Built a feedback loop. We started tracking why hires weren't working out. Obvious in hindsight, but we genuinely hadn't been doing it.

I’m not saying we've figured it all out. But at least now the process reflects what we're actually trying to find out. What would you change about yours?

u/createvalue-dontspam — 2 days ago

What if subscriptions, credits, and usage billing worked together natively?

Most SaaS teams think adding payments is the hard part.

It’s not.

The real complexity starts after that:

  • subscription states
  • usage tracking
  • credits
  • feature access
  • tax compliance
  • pricing migrations
  • failed payment recovery
  • global currencies

Over time, billing logic slowly spreads across your entire codebase.

Changing pricing plans, adding limits, or experimenting with monetization suddenly becomes an engineering project.

We kept seeing this problem repeatedly.

So we built Kelviq.

Kelviq is a monetization platform for SaaS, AI, and digital products that handles:

  • usage-based billing
  • global tax & compliance
  • credits & feature access
  • payments & subscriptions
  • digital delivery & license keys
  • pricing updates without redeploys

The goal was simple: Help teams monetize globally without rebuilding billing infrastructure from scratch.

We launched today and would genuinely love feedback.

Where does billing or monetization usually break down for your team?

Please support on PH →

https://www.producthunt.com/posts/kelviq-2

u/createvalue-dontspam — 2 days ago

how do you actually know if someone can do the job, or if they're just good at getting hired?

This has happened to us twice in the past year. Candidate looks strong on paper, interviews well, references are solid. Then 90 days in it's obvious the performance just isn't there.

I keep going back and forth on where the process is failing. Are we asking the wrong questions? Is our job description pulling in the wrong people? Or are some candidates genuinely just very good at the interview game?

We tried adding a skills test through Codility for one role and used structured scorecards in Lever for a few others. Both helped a bit but I still don't feel like I have a reliable signal on actual performance.

For people who've gotten this right: what's actually working? Work samples, take-home tasks, longer panels, something else? And does whatever you're doing hold up when you're filling multiple roles at the same time?

u/createvalue-dontspam — 3 days ago

What if AI agents had persistent work memory across your tools?

Everyone is building AI agents right now.

But most agents still struggle with one thing:

context.

Business context lives across Slack threads, CRM updates, support tickets, GitHub activity, Jira tasks, emails, and dozens of other tools.

Most teams solve this in one of two ways:

  • dump raw API responses into the model
  • or build static RAG pipelines

Both create problems fast.

Raw context explodes token usage.

Static snapshots go stale almost immediately.

So we started asking:

What would a persistent, continuously updating context layer for AI agents look like?

That’s why we built Weavable.

Weavable creates live shared work context across your tools and exposes it through a single MCP endpoint agents can reason from.

Instead of constantly re-ingesting fragmented updates, agents work from structured context that stays mapped and updated over time.

The result:

  • lower token usage
  • more reliable outputs
  • better agent behavior in real workflows

Curious how others here are handling context for agentic systems today.

Please support on PH →

https://www.producthunt.com/posts/weavable

u/createvalue-dontspam — 3 days ago

Why does nobody talk about AI agent security yet?

Most people are excited about AI agents.

Very few are asking what happens when those agents go rogue.

Today, AI agents can:

  • execute shell commands
  • access local files
  • connect to APIs
  • process sensitive data
  • operate autonomously with system permissions

But almost nobody verifies them.

We kept seeing the same problem:

AI agents are scaling faster than the security infrastructure around them.

So we built ClawSecure.

An AI-powered antivirus for AI agents.

It:

  • scans agents before install
  • monitors runtime behavior
  • detects malicious actions & code mutation
  • flags credential harvesting & data exfiltration
  • provides instant verification through an API

We’ve already audited thousands of agents and found a surprising amount of risky behavior hiding underneath seemingly normal installs.

Launched today and would genuinely love feedback from developers, security engineers, and anyone building with agents.

What do you think is the biggest security risk in the AI agent ecosystem right now?

Please show your support on PH → https://www.producthunt.com/posts/clawsecure-2

u/createvalue-dontspam — 3 days ago

What if you knew exactly which investors back people like you?

Most founders approach fundraising like a volume game.

  • Build a huge investor list.
  • Send hundreds of cold emails.
  • Hope someone replies.

But investors are often surprisingly pattern-driven.

Some consistently back:

  • solo founders
  • technical founders
  • ex-FAANG operators
  • repeat founders
  • certain universities
  • specific geographies

We kept asking ourselves: What if founders could see those patterns before fundraising?

So we built InvestorFinder.

You paste your profile and startup idea.

The platform matches you with investors based on actual portfolio behavior:

  • founder backgrounds
  • prior companies
  • university patterns
  • geography
  • check sizes
  • founder archetypes

The goal is simple: Help founders stop wasting time pitching investors who were never the right fit.

We launched today and would genuinely love feedback.

How do you currently research investors before fundraising?

Please show your support on PH → https://www.producthunt.com/posts/investorfinder

u/createvalue-dontspam — 3 days ago

if I could rebuild our hiring process from scratch, I'd change these 4 things first

I feel the current hiring system is outdated and needs an overhaul.

We have been doing the same thing over and over for the past decade or more: publish a JD on a portal, let the portal surface candidates, run screening rounds, and then send an offer document.

A minor variation here or there in some industries, but the core process is the same.

A couple of things that I feel we could fix are:

1. Stop leading with resumes. Resumes are crafted with precision because there is ample guidance on the internet on how to draft an effective one. I feel they will be replaced by skills-based assessments in which candidates must demonstrate their skills before the first round.

2. Define roles by outcomes, not requirements.

Previously, a JD would read something like: senior customer success manager, 5 years of work experience, worked on tools like HubSpot, Salesforce, etc. Slightly vague, so a candidate didn't know what to actually expect when they started.

In the future, I feel JDs will be more outcome-driven. An example:

e.g. Hiring a Senior Customer Success Manager to reduce churn in our top accounts, grow expansion revenue, and improve onboarding completion in the first 90 days. The person in this role should be able to run difficult renewal calls, spot at-risk accounts early, and turn product feedback into action across sales and product.

This will ensure that the candidate knows exactly what's expected of them.

3. One structured screen before any interview.  

Currently, the flow is as follows: an HR person calls the candidate, asks a few qualifying questions, assesses their responses, and then sends them to the hiring manager. In the future, I feel this will be replaced by a problem-based video interview completed before the HR call, or one that goes straight to the hiring manager.

4. Track quality, not just speed.

Some of the companies I’ve spoken with track time-to-hire as a measure of success. But they fail to assess the candidate's performance 6 months or a year after they have joined the organisation. We are trying to incorporate processes that help us do this. It's quite tedious at this point. We'll share more details in a year.

What would you change first if you were rebuilding from zero?

u/createvalue-dontspam — 6 days ago

A friend of mine runs a quarter-million-dollar business. It's not a startup that’s still figuring out product-market fit.

It's a real company that drives revenue and faces serious consequences for every candidate it fails to hire and every position it fails to fill.

He shared that he had lost his best candidate of the quarter because a hiring manager took 11 days to send the feedback after the final round.

And no one blames the candidate: he had two other offers from companies in the same niche. He waited as long as he could, kept everyone fully informed throughout the process, and finally took the offer he found more suitable.

And while we were having lunch, between bites, he did the math out loud, which honestly surprised me: the cost of the open role sitting empty for another 6 to 9 weeks, the cost of restarting the search (worst case), and the knock-on impact on his Q2 targets.

And it's a common story: the sourcing and screening were done right, yet the process stalled at the last moment because the decision was stuck with someone who had 50 other things on their plate.

He doesn't feel that hiring managers are bad people. It’s just that they don't think about the cost or the real impact a 10-day delay can have on the business.

Is anyone solving this structurally? Not nudge emails. Something that makes the cost of delay visible before the candidate is already gone.  I'd love to help my friend out.

u/createvalue-dontspam — 7 days ago

Most Shopify merchants already know what they want automated.

The problem is actually setting it up.

Things like:

  • tagging VIP customers
  • syncing inventory
  • routing wholesale orders
  • sending Slack alerts
  • updating Sheets
  • triggering follow-ups

Sound simple… until you open an automation builder.

Then suddenly you’re dealing with triggers, routers, mappings, conditions, failed runs, and debugging workflows 😅

We kept wondering:

Why can’t store automation work more like giving instructions to a teammate?

So we built MESA.

You describe the workflow in plain English:

“When orders over $500 come in, notify Slack, tag the customer VIP, and add them to a Klaviyo flow.”

MESA builds the automation for you.
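Under the hood, that one sentence stands in for the trigger-and-action logic you'd otherwise wire up by hand. A rough sketch of the equivalent rule (hypothetical names, not MESA's actual output):

```python
# Hypothetical sketch of the plain-English rule above as hand-wired logic.
# Function names like handle_order are placeholders, not MESA's API.

VIP_THRESHOLD = 500  # dollars

def handle_order(order, actions):
    """Apply the 'orders over $500' rule to a single incoming order."""
    if order["total"] > VIP_THRESHOLD:
        actions.append(("slack", f"New VIP order: ${order['total']}"))
        actions.append(("tag_customer", order["customer_id"], "VIP"))
        actions.append(("klaviyo_flow", order["customer_id"]))
    return actions

# A $750 order triggers all three actions; a $100 order triggers none.
acts = handle_order({"total": 750, "customer_id": "c42"}, [])
```

The point is that even this trivially small rule already involves a trigger, a condition, and three mapped actions, which is exactly the setup work the plain-English description is meant to replace.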

It connects with Shopify plus tools like:

  • Slack
  • Klaviyo
  • Google Sheets
  • ShipStation
  • Recharge
  • Etsy
  • and more.

You can even add approval steps when you don’t want full autopilot.

We launched today on Product Hunt 🚀

Curious:

What’s the most annoying repetitive task in your store right now?

Please support on PH →

https://www.producthunt.com/posts/mesa

u/createvalue-dontspam — 7 days ago

Last year, we had an ad ops role open for almost 3 months. It was a challenging role, and we offered decent comp. We got a lot of applications.

I kept asking our recruiter why the shortlist was so short.

She kept saying "there's just no one good out there." So I did something dumb and pulled a random sample of the rejected pile myself, and spent an afternoon going through it.

There were strong profiles in there I would have genuinely called.

They'd been auto-rejected because our knockout filters hadn't been updated since a JD from the previous year, the keyword logic was matching on exact phrases our new JD didn't use, and one guy had written "client retention" where we'd written "customer success."

That afternoon made me realise the ATS wasn't neutral. It was just quiet about it.

A few ways it was filtering people out without us knowing:

  • Knockout filters no one had reviewed. Default settings auto-reject for small gaps. Nobody had touched ours in months.
  • Exact keyword matching instead of skill matching. If the phrase didn't match the JD word-for-word, it didn't count. Synonyms, alternate titles, same job done differently, all gone.
  • Resume formatting penalised over actual ability. The ATS ranked on presentation. A clean CV from a recognisable company scored higher than a messier one from someone who could genuinely do the work.
  • Old logic bleeding into new roles. Reject rules from a previous JD were still running quietly in the background, narrowing our pool for a completely different hire.
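To make the "client retention" vs. "customer success" failure concrete, here's a toy sketch (hypothetical, not our actual ATS logic) of exact keyword matching next to a synonym-aware check:

```python
# Toy illustration of exact keyword matching vs. a synonym-aware check.
# Hypothetical logic, not any real ATS's implementation.

JD_KEYWORDS = {"customer success"}
SYNONYMS = {"customer success": {"client retention", "account management"}}

def exact_match(resume_text):
    """Pass only if a JD phrase appears word-for-word."""
    return any(kw in resume_text.lower() for kw in JD_KEYWORDS)

def synonym_aware_match(resume_text):
    """Also accept known synonyms and alternate titles."""
    text = resume_text.lower()
    for kw in JD_KEYWORDS:
        if kw in text or any(s in text for s in SYNONYMS.get(kw, ())):
            return True
    return False

resume = "Led client retention programs for enterprise accounts."
# exact_match(resume) -> False: auto-rejected before a human ever looked
# synonym_aware_match(resume) -> True: the candidate we'd have called
```

The synonym table is the part nobody maintains: ours effectively didn't exist, so any candidate who described the same job in different words disappeared silently.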

We realised late that the ATS wasn't broken. We had just never looked at what it was doing.

Are you relying solely on ATS filters, or have you added something else to catch what they miss?

u/createvalue-dontspam — 8 days ago

Every online meeting creates work.

Follow-ups.

Scheduling.

Notes.

CRM updates.

Docs.

Action items.

And when you stack multiple calls every day, the admin work becomes another full-time job.

We kept asking ourselves:

What if post-call work got completed before the call even ended?

So we built Shadow 2.0.

Shadow listens to the conversation in real time and handles tasks while you’re still talking.

Today it can:

  • create notes & summaries live
  • draft follow-up emails
  • schedule meetings automatically
  • manage workflows in the background

And this is just the start.

The biggest shift in 2.0:

Shadow is now a native desktop app.

No bots joining calls.

No manual setup.

It simply detects meetings automatically and works quietly in the background.

Built for anyone living in back-to-back meetings:

sales, recruiting, consulting, ops, founders, support teams.

We launched today and would genuinely love feedback 👇

If you could automate one thing that happens after every meeting, what would it be?

Please show your support on PH → https://www.producthunt.com/posts/shadow-2-0-2

u/createvalue-dontspam — 8 days ago

Every reporting stack has one missing piece.

A tool your team depends on…

but can’t easily connect to dashboards or analytics.

So teams end up:

  • exporting spreadsheets
  • maintaining scripts
  • asking engineering for help
  • or manually stitching reports together

It works.

Until it breaks.

We kept asking ourselves:

What if teams could connect virtually any API to their reporting stack without writing code?

So we built Custom Integrations for Databox.

You can:

  • connect almost any API
  • sync data automatically
  • turn responses into structured datasets
  • analyze everything alongside existing metrics

That means your reporting finally includes the tools that usually get left out.

No manual exports.

No fragile workflows.

No waiting on engineers.

We launched today and would genuinely love feedback 👇

What’s the one tool your reporting stack still doesn’t support properly?

Please show your support on PH → https://www.producthunt.com/posts/custom-integrations-by-databox

u/createvalue-dontspam — 8 days ago

Most people research founder networks manually.

LinkedIn tabs. Crunchbase searches. Random Google digging.

And even after all that, you still miss context:

  • who actually worked together
  • who overlapped during the same period
  • which founders came from the same teams
  • and which alumni networks consistently produce strong startups

We kept asking ourselves:

What if founder ecosystems were mapped visually instead of manually researched?

So we built Alumni Founder.

You enter any company or university and instantly see:

  • funding raised
  • overlap timelines
  • every founder that spun out
  • connection strength between people
  • cross-company founder comparisons

People are already using it for:

  • VC deal sourcing
  • warm intros
  • founder discovery
  • co-founder search
  • talent ecosystem research

We launched today and would genuinely love feedback.

What company alumni network would you explore first? 🚀

Please show your support on PH → https://www.producthunt.com/posts/alumni-founder

u/createvalue-dontspam — 8 days ago

So there's been a lot of posts going around on Reddit and LinkedIn lately.

Candidates calling out companies, tagging recruiters, writing long posts about being ghosted after multiple rounds. I've been reading them and honestly not saying much because I've been on the other side of this. And it is frustrating.

I had dinner last night with a friend who runs a recruiting agency.

She told me about a candidate who cleared all three rounds: Phone screen, technical, and full panel. The company was close, genuinely close to hiring him. Then a hiring freeze hit, and the whole thing got put on hold.

Because it's a startup, everyone assumed someone else had sent the email, but nobody had. A classic case of nothing happening because no one was accountable.

The candidate posted on LinkedIn a few days later.

He didn't name anyone specifically, but tagged the company. My friend reached out to him and tried to explain that it wasn't intentional. And I believe her. But I also think the candidate is completely right to be frustrated.

I've read that 61% of job seekers in the US say they've been ghosted after an interview. And I think the number is going up.

The more rounds someone has cleared or participated in, the more they feel they're owed at least a rejection or some form of feedback. And they are right.

After hearing this, we've started sending rejection emails on our end. But it's honestly not airtight yet. There are still gaps, especially when a role gets paused mid-process.

So genuine question: are there any tools you're using that actually help send structured, thoughtful feedback at scale? Or have you built a system internally that works? Would love to hear what's actually holding up in practice.

u/createvalue-dontspam — 9 days ago

Recording a quick video sounds easy.

But in reality?

You record.

Re-record.

Trim. Edit. Fix audio.

Then still feel like it’s not quite right.

So most people either over-edit… or just send something half-done.

We kept asking:

What if making a video felt as easy as talking?

So we built something around that idea.

With Velo 2.0:

  • You record your screen once
  • It turns into a polished video + a written doc
  • You edit everything by chatting (no timelines)
  • It can generate narration even if there’s no audio
  • And rewrite your script based on who it’s for

No complex editing.

No starting over.

Just record → refine → share.

We just launched today.

Curious: what’s the most annoying part of making screen recordings for you?

Please show your support on PH → https://www.producthunt.com/posts/velo-2-0

u/createvalue-dontspam — 9 days ago