u/Electrical-Shape-266

Six-week update from the Shenzhen robot cleaning pilot. Still running, no recall, footage looks less staged than at launch

Quick update for anyone tracking the 58 / X Square Robot home cleaning service in Shenzhen.

Launched mid-March, still active; customers can still book through the 58 home services app. The robot side is mostly a wheeled dual-arm Quanta platform doing the structured bits, while a human cleaner handles the judgment work.

Two things stand out vs the launch coverage.

One, the bots in the recent footage are visibly handling more without intervention. At launch, a lot of the wiping passes needed a human to pre-stage the surface. The newer clips show the robot picking up small clutter (a sock, a piece of cereal, what looked like a kid's book) before wiping, which is the part most robot-vac people thought was years off.

Two, the failure-mode footage is also out there now, and it is unglamorous in a believable way. The robot gets stuck on a high-pile rug, misreads a glass coffee table as floor, and needs the human to physically rotate it once because it boxed itself into a corner near the fridge. None of those are surprising, but the fact that the footage exists and was not airbrushed is the part I find interesting. Compare that to certain other humanoid demo cycles.

Six weeks of real apartments is not a victory lap, but it is a different signal than "video at trade show" which is most of what this industry runs on.

Pretty sure the same pilot expands to Beijing next, will post if I see confirmation.

reddit.com
u/Electrical-Shape-266 — 3 days ago

My PM stack for tracking 8 competitors weekly. Quick rundown after swapping two tools last month

SaaS PM here. Tracking 8 competitors is not glamorous, but it's part of my job, and I kept watching the time spent on it creep up. I cleaned up my stack last month and swapped two tools; sharing the result.

Current weekly competitor stack:

  1. Notion. Master comparison DB, one row per competitor, one column per dimension.
  2. Slack. Where the alerts land before standup.
  3. Perplexity Pro. Ad hoc deep dives when something interesting surfaces.
  4. n8n. Handles the clean RSS and API side of things (changelog RSS, status pages, public Stripe webhook patterns).
  5. MuleRun. The messy half: a multi-agent setup that runs every Monday at 6am.
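For the clean RSS side, the core operation is just remembering which entry GUIDs a previous run already saw. A minimal Python sketch of that idea, using only the standard library; the feed content and GUIDs here are invented for illustration, not from my actual setup:

```python
import xml.etree.ElementTree as ET

# Stand-in for a fetched changelog feed (hypothetical content).
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Competitor Changelog</title>
  <item><title>v2.4: usage-based pricing</title><guid>cl-104</guid></item>
  <item><title>v2.3: SSO for teams</title><guid>cl-103</guid></item>
</channel></rss>"""

def new_entries(feed_xml, seen_guids):
    """Return (title, guid) pairs whose guid was not seen in a previous run."""
    root = ET.fromstring(feed_xml)
    fresh = []
    for item in root.iter("item"):
        guid = item.findtext("guid")
        if guid not in seen_guids:
            fresh.append((item.findtext("title"), guid))
    return fresh

# Pretend last week's run already recorded cl-103.
print(new_entries(SAMPLE_FEED, {"cl-103"}))  # only the cl-104 entry is new
```

In practice the `seen_guids` set would be persisted between runs (a file, a DB table, or a Notion property), which is essentially what any RSS-based alerting node does under the hood.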

What I swapped out and why.

Crayon (paid version). Good product, honestly; the dashboards are polished and the competitive intel is solid. The problem was that it was too much for my stage. We are 3 PMs, not a dedicated research team, and the dashboards sat untouched between weekly reviews. It felt like paying for a Ferrari to drive to the grocery store. The alerts were also noisy, and I found myself tuning them out after a month.

Puppeteer setup. A custom scraper I wrote at the start of the year. It worked fine initially, but selectors broke every 3 weeks whenever a competitor updated their site. Pretending I had time to maintain a private scraper was the actual problem. Should've killed this months ago.

What MuleRun specifically does in my flow.

The multi-agent setup runs every Monday at 6am. Agent A opens each of my 8 competitors' changelog pages. Agent B opens their pricing pages. Agent C pulls their blog RSS. Agent D diffs the new content against last week's Drive snapshot and writes a 1-page digest into Notion. Before this I spent 3 hours every Monday compiling notes and still missed silent pricing changes. Now it's 20 min to review and edit.
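The diff step is the part doing the real work, and it's simpler than it sounds. A minimal Python sketch of "diff the new content against last week's snapshot," using the standard library's difflib; the pricing text and the `acme-pricing` label are made up for illustration:

```python
import difflib

def page_diff(old_text, new_text, label):
    """Unified diff of this week's page text vs last week's snapshot.

    Returns an empty string when nothing changed, so unchanged
    competitors can be skipped in the digest entirely.
    """
    lines = difflib.unified_diff(
        old_text.splitlines(),
        new_text.splitlines(),
        fromfile=f"{label} (last week)",
        tofile=f"{label} (this week)",
        lineterm="",
    )
    return "\n".join(lines)

# A silent pricing change that a skim of the page would likely miss.
last_week = "Pro plan: $49/mo\nTeam plan: $99/mo"
this_week = "Pro plan: $59/mo\nTeam plan: $99/mo"

digest = page_diff(last_week, this_week, "acme-pricing")
print(digest or "no changes")
```

The useful property is exactly the one called out above: the diff surfaces what's new versus what was already there, instead of making you re-read the whole page.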

One catch I should flag honestly: I still pull my main strategic competitor by hand because their changelog lives in a Google Doc I haven't fully wired up yet. So 7 of 8 are fully automated, and 1 is still a manual touch. On my to-do list.

Not claiming this is the optimal stack, just what I actually run after a year of trying things. The two pieces I would not give up are the diff (knowing what's new vs what was already there) and the schedule (the digest just shows up Monday whether I remember or not).

u/Electrical-Shape-266 — 5 days ago
r/family

Over the weekend, my kid and I decided to try a small activity outside our usual routine. We made a few simple handmade crafts and some easy drinks together. We set up a little stand outside and used a metal canopy we had from Costway to give us some shade and a space to organize everything.

At first my kid just stood there, figuring out where to put things. Then they started arranging items, spacing them out, asking if it looked okay. A few people walked by, and my kid explained what they made and how it worked. Slow at first, then more confident as they got used to it. I stayed nearby but didn't step in. Eventually, one of the crafts sold, and their face lit up. On the way back home, my kid was already talking about what to try next and how to improve the setup. Watching them take initiative and enjoy the process is what made it feel really special.

u/Electrical-Shape-266 — 7 days ago

I’ve been wrestling with something small but surprisingly complicated, and I figured this might be a good place to ask. You walk into a café, scroll online, or even just pass someone on the street, and you’ll see crosses, verses, “faith-inspired” graphics everywhere. Some of it looks beautiful at first glance, but the longer I sit with it, the more I wonder… is it actually pointing to truth, or just borrowing the language of it?

For example, I saw someone wearing a hoodie recently with layered text, multiple verses, distressed fonts, florals, the whole thing felt intense. Not necessarily wrong, just… crowded. It made me think: does adding more Scripture visually make something more faithful? Or can it sometimes dilute the weight of what’s being said?

I often wear clothes that are clear in their expression, especially ones with Scripture from the Bible in bold designs combined with other elements. If the expression is too vague, it loses its meaning. So where is the boundary between expressing faith and staying faithful to the teachings?

u/Electrical-Shape-266 — 13 days ago

I've seen a few posts here asking about HappyHorse access, and there's a lot of noise floating around, so I spent some time sorting through what's actually verified.

What we know for sure. On April 7, HappyHorse 1.0 appeared anonymously on the Artificial Analysis Video Arena and hit #1 on both the Text to Video and Image to Video (no audio) leaderboards. Within days the entries were quietly pulled. Then on April 10, Alibaba publicly attributed the model to the AI innovation unit inside its Alibaba Token Hub (ATH) business group. This was covered by CNBC, Yahoo Finance, and several other outlets.

The model itself is a 15 billion parameter unified Transformer that generates native 1080p video with synced audio in a single pass, including lip sync across around seven languages. Multi shot consistency (same character, same style across different scenes) is built in as a core feature rather than bolted on after the fact.

What is NOT available yet. Alibaba has committed to open sourcing HappyHorse 1.0 under Apache 2.0 with commercial use, but as of late April 2026 the weights, code, license file, and an official Alibaba hosted API have not dropped. The community GitHub repos floating around are README only with no actual releases. If someone is telling you they downloaded the weights, they did not.

Where you can actually use it right now. Two paths I've found that are legitimate. First, happyhorse.app runs a hosted demo with free daily credits and paid tiers (roughly $20 to $48 per month depending on the plan). This is an operator hosted surface, not Alibaba's official endpoint. Second, HappyHorse recently went live as a Creator Studio agent on MuleRun, which gives you a no code way to run it in the browser without dealing with API keys or GPU provisioning.

Neither of these is the same as running local weights, and I want to be clear about that. But if you just want to see what the model can do today, especially the joint audio plus video generation and the multi shot consistency, both options actually work. I've been poking at the character consistency across cuts and it holds up noticeably better than what I've gotten from other tools in that department.

u/Electrical-Shape-266 — 17 days ago

I kept mixing up chaining vs open addressing whenever I revisited hashing, so I put together a side-by-side comparison that lays out how each strategy handles collisions, what the probe sequence actually looks like, and the tradeoffs in memory and cache performance.
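For anyone who prefers the difference in code rather than a table, here's a minimal Python sketch of the two strategies on keys that all hash to the same slot. These are toy fixed-capacity, insert-only tables purely for illustration (real implementations handle lookup, deletion, and resizing):

```python
CAP = 8  # fixed table capacity for the toy example

def chaining_insert(table, key):
    """Separate chaining: each slot holds a bucket (list); collisions append."""
    table[hash(key) % CAP].append(key)

def linear_probe_insert(table, key):
    """Open addressing with linear probing: scan forward for an empty slot."""
    home = hash(key) % CAP
    for step in range(CAP):
        slot = (home + step) % CAP
        if table[slot] is None:
            table[slot] = key
            return slot
    raise RuntimeError("table full")

chained = [[] for _ in range(CAP)]
probed = [None] * CAP

for k in (3, 11, 19):           # all three keys collide: k % 8 == 3
    chaining_insert(chained, k)
    linear_probe_insert(probed, k)

print(chained[3])    # [3, 11, 19] -- one bucket grows, other slots untouched
print(probed[3:6])   # [3, 11, 19] -- colliding keys spill into neighboring slots
```

The contrast this makes visible: chaining confines a collision cluster to one bucket (extra pointer-chasing, but no effect on other keys), while linear probing keeps everything in the flat array (cache-friendly, but collisions displace later keys and form runs that lengthen probe sequences).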

Hope this helps someone else who keeps second guessing themselves on this topic.

u/Electrical-Shape-266 — 18 days ago