u/CarpenterFine3887

Making Side-by-Side Bilingual Transcripts Is a Nightmare

Okay, I need to vent about bilingual transcripts for a sec. Trying to make a side-by-side Word doc with English on one side and Spanish on the other is the worst. Change one line, fix a speaker name, and suddenly the whole table is messed up. I’ve lost hours just hitting “enter” to get lines to line up.

Subtitle tools like Happy Scribe or YouTube don’t really help either. They spit out SRTs, which is fine if all you want is timing, but if you need a proper document where paragraph A matches paragraph A in another language… nope. You basically end up doing the whole alignment manually in Word or Excel.
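For anyone willing to script this instead of fighting Word: a minimal sketch (plain Python, standard library only; the sample paragraphs are made up) that zips two paragraph lists into a two-column Markdown table, padding the shorter side so one missing paragraph can't shift every row below it:

```python
from itertools import zip_longest

def side_by_side(english, spanish):
    """Pair paragraphs 1:1 into a two-column Markdown table.

    The shorter list is padded with blanks, so a missing paragraph
    on one side leaves an empty cell instead of misaligning the rest.
    """
    rows = ["| English | Spanish |", "| --- | --- |"]
    for en, es in zip_longest(english, spanish, fillvalue=""):
        # Escape pipes so a '|' inside a paragraph doesn't split the cell.
        en = en.replace("|", "\\|")
        es = es.replace("|", "\\|")
        rows.append(f"| {en} | {es} |")
    return "\n".join(rows)

# Hypothetical example paragraphs
en_paras = ["Hello, everyone.", "Today we talk about drones."]
es_paras = ["Hola a todos."]
print(side_by_side(en_paras, es_paras))
```

The Markdown output pastes cleanly into most editors, and a tool like Pandoc can convert it to a Word table afterwards; editing one cell then only touches that row.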

I’ve been using Vocova for some projects, and it at least keeps things from completely falling apart when exporting side-by-side. Nothing fancy—just stops the table from turning into chaos every time you tweak a sentence.

Honestly, I’m curious if anyone else has found a better way to do this without wasting half a day formatting stuff. Or is everyone just stuck in Word hell like me?

u/CarpenterFine3887 — 22 hours ago
▲ 20 r/dji

Came for the snow mountain, stayed for the lakes.

Originally just wanted to get the snow mountain. Put the Avata360 up and let it slowly pan across, but hadn't even gotten a good look at the mountain yet and these two lakes already had my attention. This blue that almost doesn't look real, just sitting there in the middle of all this dry empty land, like someone spilled two drops of paint. The mountain's impressive, but those lakes are what I kept going back to watch. That's the nice thing about shooting 360 though, get home and just drag the view around, stuff I didn't even notice while flying is all still there waiting.

u/CarpenterFine3887 — 2 days ago
▲ 11 r/drones

Beginner-ish FPV pilot.

Burned one pack after work on this skinny pine trail behind my place, kept it low and slow. Avata 360, full auto exposure, no ND or anything. Felt fine in the goggles but watching the footage back the left guard looks like it was basically combing through pine twigs, maybe a foot of clearance in the tight spots. Pulled it in, checked props and guards, nothing stuck in there, no nicks. Was that normal trail margin or did I just get away with one?

u/CarpenterFine3887 — 4 days ago

May 2026 AliExpress Promo Codes Guide: Tested New AliExpress Coupon Codes, Vouchers, and Discount Codes to Get You Through All of May

AliExpress New May 2026 Discount Codes:

Valid May 1–31, 2026

🎟$2 off $18: USTK2U

🎟$5 off $39: USTK5U

🎟$8 off $59: USTK8U

🎟$15 off $109: USTK15U

🎟$23 off $169: USTK23U

🎟$30 off $239: USTK30U

🎟$45 off $359: USTK45U

🎟$60 off $479: USTK60U

u/CarpenterFine3887 — 5 days ago

You know that moment when you get your “perfect” hoodie sample back? It’s got that perfect weight, fits just right, and the French terry fabric feels like heaven. You’re hyped, right? You order 1,000 units, thinking everything’s good to go.

Then, bam. The nightmare begins. The medium fits like a small, the drawstrings are the wrong color, and half the batch has this weird 5% GSM variance. The stuff you were so excited about? Doesn’t match at all.

Welcome to the Sample Trap.

Here’s the thing: your sample is usually this "golden piece" made by a senior tailor, one that’s crafted by hand with extra care in a sample room. But bulk production? That’s a whole different story. You’re suddenly dealing with machines, line management, and speed. If you don’t account for things like fabric shrinkage or dye lot differences during the shift from sample to bulk, your production will never match the sample you loved so much.

So how do you scale without losing your mind? Here’s what’s non-negotiable for me:

  1. Tech pack with a strict tolerance chart, no looser than ±1.5 cm per point of measure. If a measurement falls outside that, it's a no-go.
  2. Lab dips for every color. Don’t trust that the fabric will match unless you see the lab dip yourself.
  3. Never ship anything without a third-party inspection using AQL 2.5 standards. If they can’t show you a QC report for the last 5,000 units, move on.
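The tolerance check in point 1 is easy to automate once the measurement report comes back. A minimal sketch (plain Python; the spec and measured numbers are hypothetical, not from any real tech pack) that flags every point of measure outside ±1.5 cm:

```python
TOLERANCE_CM = 1.5  # from the tech pack's tolerance chart

# Hypothetical spec vs. measured values for one size-M hoodie, in cm
spec = {"chest_width": 58.0, "body_length": 70.0, "sleeve_length": 62.0}
measured = {"chest_width": 57.2, "body_length": 72.1, "sleeve_length": 61.8}

def out_of_tolerance(spec, measured, tol=TOLERANCE_CM):
    """Return each point of measure whose deviation exceeds the tolerance."""
    return {
        pom: round(measured[pom] - spec[pom], 2)
        for pom in spec
        if abs(measured[pom] - spec[pom]) > tol
    }

failures = out_of_tolerance(spec, measured)
print(failures)  # body_length is 2.1 cm over spec -> reject per the chart
```

Run it over the inspector's full measurement sheet and you get the reject list in seconds instead of eyeballing a spreadsheet.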

But here’s the kicker—how do you find a factory that actually controls the raw materials before they even start stitching? For me, it was switching to ChengLin Clothing Manufacturer. They’ve got their knitting and dyeing done in-house. This is a game-changer because it means they’re controlling the yarn and the dyeing process directly. No more worrying about how the fabric’s gonna shift in production. With everything under one roof, they eliminate most of the issues that mess up bulk orders. So every hoodie feels the same, whether it’s the first or the 5,000th unit.

And the last thing? Always get a PPS (Pre-Production Sample). Don’t ever greenlight your bulk order until you see a sample made using the actual machines with the final fabric and trims. Trust me, this step can save you from a ton of headaches later on.

So how are you all handling QC for overseas shipments? Anyone else fallen for the “perfect sample” trap and ended up with a mess?

u/CarpenterFine3887 — 10 days ago

VirtuaMate is an open-source project that runs a real-time 3D VRM avatar on a Raspberry Pi 5, backed by a full AI agent loop. I was skeptical it would be usable, but it runs.

The rendering stack is SDL2 + OpenGL + Assimp. The avatar supports skeletal animation, multiple emotion states (happy, sad, thinking, loving, and others), lip sync driven by TTS audio, and runtime skybox switching. Visually it's a cel-shaded cartoon style with soft shadow transitions. Not photorealistic, but that's a deliberate choice — it runs on a Pi.

The avatar isn't just cosmetic. Emotion and animation are driven by what the AI is actually doing. When the LLM is thinking, there's a "thinking" expression. When the response streams in, the text triggers emotion analysis, and the avatar's face updates in real time. Lip sync follows the TTS audio playback.

Under the hood, VirtuaMate is built on TuyaOpen and reuses the DuckyClaw agent architecture: an agent loop, a message bus, and MCP tools. The 3D layer sits on top of that. The AI can directly call avatar tools — avatar_set_emotion, avatar_play_animation, avatar_composite_action — as part of its tool loop. So the agent isn't just talking; it's deciding how to present itself.
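I haven't read the DuckyClaw code, so this is only a rough sketch of the tool-loop shape described above. The tool names (avatar_set_emotion, avatar_play_animation) come from the post; the dispatcher and payload format are my guesses, not the project's actual API:

```python
# Hypothetical sketch of an agent loop dispatching avatar tool calls.
# Tool names are from the VirtuaMate post; everything else is assumed.

def avatar_set_emotion(emotion):
    # Real implementation would update the renderer's blend shapes.
    return f"emotion={emotion}"

def avatar_play_animation(name):
    # Real implementation would trigger a skeletal animation clip.
    return f"animation={name}"

TOOLS = {
    "avatar_set_emotion": avatar_set_emotion,
    "avatar_play_animation": avatar_play_animation,
}

def run_tool_calls(calls):
    """Execute a list of LLM-emitted tool calls and collect results."""
    results = []
    for call in calls:
        fn = TOOLS[call["tool"]]
        results.append(fn(**call["args"]))
    return results

# While a response streams in, emotion analysis might emit calls like:
calls = [
    {"tool": "avatar_set_emotion", "args": {"emotion": "thinking"}},
    {"tool": "avatar_play_animation", "args": {"name": "idle_ponder"}},
]
print(run_tool_calls(calls))
```

The interesting design point is that presentation is just another tool in the same loop as everything else, so the LLM decides when to emote with no separate animation scripting layer.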

The platform note is worth reading: right now only Raspberry Pi 5 is fully tested. It requires on-device compilation, and cross-compilation isn't supported yet. The dependency install is a few apt packages.

I'm curious about the practical ceiling here — how much more complex can the scene and animation system get before the Pi 5 starts struggling? Has anyone pushed the rendering load on this, or are there other Pi projects using SDL2 + OpenGL for real-time 3D where you hit a wall?

Repo: https://github.com/tuya/VirtuaMate

u/CarpenterFine3887 — 12 days ago

I can already feel it with the wavy wig now. An effortless wave just hits different when the weather warms up. Are y'all ready to switch up your hair for the season?

u/CarpenterFine3887 — 15 days ago

Searching "AY3dprinter" on AliExpress brings up a landing page with 6 brands, the 2026 best sellers. I'm a beginner and own one Anycubic Kobra S1 for now. What would you recommend if I want a second printer? Also, any filament recommendations? How's Kingroon? I see it mentioned frequently. Currently I'm using Anycubic filament I bought last year.

u/CarpenterFine3887 — 15 days ago