u/AriaSmith19

Why does webinar footage always feel kind of boring when turned into Shorts?

Lately I’ve been trying to repurpose webinar content into short-form videos, but I keep running into the same issue. Even when the original material is actually solid and shows real expertise in the topic, once I turn it into Shorts it just ends up feeling kind of flat and lifeless.

I think a big part of the problem is that my current editing approach is pretty limited. I’m keeping the important parts of the content, but visually there’s not much going on. No real motion, no changes in pacing, nothing on screen that really helps hold attention. Because of that, I’m starting to feel like the editing style might actually matter more than which exact clips I choose.

Curious how others are handling this kind of workflow these days. Are there any tools out there that actually help automate or improve this process, or is it still mostly manual work at this point?

u/AriaSmith19 — 10 hours ago

🥤Cola-reps🥤 | Acne Studios | Carhartt | Ralph Lauren | NIKE | Denim Tears | Godspeed | SUPREME | AMI | Sp5der | Good Brand | BROKEN PLANET | BALENCIAGA | BAPE | ADIDAS | Off White | STUSSY | Fog of God | Casablanca |

Yupoo: https://colareps.x.yupoo.com/
Whatsapp: +86 17070 885566
😊 If you need any assistance, please don't hesitate to reach out. We value every single message from you.

u/AriaSmith19 — 2 days ago

A flatlay of my favorite ashy contours for cool olives :)

Products list (L to R):
- Schepenee 022
- Dance Up Liquid 01
- Dance Up Contour Stick 01
- Jillleen Liquid Contour 606
- Sheglam in Stone
- Girlscrush Liquid Contour 100
- Fenty Beauty Match Stix in Amber (Original version)
- Fenty Beauty Cream Contour in Amber

u/AriaSmith19 — 7 days ago

Managing indexing across a few side projects turned out harder than building them. A couple directories and docs sites quietly added up to ~3k URLs, and most sat in “discovered, not indexed” for weeks.

Manual GSC requests worked at first but broke past a few hundred pages. I tried a cron script with the Google Indexing API and some IndexNow pings, but tracking failures and quotas was messy.
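For anyone curious what the script side looks like: both services boil down to small JSON POSTs. A minimal sketch of just the payload shapes (the endpoints are the publicly documented ones; the Indexing API additionally needs an OAuth2 service-account token, which is omitted here, and the host/key values are placeholders):

```python
# Google Indexing API: one URL per POST to
#   https://indexing.googleapis.com/v3/urlNotifications:publish
def indexing_api_payload(url: str) -> dict:
    return {"url": url, "type": "URL_UPDATED"}

# IndexNow: batch of URLs per POST to https://api.indexnow.org/indexnow
def indexnow_payload(host: str, key: str, urls: list[str]) -> dict:
    return {"host": host, "key": key, "urlList": urls}
```

The quota pain mostly comes from the Google side (per-project daily limits), which is why batching through IndexNow first and reserving the Indexing API for priority pages helped in my case.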

So I turned the scripts into IndexerHub. It just scans sitemaps, queues URLs, retries failures, and shows what actually got submitted. One test site went from ~220 indexed pages to ~850 over a few weeks.
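The scan-queue-retry loop is roughly this shape (IndexerHub's internals aren't shown here; this is just the skeleton of the original script, with `submit` standing in for whichever API call you use):

```python
import xml.etree.ElementTree as ET
from collections import deque

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def urls_from_sitemap(xml_text: str) -> list[str]:
    """Pull every <loc> out of a standard XML sitemap."""
    root = ET.fromstring(xml_text)
    return [loc.text for loc in root.iter(f"{SITEMAP_NS}loc")]

def drain(queue: deque, submit, max_retries: int = 3) -> dict:
    """Submit each URL, re-queueing failures up to max_retries times."""
    status, retries = {}, {}
    while queue:
        url = queue.popleft()
        if submit(url):
            status[url] = "submitted"
        elif retries.get(url, 0) < max_retries:
            retries[url] = retries.get(url, 0) + 1
            queue.append(url)
        else:
            status[url] = "failed"
    return status
```

Keeping the final `status` dict around is what makes the "show what actually got submitted" part trivial.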

u/AriaSmith19 — 11 days ago

idk if I’m overthinking this but I went down a rabbit hole with transcription tools recently and now I feel a bit weird about the whole thing

like before I never really questioned it. you upload audio → get text → done. and most of these tools say stuff like “encrypted” so I just assumed it’s all fine

but then I randomly started reading some of the terms (not even that carefully) and started noticing the same kind of wording over and over. things like “used to improve the service” or “may be used for training” etc. and yeah maybe it’s anonymized or whatever, but it still kinda changes how it feels when you’re uploading real conversations. especially if it’s not just random audio, like calls, meetings, anything even slightly sensitive

I tried running stuff locally for a bit (Whisper and similar). it works, just slower and a bit annoying to set up. but at least you know where your data is going (or not going)

then I went back to looking at cloud tools again but now I’m way more paranoid about what they actually say vs what they advertise. I stumbled across one called Vocova that seems to lean more on not using your data for training, but I haven’t used it enough to say anything definitive. just noticed the difference in how they talk about it

anyway I’m probably late to this but curious what other people do. do you just not think about it and use whatever’s convenient, or are you actually checking this stuff / running things locally?

u/AriaSmith19 — 11 days ago

One of the things that took me a while to appreciate about the TuyaOpen agent framework is how the tool system actually works in practice on Linux. On a microcontroller, MCP tools are things like "set the volume" or "take a photo" — hardware actions. On a Raspberry Pi running Linux, the tool surface is much larger.

The VirtuaMate project ships with tools for file operations (read, write, edit, list directory, find path), time and scheduling (get current time, add/list/remove cron jobs), Linux shell execution (tool_exec), and PC collaboration over a gateway. The agent decides when to call these based on what you ask.

In practice: you can ask the assistant to check what's in a directory, edit a config file, set a reminder that fires at a specific time, or run a shell command — and it does it, because those are registered tools the LLM knows about. There's no custom intent matching or routing logic. The model sees the tool list and picks the right one.

The memory system is file-based: long-term memory in MEMORY.md, daily notes in timestamped markdown files. The agent reads and writes these directly using the file tools. Over time, context accumulates in files rather than in a context window.

What I find most interesting about this design: adding a new tool means writing a function and registering it in tools_register.c. There's no middleware to configure. If you want the agent to control something new — a GPIO pin, a camera, a sensor reading — you write the handler and describe it in English for the LLM.

Has anyone tried adding custom tools to this kind of framework on Pi? I'm thinking about camera integration specifically — whether it's better to implement it as an MCP tool that returns a file path, or as something that returns raw data directly to the model.

Repo: https://github.com/tuya/VirtuaMate
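The write-a-handler-then-register-it pattern is simpler than it sounds. Shown in Python for brevity (the real thing lives in tools_register.c in C, and every name below is made up to illustrate the shape, not taken from the repo):

```python
# Minimal tool registry in the style described above: each tool is a
# plain function plus an English description the LLM sees in its tool list.
TOOLS: dict[str, dict] = {}

def register_tool(name: str, description: str):
    """Decorator: add a function to the registry under a given name."""
    def wrap(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return wrap

@register_tool("get_current_time", "Return the current time as an ISO-8601 string.")
def get_current_time() -> str:
    from datetime import datetime
    return datetime.now().isoformat()

@register_tool("read_file", "Read a text file and return its contents.")
def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

# The model never calls functions directly; the runtime dispatches by name
# after the LLM picks a tool from the registered descriptions.
def dispatch(name: str, **kwargs):
    return TOOLS[name]["fn"](**kwargs)
```

The key point is that the description string is the entire "routing logic" — the model chooses based on the English, so a camera tool would just be one more entry with a description like "take a photo and return the file path".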

u/AriaSmith19 — 14 days ago