u/Fickle_Ad_3924

Federated IFC did not give me automated quantity takeoffs. It just looked like it did.

I federated Architecture, Structural, and HVAC IFC models, then wrote a Python parser to pull element counts and BaseQuantities.

The script ran fine. No errors. Clean output.

The problem was that the numbers were wrong.

The HVAC model was the first red flag. It had duct segments and air terminals, but no quantity sets. No length, area, or volume. The parser did not fail. It just returned empty values and moved on.
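If I were redoing this, I'd audit for empty quantity sets up front instead of letting the parser shrug past them. A minimal sketch with ifcopenshell, assuming IFC4 class names and a placeholder file path:

```python
import ifcopenshell

model = ifcopenshell.open("hvac.ifc")  # placeholder path

def has_quantity_set(element):
    # Walk IsDefinedBy -> IfcRelDefinesByProperties -> IfcElementQuantity
    for rel in element.IsDefinedBy or []:
        if rel.is_a("IfcRelDefinesByProperties"):
            definition = rel.RelatingPropertyDefinition
            if definition is not None and definition.is_a("IfcElementQuantity"):
                return True
    return False

for ifc_class in ("IfcDuctSegment", "IfcAirTerminal"):  # IFC4 names
    elements = model.by_type(ifc_class)
    missing = [e for e in elements if not has_quantity_set(e)]
    print(f"{ifc_class}: {len(missing)} of {len(elements)} have no quantity set")
```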

Then the counts were inflated. Some elements appeared in multiple models, like chimneys and roofs. If you just append IFC files without deduplicating by GlobalId, you can double-count things while the final QTO still looks totally normal.
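The GlobalId pass I should have run before appending anything looks something like this (file names are placeholders):

```python
import ifcopenshell

paths = ["arch.ifc", "struct.ifc", "hvac.ifc"]  # placeholder federated models

seen = {}        # GlobalId -> model it was first counted in
duplicates = []  # (GlobalId, first model, second model)

for path in paths:
    model = ifcopenshell.open(path)
    for element in model.by_type("IfcElement"):
        gid = element.GlobalId
        if gid in seen:
            duplicates.append((gid, seen[gid], path))
        else:
            seen[gid] = path

print(f"{len(seen)} unique elements, {len(duplicates)} cross-model duplicates")
for gid, first, second in duplicates[:10]:
    print(f"  {gid} is in both {first} and {second}")
```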

The last issue was ownership. Walls were split between Architecture and Structural. So a rule like “walls belong to Arch” would quietly miss half of them.

That was the main lesson for me: federation is not the same as clean QTO automation.

Before trusting the output, you need to check:
GlobalId duplicates
missing quantity sets by discipline
element ownership across models

Otherwise you are not really automating the takeoff. You are just producing a spreadsheet that looks convincing.

u/Fickle_Ad_3924 — 11 hours ago
▲ 10 r/food

[I Ate] Squid Ink Arroz Negro and a Hibiscus Sour

Love the color contrast of this meal! The squid ink was incredibly rich and savory, and the hibiscus sour was the perfect floral palate cleanser.

u/Fickle_Ad_3924 — 3 days ago

When spindle vibration sounds like wear, check the CAM file first

A lot of people treat RMS vibration spikes as tool wear.
Spike goes up, insert gets blamed.

But in a 3-axis milling dataset I looked at, the "anomaly" runs did not really look like gradual wear. They looked more like aggressive toolpath geometry.

The biggest giveaway was the lead-in. A ramp or arc into the cut can create a short, sharp RMS spike right at entry. Tool wear usually looks more like the whole vibration floor slowly rising across the cut.

Climb vs conventional milling showed up too. Conventional cutting had more early-pass chatter because the cutter rubs before it really bites. That was especially visible in the Z-axis channel.

The tricky part is that stepovers can look like wear if you only watch amplitude. A roughing pass with regular re-engagement will create repeatable RMS bumps. Without the CAM context, those bumps look like anomalies.

My takeaway: before blaming the spindle or pulling the insert, check the G-code.

If the spike happens at the same cutter position every time, it is probably toolpath-related.

If it slowly rises across repeated passes at the same position, then wear is a much better suspect.
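Roughly how I'd separate the two cases, assuming you can sync RMS samples to a normalized cutter position along the path (the binning and thresholds here are placeholder choices, not anything from the dataset):

```python
import numpy as np

def classify_spikes(rms, position, pass_id, n_bins=100, spike_factor=2.0):
    """Bin RMS by cutter position, then compare the same bin across passes.

    rms, position, pass_id: equal-length 1-D arrays, one sample each.
    position is assumed normalized to [0, 1] along the toolpath.
    """
    passes = np.unique(pass_id)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    profile = np.full((len(passes), n_bins), np.nan)  # mean RMS per pass per bin
    for i, p in enumerate(passes):
        m = pass_id == p
        idx = np.clip(np.digitize(position[m], edges) - 1, 0, n_bins - 1)
        for b in np.unique(idx):
            profile[i, b] = rms[m][idx == b].mean()

    mean_per_bin = np.nanmean(profile, axis=0)
    baseline = np.nanmedian(mean_per_bin)
    for b in np.where(mean_per_bin > spike_factor * baseline)[0]:
        trend = profile[~np.isnan(profile[:, b]), b]
        if len(trend) > 1 and np.all(np.diff(trend) > 0):
            print(f"bin {b}: rises pass over pass -> wear is the better suspect")
        else:
            print(f"bin {b}: repeats in place -> probably toolpath geometry")
```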

u/Fickle_Ad_3924 — 3 days ago
▲ 9 r/dinner

A seafood and steak feast featuring [Pan-Seared Sea Bass] and Ribeye Steak with a fresh arugula salad

This spread was incredible! The sea bass was the standout for me: perfectly crispy skin and so flaky. Also, the arugula salad with those tomatoes added the perfect crunch to balance out the rich fish. Felt like surf and turf because we had both fish and steak lol

u/Fickle_Ad_3924 — 7 days ago

Financial report scripts fail in boring ways

Finance automation people love giant argparse blocks.
Flags for every path, entity, period, template, override. It feels production-ready.

But reporting scripts usually break in quieter ways:
A template changes and a hardcoded cell points to the wrong number.
A script crashes halfway through but still leaves a finished-looking file.
A source CSV gets overwritten and nobody can explain last month’s numbers.

That is not a flexibility problem. It is a silent corruption problem.

The fixes are boring, but they work:
Use atomic writes.
Keep Excel cell mappings in config.
Write a small manifest with timestamp, input file, row count, key totals, entity, and period.
Validate before writing anything.
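A minimal sketch of what I mean, assuming pandas plus openpyxl; every name here is a placeholder:

```python
import json
import os
import tempfile
from datetime import datetime, timezone

def write_report(df, out_path, source_csv, entity, period):
    # Validate before writing anything.
    if df.empty:
        raise ValueError("refusing to write an empty report")

    # Atomic write: build the file elsewhere, then swap it into place.
    out_dir = os.path.dirname(os.path.abspath(out_path))
    fd, tmp = tempfile.mkstemp(dir=out_dir, suffix=".xlsx")
    os.close(fd)
    try:
        df.to_excel(tmp, index=False)
        os.replace(tmp, out_path)  # atomic on the same filesystem
    except BaseException:
        os.unlink(tmp)  # a crash leaves no finished-looking file behind
        raise

    # Manifest: enough to explain last month's numbers later.
    manifest = {
        "written_at": datetime.now(timezone.utc).isoformat(),
        "source_csv": source_csv,
        "rows": int(len(df)),
        "key_totals": {c: float(df[c].sum()) for c in df.select_dtypes("number")},
        "entity": entity,
        "period": period,
    }
    with open(out_path + ".manifest.json", "w") as f:
        json.dump(manifest, f, indent=2)
```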

The reporting script I trust is not the one with the most flags.

It is the one that makes bad output hard to create.

u/Fickle_Ad_3924 — 7 days ago

You've got turns, message counts, session length, word count, topics, sentiment, all the usual stuff. But those numbers can get misleading fast because these aren't normal conversations. They're a mix of user behavior, model behavior, and whatever safety/moderation layer is shaping the output.

A long chat doesn't always mean the user is engaged. Sometimes it means the scene is going well. Other times it means the model forgot the setup, got repetitive, refused too much, or the user kept trying to steer it back on track.

Same with "rich" responses. A model can produce a lot of text that looks emotional or detailed, but half of it might be boilerplate: recaps, generic affection, soft disclaimers, repeated scene-setting, or safety-shaped language. If you count all of that as meaningful content, the analysis gets inflated.

Tone is also hard to label cleanly. A chat can start playful, turn intimate, hit a boundary, and suddenly become formal or vague. That shift matters more than slapping one label on the whole session.

Thematic drift is another big one. Sometimes drift is fun and creative. Sometimes the roleplay quietly turns into generic assistant behavior, therapy-speak, recap mode, or refusal language. A topic model might still say the chat is "romance" or "fantasy", but the actual scene may have fallen apart.

The biggest mistake is comparing filtered and unfiltered chats like they're the same kind of data. Filtering doesn't just remove certain content. It changes pacing, wording, session length, and how much effort the user has to spend negotiating with the model.

u/Fickle_Ad_3924 — 15 days ago
▲ 222 r/Seafood

The ultimate seafood spread! The variety on this plate was insane. The prawns were perfectly charred, but honestly, those mussels were the surprise winner for me, so much flavor in the sauce. It's rare to find a place that hits the mark on every single item in a sampler like this. Which one are you grabbing first?

u/Fickle_Ad_3924 — 16 days ago

I've been looking into workflows for analyzing TikToks/Reels/Shorts: extracting audio, generating transcripts, splitting scenes, and adding metadata.

A few things stood out:

1. Extract audio when the task is text-based.
For transcription, audio-only is cheaper and cleaner. But if you need to connect speech with on-screen text, cuts, or visual context, separating audio too early can break timing.

2. Formats matter more than expected.
Social videos often have weird compression, variable bitrate, missing metadata, or inconsistent frame rates. That can cause transcript drift or bad scene detection.

3. Tooling is a trade-off.
FFmpeg + local models is cheaper but needs more engineering. Hosted APIs are easier and often stronger, but costs add up fast. On-device processing helps privacy/cost, but quality varies.

4. Log everything.
Codec, framerate, FFmpeg args, model version, sample rate, and timestamp changes. Otherwise debugging is impossible.

Main takeaway: normalize every video into a consistent format before analysis. Boring step, but it decides whether the rest of the pipeline works.
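For reference, the normalization step I mean is basically one FFmpeg re-encode before anything else touches the file; the target settings below are just example choices, not a recommendation:

```python
import subprocess

def normalize(src: str, dst: str) -> None:
    """Re-encode a social clip into one boring, predictable format."""
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-r", "30",                  # constant frame rate for scene detection
        "-c:v", "libx264", "-pix_fmt", "yuv420p",
        "-ar", "16000", "-ac", "1",  # 16 kHz mono, a typical ASR input
        "-c:a", "aac",
        dst,
    ]
    subprocess.run(cmd, check=True)
    # Point 4: log exactly what produced this file.
    print({"src": src, "dst": dst, "ffmpeg_args": cmd})
```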

u/Fickle_Ad_3924 — 17 days ago

Caught the 26 car dropping down toward the bay. The visibility was so good you could see every detail on Alcatraz. Days like this remind me why everyone falls in love with this city

u/Fickle_Ad_3924 — 1 month ago