u/Tech_Invite09233

Engineering drawing OCR is not really an OCR problem

People often assume engineering drawing extraction means running OCR and cleaning up the text.

That works reasonably well for title blocks. Part numbers, revisions, and drawing numbers are usually in predictable places.

The harder part is the drawing area itself. Dimension callouts often sit on top of extension lines, center lines, hatching, and other geometry. A standard OCR tool can mistake those lines for noise or formatting and misread the actual callout.

There is also a separate issue: sometimes the information is missing from the drawing entirely. If the scale or general tolerance field is blank, OCR cannot fix that. It needs to be flagged as a drawing issue, not a reading issue.

So the pipeline needs more than text recognition. It needs to separate geometry from annotations and check whether the required fields are present.
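The presence check can be sketched in a few lines. The field names and the shape of the extracted record here are assumptions for illustration, not any specific tool's output:

```python
# Minimal sketch: treat a missing title-block field as a drawing issue,
# not an OCR failure. REQUIRED_FIELDS and the input dict shape are
# hypothetical — adjust to whatever your extraction step emits.

REQUIRED_FIELDS = ("drawing_number", "revision", "scale", "general_tolerance")

def check_title_block(extracted: dict) -> dict:
    """Separate 'field not found' from 'field present but blank'."""
    issues = []
    for field in REQUIRED_FIELDS:
        value = extracted.get(field)
        if value is None:
            issues.append((field, "not found in drawing"))  # drawing issue
        elif str(value).strip() == "":
            issues.append((field, "field blank"))           # drawing issue
    return {"complete": not issues, "issues": issues}

result = check_title_block({"drawing_number": "A-1042", "revision": "B", "scale": ""})
```

The point of returning structured issues instead of raising is that a blank scale box is a finding to report back to whoever owns the drawing, not an error to retry OCR on.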

Title block OCR may look good early on, but that does not mean the full extraction is reliable.

reddit.com
u/Tech_Invite09233 — 1 day ago

[I Ate] A vibrant Mezze spread featuring Beetroot Hummus, Labneh, and fresh Falafel

Haven't really tried much Middle Eastern cuisine (I was traumatized when I was super young by the smell of spices lol, sorry). But finally gave it a try and I was blown away by the colors of this spread! The beetroot hummus was the perfect earthy balance to the creamy labneh. Also can't go wrong with a warm pita!

u/Tech_Invite09233 — 1 day ago

The hardest part of drawing extraction was not the geometry. Who knew?

I used to assume drawing extraction would fail on the complicated stuff: views, dimensions, GD&T, projection symbols.

But in an AS1100 dataset I looked at, most of that came out fine.

The real problems were in the title block.

The repeated issues were basic:
- missing scale
- missing general tolerance

That sounds small, but it breaks the whole output. If the scale is missing, the dimensions lose context. If the general tolerance is missing, any untoleranced feature becomes ambiguous.

The annoying part is that the geometry can look perfectly extracted while the drawing is still not usable for manufacturing.

So now I think title blocks should be a gate, not an afterthought.
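The gate idea can be sketched like this — validate the title block first and refuse to emit "successful" output otherwise. The function names and field keys are hypothetical:

```python
# Sketch of "title block as a gate": geometry extraction only counts if
# the title block passes. Field names are assumptions for illustration.

def usable_for_manufacturing(title_block: dict) -> bool:
    # Without a scale, dimensions lose context; without a general
    # tolerance, any untoleranced feature is ambiguous.
    return bool(title_block.get("scale")) and bool(title_block.get("general_tolerance"))

def extract(drawing: dict) -> dict:
    tb = drawing.get("title_block", {})
    if not usable_for_manufacturing(tb):
        # Don't report clean geometry from an unusable drawing.
        return {"status": "rejected", "reason": "incomplete title block"}
    return {"status": "ok", "views": drawing.get("views", [])}
```

Running the gate up front is what stops the failure mode above: geometry that looks perfectly extracted on a drawing that still cannot be manufactured from.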

u/Tech_Invite09233 — 2 days ago

Restaurants are not getting crushed by food costs anymore

The story everyone still repeats is that cafes and restaurants are getting squeezed because ingredient costs never really came back down.

That was true in 2022. It is a lot harder to defend now.

I pulled BLS data for processed-food PPI vs. food-away-from-home CPI from Jan 2019 to Mar 2026. The 2022 shock was ugly. In July 2022, input-cost inflation was 13.81 percentage points higher than menu-price inflation. Operators were taking the hit.

But then it flipped.

By July 2023, the spread was -19.80 pp. Menu prices were still going up while processed-food input costs were falling.
And that gap has mostly stayed negative.

March 2026:

Food away from home CPI: +3.78% YoY
Processed-food PPI: -3.60% YoY
Spread: -7.38 pp

2026 YTD average: -10.15 pp
2022 average: +10.66 pp

So if you run a cafe and your food cost percentage still is not improving, “inflation” probably is not the whole answer anymore.

It might be waste.
It might be portion creep.
It might be buying outside supplier contracts.
It might be prep overproduction.
It might be the team quietly leaking margin before it reaches the P&L.

Small caveat: the spread narrowed in March, from -11.10 pp in February to -7.38 pp, so the tailwind may be getting weaker.

But the broad point still stands.

2022 was an external cost shock.
2026 looks more like an operations problem.

u/Tech_Invite09233 — 4 days ago
▲ 235 r/BreakfastFood+1 crossposts

Nothing beats a classic Salmon Benedict on a Sunday morning

The hollandaise was velvety and had just the right amount of lemon. Perfectly toasted muffins made all the difference.

u/Tech_Invite09233 — 4 days ago

I actually agree with the take that human advice will still have the upper hand.

AI has made content incredibly cheap, and the result is that we're drowning in generic AI slop. Everything sounds polished, but a lot of it feels empty.

That's where Reddit still has an advantage: human moderation, lived experience, disagreement, niche communities, and people calling out nonsense in real time.

I think people are slowly moving back toward smaller spaces where there's an actual human filter. Real voices, real experience, messy opinions, specific stories.

Maybe we're heading back toward something closer to old-school blogs and forums, just with better tools around them.

u/Tech_Invite09233 — 6 days ago

Lately it feels like originality isn't dead exactly, but trust in originality is getting weaker.

People publish books now and instead of readers talking about the ideas, the style, or the story, the comments often turn into "this is AI". I've seen it happen a lot this past year. Sometimes the work might be AI-generated, sure. But sometimes it feels like people assume first and read second.

So where does that leave actual writers?

You could sit down, write every word yourself, and still have people doubt it. That's the depressing part. If nobody trusts the process anymore, does originality still carry the same value?

At the same time, a lot of writers are using AI in some way. Maybe not to write the whole book, but for brainstorming, editing, fixing structure, tightening dialogue, or working through ideas. Even people in film and screenwriting are using it to polish scenes or develop concepts.

So now we're in this weird place:
People use AI.
People assume everyone uses AI.
And anything creative gets questioned by default.

It feels like writing has become a credibility problem as much as a creative one.

So I'm genuinely asking: is there still a real reason to write a book today, or has the value of writing been diluted too much?

u/Tech_Invite09233 — 8 days ago

People usually jump straight to the tools:

Louvain or Leiden?
PageRank or betweenness?

Which layout looks better?

But the real question is: what does this graph actually represent?

If the graph is badly defined, the metrics will still give clean numbers. They just won't mean much.

The first trap is node definition. Is a node a person, account, company, document, or event? If one person has multiple accounts, their role gets split. If several people share one account, it gets inflated.

Edges are just as messy. A follow, reply, mention, email, transaction, and shared event are not the same thing. Treating all of them as "relationships" can wreck the analysis.

Centrality also gets overused:

Degree = direct activity or visibility, not automatic influence.
Betweenness = brokerage, but very sensitive to missing edges.
Eigenvector = connection to important nodes, but often just rewards the dominant core.
PageRank = useful for directed attention networks, but only if links actually mean attention or endorsement.
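A quick way to see that these four measures can disagree is a toy graph — this one (using networkx) is a made-up tree with a hub on one side and a bridge node in the middle, purely for illustration:

```python
# Toy example: the four centralities need not crown the same node.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("a", "b"), ("a", "c"), ("a", "d"),   # hub "a" with leaves
    ("d", "e"), ("e", "f"), ("f", "g"),   # chain bridging the two sides
    ("g", "h"), ("g", "i"),               # second hub "g"
])

def top(scores: dict) -> str:
    return max(scores, key=scores.get)

print("degree:     ", top(nx.degree_centrality(G)))       # hub wins
print("betweenness:", top(nx.betweenness_centrality(G)))  # bridge node wins
print("eigenvector:", top(nx.eigenvector_centrality(G, max_iter=1000)))
print("pagerank:   ", top(nx.pagerank(G)))
```

Here the highest-degree node is a hub, but the highest-betweenness node is "e", which sits on every path between the two halves of the graph. If the metric does not match the question (visibility vs. brokerage vs. endorsement), the ranking is clean but meaningless.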

Community detection is not magic either. Louvain, Leiden, and label propagation produce partitions based on assumptions. They do not automatically reveal "real groups."

Sampling bias can do even more damage. Snowball samples overrepresent connected nodes. Ego networks are bad for global rankings. Bad network boundaries can create fake bridges or fake communities.

Visualization can also fool people. A pretty force-directed graph is not proof of structure. Sometimes it is just a colorful hairball.

That distinction is the whole point.

SNA is useful, but only when the graph is defensible, the metric matches the question, and the result survives basic sensitivity checks. Most findings should be read as:
given this data, boundary, time window, and method, this is the structure we observe.

u/Tech_Invite09233 — 13 days ago

The texture of the shaved ice was incredibly fine! It felt more like fresh powder snow than traditional crushed ice. The matcha had a nice earthy bitterness that really balanced out the sweetness of the red bean and the condensed milk. It almost felt like a crime to break that perfect dome with a spoon lol

u/Tech_Invite09233 — 15 days ago
▲ 672 r/FoodPorn

Had this recently and I'm still thinking about it! The meat was so tender it practically melted, and that char on top added the perfect amount of smokiness. Easily one of the best seafood dishes I've had in a long time. Pair it with a cold drink and it's basically a perfect meal!

u/Tech_Invite09233 — 17 days ago

I've been trying to pick a model for a roleplay / companion-style app I'm building, and honestly the options are kind of ridiculous right now.

At first I was just going off vibes, but then I found a dataset with a bunch of real prompts + model responses and decided to actually sit down and compare them.

Small catch: it didn't have any actual rankings in it. So I ended up just sampling responses across models and judging them myself based on how natural, engaging, and "in character" they felt.
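The sampling step was roughly this — group by model, take a few responses from each, and shuffle so you judge them blind. The record layout here is an assumption about the dataset, not its actual schema:

```python
# Blind-sampling sketch: judge responses without knowing which model
# produced them, to keep the comparison honest. Record keys ("model",
# "prompt", "response") are hypothetical.
import random

def blind_sample(records, per_model=5, seed=0):
    rng = random.Random(seed)
    by_model = {}
    for r in records:
        by_model.setdefault(r["model"], []).append(r)
    picks = []
    for model, rs in by_model.items():
        picks.extend(rng.sample(rs, min(per_model, len(rs))))
    rng.shuffle(picks)
    # Hide the model id from the judging view; keep it for scoring later.
    return [{"prompt": r["prompt"], "response": r["response"], "_key": r["model"]}
            for r in picks]
```

Shuffling before judging matters more than it sounds: if you read all of one model's responses in a row, you start grading the model instead of the response.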

Here's roughly how it shook out:

Best overall
- GPT-4 Turbo: This one just feels the most "complete". It stays engaging, handles creative stuff well, and doesn't fall apart when you push it a bit.
- Claude 2.1: Really solid at sticking to instructions and staying consistent. Only issue is it can feel a bit stiff sometimes depending on the tone you want.

Better than I expected
- Old GPT-4 (0314): Weirdly good at creative prompts. It handled stuff like humor and roleplay better than I expected. But yeah... still does the classic "AI disclaimer" thing which kills immersion.
- Claude Instant 1: If you care about speed or cost, this one is actually decent. Feels more lightweight, but still usable for simple interactions.

Where things drop off
- GPT-3.5: Feels more generic and a bit sloppy. More hallucinations, less personality.
- Vicuna-33B: Less about quality, more about reliability. It had some pretty questionable outputs on sensitive prompts, which is a bit of a red flag.

Big takeaway

The difference isn't just "which one is smarter".
It's more like:
- does it feel natural
- does it stay in character
- does it randomly break immersion

Right now if I had to pick:

- GPT-4 Turbo for overall experience
- Claude 2.1 if you need it to follow rules strictly

u/Tech_Invite09233 — 21 days ago