u/AcanthisittaHorror86

Karachi vs Lahore vs Islamabad who actually dresses better
▲ 5 r/MENSTUFF_+3 crossposts

okay so i've been to all three cities and the difference is honestly so funny. karachi guys look like they just woke up but somehow still pull it off idk how. lahore guys are way too dressed up for no reason bro it's a random tuesday and you're out here looking like a whole photoshoot. islamabad guys are clean but they're literally all wearing the same thing every single time.

anyway i'm curious what you guys think. which city actually has the best style? be honest, don't just defend your city because you're from there

u/AcanthisittaHorror86 — 3 days ago

Welcome to r/MenStuff — real talk between men. No BS.

A space where guys can actually talk about the things that come up in life.

Grooming, fitness, style, health, relationships, work, money, gear — the stuff you Google at 1am because you don't know who else to ask.

What this place is for:

  • Honest questions, honest answers
  • Real experiences, not influencer takes
  • Men helping men figure things out

What this place isn't:

  • An echo chamber
  • A place to perform masculinity

No dumb rules. Just don't be a d*ck.

Jump in, ask something, answer something.

What's one thing you genuinely wish more men talked about openly?

u/AcanthisittaHorror86 — 3 days ago
▲ 0 r/legaltech+2 crossposts

Why enterprise legal teams quietly won't send their contracts to a third party AI tool even if they signed the NDA

This one doesn't get talked about enough in legal tech circles.

I keep having the same conversation with in house legal teams. They've seen the demos. The tool looks good. The accuracy is reasonable. The workflow makes sense. And then procurement stalls for six months and the deal eventually, quietly dies.

When you dig into why, the answer is almost never about the AI itself. It's about where the data goes.

Contracts are not like other business documents. An MSA with a Fortune 500 customer contains your pricing, your liability exposure, your IP terms, your indemnification limits. An NDA contains the identities of who you're talking to and what you're exploring together. A supply agreement contains your supplier relationships and your cost structure. Taken together, your contract portfolio is basically a map of your entire business.

And legal teams know this better than anyone because they're the ones who negotiated those terms. They're not being paranoid. They're being exactly as careful as their job requires them to be.

The standard response from vendors is: we're SOC 2 compliant, we don't train on your data, here's our DPA. And that's fine as far as it goes. But it doesn't answer the actual question, which is what happens if there's a breach? What happens if your vendor gets acquired and the new parent company has different data practices? What happens if a subpoena lands on your vendor's servers and your contracts are sitting there?

Legal teams have seen enough to know that the risk is not theoretical.

The model that actually addresses this isn't better SaaS security. It's deploying the AI inside the customer's own cloud environment entirely. No data leaves. No third party servers. No vendor to worry about. The contracts stay in your environment the same way they always have, the AI just runs there too. You even bring your own LLM, whether that's Azure OpenAI or AWS Bedrock. The vendor never touches your data at any point.
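As a rough sketch, the deployment posture described above could be captured in a config like this. Every field name here is hypothetical, invented for illustration; it is not any real vendor's schema:

```python
# Hypothetical deployment config for a bring-your-own-LLM setup that runs
# entirely inside the customer's cloud. Field names are illustrative,
# not any real vendor's schema.
DEPLOYMENT = {
    "mode": "customer_cloud",         # engine runs in the customer's environment
    "llm_backend": {
        "provider": "azure_openai",   # or "aws_bedrock"
        "endpoint": "https://<customer-tenant>.openai.azure.com",
        "auth": "managed_identity",   # no vendor-held credentials
    },
    "egress": {
        "allow_external": False,      # contract text never leaves the VPC
    },
}
```

The point of the sketch is the last flag: the whole value proposition is that nothing in the pipeline has an external egress path.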

The tradeoff is it's not a plug and play SaaS signup. It requires implementation. But for enterprise legal teams handling sensitive commercial contracts that's not a bug, that's the whole point.

Curious whether others have run into this. Is data sovereignty a real blocker in your experience or do legal teams eventually get comfortable once the security docs check out?

u/AcanthisittaHorror86 — 4 days ago

Been thinking about this concept and wanted honest opinions.

The idea is a dedicated online store for men in Pakistan. And when I say dedicated I mean completely. No women's section tucked in the corner. No "also available for her." No mixed lifestyle clutter. You open the site and everything on it is for men. Full stop.

Grooming, beard care, face wash, hair products. Tech accessories like earbuds, trimmers, power banks. Wallets, bags, sunglasses, gym gear. And this is the part I think makes it interesting, hunting and fishing gear as well. Rods, reels, tackle, hunting knives, camping tools, field equipment. The stuff that Pakistani men actually do on weekends but nobody online is selling it properly in one place.

The frustration behind the idea is this. Every store in Pakistan tries to sell to everyone. Daraz is a bazaar. You search for a fishing rod and you're wading through listings for hair extensions and cooking pots. You look for a hunting knife and you get kitchen knives and beauty scissors. There is no place where a Pakistani man opens the homepage and immediately feels like this place gets him.

The question is whether that actually matters to Pakistani men or not.

Would men care that a store is built exclusively for them? Or are they just looking for the cheapest price and fastest delivery regardless of where it comes from?

Is there a real market in Pakistan where men aged 18 to 45 are actively spending on themselves and their hobbies and would appreciate a curated experience that actually speaks their language?

Is the hunting and fishing angle something that could genuinely set this apart or is that too niche inside an already niche idea?

Or does the whole men only concept sound good on paper but mean nothing when someone is deciding where to click buy?

Brutal honest takes only please.

u/AcanthisittaHorror86 — 8 days ago
▲ 2 r/LawFirm+1 crossposts

Genuinely curious because I keep seeing a gap between what vendors show in demos and what actually gets used day to day.

I work on the tooling side so I get to talk to a lot of legal teams, and what I hear repeatedly is something like: yeah, we tried it, it was impressive in the pitch, then it sat unused after three months. And when I ask why, the answers are usually the same few things.

It required lawyers to work outside of Word. Nobody wanted to do that.

Or the AI flagged everything as risky so reviewers stopped trusting it.

Or the suggestions it gave were generic and didn't match how their company actually thinks about risk.

So I'm curious what the actual picture looks like from people inside legal teams.

Are you using AI for contract review in any meaningful way right now? Not a pilot, not something being evaluated, something that's actually in the workflow.

If yes, what does that look like practically? Which contract types? How much of the review does the AI handle vs a human? Do lawyers trust the output or do they just redo the work anyway?

If no, what killed it? Was it adoption, accuracy, workflow friction, IT security concerns, something else?

I'm also curious about the data privacy angle because I hear this come up a lot, especially in regulated industries. A lot of teams are uncomfortable sending contracts to a third party SaaS even with NDAs in place. Is that a real blocker for your team or is it more of a theoretical concern that doesn't actually stop procurement?

No agenda here, not selling anything, just trying to understand what's real versus what's still aspirational in this space. The vendor perspective is pretty noisy right now and it's hard to get a straight answer from people actually doing the work.

u/AcanthisittaHorror86 — 8 days ago
▲ 37 r/legaltech+1 crossposts

Bit of a long one but hopefully useful for anyone building in this space or evaluating tools.

Background: I've been working on AI assisted contract due diligence for about a year now. Not as a lawyer, more on the engineering and product side, working closely with in house legal teams. What I'm sharing here isn't theory, it's stuff we got wrong first and fixed later.

Why the obvious approach breaks down

The instinct when you first start is to give the model a checklist. Flag unlimited liability. Flag warranty terms over 3 years. Flag missing governing law. Prompt engineer your way to a solution.

Works okay on demo contracts. Falls apart on real ones.

The problem is the model doesn't know why something is a risk. So when it hits an edge case, a clause that's technically fine in isolation but problematic given the rest of the contract, it either misses it or flags it without useful context. A lawyer reading "flagged: warranty clause" can't do anything with that. They need to know whether this specific clause in this specific deal is actually a problem for their business.

Generic AI treats all contracts the same. Real contracts are not the same.

The shift that actually helped: teaching the WHY

We restructured how we encoded legal knowledge. Instead of a flat list of rules, every rule now has three components.

Definition: the precise linguistic pattern that triggers a flag. Not "long warranty" but "warranty duration stated or implied to exceed 36 months." Specific enough that the model can pattern match reliably.

Rationale: the business logic behind the rule. Why does warranty duration matter past 36 months? Because it creates open ended exposure for latent defects that surface after the normal product lifecycle. This isn't for the lawyer, it's for the model. When the agent understands why a rule exists, its output rationale actually tracks back to real business risk instead of generic legal commentary.

Examples: one or two actual sentence excerpts from real contracts that represent the pattern. This was the biggest unlock for us. Abstract rule definitions are hard for models to apply consistently. Concrete linguistic examples are much easier to match against.
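A minimal sketch of what one rulebook entry could look like under that three-part structure. The class and field names are my own, not the author's actual system, and the example excerpt is invented:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewRule:
    """One rulebook entry: definition + rationale + concrete examples."""
    rule_id: str
    definition: str                 # precise linguistic trigger for the flag
    rationale: str                  # business logic: why the pattern is a risk
    examples: list = field(default_factory=list)  # 1-2 real contract excerpts

    def to_prompt(self) -> str:
        """Render the rule the way it might be handed to the model."""
        shots = "\n".join(f"- {e}" for e in self.examples)
        return (
            f"RULE {self.rule_id}\n"
            f"Trigger: {self.definition}\n"
            f"Why it matters: {self.rationale}\n"
            f"Example language:\n{shots}"
        )

warranty_rule = ReviewRule(
    rule_id="WARR-001",
    definition="Warranty duration stated or implied to exceed 36 months.",
    rationale="Creates open ended exposure for latent defects that surface "
              "after the normal product lifecycle.",
    examples=["Seller warrants the Products for five (5) years from delivery."],
)
```

Note that the rationale ships to the model alongside the trigger, which is the whole point: the model sees the why, not just the pattern.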

When you structure it this way the model stops pattern matching and starts reasoning. Small distinction in theory, massive difference in output quality.

Ditch binary classification

Good clause bad clause is not how legal risk actually works.

We moved to three tiers. Green: favors your business, low risk, move on. Orange: acceptable under certain conditions, needs a human decision. Red: non-negotiable, so you push back, redline, or walk.

The middle tier is where most of the interesting work lives. A 2x liability cap might be fine on a $500K contract and completely unacceptable on a $10M one. The rule has to encode that conditionality or the model can't make a meaningful call. It'll just flag everything orange and create more work than it saves.
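That conditionality is easy to show with a toy rule. The dollar thresholds below are invented for illustration, not from the post:

```python
from enum import Enum

class Tier(Enum):
    GREEN = "favors us, low risk, move on"
    ORANGE = "acceptable under conditions, human decides"
    RED = "non-negotiable, push back or walk"

def liability_cap_tier(cap_multiple: float, contract_value: float) -> Tier:
    """Tier a liability cap by absolute exposure, not the multiple alone.
    Thresholds are made up for illustration."""
    exposure = cap_multiple * contract_value
    if exposure <= 1_000_000:
        return Tier.GREEN
    if exposure <= 10_000_000:
        return Tier.ORANGE
    return Tier.RED
```

Under these toy thresholds a 2x cap on a $500K contract comes out green, while the same 2x cap on a $10M contract comes out red: identical clause language, different call.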

The multi agent thing nobody talks about

Single model reviewing an entire contract hits a ceiling pretty fast. The context window problem is obvious but the less obvious problem is specialization.

One model can't be an expert in warranty law, IP indemnification, data privacy obligations, force majeure, and governing law simultaneously, at least not with the precision you need for real due diligence. It knows a bit about everything. You need something that knows a lot about one thing.

We ended up splitting into domain specific agents. Each one only analyzes clauses in its category. Warranty agent looks at warranty clauses. IP agent looks at IP clauses. And so on.

Then you need an orchestration layer. Because a single paragraph in a contract can trigger three different agents at once and you can't flag everything at the same severity. The orchestrator compares what each agent found, looks at their confidence levels, and decides what to surface. That conflict resolution step is unglamorous and took longer to get right than anything else in the system.
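A stripped-down version of that conflict-resolution step might look like the following. The severity/confidence scheme is my assumption; a real orchestrator would do much more than pick a winner per clause:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str         # e.g. "warranty", "ip", "privacy"
    clause_id: str
    severity: int      # 1 = green, 2 = orange, 3 = red
    confidence: float  # agent's self-reported confidence, 0..1

def resolve(findings, min_conf=0.5):
    """Per clause, surface the single highest-severity finding that clears
    a confidence floor; break severity ties on confidence."""
    surfaced = {}
    for f in findings:
        if f.confidence < min_conf:
            continue  # drop low-confidence noise before it reaches a lawyer
        best = surfaced.get(f.clause_id)
        if best is None or (f.severity, f.confidence) > (best.severity, best.confidence):
            surfaced[f.clause_id] = f
    return surfaced
```

So if one paragraph triggers the warranty agent at red/0.8, the IP agent at orange/0.9, and the privacy agent at orange/0.3, the lawyer sees one red flag from the warranty agent, not three flags at mixed severities.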

What's still genuinely hard

Confidence calibration. Getting an agent to say "I'm 70% confident this is high risk" in a way that's consistent and meaningful across different contract types is still not solved cleanly. A model's internal uncertainty and real world legal risk don't map to each other neatly.

Rulebook maintenance. As business conditions change, as case law shifts, the knowledge base needs updating. This is a human process. Anyone claiming their system keeps itself current automatically is exaggerating.

Getting lawyers to trust it. This is maybe the hardest one. The output has to be explainable: not just "this clause is risky" but here's why, here's the specific language, here's what we'd suggest instead. Without that, adoption stalls regardless of how accurate the model is.

Curious whether anyone here has experimented with graph based knowledge representations instead of flat vector search for legal reasoning. Been thinking about whether relationship modeling between clauses changes anything meaningfully.

u/AcanthisittaHorror86 — 8 days ago