u/DUmbChAd69

Recently came across a Kickstarter project called Focusaur. It’s basically trying to move focus timers and habit tracking away from the phone itself. Their idea of adding physical friction instead of relying on yet another productivity app sounds interesting. As someone who finds it pretty hard to focus, I feel like this could actually be helpful, so I decided to back it for now. Curious if anyone else here has looked into it or backed it already?

u/DUmbChAd69 — 6 days ago

The slow and steady progress of my natural nail journey. 🌿 I’ve been babying my nails for months, and I’m finally starting to see the length and strength I’ve been aiming for! My goal has always been to maintain a healthy free edge while keeping my nail beds nourished.

Realized that for my nails to survive daily life without snapping, consistency with oiling is everything. I spent way too much time researching a routine that doesn't compromise on ethics. I finally settled on a PETA-certified and USDA organic oil (Apuree) because it’s one of the few that actually sinks in without making my hands a greasy mess—which is a lifesaver for someone who spends all day on a keyboard.

Super happy with the clarity of the nail plates today. Does anyone else find that keeping the proximal nail fold hydrated actually helps the nail grow out faster, or is it just me? 💅✨

u/DUmbChAd69 — 8 days ago

We've spent the past six months tracking how four AI engines — ChatGPT, Perplexity, Gemini, Google AI Overviews — actually cite agencies and service providers. Forty-ish prompts a month, mostly variations of "best GEO agency" or "AEO agency for B2B services."

Sharing the patterns that surprised us most. These are directional rather than precise; the methodology is monthly tracking, not a controlled experiment.
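To make "monthly tracking" concrete, the core tally is just: for each engine, what fraction of that month's prompts cited each agency at least once. A minimal sketch (the data shapes and agency names here are illustrative, not our actual pipeline):

```python
from collections import Counter

# Hypothetical monthly results: engine -> list of (prompt, agencies cited).
# Real tracking data is messier; this only shows the shape of the tally.
results = {
    "ChatGPT":    [("best GEO agency", ["AgencyA", "AgencyB"]),
                   ("AEO agency for B2B services", ["AgencyA"])],
    "Perplexity": [("best GEO agency", ["AgencyC"]),
                   ("AEO agency for B2B services", ["AgencyA", "AgencyC"])],
}

def citation_share(engine_results):
    """Fraction of prompts on which each agency was cited at least once."""
    counts = Counter()
    for _prompt, agencies in engine_results:
        for agency in set(agencies):  # dedupe within a single answer
            counts[agency] += 1
    total = len(engine_results)
    return {agency: n / total for agency, n in counts.items()}

for engine, rows in results.items():
    print(engine, citation_share(rows))
```

Comparing these per-engine shares month over month is what the observations below are based on.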

What we keep noticing

Engines disagree more than we expected. Same prompt, three engines, three different shortlists. Not just reranking — different agencies entirely. We expected convergence over time. Six months in, we haven't seen it.
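One way to put a number on that disagreement is Jaccard overlap between shortlists: same prompt, two engines, |intersection| / |union| of the agencies named. A sketch with made-up shortlists:

```python
def jaccard(a, b):
    """Overlap between two shortlists: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

# Hypothetical shortlists for the same prompt from three engines.
chatgpt    = ["AgencyA", "AgencyB", "AgencyC"]
perplexity = ["AgencyC", "AgencyD", "AgencyE"]
gemini     = ["AgencyF", "AgencyG", "AgencyH"]

print(jaccard(chatgpt, perplexity))  # one shared agency out of five total
print(jaccard(chatgpt, gemini))      # disjoint shortlists score 0.0
```

"Different agencies entirely" is the 0.0 case; what we expected, and haven't seen, is these numbers drifting upward over time.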

Brand entity disambiguation matters before content quality. Companies whose name collides with a common noun or another brand often just don't surface, regardless of how much they publish. The fastest fix is the entity layer: Wikidata, plus sameAs connections to the LinkedIn Company Page, Crunchbase, and Wikipedia where eligible. Agencies that cleaned this up first moved faster than agencies that led with content.
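For anyone unfamiliar with the entity layer, this is the kind of schema.org Organization markup with sameAs links we mean. All names, URLs, and the Wikidata ID below are placeholders:

```python
import json

# Hypothetical agency. The point is that every linked profile uses the same
# canonical name, and the Wikidata item links back to the same profiles.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Agency",
    "url": "https://www.example-agency.com",
    "sameAs": [
        "https://www.wikidata.org/wiki/Q00000000",  # placeholder item ID
        "https://www.linkedin.com/company/example-agency",
        "https://www.crunchbase.com/organization/example-agency",
    ],
}

# Emit the JSON-LD you'd embed in a <script type="application/ld+json"> tag.
print(json.dumps(org, indent=2))
```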

Third-party verifiable certifications carry weight on values-aligned queries. We've seen lower-DR sites with B Corp listings outrank higher-DR sites in Perplexity for "ethical agency" type queries. The mechanism seems to be that AI retrieval treats verified external attributes as higher-confidence signals than self-claims.

Static "best of 2024" content decays in roughly two months. Without refresh, AI engines deweight stale pages noticeably. Recency is real and faster than we expected.

Reddit threads show up disproportionately in Perplexity citations. It's materially more often than Reddit's share of the indexed web would predict. Two implications: authentic forum presence compounds over time, and astroturf appears to get detected and tanks the parent brand. We haven't isolated the exact mechanism, but the pattern is consistent enough that we've changed our recommendations.

Methodology transparency appears to be a citation signal. Agencies that publish how they work — process pages, audit frameworks, public methodology docs — seem to surface more often on "how does X work" queries than agencies with gated or proprietary-only positioning. We've started recommending clients add a public methodology page for this reason.

What didn't work

  • Buying directory links — AI engines deweight thin profiles
  • Burst publishing (20 pieces in a quarter) — looks unnatural, and citation share didn't move
  • Single-engine optimization — ChatGPT and Perplexity rarely converge to the same playbook, so optimizing for one leaves the other flat

What we'd do differently

Start with entity hygiene before content production. Wikidata + LinkedIn Company Page + Crunchbase + sameAs all need to be consistent before GEO/AEO content does much. We got this order wrong early on.
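"Consistent" here is checkable: the name string should be identical across every profile before you invest in content. A trivial sketch of that check, with hypothetical records:

```python
# Hypothetical entity profiles pulled from each platform.
profiles = {
    "wikidata":   {"name": "Example Agency"},
    "linkedin":   {"name": "Example Agency"},
    "crunchbase": {"name": "Example Agency Ltd"},  # the mismatch to catch
}

def distinct_names(profiles):
    """Return the set of distinct name strings across entity profiles."""
    return {p["name"] for p in profiles.values()}

names = distinct_names(profiles)
if len(names) > 1:
    print("entity mismatch:", names)
```

In our experience the mismatches are usually exactly this boring: a legal suffix on one platform, a shortened brand name on another.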

Don't optimize for one engine. Diversify across four. ChatGPT favors editorial depth; Perplexity favors Reddit and Quora; Gemini leans on Google's index; Google AI Overviews barely triggers on agency queries yet — but that will change.

Treat your methodology as public infrastructure, not IP. The agencies that are being cited are the ones whose process can be verified externally. The ones that aren't cited mostly have nothing findable that confirms how they work.

Curious whether others tracking this are seeing similar patterns. The category is still small enough that comparing notes is probably more useful than each agency publishing parallel content in isolation.

u/DUmbChAd69 — 12 days ago
▲ 28 r/dji

Saw this Avata 360 shot of a wind turbine and the whole thing just keeps spinning. Had to look away for a second, ngl. I get that it shoots 360, but how do you reframe it like this in post? Like keeping it locked on the turbine while everything else rotates around it. Is that just done in the app, or do you need something else?

u/DUmbChAd69 — 13 days ago

I process a lot of dense information (literature reviews, market research, 40-page analysis docs), and having to present these findings visually is usually the worst part of my week. The thinking and the researching part is fine. But condensing a nuanced, 5-part argument into a 15-slide deck while maintaining the actual logic flow? Exhausting.

I tested a bunch of those trendy AI slide generators. They make everything look like an Apple keynote, which is cool, but they are terrible at handling actual depth. I’d feed them a detailed methodology section, and they would literally delete half the steps and replace them with a giant stock photo of a mountain. Completely useless for a serious presentation.

Tried Dokie AI a few weeks ago after seeing it mentioned somewhere. It actually feels like it was built for document parsing rather than just making pretty pictures. The biggest difference is that it respects the input. If I paste in a 10-step process, it actually gives me a layout that accommodates all 10 steps, rather than summarizing it into 3 bullet points just to make the slide look clean.

My current process is basically:

  • dump the raw abstract, methodology, and key findings
  • generate the presentation
  • use the generated structure as my baseline
  • manually fix the citations and tweak the wording
  • export

It’s not flawless by any means. Sometimes the layouts it picks are a bit repetitive, and obviously it won’t do the critical thinking for you. But for turning a massive wall of text into a logically paced presentation, it saves me hours of manual formatting.

Is anyone else using AI for data-heavy or text-heavy decks? Do you trust it with the structure, or are you just using it for design inspiration?

u/DUmbChAd69 — 15 days ago
▲ 282 r/kanpur+3 crossposts

Was taking an online demo class for 8th-class mathematics.

Got harassed by the Kanpur kid. He initially asked if I knew science; after I said yes, he asked about reproduction and di@k and testicles and how it works. I tried avoiding it, but he doubled down and asked again, so I said di@k is something used for reproduction and creating new humans, of course in Hindi, "LING". He then almost flashed me, so I yelled at him to fix the camera. Then he said to me, "You look very cute and I want a kiss from you," and I said it's all recorded (which is true, btw). Then he started to act like it was someone to the side saying this and tried to change his voice. So I ended the class. Now I am simply disgusted.

u/Able-Two-6993 — 15 days ago