u/Alarming-Wish207

I recently built a platform that aims to help everyone practice data science cases and get hands-on experience. I've been working as a DS for years, mainly in Databricks or Hex notebooks with an AI assistant, so the platform lets you practice with the same tools. This is one of the best cases I've built, and I want to share it with all of you---

Imagine you're testing two homepage banners, Banner A vs. Banner B. Two weeks of traffic, lots of data. Banner A wins by a comfortable margin - cool, ship A, done.
Then, for some reason, you decide to split it by device before pushing the button.
Desktop: B wins
Mobile: B wins

So banner B is better for desktop users, and banner B is better for mobile users. But added up, banner A wins overall? How? The answer is that the test wasn't fair. For whatever reason (caching, ad targeting, just bad luck), banner A got shown to a lot more desktop traffic than banner B did. And desktop users convert way better than mobile users on almost every site. So it turns out A wasn't a better banner; it was a banner that got tested on an easier audience. Fix the traffic mix and B is the right call.

This thing has a name (Simpson's Paradox, if you want to google it), but you don't need the name to spot it. You just need to remember to slice your data before you trust the headline. If you are interested, you can practice the same case at https://www.litmetrics.ai/
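To make the paradox concrete, here's a tiny sketch with made-up numbers (not from any real test): banner A gets mostly desktop traffic, banner B gets mostly mobile, and the aggregate rate flips against the per-segment winner.

```python
# Illustrative (invented) numbers showing Simpson's Paradox in an A/B test.
# banner -> {device: (visitors, conversions)}
data = {
    "A": {"desktop": (800, 80), "mobile": (200, 4)},   # A skews desktop
    "B": {"desktop": (200, 22), "mobile": (800, 24)},  # B skews mobile
}

def rate(visitors, conversions):
    return conversions / visitors

for banner, segments in data.items():
    total_visitors = sum(v for v, _ in segments.values())
    total_conversions = sum(c for _, c in segments.values())
    per_segment = {dev: rate(v, c) for dev, (v, c) in segments.items()}
    overall = rate(total_visitors, total_conversions)
    print(banner, per_segment, "overall:", round(overall, 3))

# B beats A within every segment (desktop 11% > 10%, mobile 3% > 2%),
# yet A wins in aggregate (8.4% > 4.6%) purely because A was shown
# to far more high-converting desktop traffic.
```

Slicing by device is exactly the step that exposes it: the per-segment rates tell one story, the unweighted aggregate tells the opposite one.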

reddit.com
u/Alarming-Wish207 — 7 days ago
▲ 1 r/datascience+1 crossposts

Been noticing new DS hiring products like Litmetrics.ai lately, which seem much more focused on real datasets and messy business cases than the classic coding-test format.

A lot of DS work today is more like end-to-end analytical judgment with AI in the loop. That feels like a different hiring target than the classic CodeSignal / HackerRank screening - pretty sure most DS folks have used them in interviews.

Curious what other people think. Is DS hiring actually changing at the assessment layer - toward whether candidates can work through a real business problem - or is putting AI language on top of the classic coding test & screening process still the best way?

u/Alarming-Wish207 — 8 days ago

So here's the story: another team in my company opened an associate-level DS role last week. We got 300+ applications, and somehow 30+ of them were senior-level candidates. Not fake senior either: actually senior, all with 10+ YOE. One of them even had a master's from Harvard.

I knew the market was bad, but seeing that kind of applicant pile up for an associate-level role was still kind of unbelievable.

Feels like a lot of experienced people are applying down-level after being laid off now just to stay employed. Which is fair enough, but also DAMN.

Curious whether other people & teams are seeing the same thing, or is this just a weird sample on our side?

u/Alarming-Wish207 — 13 days ago
▲ 7 r/productdesign+1 crossposts

The more I use Claude Design, the more it feels like a product that shipped before it was ready.

The design system and design files are basically the same thing: same structure, same logic. I still don’t understand why I need to create a separate design file instead of just building pages inside the system. It feels like someone added an extra step just to feel productive. As for the design quality, let’s just say it’s not beating Codex anytime soon. And the usage limits are just weirdly stingy for something that’s still this early.

Curious — is anyone actually using Claude Design for real work? Or are we all just beta testers with opinions?

https://preview.redd.it/z4qghdjggmxg1.png?width=710&format=png&auto=webp&s=911b328b4cfb1c0dde3cf28bb41ebdddfe479304

u/Alarming-Wish207 — 17 days ago

Honestly I'd say yes from my point of view.

I’m not saying this from some anti-AI angle. I mean I use it all the time and my team uses it all the time. At this point pretending otherwise would be dumb.

But I have noticed something kinda unsettling in myself for sure. I used to be able to grind through problems and data so cleanly, and now if I don't immediately reach for GPT (or Claude), there's this weird brain lag. Like the knowledge is still in there, but it's behind layers of dust. It feels like I'm weirdly naked without AI. That's the part that gets me.

AI is insanely good at getting you unstuck fast, which is great... until you realize maybe you’re not actually getting unstuck, maybe you’re just getting used to never sitting in the hard part long enough to build your muscle.

And yeah, we are definitely "getting the work done." The SQL got written, the analysis got drafted, the deck got made, blah blah. But are we actually getting sharper as analysts, or just getting really good at steering GPT?

Again, I'm not dooming here. I genuinely think AI is a huge advantage if you use it well. But I do think there's a real risk of becoming the kind of analyst who can ship fast with AI and feels weirdly naked without it, LOL.

Curious if you guys have felt this too...

u/Alarming-Wish207 — 20 days ago