u/Cold_Safe5630

Been job hunting for a while and started digging into why the same roles keep showing up everywhere.

Short answer: a lot of companies post roles to their ATS (Greenhouse, Lever, Workday) and never cross-post to LinkedIn or Indeed; smaller teams especially skip it. You can find these manually by searching Google for site:boards.greenhouse.io "data analyst" "remote". It takes time, but the roles tend to have fewer applicants since most people never see them.
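If you want to generate those searches for a few ATS hosts at once instead of typing them out, a tiny sketch like this works (the host list and helper name are my own, not any known tool):

```python
# Build Google "site:" queries for ATS-hosted job boards.
# ATS_HOSTS and dork_queries are made up for this illustration.
ATS_HOSTS = ["boards.greenhouse.io", "jobs.lever.co", "jobs.ashbyhq.com"]

def dork_queries(role, extras=("remote",)):
    """Return one site: query per ATS host, quoting each search term."""
    terms = " ".join(f'"{t}"' for t in (role, *extras))
    return [f"site:{host} {terms}" for host in ATS_HOSTS]

queries = dork_queries("data analyst")
# first entry: site:boards.greenhouse.io "data analyst" "remote"
```

Paste each query into Google one at a time; swapping the role string gets you the same coverage for any title.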

Anyone else been doing this? Curious what other methods people use to find the less competitive postings.

reddit.com
u/Cold_Safe5630 — 9 days ago

Been on the job hunt for remote data roles and got fed up with the same recycled listings, outdated postings, and jobs labeled "remote" that turn out to be hybrid once you read the fine print.

Built datatrack.work — scrapes Greenhouse, Lever, Ashby, and Workday directly every hour so you're seeing roles within hours of them going live. Strictly remote-only, no exceptions. Each listing shows the actual tools required (SQL, Python, dbt, Tableau, etc.) and seniority level upfront so you're not wasting clicks.
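For what it's worth, tagging the tools in a posting doesn't need anything fancy; a keyword pass over the description covers most of it. A minimal sketch, assuming a hand-picked vocabulary (this is my own guess at the approach, not necessarily how the site does it):

```python
import re

# Small hand-picked tool vocabulary; extend as needed.
TOOLS = ["SQL", "Python", "dbt", "Tableau", "Looker", "Airflow"]

def extract_tools(description):
    """Return the tools mentioned in a job description, in vocabulary order."""
    return [
        tool for tool in TOOLS
        if re.search(rf"\b{re.escape(tool)}\b", description, re.IGNORECASE)
    ]

extract_tools("Strong SQL, some dbt, and Tableau dashboards")
# -> ['SQL', 'dbt', 'Tableau']
```

The word-boundary match keeps "R" or "Go"-style short names from false-matching inside other words if you add them later.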

The part I found most useful personally — a lot of the listings come from company career pages that never post to LinkedIn or Indeed at all. Companies that just post to their ATS and call it a day. That's where the less competitive applications are.

It covers data scientist, data analyst, and data engineer roles, plus 9 other data-related titles.

Still building it out, would genuinely appreciate feedback from people actively searching.

u/Cold_Safe5630 — 9 days ago

I just wrapped up the 90-minute technical screen for the Software Engineer 1 role. Since I used a few threads here to prep, I figured I’d pay it forward with a high-level overview of the types of problems I ran into without giving away the secret sauce.

The assessment was hosted on Coderbyte and consisted of three distinct sections:

  • Database Management: One query-based task. It wasn't just a simple "select" statement; you definitely need to be comfortable with joining multiple tables and filtering for specific rankings within a dataset. Make sure you handle empty or null values properly.
  • Mathematical Logic: A pattern-based algorithm. You're given a sequence and need to work out the rule that produces the next value. Brush up on basic combinatorics and famous number sequences; knowing the "why" behind a sequence is faster than trying to brute-force the math.
  • Workflow Automation: A practical task involving version control. This was less about "coding" an algorithm and more about demonstrating you know how to navigate a repo, manage branches, and stage specific files correctly via the command line.
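For the database section, a toy version of that query shape (join two tables, pick the top-ranked row per group, and guard against NULLs) might look like this in SQLite; the schema and data are invented, not the actual assessment question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE employees (id INTEGER, name TEXT, dept_id INTEGER, salary INTEGER);
CREATE TABLE departments (id INTEGER, name TEXT);
INSERT INTO departments VALUES (1, 'Data'), (2, 'Eng');
INSERT INTO employees VALUES
  (1, 'Ana', 1, 90000),
  (2, 'Ben', 1, 80000),
  (3, 'Cy',  2, NULL),
  (4, 'Di',  2, 95000);
""")

# Top earner per department; COALESCE treats a NULL salary as 0
# so it can never rank first.
query = """
SELECT d.name AS dept, e.name AS employee
FROM employees e
JOIN departments d ON d.id = e.dept_id
WHERE COALESCE(e.salary, 0) = (
    SELECT MAX(COALESCE(e2.salary, 0))
    FROM employees e2
    WHERE e2.dept_id = e.dept_id
)
ORDER BY dept;
"""
rows = cur.execute(query).fetchall()
# rows -> [('Data', 'Ana'), ('Eng', 'Di')]
```

The correlated subquery is the "ranking within a dataset" part; without the COALESCE, Cy's NULL row would silently drop out of the comparison instead of being handled explicitly.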
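For the pattern section, the usual trick is to test a few common generators before reaching for anything clever. A rough sketch of that approach (my own helper, not the graded solution):

```python
def next_value(seq):
    """Guess the next term of a sequence by testing common patterns."""
    # Arithmetic: constant difference between consecutive terms.
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    if len(set(diffs)) == 1:
        return seq[-1] + diffs[0]
    # Geometric: constant ratio between consecutive terms.
    if all(x != 0 for x in seq):
        ratios = [b / a for a, b in zip(seq, seq[1:])]
        if len(set(ratios)) == 1:
            return seq[-1] * ratios[0]
    # Fibonacci-style: each term is the sum of the previous two.
    if all(seq[i] == seq[i - 1] + seq[i - 2] for i in range(2, len(seq))):
        return seq[-1] + seq[-2]
    return None  # no recognized pattern

next_value([2, 5, 8, 11])   # -> 14
next_value([1, 1, 2, 3, 5]) # -> 8
```

Spotting which family the sequence belongs to by hand is exactly the "knowing the why" shortcut; the code just formalizes the checks you'd run mentally.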

Timing: 90 minutes felt like enough time if you don't get hung up on the syntax. I'd recommend double-checking your Git commands before hitting submit, as that environment can be a bit finicky if you aren't precise.
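On the Git point, it's worth rehearsing the exact commands in a throwaway repo beforehand. Something like this drill covers branch creation and staging a specific file (file and branch names here are made up for the demo):

```shell
# Scratch repo: create a branch, stage only the file you mean to, commit.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git config user.email "demo@example.com"
git config user.name "Demo"

printf 'print("hi")\n' > app.py
printf 'scratch notes\n' > notes.txt

git checkout -q -b feature/screen-task   # work on a branch, not the default
git add app.py                           # stage the one file, not "git add ."
git commit -q -m "Add app entry point"

git status --short                       # notes.txt stays untracked: "?? notes.txt"
```

The "stage specific files" habit matters because a reflexive `git add .` in that kind of environment can sweep in files the task told you to leave alone.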

Has anyone else gone through the Uptime Crew pipeline recently?
Any idea what the typical turnaround time is for the next steps?

u/Cold_Safe5630 — 16 days ago