u/newspupko

Switched from search filters to behavioral signals 4 months ago. Here's what the data actually looked like

Most B2B outreach fails because you're reaching people who match a filter but haven't done anything to suggest they care. I swapped search-based sourcing for behavioral signals four months ago and the lift in reply rates dwarfed any prompt or copy test I'd run all year.

Worth unpacking why this is an agent-relevant problem and not just a GTM one. A search filter is a static query: it returns a list that's identical whether you run it today or in six weeks. A behavioral signal is a real-time event: "this person just did something." Agents are genuinely good at the second type of work and genuinely mediocre at the first. Point an agent at a static list and you've built a mail-merge with extra steps. Point it at a live stream of signals and it actually starts behaving like an agent — deciding what's worth acting on, how quickly, and with what context.

The five signal sources that earned their place in my stack:

LinkedIn event attendees in your niche — someone blocked 60 minutes on your exact problem space. That's telegraphed intent and the conversion rate reflects it.

Members of small, specific groups — not the 400k-member generic ones, the 1,200-person group for the exact sub-category you sell into.

Alumni matching your ICP — shared school, company, or program. The opener writes itself.

People engaging with competitor content — if they're commenting thoughtfully on a competitor's posts, they're in-market right now. Underweighted this for too long.

Profile viewers — warm by definition. Low volume, highest per-contact conversion rate of the five.
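To make those five sources comparable, I normalize everything into one event schema before the agent ever sees it. A minimal sketch of that shape — the weights, names, and decay are illustrative guesses that loosely mirror the conversion ordering above, not pulled from any real tool:

```python
import time
from dataclasses import dataclass

# Hypothetical per-source weights reflecting the rough conversion
# ordering described above (profile viewers highest, big groups lowest).
SOURCE_WEIGHT = {
    "profile_view": 1.0,
    "event_attendee": 0.8,
    "competitor_engagement": 0.7,
    "alumni_match": 0.5,
    "group_member": 0.4,
}

@dataclass
class Signal:
    source: str          # one of the SOURCE_WEIGHT keys
    person_id: str
    observed_at: float   # unix timestamp of the behavioral event
    context: str         # e.g. event name, group name, post URL

    def priority(self) -> float:
        # Weight by source, decay by age: a week-old signal is worth
        # far less than one from this morning.
        age_days = (time.time() - self.observed_at) / 86400
        return SOURCE_WEIGHT.get(self.source, 0.1) / (1 + age_days)
```

The point of the shared schema is that every downstream decision only has to reason about one event type, regardless of which of the five sources produced it.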

The architectural lesson was the interesting part. Behavioral signals make the agent useful because they collapse the decision surface into something narrow enough that a model call can meaningfully contribute — does this specific event match our ICP, what's the right angle for this specific trigger, is this worth routing now or waiting. Wrapping an agent around a static list forces it to guess; wrapping it around a signal stream lets it decide.
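That narrow decision surface can be sketched as a thin routing layer where the only model call is a per-event fit-plus-angle judgment. Everything here is illustrative — `classify_fit` is a stub standing in for the LLM call, and the routing rules are assumptions of mine, not a real system's logic:

```python
def classify_fit(signal_text: str) -> tuple[bool, str]:
    """Stub for the one narrow model call: does this specific event
    match our ICP, and what's the right angle for this trigger?
    A real system would prompt an LLM here; this stand-in uses a
    trivial keyword check so the sketch runs on its own."""
    if "pricing" in signal_text.lower():
        return True, "lead with cost-of-inaction angle"
    return False, ""

def route(signal: dict) -> str:
    """Per-event decision: act now, queue for later, or skip."""
    fits, angle = classify_fit(signal["context"])
    if not fits:
        return "skip"
    # Time-sensitive sources get routed immediately; the rest queue
    # so outreach volume stays smooth.
    if signal["source"] in {"profile_view", "competitor_engagement"}:
        return f"act_now: {angle}"
    return f"queue: {angle}"
```

Contrast this with the static-list case: there is no per-event context for the model to judge, so the same function degenerates into mail-merge with a classifier bolted on.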

The piece most people underbuild is the engagement layer on top of the signal layer. Sourcing is step one. Staying present in the feeds of those same people between the first touch and the moment they reply is step two, and it's where most systems quietly fail. I've been running Liseller for that — it handles contextual commenting on target accounts' posts through the official LinkedIn API, so the people surfaced by the signal layer actually see us consistently without someone burning half their day in the feed manually. The agent does the decisioning, Liseller handles the recurring surface-level presence, and the pipeline compounds because neither piece is fighting the other.

Curious from others building agent systems for outbound: what signal sources are you feeding in that I haven't listed, and how are you handling the event-stream-to-action plumbing without turning it into a full-time maintenance job?

u/newspupko — 2 days ago

contextual anchoring in LLMs is weirder than I thought

so I've been down a rabbit hole on this lately, specifically around why models seem to lock onto early context and then kind of drift from anything you add later. there's actually a name for the underlying mechanism - attention sinks - where the model over-attends to the very start of a sequence (like the BOS token) and that ends up pulling generation away from your actual input. I'd noticed this in longer content workflows but didn't realise it was this structural.

what caught my attention recently is that this problem hasn't gone away even as context windows have exploded - we're talking 400K to 1M tokens in some current models - which you'd think would make anchoring less of an issue, but apparently not. there's active research on training-free fixes that work by injecting meaningful context into that BOS token position instead of letting it just passively absorb attention. one approach getting traction is AnchorAttention, which uses anchor tokens to stabilise attention across long sequences. the directional gains on long-context benchmarks look promising, though I'd want to see more real-world QA results before getting too excited. there's also separate work on prompt ordering strategies for dialogue tasks, where just changing where you place key info produced measurable improvements - which honestly makes me rethink how I structure long prompts for content stuff.

the part I find most interesting is that stronger models apparently show this anchoring bias more consistently than weaker ones, not less. so scaling alone doesn't fix it - it might even entrench it. anyway, curious if anyone here has found prompt-level workarounds that actually help, or if you reckon this is mostly something that needs solving at the architecture level
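on the prompt-ordering point, the cheapest experiment I can think of is building otherwise-identical prompt variants that only move the key instruction around, then diffing the model outputs. a toy harness for generating those variants (no model call, all names are mine):

```python
def build_prompt(key_info: str, filler: list[str], position: str) -> str:
    """Place the critical instruction at the start, middle, or end of
    an otherwise identical prompt, so position is the only variable."""
    parts = list(filler)
    idx = {"start": 0, "middle": len(parts) // 2, "end": len(parts)}[position]
    parts.insert(idx, key_info)
    return "\n\n".join(parts)

# Identical filler sections, one moving key instruction.
filler = [f"Background section {i}: ..." for i in range(6)]
variants = {
    pos: build_prompt("KEY: answer in French.", filler, pos)
    for pos in ("start", "middle", "end")
}
```

if the attention-sink story holds, the "start" variant should be followed most reliably, and any gap between "start" and "end" on the same task is a rough measure of how much anchoring your model of choice actually exhibits.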

u/newspupko — 3 days ago