u/Fluid-Consequence783


So there’s a company called Linear that announced a major change in its strategy about a month or two ago. I think that even if you don’t use Linear, it’s something you should pay attention to.

Linear is a popular project and task management platform used mainly by developers and product leaders. It’s kind of a Silicon Valley sweetheart. The basic idea is that writing software is a complicated project with a lot of moving parts, and the bottleneck is R&D. So Linear created software to manage R&D work, giving managers the ability to allocate tasks, track progress, see who’s doing what, understand how tasks connect to larger projects, and see whether things are moving according to plan and meeting deadlines.

The interesting part is that the world is changing. The distance between creating a task and executing it has shortened, which means that R&D may not be the bottleneck anymore. And if you don’t need to manage work, you don’t need software to manage work.

So Linear announced a strategic shift from being a platform for managing tasks to focusing on execution of tasks. In the context of product and software development, this means Linear will become a platform to gather more context—bugs, product inputs, history, decisions, architecture—and hold all the relevant information around a specific task. Then it will enable better execution of that task.

So it’s moving from a work management platform to an execution management platform, which I think is really interesting.

The broader question is whether this shift will happen across all industries and tasks, because most of us are people who execute tasks in one way or another. If this change plays out, it raises a bigger question: what are the roles of people in these organizations? What does it mean for individuals? If AI is executing the work, what does that mean for employees and companies?

It’s a fascinating time to witness this. It’ll be interesting to see how Linear evolves as a company and what it means for people in software development—and more broadly.

reddit.com
u/Fluid-Consequence783 — 7 days ago

I tried one of these agent-style products from a large company. I won’t mention names. It’s not exactly a direct competitor of my company, but it operates in a similar space — and the experience was honestly shocking from a security standpoint.

I added it to my WhatsApp. The interaction already felt a bit odd — instead of behaving like a separate entity, it appeared like messages I send to myself, which was just strange.

During setup, it clearly stated that it couldn’t read any of my other WhatsApp conversations and would only respond when I messaged it directly.

A few minutes later, I was in a group chat with some friends. One of them tagged me and asked for my daughter’s name (as part of a conversation about our kids). Suddenly, a message appeared **from me** (!!!) saying something like, “Hey, I’m not really sure, I’ll need to check and get back to you.”

I never wrote that message.

That’s when I realized the AI agent had responded in my name, in a group chat, without my permission, despite explicitly claiming it couldn’t access or respond to those conversations.

Again... This is a product released by a large, multi-billion-dollar public company.

Experiences like this are exactly why I think we still have a long, long way to go when it comes to the basic security of AI agents at work.
