u/Alces_

At least once a week, I see an agent-generated comment on a bug that is multiple paragraphs long, with section headings like “1. The Catalyst 2. The Perfect Storm”, and the dev who posted it not doing ANY further investigation.

u/Alces_ — 9 days ago

There’s a lot of discussion in this community regarding LLMs, both negative and positive. In ‘The Book of Why’, AI is placed on the first rung of the Ladder of Causation: Association (“What if I see?”). The other rungs are Intervention (“What if I do?”) and Counterfactuals (“What if I had done?”). In my opinion, LLMs will always be stuck on rung one because of how they’re built (deep neural networks trained on input/output pairs).

Working with LLMs for software engineering, there are a lot of great use cases where they help. One of them is debugging. Something I’ll often see is the model correlating a certain PR or change with a bug just because the timelines match up and the dependencies line up. However, by providing the model with a log showing the new change couldn’t possibly be the cause (the flag is turned off, or otherwise), you can steer it away from that false correlation.
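To make the flag example concrete, here’s a minimal sketch of ruling a suspect change out by checking its feature-flag state in the logs. The flag name, log format, and helper function are all invented for illustration, not from any real system:

```python
# Hypothetical example: the timeline correlates a "new_cache_layer" PR with
# a bug, but the logs show its feature flag was off, so it can't be the cause.
# Flag names and log format are made up for this sketch.

def flag_enabled_at_incident(log_lines, flag_name):
    """Return the last recorded state of a feature flag in the logs."""
    state = False
    for line in log_lines:
        if f"flag={flag_name}" in line:
            state = "enabled=true" in line
    return state

logs = [
    "2024-05-01T10:00:00 flag=new_cache_layer enabled=false",
    "2024-05-01T10:05:00 ERROR request timed out",
]

# False: the flag was off when the bug occurred, so the PR behind it
# couldn't have run -- eliminate it from suspicion despite the timing.
print(flag_enabled_at_incident(logs, "new_cache_layer"))
```

Feeding the model that one log line (instead of just the PR timeline) is the kind of “perfect input” that moves it off the spurious correlation.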

This is one narrow case, but I think the same trend shows up across AI use cases: we try to find the perfect input to get the desired output, or to steer the model. We will always need to provide that input, so (in my opinion) LLMs will not eliminate our jobs, but they will definitely change them. This might already be common sense, and if you’re reading this you might say “duh”, but I’m just writing down some musings and hopefully making the doom posters a little less scared.

I’m curious what everyone else thinks.

u/Alces_ — 17 days ago