
https://i.redd.it/mvxsfevkj6zg1.gif
TLDR: make LLM requests in code, review them when you're ready
heavily relies on codecompanion and blink.cmp
https://github.com/khaninm/ainnoying.nvim
Motivation:
I noticed that I primarily interact with LLMs by asking questions about libraries, code patterns, etc., but I never really let AI write my code for me. As a result I have a pile of ChatGPT dialogs disconnected from the codebase, and a number of codecompanion chats with no real way to tie them to places in the code. So I made a small (~180 LOC) plugin to test whether anchoring LLM responses to code, and making them less intrusive to the coding process, improves the experience.
How it works:
- you type a query starting with a user-defined prefix directly in your code
- blink.cmp catches your intent and invokes the relevant strategy
- codecompanion launches a chat with the query in the background and processes it asynchronously
- the query line is automatically commented and highlighted
- whenever you're ready, you go to the query line and open the chat via a keymap
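To make the workflow above concrete, here's what a setup could look like with lazy.nvim. This is a hypothetical sketch: the `prefix` option, the `open_chat` function, and the keymap are my guesses, not the plugin's actual API, so check the repo's README for the real names.

```lua
-- lazy.nvim plugin spec (illustrative only; option and function names are guesses)
{
  "khaninm/ainnoying.nvim",
  dependencies = {
    "olimorris/codecompanion.nvim", -- runs the chats in the background
    "saghen/blink.cmp",             -- catches the query prefix as you type
  },
  opts = {
    -- hypothetical: typing "ai: why does this loop allocate?" in your code
    -- would fire a background query
    prefix = "ai:",
  },
  keys = {
    -- hypothetical keymap: on a query line, open the chat linked to it
    {
      "<leader>ao",
      function() require("ainnoying").open_chat() end,
      desc = "Open linked AI chat",
    },
  },
}
```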
Limitations:
- chats and highlights do not persist between sessions
- editing the query comment may break its link to the chat
- highlights can't be toggled, but you can just delete the line
AI disclosure:
no code was written by AI
Skill disclosure:
I am not a professional developer; this is a proof of concept, not production-level code. Expect bugs and API changes. PRs are welcome
this beautiful color scheme is called `everforest`