u/TermKey7269

need IOC notes for neet 😭

hey guys, does anyone have proper inorganic chemistry notes for NEET?

not really looking for those teacher ppt slides as they are too messy and hard to revise from

if you have your own e-notes or handwritten ones that cover everything, please share

really need something clean and complete for revision😭😭

Thanksss in advance : )

u/TermKey7269 — 7 hours ago

Can a small (2B) local LLM become good at coding by copying + editing GitHub code instead of generating from scratch?

I’ve been thinking about a lightweight coding AI agent that can run locally on low-end GPUs (like an RTX 2050), and I wanted to get feedback on whether this approach makes sense.

The core idea:

Instead of relying on a small model (~2B params) to generate code from scratch (which is usually weak), the agent would:

  1. search GitHub for relevant code

  2. use that as a reference

  3. copy + adapt existing implementations

  4. generate minimal edits instead of full solutions

So the model acts more like an editor/adapter, not a “from-scratch generator”.
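For step 1 of that idea, GitHub does expose a real code-search endpoint (`GET /search/code`), though it requires an authenticated request. A minimal sketch of the retrieval call — `build_code_search_url` and `search_github` are hypothetical helper names, not an existing library:

```python
import json
import urllib.parse
import urllib.request

def build_code_search_url(query: str, language: str) -> str:
    """Build a GitHub code-search URL (the 'search GitHub for relevant code' step)."""
    q = urllib.parse.quote(f"{query} language:{language}")
    return f"https://api.github.com/search/code?q={q}&per_page=5"

def search_github(query: str, language: str, token: str):
    """Fetch candidate files. The /search/code endpoint rejects unauthenticated calls,
    so a personal access token is required."""
    req = urllib.request.Request(
        build_code_search_url(query, language),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["items"]
```

Search results only give file paths and repo metadata; the agent would still need a follow-up request per hit to pull the actual file contents into context.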

Proposed workflow:

  1. User gives a task (e.g., “add authentication to this project”)
  2. Local LLM analyzes the task and current codebase
  3. Agent searches GitHub for similar implementations
  4. Retrieved code is filtered/ranked
  5. LLM compares:
    • user’s code
    • reference code from GitHub
  6. LLM generates a patch/diff (not full code)
  7. Changes are applied and tested (optional step)
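The whole loop can be sketched with toy stand-ins — here the "retrieved" snippets are canned, ranking is naive token overlap, and `difflib` plays the role of the LLM's patch output. All function names (`rank_by_overlap`, `propose_patch`) are illustrative, not a real API:

```python
import difflib

# Toy stand-in for GitHub search results (step 3).
CANDIDATES = [
    ("flask_basic_auth.py", "def check_auth(user, pw):\n    return user == 'admin'\n"),
    ("math_utils.py", "def add(a, b):\n    return a + b\n"),
]

def rank_by_overlap(task: str, candidates):
    """Step 4: rank candidates by naive token overlap with the task description.
    A real agent would use embeddings or a reranker here."""
    task_tokens = set(task.lower().split())
    def score(item):
        _name, code = item
        code_tokens = set(code.replace("_", " ").lower().split())
        return len(task_tokens & code_tokens)
    return sorted(candidates, key=score, reverse=True)

def propose_patch(current: str, reference: str, path: str) -> str:
    """Steps 5-6: compare the user's code against the reference and emit a diff.
    Here difflib produces it mechanically; the LLM would produce the edited
    version, and only the diff between old and new gets applied."""
    return "".join(difflib.unified_diff(
        current.splitlines(keepends=True),
        reference.splitlines(keepends=True),
        fromfile=f"a/{path}", tofile=f"b/{path}",
    ))

task = "add auth check for user"
best_name, best_code = rank_by_overlap(task, CANDIDATES)[0]
patch = propose_patch("def check_auth(user, pw):\n    pass\n", best_code, best_name)
print(patch)
```

The point of the sketch is the shape of the pipeline: the model only has to fill the `propose_patch` slot with a small edit, while retrieval and ranking stay outside the model entirely.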

Why I think this might work

  1. Small models struggle with reasoning, but are decent at pattern matching
  2. GitHub retrieval provides high-quality reference implementations
  3. Copying + editing reduces hallucination
  4. Less compute needed compared to large models

Questions

  1. Does this approach actually improve coding performance of small models in practice?
  2. What are the biggest failure points? (bad retrieval, context mismatch, unsafe edits?)
  3. Would diff/patch-based generation be more reliable than full code generation?
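On question 2 (unsafe edits): one common mitigation, used by existing tools like aider, is to have the model emit search/replace edits and reject any edit whose search text is missing or ambiguous, rather than applying it blindly. A minimal sketch, with `apply_edit` as a hypothetical helper:

```python
def apply_edit(source: str, search: str, replace: str) -> str:
    """Apply one model-proposed search/replace edit, refusing stale or
    ambiguous edits instead of silently corrupting the file."""
    count = source.count(search)
    if count == 0:
        raise ValueError("edit rejected: search text not found (stale context?)")
    if count > 1:
        raise ValueError(f"edit rejected: search text matches {count} places")
    return source.replace(search, replace, 1)

src = "def login(user):\n    return True\n"
patched = apply_edit(src, "    return True\n", "    return check_password(user)\n")
```

A rejection can be fed back to the model as a retry signal, which is cheap for a 2B model since each attempt is only a small edit, not a full regeneration.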

Goal

Build a local-first coding assistant that:

  1. runs on low-end consumer GPUs
  2. is fast and cheap
  3. still produces reliable, high-quality code by leaning on retrieval

Would really appreciate any criticism or pointers.
