u/Last_Bad_2687

Another Framework Appreciation post

My dad's Gen 13 Framework 13 Mainboard developed a short when he plugged it into a dock. This is a known issue, and the goal here isn't to beat up on the design.

I had a Gen 11 Framework board lying around from my own upgrade. In 20 minutes I had swapped it over, installed drivers (including extracting the .exe and manually installing the touchpad i2c driver), and he was up and running.

For my part, I now have a Gen 13 board in my server instead of a Gen 11, so it's immaterial to me. My dad mostly does Word docs, so the performance difference is negligible (in fact, he went from a Gen 13 i5 with 32GB to a Gen 11 i7 with 64GB of DDR4 RAM). My Docker containers will barely notice.

What other laptop can you swap a whole mainboard to a spare and spend $0 on an emergency fix?

reddit.com
u/Last_Bad_2687 — 7 days ago
▲ 166 r/homelab

It's a start

- Old gaming PC with an RTX 3080 running Piper, Whisper, and Qwen2.5:7b for Home Assistant, plus self-hosted notes (Anchor), CopyParty, and Open WebUI, all on Fedora desktop
- Framework Desktop running gpt-oss:120b for local AI tasks
- Home Assistant Green with Zigbee and Z-Wave antennas for lights and door sensors

Next steps:

- Move to redundant mini PCs and a 10" rack; that B450 motherboard is ancient. Slowly learning about actual server hardware.
- Replace the two ancient Seagate 4TB drives with a Synology NAS.

Please be as mean as possible

u/Last_Bad_2687 — 7 days ago

I bought a 128GB Framework Desktop at launch, but now that category is really popular for local LLMs.

A used Mac Studio M3 Ultra with 512GB goes for about $3k. A new Framework Desktop 128GB goes for about the same (unless there are cheaper used ones somewhere?).

Now, apart from having to deal with Apple stuff (I daily-drive Fedora, BTW), doesn't the $/GB for local LLMs make a used 512GB Mac Studio seem like a steal over 3x Framework Desktops for trying Kimi?
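Running the napkin math on those numbers (prices are the rough figures quoted above, not verified listings):

```shell
# back-of-envelope $/GB of unified memory, using the rough prices quoted above
awk 'BEGIN { printf "used Mac Studio 512GB:   $%.2f/GB\n", 3000/512 }'  # $5.86/GB
awk 'BEGIN { printf "Framework Desktop 128GB: $%.2f/GB\n", 3000/128 }'  # $23.44/GB
awk 'BEGIN { printf "3x Framework (384GB):    $%.2f/GB\n", 9000/384 }'  # $23.44/GB
```

At those prices the used Mac Studio is roughly 4x cheaper per GB, and it gets you 512GB in one box instead of stitching three machines together over a network.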

What am I missing??

u/Last_Bad_2687 — 8 days ago

Sometimes when I cook bacon I save the grease in a jar for later cooking.

With the harsh limits of Claude, I have been working on ideas to make sure the output of any token spend is captured, cleaned, and put away.

I use ChatGPT/Claude quite a bit for assistance in coding, and I realized that multiple times I had asked ChatGPT the same question (for me it was where to save the grub2-mkconfig output in Fedora 43 after messing with kernel parameters). I had at least 3 chats where I had asked this before.

I was like... wait a minute - if a model gives me a solution for a situation and it works... that solution should immediately be aliased and stored somewhere to be re-run later.

I haven't figured out a concrete way to "cache" the results of successful scripts/recipes generated, but this was just an idea.
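One low-tech way to jar the grease, sketched below: a tiny helper (name and file path are made up) that appends a command that just worked as an alias in a recipes file your shell rc can source.

```shell
#!/bin/sh
# keep_cmd: jar a command that just worked so it can be re-run by name later.
# RECIPES path and function name are hypothetical; source the file from your rc.
RECIPES="${RECIPES:-$HOME/.llm_recipes.sh}"

keep_cmd() {
    name="$1"; shift
    # date-stamp each entry so you can see when a recipe was saved
    printf "# saved %s\nalias %s='%s'\n" "$(date +%F)" "$name" "$*" >> "$RECIPES"
}

# e.g. cache the grub answer the model keeps re-deriving (verify the path on your box)
keep_cmd regen-grub "sudo grub2-mkconfig -o /boot/grub2/grub.cfg"
```

After `. ~/.llm_recipes.sh`, typing `regen-grub` re-runs the recipe; the obvious next step would be having the agent call `keep_cmd` itself whenever you confirm a solution worked.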

Disclaimer: I don't run open claw; I run open terminal with Qwen3-coder-next inside Open WebUI.

u/Last_Bad_2687 — 11 days ago

Has anyone implemented Karpathy's LLM wiki idea in Open WebUI with a knowledge base?

I used open terminal and qwen3-coder-next to implement a folder structure there. I was asking a model to read the wiki when it used a *query knowledge base* tool. I wasn't aware Open WebUI had such a feature, and now I'm kinda stuck debating between keeping my LLM wiki in open terminal vs implementing it in the knowledge base.

u/Last_Bad_2687 — 11 days ago

I work at a non-profit doing experimental/STEM research + outreach. Deployed an AI server (local AI models for staff), designed a STEM van from scratch (selected the van, taught myself CAD, mocked up interiors, got custom cabinets made based on the design, etc.). I manage a team of low-code developers.

Worked at a research university computer lab prior to this, doing Internet of Things research. Master's in Computer Engineering.

I am being encouraged by everyone to move to the private sector, where my skills will be better compensated. I just don't know where or how. My path here feels maxed out.

I'm 31 now, I feel the window closing to pivot.

Because my career has been all over the place, I don't seem to meet any of the requirements of what I'm looking for. I am not "AI" enough to work in hardcore AI: I manage our server, but I can't remember much from my classes on how the models work, and I've forgotten all my math at this point. I don't want to be in a pure tech role either.

I have 1 year of management experience now, but I'm not sure if that helps. I've done a lot of event-management-type work this past year too (including co-creating and running a 500-person tech event, and regularly running 100-person events). Making close to six figures.

My big question is where do I fit? I don't mind taking a pay cut to pivot but I have no clue where I fit...

u/Last_Bad_2687 — 13 days ago

I have a Windows 11 VM on a Framework 16 laptop with dGPU passthrough. Looking Glass was a pain to set up last time, so I opted for Sunshine/Moonlight.

I have the weirdest bug: the stream only loads over Tailscale (yes, VM-to-host over Tailscale is dumb, but it saves debugging network config), but not if I use the libvirt bridge IP address; I get a black screen and it says it failed to connect. I have triple-checked the Sunshine and Moonlight firewall rules, used netcat to test UDP, etc. Both adapters are set to Private.

Has anyone faced this? I can't for the life of me figure out why it only loads an image over tailscale.
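For the record, here is roughly how I'd separate "blocked on the bridge" from "Sunshine not listening on that interface": probe the stock Sunshine TCP ports on both addresses (ports below assume a default Sunshine config; pass the VM's libvirt IP or its Tailscale IP as the argument):

```shell
#!/bin/bash
# Probe Sunshine's default TCP ports on a given address; run once with the
# VM's libvirt bridge-side IP and once with its Tailscale IP, then compare.
# Ports assume a stock Sunshine config; adjust if you've changed them.
HOST="${1:-127.0.0.1}"

for port in 47984 47989 48010; do      # HTTPS / HTTP / RTSP
    if timeout 2 bash -c "exec 3<>/dev/tcp/$HOST/$port" 2>/dev/null; then
        echo "tcp/$port open"
    else
        echo "tcp/$port closed"
    fi
done
echo "note: the video/control/audio streams run over UDP 47998-48000, which"
echo "this can't confirm; if TCP is open but you still black-screen, run"
echo "tcpdump on the bridge while connecting to see if UDP ever arrives."
```

If the TCP ports answer on the bridge IP but the stream still black-screens, the suspect is usually the UDP range being dropped rather than the pairing/handshake, which would match "connects over Tailscale, black screen on the bridge."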

u/Last_Bad_2687 — 15 days ago

Something I am working on with my rooted Supernote is the ability to have an AI agent co-edit/summarize my notes with me. I already have Syncthing/inotifywait-based scripts that sync my "dirty" (open/being edited) .note files (and clean ones) to CopyParty. I also found I can stream the pen's ABS_X and ABS_Y via /dev/input/wacom_real (or similar; my Supernote is currently away from me).

My idea: Stream ABS_X and ABS_Y to my Framework AI server to do "live" OCR and occasionally sync the "dirty" .note file as a "ground truth". I would love to see a "live" .md appear on my laptop as I write in my note as a first milestone.

The second milestone would be to have an AI agent read this live .md and think about insights based on a RAG of my previous notes.

The third milestone would be for a multi-modal agent to insert notes INTO my .note live based on insights (highlight this, or auto-link this note to your other note on a related topic, or create a todo based on this). I tried using "sendevent" on /dev/input/event7 but no luck: I briefly see a cursor that disappears, and if the actual pen is hovering, I sometimes see a point appear. This seems like the hardest part.

There are already supernote-ocr-enhance (I think), rectangular file, sn2md, and similar projects to convert "closed" .note files to .md, but I think the real value (for me) would be for this to be used by a co-agent "live".

Anyone have any advice on how to "inject" pen strokes using "sendevent"?
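I haven't done this on a Supernote specifically, so treat the sketch below as an assumption-heavy starting point. The usual reasons injected strokes get ignored are missing BTN_TOOL_PEN/BTN_TOUCH/pressure events and, above all, the SYN_REPORT after each sample. sendevent takes decimal codes (BTN_TOOL_PEN = 0x140 = 320, BTN_TOUCH = 0x14a = 330), and the device path here is a guess: confirm both with `getevent -l` while writing by hand. SE defaults to echo, so this dry-runs off-device.

```shell
#!/bin/sh
# Dry-run by default; on the device, run with SE=sendevent DEV=/dev/input/eventX
SE="${SE:-echo}"
DEV="${DEV:-/dev/input/event7}"   # guess: confirm with getevent -l

pen_down() {
    "$SE" "$DEV" 1 320 1     # EV_KEY BTN_TOOL_PEN (0x140): pen in range
    "$SE" "$DEV" 1 330 1     # EV_KEY BTN_TOUCH (0x14a): tip contact
}

pen_sample() {               # pen_sample X Y PRESSURE
    "$SE" "$DEV" 3 0  "$1"   # EV_ABS ABS_X
    "$SE" "$DEV" 3 1  "$2"   # EV_ABS ABS_Y
    "$SE" "$DEV" 3 24 "$3"   # EV_ABS ABS_PRESSURE (0x18)
    "$SE" "$DEV" 0 0  0      # EV_SYN SYN_REPORT: without this the sample is dropped
}

pen_up() {
    "$SE" "$DEV" 1 330 0
    "$SE" "$DEV" 1 320 0
    "$SE" "$DEV" 0 0  0
}

# short horizontal stroke
pen_down
for x in 5000 5100 5200 5300; do pen_sample "$x" 6000 2048; done
pen_up
```

sendevent is also slow (one open/write/close per call), so even if this registers, smooth strokes probably need a small program writing binary input_event structs to the device in one open file descriptor instead.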

u/Last_Bad_2687 — 16 days ago

So with the FW13 Pro, I think it paints a pretty good picture for the FW16/12.

But the Framework Desktop is NOT modular. Do we think the desktop "pro" path is just a beefier AI Mainboard? Or is there a FW13 Pro-like improvement to be made?

u/Last_Bad_2687 — 17 days ago

To whomsoever this may concern,

I was able to successfully update to Fedora 44 today with no issues (yet?) and a successful first boot.

Sincerely,

u/Last_Bad_2687 — 17 days ago