u/-JustAsking4AFriend

▲ 2 r/GlInet

Disappointed in my Mudi 7 for Swiss + US usage. Is modem change possible?

I'm disappointed in my Mudi 7 for travel. While I love the OpenWrt base, the band coverage of my EU version is weak in the US. It simply can't match my wife's Netgear M6.

Is the modem in the Mudi 7 removable? I'm okay with voiding the warranty if I can purchase a new modem from AliExpress or Taobao and swap it in.

reddit.com
u/-JustAsking4AFriend — 2 days ago

Hello!

From reading the sub I see that the UniFi Express 7 can be used as a wired AP, with an additional ethernet port for a device.

My question is: has it been stable for anyone using it this way? I've seen reports of minor bugs in the initial firmwares and even in some later updates, which is understandable since this isn't the device's primary use case. I'd like to double-check whether the newest firmware handles it without issues.

Many thanks!

reddit.com
u/-JustAsking4AFriend — 10 days ago

I'm trying to decide between two configurations, both with 64GB unified memory and 1TB storage. Both are in stock at my local retailer in Switzerland.

* **Mac mini M4 Pro** (12C CPU / 16C GPU, 273 GB/s memory bandwidth)
* **Mac Studio M4 Max** (16C CPU / 40C GPU, 546 GB/s memory bandwidth) - roughly $600 more

Use case is local LLM inference and a local endpoint for my OpenClaw to point to (given Anthropic shenanigans) primarily with Gemma 4 and/or Qwen, plus some experimentation with smaller models for agentic workflows. Maybe plugging it into a coding harness in VSCode or something... No training, just inference.

The Max obviously wins on paper: a ton more memory bandwidth and double the GPU cores. But $600 is $600.
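For a rough sense of what that bandwidth gap means: token generation on these machines is typically memory-bandwidth-bound, so an upper bound on decode speed is bandwidth divided by the bytes of weights read per token. A minimal sketch, assuming a hypothetical ~16 GB quantized model (roughly a 27B-class model at Q4\_K\_M) and ignoring KV cache traffic and real-world overhead:

```python
# Rough decode-speed ceiling for memory-bandwidth-bound inference.
# Assumption: each generated token streams the full quantized weight
# set from unified memory once; actual throughput will be lower.

def decode_ceiling_tps(bandwidth_gbs: float, model_size_gb: float) -> float:
    """Upper bound on tokens/s: bandwidth / bytes read per token."""
    return bandwidth_gbs / model_size_gb

MODEL_GB = 16.0  # hypothetical ~27B model at Q4_K_M; adjust to taste

for name, bw in [("M4 Pro", 273.0), ("M4 Max", 546.0)]:
    print(f"{name}: ~{decode_ceiling_tps(bw, MODEL_GB):.0f} tok/s ceiling")
```

By this back-of-envelope math the Max's ceiling is exactly 2x the Pro's for decode, though prompt processing is compute-bound and scales with GPU cores instead.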

For those of you running 64GB Macs for inference:

  1. How much of a real-world difference does the 273 → 546 GB/s bandwidth jump make on tok/s for Gemma 4 class models (Q4\_K\_M, Q5\_K\_M)?
  2. Is prompt processing on the M4 Pro's 16-core GPU painful enough at long contexts to justify the Max?
  3. Anyone regret going Pro and hitting a wall? Or regret paying for Max and not using the extra headroom?

Thanks for your guidance!

reddit.com
u/-JustAsking4AFriend — 16 days ago