u/Substantial_Run5435

How are U.S. tariffs calculated for products made in countries that no longer exist?

Purchased an item located in the EU but originally made in West Germany, which ceased to exist in 1990 before the EU was founded.

The only other item I've imported was shipped from the UK and made in Japan, but the tariffs were calculated for the UK ¯\_(ツ)_/¯

reddit.com
u/Substantial_Run5435 — 5 days ago
▲ 13 r/MacPro2019LocalAI+1 crossposts

Hi, I made a post recently looking for advice on running local LLMs on my 2019 Mac Pro. At the time, I was split on which GPUs to use, but I happened upon a really good deal for a pair of Vega II Duo GPUs (4x Vega II 32GB GPUs for 128GB total HBM2, with 1 TB/s of bandwidth each). I currently have the small Infinity Fabric (IF) jumpers on them (they connect the two GPUs inside each module) but don't have the IF bridge to connect the two modules together, though from what I'm reading I might want to skip the IF link altogether, as support for it is spotty.

I'm looking for advice on the following:

  • What OS would be best? I've seen several people recommend Ubuntu Server or RHEL, and I'd also be fine with running Windows 10/11. I'm more concerned with what will be easier to live with and more stable. From what I understand, macOS is not really an option for this on an Intel-based machine if I want to use the GPUs.
  • What stack/models and parameter size(s) would you recommend that would be well suited to my specs? I have a lot of VRAM, so I believe I should be able to run a fairly large model. AFAIK CPU and RAM aren't as relevant, but I could have a 12- or 16-core Xeon W and up to 192GB of 6-channel DDR4 ECC RAM.
    • My primary use cases are research, text analysis, and document generation, not so much image/video generation.
    • I would also like to at least poke around with "vibe coding". I have no coding background whatsoever; this is more a curiosity than anything else.
  • I am considering purchasing a second W6800X and using two of those instead of the 2x Vega II Duos. That would be half the VRAM (64GB total) and lower memory bandwidth, but a newer architecture, perhaps better support, and half the power consumption. Would that be a better route to take hardware-wise?
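A quick way to sanity-check the VRAM trade-off in the last bullet is to estimate a model's footprint from its parameter count and quantization. A minimal sketch in Python; the bits-per-weight math and the ~20% overhead allowance for KV cache and activations are rough assumptions, not measured values:

```python
# Rough sketch: will a quantized model fit in a given VRAM budget?
# The ~20% overhead allowance for KV cache and activations is an
# assumption for illustration, not a measured value.

def vram_needed_gb(params_billion: float, bits_per_weight: float,
                   overhead: float = 0.20) -> float:
    """Approximate VRAM (GB) for weights plus runtime overhead."""
    weight_gb = params_billion * bits_per_weight / 8  # 1B params @ 8-bit ~ 1 GB
    return weight_gb * (1 + overhead)

# A 70B model at 8-bit quantization against the two options above:
need = vram_needed_gb(70, 8)
print(f"~{need:.0f} GB needed")                      # ~84 GB
print("fits 128GB (2x Vega II Duo):", need <= 128)   # True
print("fits 64GB (2x W6800X):", need <= 64)          # False
```

By this rough math, the 64GB route still fits the same 70B model at 4-bit (~42 GB), so the deciding factor may be how low a quantization you're willing to run.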
u/Substantial_Run5435 — 14 days ago
▲ 1 r/macpro

Hey everyone, I have a single 8GB stick of OEM Micron DDR4 that causes my Mac Pro not to boot. I've tried it in a couple of different slots/configurations (4 sticks, 6 sticks, etc.) but haven't tried it as a single stick or in a pair yet. When attempting to boot, I get the blinking amber LED that indicates a memory failure.

Is there anything else I can do to troubleshoot that stick, or is it just toast if the computer won't boot with it? It looks perfect physically but was DOA when I received it. I know you can check memory status in System Information, but my computer doesn't boot with that stick installed, so that doesn't help here.

u/Substantial_Run5435 — 17 days ago
▲ 21 r/macpro

My 2019 Mac Pro currently has the following PCIe devices installed:

  • Slot 1/2: W6800X
  • Slot 3: RX 6900 XT
  • Slot 4: Empty
  • Slot 5: Afterburner
  • Slot 6: OWC Accelsior with 4 NVME drives
  • Slot 7: Empty
  • Slot 8: Apple I/O card

The Expansion Slot Utility was previously showing 100% allocation of both pools. However, I recently opened the machine up to test-fit a third GPU, then removed that GPU and left the configuration as it was previously (no new devices, though it's possible the OWC card got moved). Now I'm getting 100% on Pool A (Afterburner) and 125% on Pool B (Apple I/O and OWC Accelsior). Does anyone know why the allocation went up? I'm sure it's not a huge issue, but it's a little puzzling to me. I tried moving the OWC card from Slot 7 to Slot 6, where it sits now, but that made no difference. The Afterburner is supposed to be installed in Slot 3, 4, or 5, so it can't be moved.
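One hedged explanation for the jump: the utility appears to budget each pool's PCIe lanes, so if the Accelsior re-enumerated at a wider link after the reseat, Pool B would become over-subscribed. The 16-lane pool budget and per-card widths below are assumptions for illustration, not documented Apple values:

```python
# Hypothetical lane-budget arithmetic for Expansion Slot Utility pools.
# The 16-lane pool capacity and per-device lane widths are assumptions
# for illustration, not Apple-documented values.

POOL_CAPACITY = 16  # assumed lanes per pool

def pool_allocation(devices: dict[str, int]) -> float:
    """Percent of the pool's lane budget the listed devices request."""
    return 100 * sum(devices.values()) / POOL_CAPACITY

# If the Accelsior negotiated x8 before the reseat, but x16 after:
before = pool_allocation({"Apple I/O card": 4, "OWC Accelsior": 8})
after = pool_allocation({"Apple I/O card": 4, "OWC Accelsior": 16})
print(f"before: {before:.0f}%, after: {after:.0f}%")
```

Under these assumed widths, x4 + x16 = 20 lanes against a 16-lane budget gives exactly the 125% the utility reports; the real lane counts would need to be confirmed in System Information.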

u/Substantial_Run5435 — 18 days ago
▲ 17 r/MacPro2019LocalAI+1 crossposts

I know these are both older GPUs at this point, and that the W6800X generally outperforms the Vega II, but I'm wondering if there are any use cases where the Vega II is better (maybe due to its higher memory bandwidth?).

I ended up with both of these GPUs and am planning to keep one alongside my RX 6900 XT.
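One case where the Vega II's bandwidth could matter: LLM decoding is typically memory-bandwidth-bound, so a rough throughput ceiling is bandwidth divided by the bytes read per generated token (roughly the model's size in memory). A back-of-envelope sketch; the bandwidth figures (~1 TB/s for Vega II HBM2, ~512 GB/s for W6800X GDDR6) are approximate spec-sheet assumptions:

```python
# Back-of-envelope ceiling for bandwidth-bound LLM decode:
# tokens/sec ~ memory bandwidth / bytes read per token (~ model size).
# Bandwidth values are approximate spec-sheet assumptions, not measurements.

def max_tokens_per_sec(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

MODEL_GB = 40  # e.g. a ~70B model at 4-bit quantization, illustration only
print("Vega II :", max_tokens_per_sec(1024, MODEL_GB), "tok/s ceiling")
print("W6800X  :", max_tokens_per_sec(512, MODEL_GB), "tok/s ceiling")
```

By this crude measure the Vega II's ceiling is about double the W6800X's for the same model, though real-world support and drivers could easily outweigh that.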
