r/gpu

🔥 Hot ▲ 65 r/gpu

made some art by destroying an rx580

I got the gpu for free after a PC repair shop threw it away for overheating. New thermal pads would have cost almost as much as the card was worth.

it's powered by USB-C, and I made an adjustable PWM signal from a 555 timer to make the fans turn slowly.
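
For anyone wanting to copy the fan trick: the classic 555 astable formulas give you frequency and duty cycle from the two resistors and the timing cap (swap R2 for a potentiometer to make it adjustable). Component values below are hypothetical, not OP's:

```python
# Approximate timings for a classic 555 astable driving a slow fan PWM.
# Component values here are made up for illustration.
def astable_555(r1_ohm, r2_ohm, c_farad):
    """Return (frequency_hz, duty_cycle) for a 555 in astable mode."""
    t_high = 0.693 * (r1_ohm + r2_ohm) * c_farad  # cap charging through R1+R2
    t_low = 0.693 * r2_ohm * c_farad              # cap discharging through R2
    period = t_high + t_low
    return 1.0 / period, t_high / period

f, duty = astable_555(r1_ohm=1_000, r2_ohm=20_000, c_farad=100e-9)
print(f"{f:.0f} Hz, {duty:.0%} duty")  # -> 352 Hz, 51% duty
```

Note this topology can't go below 50% duty without adding a steering diode across R2, which is fine if you only want "slow to slower".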

u/No-Succotash-9576 — 6 hours ago
▲ 7 r/gpu

Is this a fair price / good upgrade

i built my pc in 2020 with a 5700 XT. is this a good upgrade at a fair price?

u/Additional_Speed1559 — 10 hours ago
▲ 2 r/gpu+1 crossposts

XFX never buying again

bought a 9070xt OC magnetic air on release last year. paid the premium like a dumbass because I assumed I would get a quality product for the premium price.

within 3 months, 2 of the 3 fans on the GPU died mid-game at 4K. I only noticed after the card started sounding like an aluminum can tumbling in a dryer.

they sent me 6 replacement magnetic fans

after 3 months they die. like clockwork.

asked for an rma option.

they told me the card failed the 3D test when the issue was the fans. they would not tell me why the fans kept stopping, though. good to know it was defective

they offered to replace it with another OC magnetic fan model. based on past experience I was very reluctant, so I asked for the non-magnetic wired version instead.

I got the replacement and it works, but I'm still out the $50 premium for the magnetic fan version, and their customer support was curt and unempathetic.

thanks XFX, you showed me to stay away from magnetic fans and your products.

u/Curious-Bother3530 — 6 hours ago
▲ 2 r/GamingLaptops+1 crossposts

EGPU is it worth it for me?

I bought an RTX 5070 Ti and an AOOSTAR AG02 eGPU enclosure that supports Thunderbolt 5 and OCuLink. I plan to use it with my PC on a 2.8K OLED display (2880×1800). Is this a good setup, and what kind of performance drop should I expect? Based on YouTube videos, Thunderbolt on Intel systems seems inconsistent, so I’m considering using my spare PCIe 4.0 x4 SSD slot via OCuLink instead. What do you think?

BTW the CPU is an Intel Core Ultra 7 155H, with 32 GB RAM.
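
For a rough sense of scale, here's back-of-the-envelope link math (my numbers, raw line rates only; real throughput is lower, especially through Thunderbolt's PCIe tunnel, which carries less than the advertised 80 Gb/s link rate):

```python
# Raw link bandwidth for the OCuLink option, wired as PCIe 4.0 x4.
# Real-world throughput is lower; this is just the line rate.
def pcie_gbps(gt_per_s, lanes, enc_num=128, enc_den=130):
    # PCIe 3.0 and newer use 128b/130b encoding
    return gt_per_s * lanes * enc_num / enc_den

oculink = pcie_gbps(16.0, 4)  # PCIe 4.0 runs at 16 GT/s per lane
print(f"OCuLink PCIe 4.0 x4: {oculink:.1f} Gb/s = {oculink / 8:.2f} GB/s")
# Thunderbolt tunnels PCIe inside its link protocol, so even at a similar
# headline rate it tends to deliver less and less consistently, which is
# why OCuLink usually benchmarks closer to a native slot.
```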

u/Select_Audience1171 — 6 hours ago
▲ 3 r/gpu

V100 with external mount, need to make sure I'm using the right cables.

Ok... So long story short, I bought a pair of V100s, a 32GB and a 16GB, one with a PCIe adapter mounting board and one with this external-style PCIe mounting board. The expensive one went onto the PCIe adapter and, long story short, it poofed the second I applied power. I read and re-read the instructions, I tried AI, I asked elsewhere, I even looked at the Chinese manufacturer's website. "GPU Cables" is the vaguest bullshit I've ever encountered. Everything told me PCIe cable or "GPU cables"; I saw Reddit threads saying CPU cables, but those were people powering the device directly or inside a server. So I figured the adapter board must be doing something to change the polarity before it hit the card. NOPE. POOF. Dead.

I took the whole thing apart and inspected everything with a voltmeter. The only damage is one MOSFET; the GPU and VRAM both checked out fine. So I got a hot-air solder kit and a replacement MOSFET with a spare on the way. Still not 100% sure if it was a case of wrong cables, or whether the MOSFET was already on its way out since I bought this used... Anyway, I intend to repair it. Weirdly, the adapter board has no scorch marks and tested perfectly fine.

So this brings me to my current situation: my lesser V100 has the external mount. I literally have a separate PSU powering just it, linked to the other PSU going to the motherboard so it will turn on. As you can see, the power wires run to the external mount, with the PCIe adapter card carrying the signal from the V100 to the motherboard. I tested this thing with a voltmeter, trying to determine if the connector is PCIe or CPU. I marked in red all the spots that beeped while the black probe was grounded on a screw head (and once on the positive leg of a capacitor). The results are above. AI is telling me it's PCIe, but I have no idea, because I don't know which side is "up"; the clip tab points down, so it's upside down. Hence why I marked everything in red and took enough photos to communicate the issue/layout, and even found some PCIe extension cables to illustrate which side would be yellow if plugged into this.

Can someone please tell me if this is PCIE or CPU? I'm honestly too afraid to turn it on, I've already smoked a card worth over a grand and if this smoked too I'm going to spiral. My heart hurts at the amount of equipment I just lost and I need at least one V100 to be working to do my college work.
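
One thing that may help with the continuity testing: the two 8-pin plugs are near mirror images of each other, which is exactly why mixing them up releases the magic smoke. A sketch of the standard ATX/EPS pin assignments as I understand them (the PCIe sense pins read as ground on a beep test; please verify against a known-good cable before applying power):

```python
# Standard 8-pin pin assignments, viewed with the retention clip up.
# PCIe 8-pin: 3x +12V on top, grounds (incl. sense pins) on the bottom.
# EPS/CPU 8-pin: the mirror image, 4x GND then 4x +12V.
PCIE_8PIN = {1: "+12V", 2: "+12V", 3: "+12V",
             4: "GND", 5: "GND", 6: "GND", 7: "GND", 8: "GND"}
EPS_8PIN = {1: "GND", 2: "GND", 3: "GND", 4: "GND",
            5: "+12V", 6: "+12V", 7: "+12V", 8: "+12V"}

def looks_like(pin_readings):
    """pin_readings: {pin: '+12V' or 'GND'} from voltmeter beeps."""
    if pin_readings == PCIE_8PIN:
        return "PCIe"
    if pin_readings == EPS_8PIN:
        return "EPS/CPU"
    return "unknown - stop and recheck before powering on"
```

The takeaway: if the +12V pins sit on the clip side it's one standard, if they sit opposite it's the other, so plugging a CPU cable into a PCIe socket (or vice versa) puts 12V straight onto ground pins. That failure mode matches your MOSFET poofing.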

Any help would be appreciated.

This is the eBay auction I got the adapter from. I tried messaging the seller twice, since they are selling multiples, but got no response.

External Nvidia Tesla P100 V100 SXM2 PCI-E X16 +1* SFF-8654 Adapter +2 Cable | eBay

The seller of the other adapter, the one that blew up the 32GB card, did get back to me, but he basically sent me the same text I got from Google-translating the manual... which is just as vague.

Huh, apparently I forgot to add the one shot of my workbench I took so you can see the other side of what it's plugged into... Let me know if you need that. I can't seem to add another photo to this post for some reason (I rarely post on Reddit at all).

u/BlueKobold — 9 hours ago
▲ 12 r/gpu

I currently have a 3060 12GB and I'm interested in upgrading. I'm looking at a 5060 Ti 16GB or going to the blue side with a B580. The 5060 Ti would run me $400; the cheaper alternative would be $275. I don't currently have a 1440p monitor, but I'm looking to make the switch in the near future.

u/Money_Difference_693 — 23 hours ago
▲ 4 r/gpu+1 crossposts

What even is the point of smol-GPU with this many simplifications?

https://github.com/Grubre/smol-gpu

The designer says it's for educational purposes, but the amount of stuff stripped away makes me question how much it actually teaches about real GPU architecture.

Here's what's been simplified away:

  1. Sequential warp scheduling : one warp runs to completion, then the next. No latency hiding at all.

  2. No warp-level parallelism within a core : only one warp occupies resources at a time.

  3. No cache hierarchy : cores talk directly to global memory.

  4. Separated program and data memory : Harvard style, not unified.

  5. No shared memory / scratchpad : so no cooperative algorithms between threads.

  6. No barrier / synchronization primitives : no __syncthreads() equivalent.

  7. No reconvergence stack in hardware : divergence is handled purely through manual masking.

  8. No memory coalescing : each thread issues its own memory request.

  9. No FPU, no special function units : integer only.

  10. No atomics, no fence : subset of RV32I.

At this point it's basically executing one warp after another on each core. If you squint, this is just a multicycle processor that happens to run 32 threads in lockstep. Yes, the SIMT model and execution masking are there, but without pipelining, warp interleaving, or caches, you're not really seeing what makes GPUs fast.

Is there any deeper reasoning behind stripping this much out? And more importantly, I've gone through the RTL and spotted what look like potential race conditions in a few places. Is this repo even a legit baseline to build a more advanced GPU on top of, or would you be better off starting from scratch?
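
To be fair, the one thing the repo does keep, lockstep SIMT with manual masking, fits in a few lines. A toy Python model of it (mine, not the repo's RTL): one warp, both sides of a branch executed under complementary masks, no reconvergence stack.

```python
# Toy model of the execution style described above: one warp of 32 lanes
# in lockstep, divergence handled purely by masking. Illustrative only.
WARP_SIZE = 32

def warp_execute(data):
    mask = [True] * WARP_SIZE  # all lanes active at warp start
    out = list(data)
    # branch "if x % 2 == 0": both paths run, each under its own mask
    then_mask = [mask[i] and data[i] % 2 == 0 for i in range(WARP_SIZE)]
    for i in range(WARP_SIZE):          # then-path: halve even values
        if then_mask[i]:
            out[i] = data[i] // 2
    else_mask = [mask[i] and not then_mask[i] for i in range(WARP_SIZE)]
    for i in range(WARP_SIZE):          # else-path: 3x+1 for odd values
        if else_mask[i]:
            out[i] = 3 * data[i] + 1
    return out

print(warp_execute(list(range(WARP_SIZE)))[:6])  # [0, 4, 1, 10, 2, 16]
```

Strip out warp interleaving and caches and this loop really is all that's left, which I think supports the "multicycle processor in lockstep" framing.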

u/New-Juggernaut4693 — 14 hours ago
▲ 3 r/gpu

github.com/ProjectPhysX/hw-smi - A minimal, cross-compatible CPU/GPU telemetry monitor with accurate data directly from vendor APIs and beautiful ASCII visualization

How much VRAM bandwidth does an application or a game pull? Is the traffic over PCIe a bottleneck? What's the CPU/GPU load, RAM/VRAM occupation, temperatures, power draw, clock frequencies? hw-smi works with all CPUs and all Nvidia/AMD/Intel GPUs, on both Windows and Linux.

There are lots of cool hardware monitoring tools already, but they're all either OS-specific, CPU-only or GPU-only, or tied to one particular vendor's GPUs, and none of them report VRAM/PCIe bandwidth to give clues about application bottlenecks. Two years ago I thought to myself: I can do this better. Now I'm sharing this powerful tool with the world, for free. Have fun!

https://github.com/ProjectPhysX/hw-smi
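
For anyone who just wants a quick taste of the raw numbers, Nvidia's own `nvidia-smi` query interface (used here as a stand-in; these are nvidia-smi's flags, not hw-smi's, and Nvidia-only) emits CSV you can poll and parse. Sketched with a canned sample so it runs without a GPU:

```python
# Parse the CSV that `nvidia-smi --query-gpu=... --format=csv` prints.
# A hardcoded sample stands in for a live capture.
import csv
import io

def parse_query(csv_text):
    """Return one dict per GPU from nvidia-smi CSV output."""
    return list(csv.DictReader(io.StringIO(csv_text), skipinitialspace=True))

# a live capture would come from e.g.:
#   nvidia-smi --query-gpu=utilization.gpu,memory.used,power.draw --format=csv
sample = ("utilization.gpu [%], memory.used [MiB], power.draw [W]\n"
          "37 %, 2048 MiB, 95.21 W\n")
row = parse_query(sample)[0]
print(row["utilization.gpu [%]"], row["power.draw [W]"])  # 37 % 95.21 W
```

Polling that in a loop gets you load/power, but notably not the VRAM/PCIe bandwidth counters the post is about, which is the gap hw-smi claims to fill.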

u/ProjectPhysX — 17 hours ago
▲ 1 r/gpu

how to properly test used gpu

i picked up an rtx 2070 super locally for $75. the seller said his screen would black out under load, which he thought was a gpu issue. with the gpu installed in a pc, i ran OCCT for an hour and had no errors or issues. is there anything else to run or look for?

u/_torteval — 11 hours ago
▲ 1 r/gpu

Deshrouding 5070ti questions.

Hey, I'll try to keep it as short as I can. I'm looking for ways to make my pc as quiet as possible even under load, and currently my gpu is the only thing making audible noise. I have the gigabyte gaming oc 5070ti. luckily it has no audible coil whine, however the fans are forced to a minimum 1000 rpm under load, which is too loud for me tbh. So I'm here to ask if a deshroud with noctua fans (or whatever fans are quietest) can make a noticeable difference. if not, are there any other options that would work?

u/MemoryPatient2073 — 14 hours ago