u/AirButcher

▲ 8 r/Eve

Unintended auto-renewal advice

Hi everyone,

I played some Eve last year after 12 years of absence, and even back then I was pretty casual. This time around I bought a 3-month Omega subscription to try to enjoy it a bit more while I knew I had the time (which I did; I love the game, I just don't have much time).

It looks like the subscription auto-renewed for another 3 months, which I missed in my emails and finances because it was a really hectic time, and now I've just seen another 3-month renewal tick over and realised what's been happening. I never logged on to Eve in all those 6 months, as I had a super busy time with work, study and family life, and wasn't intending to play again until probably next year.

I've emailed support to see if the charges can be reimbursed, but I'm not sure what to expect. Has anyone had experience with this, or advice on what to do or how best to communicate the situation beyond what I've said above?

reddit.com
u/AirButcher — 4 days ago
▲ 87 r/ollama

The weird thing about the 5080 is that it’s not actually a “mid tier” card anymore in any meaningful sense. The thing has ridiculous compute performance, massive memory bandwidth, fast GDDR7, huge tensor throughput, and can absolutely brute force modern workloads. But then you look over and see 16GB VRAM and it feels like this strange artificial limitation that exists mostly because NVIDIA needs the 5090 to exist.

And I know people will immediately say “16GB is enough for gaming” and honestly, today, for most games, yeah it mostly is. But that almost feels like outdated thinking now because high end GPUs increasingly aren’t just gaming products anymore. A huge percentage of enthusiasts are buying these things for AI inference, image generation, local LLMs, video models, coding assistants, all that stuff. VRAM has become one of the single most important specs again in a way we haven’t really seen since the Titan/3090 era.

What’s funny is NVIDIA accidentally made the rest of the 5080 too good. Like the card has nearly 4090-class memory bandwidth and absurd compute throughput, so naturally people start thinking “this thing would be incredible for local AI workloads”… then immediately hit the 16GB wall. Meanwhile people are still hanging onto 3090s purely because they have 24GB. Think about how insane that is. A several-generation-old card is still disproportionately desirable because of memory capacity alone.

And it’s not even just about running giant models. More VRAM just makes everything less annoying: bigger context windows, larger quants, running multiple models at once, image generation without constantly optimizing settings, less offloading to system RAM, etc.
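To put rough numbers on it, here's a back-of-the-envelope sketch (the function and its flat overhead allowance are my own assumptions, not any real tool's API; actual usage depends on the runtime, KV cache size and overhead):

```python
# Rough VRAM estimate for local LLM inference:
# weights (params x bytes per weight) plus a flat allowance
# for KV cache / activations / runtime overhead.

def vram_gb(params_billions, bytes_per_weight, overhead_gb=2.0):
    """Very rough GB estimate: weights + flat overhead."""
    return params_billions * bytes_per_weight + overhead_gb

# A ~13B model: a 4-bit quant (~0.5 bytes/weight) fits in 16GB,
# while FP16 (2 bytes/weight) wants a 24GB+ card.
print(f"13B @ 4-bit: {vram_gb(13, 0.5):.1f} GB")  # ~8.5 GB
print(f"13B @ FP16:  {vram_gb(13, 2.0):.1f} GB")  # ~28.0 GB
```

Even at this crude level you can see why 24GB is the threshold people care about: it's the difference between squeezing a mid-size model in at low precision and running it comfortably with room for context.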

The current segmentation feels super obvious too. 5070-class cards get enough VRAM to game well, the 5080 gets enough VRAM to almost be amazing for prosumer AI, then the 5090 gets the “real” memory configuration. It feels less engineering-constrained and more product-stack-constrained. Seems so weird that you can get a 5060 Ti with the same memory as a 5080!

tl;dr I think if the 5080 had launched as a 24GB card, even at a slightly higher price, people would’ve viewed it as one of the all-time great enthusiast GPUs. Instead the conversation around it constantly circles back to “yeah but 16GB…” which is kind of crazy considering how monstrously powerful the rest of the card is.

u/AirButcher — 8 days ago