u/Expensive_Grape6765

I cannot take this anymore.

I'm a student, currently in uni year 1.

I will say, I am really different from the average person in SG. Throughout the years of interacting with other Gen Zs of my batch (born between 2002 and 2006), all I can say is, god forbid people have hobbies.

I mention my hobby and nobody grows curious. But if it's something relatable, they go on and on about it with other people. It's not just hobbies either, but behaviour and attitude. If an individual says something unique, they scurry away. If an individual criticizes, or acts differently from what's expected of the average Singaporean, they scurry away. What this shows is that people who are different get overlooked, while those who are factory-produced replicas are always given social opportunities.

Since I was young, I was always out of the norm and got bullied so many times, including in NS.

I cannot tahan this country. I've been suffering from severe social isolation, and it is suffocating. My social skills are so bad that it's nearly impossible to find anyone willing to put up with my differences from the norm. I find it so difficult to talk to people and keep them hooked on the conversation, just because they don't find me relatable.

I am not going to be a replica of everyone in SG.

I had goals to help SG improve, such as through public health policy, but I am struggling not to lose that motivation.

If people don't want to help me, then why the hell should I serve to improve this country?

I love being innovative, but this country suffocates that kind of characteristic in me.

u/Expensive_Grape6765 — 3 days ago

The Market Is A Game Of Probability

Statistics is crucial for measuring whether an apparent edge is actually significant.

Because the market is so dynamic, keeping a strategy working is likely to require switching between reversal and trend-following approaches.

And the problem is, it is genuinely difficult to know exactly when the market is choppy and when it is trending.

To help with this, I attempted to quantify it with the questions: "What is the probability of a pullback happening?" and "What is the probability of a reversal happening?"

This is based on my own strategy of trading solely on the 5-minute timeframe.
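
To give a rough idea of what I mean by "quantify", here's a bare-bones sketch of estimating that probability empirically from 5-minute bars. The pullback definition and the data file are placeholders for illustration, not my actual rule:

```python
import pandas as pd

# 5-minute OHLC bars with a "close" column (placeholder data file).
bars = pd.read_csv("es_5min.csv", parse_dates=["time"], index_col="time")

# Toy definitions, purely illustrative:
# an "up leg" = 3 consecutive higher closes,
# a "pullback" = the very next close printing lower.
up_leg = (bars["close"].diff() > 0).rolling(3).sum() == 3
pullback_next = bars["close"].diff().shift(-1) < 0

n_legs = int(up_leg.sum())
p_hat = pullback_next[up_leg].mean()
print(f"P(pullback | 3-bar up leg) ~ {p_hat:.2%} over {n_legs} up legs")
```

The point is just that "probability of a pullback" becomes a counting exercise once you pin down the definitions.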

It may seem like whatever it's measuring is just noise, but paired with the right strategy, it may be promising. I personally have a strategy that can filter out fake pullbacks to a statistically significant degree (p < 0.001), so I'm excited to see what this can do!
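
For anyone curious what "statistically significant" means here, it boils down to a binomial test: out of the pullback signals the filter keeps, how many turn out to be real, versus the base rate without the filter? A minimal version, with completely made-up numbers (not my actual results):

```python
from scipy.stats import binomtest

# Hypothetical counts, for illustration only.
n_kept = 250      # pullback signals that passed the filter
n_real = 170      # of those, how many were genuine pullbacks
base_rate = 0.55  # fraction of genuine pullbacks among ALL signals

# H0: the filter keeps genuine pullbacks no more often than the base rate.
result = binomtest(n_real, n_kept, base_rate, alternative="greater")
print(f"hit rate = {n_real / n_kept:.1%}, p = {result.pvalue:.1e}")
```

If the printed p-value comes out below 0.001, that's the kind of threshold I'm talking about.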

I have yet to test this, as I am currently in the middle of testing another strategy. Will incorporate this in the future and see how the testing goes.

u/Expensive_Grape6765 — 5 days ago

Not sure how common it is to value authenticity at all costs, but here goes.

Think about it. You are going to be with this particular individual for the rest of your life. If you decide to "dress nice for the occasion" because it's a date, or because you want to increase your chances of attracting the individual, what happens if, because of that, your potential partner comes to expect this from you in your daily routine, all the way into marriage and beyond? If a couple is unhappy post-marriage, that's a big problem. Shouldn't we maintain our authenticity at all costs, no matter what, and then find a partner who likes it, so that the attraction lasts long-term?

[EDIT: Dressing nicely is fine, but not deliberately in a way that feels fake to who you are]

u/Expensive_Grape6765 — 15 days ago
▲ 82 r/Bard

Cost & Performance Efficiency

  • Training Cost-Performance (8t): +170% to +180% gain (2.7x–2.8x)
  • Inference Cost-Performance (8i): +80% gain
  • Training Power Efficiency (8t): +124% gain in performance-per-watt
  • Inference Power Efficiency (8i): +117% gain in performance-per-watt

Networking & Latency

  • Data Center Network Bandwidth: +300% gain (100 Gb/s to 400 Gb/s)
  • Inference Network Latency: -56% reduction
  • Network Routing Distance: -56% reduction (16 hops down to 7 hops)
  • Standard Superpod Chip Count: +4.2% gain (9,216 to 9,600 chips)

Memory

  • On-Chip SRAM (8i): +200% gain (3x capacity)
  • HBM Capacity (8i Inference): +50% gain (192 GB to 288 GB)
  • HBM Capacity (8t Training): +12.5% gain (192 GB to 216 GB)
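
Quick sanity check: the percentages above are just before/after arithmetic (gain = new/old - 1). A minimal sketch using only the raw figures quoted in the bullets:

```python
# Recompute the quoted gains from the raw before/after figures above.
metrics = {
    "Network bandwidth (Gb/s)": (100, 400),    # quoted: +300%
    "Routing distance (hops)":  (16, 7),       # quoted: -56% (fewer is better)
    "Superpod chip count":      (9216, 9600),  # quoted: +4.2%
    "HBM capacity 8i (GB)":     (192, 288),    # quoted: +50%
    "HBM capacity 8t (GB)":     (192, 216),    # quoted: +12.5%
}

for name, (old, new) in metrics.items():
    gain = new / old - 1  # e.g. 400/100 - 1 = +3.00, i.e. +300%
    print(f"{name}: {gain:+.1%}")
```

Note the hops figure only works out to -56% with 7 hops (16 -> 7 is -56.3%; 16 -> 8 would be -50%).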

Impact on Google's SOTA - Gemini 3.1 Pro Preview

  • For Gemini 3.1 Pro today, the TPU 8i means cheaper (~50% cost reduction), faster, and more responsive APIs with vastly improved long-context handling.

Impact on Future Models

  • For future Gemini models tomorrow, the TPU 8t removes the data-center bottlenecks, unlocking the compute necessary to train the next frontier of trillion-parameter, deeply multimodal AI systems.

---

Some of the network metrics, like the -56% reduction from 16 hops down to 7 hops, were from the presentations on the floor at Cloud Next '26, but here are the general articles.

  1. TPU 8t and TPU 8i technical deep dive | Google Cloud Blog
  2. Google announces 'Workspace Intelligence' and TPU 8t + 8i chips
  3. Inside Google's TPU V8 strategy, delivering two chips for two crucial tasks at incredible scale — network scales up to 1 million TPUs per cluster, an advantage over Nvidia AI accelerators | Tom's Hardware
u/Expensive_Grape6765 — 15 days ago
▲ 10 r/AnkiAi

I'm learning medical stuff, and usually one topic takes me a gruesome 5 days of a full-fledged card creation -> optimization process. And even then, the cards still suck, because whenever I review or learn a card there's always some problem with it (e.g., too vague).

With AI, I got that down to 3-6 hours. That's insane! I reeeeeeeeally appreciate LLMs so much.
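
For anyone wondering, the pipeline is nothing fancy; it's roughly this shape (the model name, prompt, and file names are placeholders, not exactly what I use):

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder source notes for one topic.
notes = open("pharmacology_notes.txt").read()

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any capable model works
    messages=[
        {"role": "system",
         "content": "You write Anki flashcards. One atomic fact per card, "
                    "nothing vague. Output TSV: front<TAB>back, one per line."},
        {"role": "user", "content": notes},
    ],
)

# Write TSV so Anki's File > Import can ingest it directly.
with open("cards.tsv", "w") as f:
    f.write(resp.choices[0].message.content)
```

The win is that fixing generated cards is much faster than writing every card from scratch.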

u/Expensive_Grape6765 — 17 days ago

While this is on a case-by-case basis, generally speaking, people in SG are pretty gloomy.

Someone can say they got a girlfriend or got married, and people don't really show happiness or support for the person.

Someone can say they got a new job, and people don't really show happiness or support for the person.

Someone can say they made progress on their hobby, and the same thing happens.

It's just super stiff here. Show some support and happiness man. We only got one life (or another life if you believe in it)! Enjoy life! Throw a celebration for your friend or something!

u/Expensive_Grape6765 — 18 days ago