u/Safe-Introduction946

▲ 3 r/vastai

Ramp tracks real spend data across 50,000+ businesses. Vast.ai landed in the Trending category — breakout growth relative to size — the same signal that flagged Anthropic and Perplexity before they became household names.

Read more here.

u/Safe-Introduction946 — 7 days ago
▲ 7 r/vastai

We've shipped a major upgrade to two-factor authentication on Vast. Five things to know:

  • Authenticator app support: use any TOTP app (Google Authenticator, 1Password, Microsoft Authenticator, etc.) instead of SMS
  • CLI 2FA: 2FA now works through the Vast CLI
  • Team enforcement: team owners can create roles whose permissions require 2FA
  • Backup codes: automatically generate one-time recovery codes so you're never locked out
  • Multiple methods: register more than one 2FA method per account
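For the curious: the codes an authenticator app produces are just RFC 6238 (TOTP) HMAC arithmetic over the current 30-second time step. Here's a minimal stdlib-only Python sketch of that math — not Vast's implementation, just the standard any TOTP app follows:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if t is None else t) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte picks the window
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# RFC 6238 test vector secret ("12345678901234567890" in base32), T=59 → "287082"
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))
```

Since it's a published standard, any of the apps listed above (Google Authenticator, 1Password, etc.) will produce the same codes from the same secret.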

Migration notice — action required if you use the old (legacy) SMS 2FA. Legacy 2FA is being deprecated. If you currently use it, log in normally, go to the Settings page, and click "Regenerate" in the backup codes section to migrate to the new 2FA. Takes under a minute.
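Backup codes are conceptually simple: a batch of high-entropy one-time strings, generated with a CSPRNG and stored hashed server-side so each can be redeemed exactly once. A hypothetical generator — the alphabet, count, and length here are illustrative choices, not Vast's actual scheme:

```python
import secrets

def backup_codes(n=10, length=8):
    """Generate n one-time recovery codes from an alphabet that avoids
    ambiguous characters (no O/0, I/1/l) so codes are easy to type back."""
    alphabet = "ABCDEFGHJKMNPQRSTUVWXYZ23456789"
    return ["".join(secrets.choice(alphabet) for _ in range(length))
            for _ in range(n)]

for code in backup_codes():
    print(code)
```

Regenerating (as the migration step above asks) simply invalidates the old batch and issues a fresh one, which is why it's safe to do at any time.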

How to set up: Enable 2FA in Settings · Full setup guide

Building on Vast? Get pricing alerts, beta features, and direct access to the team in our Discord — 10,000+ members → discord.gg/vast

u/Safe-Introduction946 — 13 days ago
▲ 1 r/vastai

Kimi K2.6 is an open-source, native multimodal agentic model from Moonshot AI that advances practical capabilities in long-horizon coding, coding-driven design, proactive autonomous execution, and swarm-based task orchestration. It is a Mixture-of-Experts model with 1 trillion total parameters and 32 billion activated per token, built on the Kimi K2.5 architecture.
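To put those Mixture-of-Experts numbers in perspective: per-token compute scales with the *activated* parameters, not the total, which is the whole point of the architecture. A quick back-of-envelope using the figures from the post:

```python
# Figures quoted above: 1T total parameters, 32B activated per token
TOTAL_PARAMS_B = 1000.0
ACTIVE_PARAMS_B = 32.0

fraction = ACTIVE_PARAMS_B / TOTAL_PARAMS_B
print(f"{fraction:.1%} of the weights participate in each forward pass")  # 3.2%
```

So per-token inference cost is roughly that of a ~32B dense model, while the full trillion parameters provide the capacity.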

Learn more about the Kimi K2.6 template in the model library: https://vast.ai/model/kimi-k2.6

u/Safe-Introduction946 — 19 days ago
▲ 2 r/vastai

Gemma 4 is Google DeepMind's next-generation family of open multimodal models.

The 26B A4B variant is a Mixture-of-Experts model with 25.2B total parameters but only 3.8B active per token, delivering frontier-level quality at the inference speed of a much smaller dense model. It handles text and image input natively, supports a 256K context window, and covers 140+ languages.

The 31B variant is the dense flagship of the family, built to deliver frontier-level reasoning, coding, and multimodal understanding on consumer GPUs and workstations. It natively handles text and image input, supports a 256K context window, and covers 140+ languages.
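One caveat worth noting when picking a GPU for the MoE variant: sparse activation cuts per-token *compute* to ~3.8B parameters' worth, but all 25.2B weights still have to sit in memory. A rough weight-only VRAM estimate (ignoring KV cache and activations, so treat it as a floor):

```python
def weight_vram_gb(params_billions, bytes_per_param):
    """Billions of parameters × bytes per parameter ≈ gigabytes of weights."""
    return params_billions * bytes_per_param

# Gemma 4 26B A4B: full 25.2B weights must be resident regardless of sparsity
print(f"fp16/bf16: {weight_vram_gb(25.2, 2):.1f} GB")   # 50.4 GB
print(f"int4:      {weight_vram_gb(25.2, 0.5):.1f} GB") # 12.6 GB
```

In short, the MoE model runs at small-model speed but needs big-model memory, which is exactly the trade-off that makes multi-GPU or quantized rentals attractive.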

Read more about the Gemma 4 templates in the model library:

u/Safe-Introduction946 — 22 days ago