u/Resident-Ad-5419

Current Claude Pro limit is ~$8 per session and $64 per week

https://preview.redd.it/8fuovmvu3jwg1.png?width=2200&format=png&auto=webp&s=1a5bcaaf1b1e6c8f78099da8483db26adfe853bc

I used a variety of models on this account to see how it behaves. Based on the currently imposed limits and my usage pattern, I think we are given $64 of usage per week and a max of $8 per session (assuming a max of 8 sessions).

If $23 is 34%, then 100% is around $64. That's the math.
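As a quick sanity check of that math (the 34% meter reading is itself rounded, so the extrapolated total lands in the same ballpark as the ~$64 estimate):

```python
# Back-of-the-envelope check of the weekly/per-session limits above.
used_dollars = 23      # usage consumed so far
used_fraction = 0.34   # weekly usage meter reading at that point

# Extrapolate the full weekly allowance from the partial reading.
weekly_limit = used_dollars / used_fraction   # ~ $67.6, same ballpark as ~$64

# Assumed cap of 8 sessions per week -> per-session budget.
per_session = 64 / 8                          # $8 per session

print(round(weekly_limit, 1), per_session)
```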

https://preview.redd.it/lfct3uyy3jwg1.png?width=2222&format=png&auto=webp&s=2848738c2fda110c068b578a50b7b06f656c33ef

Given that I paid $20 for the subscription and got $23 of usage, it's fine in a sense, I guess. The only thing that hurts a bit is that other open models perform better at this price point. In particular, GLM 5.1 and Kimi K2.6 do far better given a similar prompt.

u/Resident-Ad-5419 — 4 hours ago

Kimi K2.6 in Ollama Cloud is not up to the mark

Given the same prompt and environment, the quality on Ollama Cloud is definitely not up to the mark. It thinks more and makes more mistakes in the process. Opencode Go does a slightly better job here despite being lower cost than Ollama Cloud.

Of course, Kimi K2.6 on the original Kimi for Coding subscription feels far superior. That makes me wonder whether the model they released publicly and the model they use in-house are actually different.

I gave them two different sets of tasks; Ollama Cloud and Opencode Go underperformed every time.

One additional note: Fireworks AI did way better than the official Kimi subscription, almost 2x better and faster. They don't have new Firepass passes, so I had to test with the usage-based API, which can be costly in the long run.

https://preview.redd.it/gq1v7v9ydiwg1.png?width=1610&format=png&auto=webp&s=00464affc41a1ecd29c3ff53851ff9950deb029a

u/Resident-Ad-5419 — 6 hours ago
r/ZaiGLM

Under no circumstance delete past run artifacts

GLM 5.1 consistently fails to follow important rules across multiple runs. The initial prompt is very detailed and specific, and was validated by Opus and many other models as part of a challenge.

Any written safety net is kind of useless against this kind of model; add strict hooks and safeguards to prevent mistakes. Containerization is definitely another solid choice.
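A minimal sketch of what such a hook could look like, assuming a setup where the agent framework hands a proposed shell command to a script and treats a nonzero exit code as "block it" (the `runs/` artifacts path and the hook interface are illustrative assumptions, not from any specific tool):

```python
# Hypothetical pre-command hook: refuse any shell command that would
# delete files under a protected past-run artifacts directory.
# Assumes the framework passes the proposed command as CLI arguments
# and interprets a nonzero exit code as "block this command".
import re
import sys

PROTECTED = "runs/"  # where past run artifacts live (assumed path)

def is_destructive(cmd: str) -> bool:
    # Catch rm/unlink/shred aimed at the protected directory.
    return bool(re.search(r"\b(rm|unlink|shred)\b", cmd)) and PROTECTED in cmd

if __name__ == "__main__":
    cmd = " ".join(sys.argv[1:])
    if is_destructive(cmd):
        print(f"BLOCKED: refusing to touch past run artifacts: {cmd}",
              file=sys.stderr)
        sys.exit(1)   # block
    sys.exit(0)       # allow
```

A regex check like this is deliberately dumb; the point is that it runs outside the model, so no amount of yapping can talk its way past it.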

GLM 5.1 loves to talk in circles, yapping all the way. Giving it one whole big task is a worry on its own. Using subagents is much more efficient with this model.

https://preview.redd.it/3ejl7ynti6wg1.png?width=1616&format=png&auto=webp&s=e2fcc7fb5d8804a009a1f847755fbdfb26037b59

u/Resident-Ad-5419 — 2 days ago