What is the token price for the coding plan?
The plans are subscriptions, and I want to know how their token allowances are calculated. How do they compare to just paying for tokens as you go?
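Since the question is really "at what usage does a subscription beat pay-as-you-go", here is a back-of-the-envelope sketch. The prices below are made-up placeholders, not any provider's actual rates; plug in the numbers from the pricing page you're comparing:

```python
# Hypothetical break-even between a flat subscription and pay-per-token
# API pricing. All dollar amounts are illustrative assumptions.

def breakeven_tokens(subscription_usd, price_per_million_usd):
    """Tokens per month at which the subscription matches pay-as-you-go."""
    return subscription_usd / price_per_million_usd * 1_000_000

# Assumed prices (NOT real): a $20/month plan vs $3 per million tokens.
plan = 20.0
per_million = 3.0

tokens = breakeven_tokens(plan, per_million)
print(f"Break-even: {tokens:,.0f} tokens/month")  # roughly 6.7M tokens
```

If you burn more than that per month, the subscription wins (up to whatever cap the plan has); below it, pay-per-token is cheaper.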
For me I do. My graphics card is an "old" GTX 1080 from 2016 or 2017, I forget exactly when. When they released it, the Nvidia guy went on stage talking about the Pascal architecture like they had invented teleportation or something, and we all ran to give him our thousand dollars :)
So, I am still waiting for the "teleportation" feature to be enabled in the next driver :)
Today the error messages are all: sorry, blah blah Pascal, blah blah unsupported, legacy, blah blah.
Looks like 30B to 50B models are evolving into the sweet spot, the "able to do actual work" models. I will get a card that runs one the moment it costs $1000 to $2000 and can do a few hundred tokens per second, which is maybe far away, or just a normal mobile phone in 2030 or 2035.
So, meanwhile, I use subscriptions.
I am wondering if other Local LLM users are doing the same?
I am not sure if OpenAI does this on purpose to delay some projects, or they are experimenting, or the guy working on context management is giving the LLM Alzheimer's, but in general, I ask for something and I get something else the LLM wanted to do :)
We made an app together, yayyyyy, finally.
The app has a UI and functionality.
Bro, my man, Codex, xhigh, let's put some automated testing on that shit. Let's write test cases for these buttons and this functionality to make sure it works, so next time we make a lot of changes and break something else, we get a heads-up.
Codex: yeah, I will do ... and gets off topic very fast. It never lists the UI items we have to test, isn't even aware of what the screens contain, nothing. It just goes randomly here and there and patches stuff.
So, if you treat it as a software developer aware of what it is working on: no, it is not. All it wants is to patch something, that's it.
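For anyone wondering what I actually wanted out of it: something like this. A minimal sketch, and the app, its state, and the handler names are all made up for illustration, but the point is that a refactor that breaks a button should fail a test instead of shipping:

```python
# Hypothetical button handlers and their tests (pytest style).
# AppState, save_clicked, and reset_clicked are invented names standing
# in for whatever the real app wires its buttons to.

class AppState:
    def __init__(self):
        self.text = ""
        self.saved = None

def save_clicked(state):
    """'Save' button: persist the current text."""
    state.saved = state.text

def reset_clicked(state):
    """'Reset' button: clear the input field."""
    state.text = ""

def test_save_button_stores_current_text():
    state = AppState()
    state.text = "hello"
    save_clicked(state)
    assert state.saved == "hello"

def test_reset_button_clears_text():
    state = AppState()
    state.text = "hello"
    reset_clicked(state)
    assert state.text == ""
```

The first step I wanted from Codex was exactly the part it skipped: enumerate the screens and buttons, then write one test per behavior like the two above.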
I keep hearing the model is good. I don't have the hardware for it, and I will wait until the end of the year for the hardware to evolve.
But I still need coding, and people are saying qwen3.6 35b a3b is good, so the question now is how much it will cost me to host it somewhere until I get new hardware.
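The "host it somewhere" math is simple enough to sketch. The rental rate and usage hours below are assumptions I made up for illustration, not any provider's real pricing:

```python
# Rough, hypothetical cost of renting a GPU until new hardware arrives.
# All numbers are placeholders; check the actual provider's rates.

gpu_hourly_usd = 0.80   # assumed hourly rate for a GPU that fits the model
hours_per_day = 4       # assumed daily coding time with the model loaded
days_per_month = 30

monthly = gpu_hourly_usd * hours_per_day * days_per_month
print(f"~${monthly:.0f}/month at {hours_per_day}h/day")  # ~$96/month
```

The useful comparison is that monthly figure against both the subscription price and the amortized cost of just buying the card a few months earlier.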