
▲ 2 r/openrouter
Does anyone else have the problem that Kimi models just use every reasoning token until they hit 64k and then cancel with finish reason "length"?
The model should be cheaper compared to other top models, but right now it just burns tokens and is unusable for my use case. This happens even when I set the reasoning effort to low. You can limit the token budget, but that just results in a faster abort.
Is this an OpenRouter thing or Kimi-specific? Does anyone know?
u/BeoOnRed — 20 days ago