u/Just_Lingonberry_352

▲ 7 r/Bard

3.2 flash pricing: 20x cheaper than GPT 5.5, 95% of its capability, sub-200ms responses.

biggest problem with GPT 5.5 is how ridiculously long it takes to respond even on medium.

the leap from 5.4 to 5.5 isn't too big either, so we have a more expensive, more efficient, but still extremely slow model

if the rumors are true and gemini ships 3.2 flash it could have a big leg up, because even if it takes 2~3 prompts to do what GPT 5.5 does in 1 prompt, that one prompt takes a long time

I'm always going to prefer the one that responds quicker.

reddit.com
u/Just_Lingonberry_352 — 1 hour ago
▲ 10 r/codex

I am finding more and more often that 5.5 misses details and will still inform you that it has completed the task. This is very reminiscent of the 5.1 days when it would outright lie about completing some task.

I wonder if anybody else is coming across this issue. Honestly I'm a bit flabbergasted that this old trait has come back, and I wonder if there is any mitigation for it.

It started happening a while ago. Not sure if this has to do with tighter context limits, but the same old small-context and compaction issues from the 5.1 days appear to be back, at least in my extensive usage.

u/Just_Lingonberry_352 — 9 days ago
▲ 0 r/flet

before we had ai code gen, flet had a good value proposition: python abstractions instead of writing dart

but now there is no barrier to just producing good-quality dart code directly

this is where I feel flet loses its relevance

u/Just_Lingonberry_352 — 29 days ago