u/Additional-Alps-8209

Am I missing something about GPT-5.5 efficiency?

OpenAI said GPT-5.5 was supposed to be more cost-efficient, but this Artificial Analysis chart seems to show Codex + GPT-5.5 using more tokens than Codex + GPT-5.4.

GPT-5.5 is around 2.8M tokens per task, while GPT-5.4 is around 2.5M in the same Codex setup.

Am I reading this wrong? Is there something about cached tokens or pricing that makes this more efficient in practice?
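For what it's worth, raw token count and billed cost can diverge a lot once cached-input discounts enter the picture. Here's a rough sketch of the arithmetic (all prices, cache-hit rates, and the discount factor below are made-up illustrative numbers, not OpenAI's actual rates):

```python
# Hypothetical pricing sketch: a model that burns MORE tokens per task
# can still cost less if a larger share of them is billed at a cached
# (discounted) rate. None of these numbers are real rates.

def effective_cost(total_tokens, cached_fraction, price_per_m, cache_discount):
    """Dollar cost when `cached_fraction` of tokens is billed at a discount."""
    cached = total_tokens * cached_fraction
    uncached = total_tokens - cached
    billed = uncached + cached * cache_discount
    return billed * price_per_m / 1_000_000

# "GPT-5.5"-like run: 2.8M tokens, but 70% cache hits (hypothetical)
cost_a = effective_cost(2_800_000, 0.7, 1.25, 0.1)
# "GPT-5.4"-like run: 2.5M tokens, only 30% cache hits (hypothetical)
cost_b = effective_cost(2_500_000, 0.3, 1.25, 0.1)

print(f"2.8M-token run: ${cost_a:.2f}, 2.5M-token run: ${cost_b:.2f}")
```

Under those made-up assumptions the higher-token run comes out cheaper, so "more tokens per task" on the chart doesn't by itself settle the cost-efficiency question.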

Small note: Opus 4.7 seems to use far fewer tokens here too, but I know that's not the clean comparison. The more direct comparison is GPT-5.5 vs GPT-5.4 in Codex.

Also, pretty impressed with Cursor here. The models on their platform seem to perform very well while using a lot fewer tokens. Kudos to the Cursor team.

u/Additional-Alps-8209 — 3 days ago