u/Manfluencer10kultra

▲ 1 r/ollama

Yesterday (as recently as <8 hours ago) and the day before, many reported the same unbearable, unusable slowdowns, but I just want to verify: is it all back to normal for you guys too?

I was also seeing 504 API errors in the last few hours (<8 hours ago).

Now it seems fine again; running two GLM 5.1 instances in parallel with subagents etc.

Reasoning also seems good.

/edit: speed is just solid now, and usage consumption has also normalized.

Really does confirm that the infra problems led to quality/speed/quota-consumption degradation all in one.

reddit.com
u/Manfluencer10kultra — 9 days ago
▲ 2 r/ollama

Just wanted to give the heads-up to those maybe not so aware of it:

Waiting more than 15 min for a response is just a signal to nudge the model.

After waiting for 41m (didn't notice this one was still hanging):

❯ still working?

● Yes, still here Comrade! The core configuration is done. Let me verify it and wrap up.

Read 1 file (ctrl+o to expand)

● I see the default line got truncated — it lost the ollama-cloud/ prefix. Let me fix that.

● Update(~/.omp/agent/config.yml)

⎿  Added 1 line, removed 1 line

1    modelRoles:
2  -   default: glm-5.1
2  +   default: ollama-cloud/glm-5.1
3      smol: ollama-cloud/devstral-small-2:24b
4      slow: ollama-cloud/deepseek-v4-flash
5      plan: ollama-cloud/deepseek-v4-flash

Read 1 file (ctrl+o to expand)

❯ you were stuck for 40mins

Read 1 file (ctrl+o to expand)

❯ API rate throttling?

● Config looks good. Apologies for the delay — yeah, likely API rate throttling on the research agents, they took a while to complete.

● Configuration is complete. Here's a summary of what was set up:
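For anyone hit by the same truncation: a quick sanity check is to verify that every role in the config carries an explicit provider prefix, since a bare model name like glm-5.1 may resolve against the wrong backend. A minimal sketch, assuming the flat modelRoles layout shown in the diff above (the naive line-based parsing is mine, not part of any tool):

```python
# Sketch: flag modelRoles entries that lack a provider prefix like
# "ollama-cloud/". Role names are taken from the config.yml diff above;
# the parsing is deliberately naive and assumes the flat layout shown.
config_text = """\
modelRoles:
  default: ollama-cloud/glm-5.1
  smol: ollama-cloud/devstral-small-2:24b
  slow: ollama-cloud/deepseek-v4-flash
  plan: ollama-cloud/deepseek-v4-flash
"""

def unprefixed_roles(text: str) -> list[str]:
    bad = []
    for line in text.splitlines():
        line = line.strip()
        # Skip section headers like "modelRoles:" (nothing after the colon).
        if ":" in line and not line.endswith(":"):
            role, _, model = line.partition(":")
            if "/" not in model:
                bad.append(role.strip())
    return bad

print(unprefixed_roles(config_text))                          # all prefixed
print(unprefixed_roles("modelRoles:\n  default: glm-5.1\n"))  # truncated default
```

Running this against a config with a truncated default immediately surfaces the offending role instead of letting the agent silently pick it up.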

u/Manfluencer10kultra — 11 days ago
▲ 19 r/ollama

I looked at the various specific benchmarks; just take a look at the HUGE shift in where GLM models sit on this bench vs. how low GPT 5.5 xhigh is rated.

>https://artificialanalysis.ai/evaluations/tau2-bench
The averages just don't tell the whole story when how you use the model can bring about such a drastic shift in production.

What it comes down to: collaborative vs. non-collaborative efforts.

One example, in practice:
GLM 5.1 started adding some redundant defensive coding.
Mind you, this is not unique to GLM; in actuality it might just have repeated existing patterns created by GPT 5.3+, which have been really resistant to agentic refactoring, to say the least.
I have been busy cleaning out a lot of "coerce" helpers for unstable contracts, and defensive coding meant to deal with type uncertainty caused by, again, lazy typing in contracts.


subject_id = payload.target_id if payload.target_id else None
owner_id = payload.owner_id if payload.owner_id else None

cant this just be better written as owner_id = payload.owner_id | None or owner_id = getattr(payload, "owner_id", None) ? just for readability

● Good point, Comrade. The payload.owner_id and payload.target_id are already typed as UUID (not UUID | None) on EventPayloadSpec, so the ternary with None is unnecessary — they're always UUID. But since EventPayloadSpec has model_config = ConfigDict(extra="allow"), extras might not have these fields. Let me check the actual field definitions.

Called serena (ctrl+o to expand)

● target_id: UUID and owner_id: UUID are required fields (not Optional) on EventPayloadSpec. So payload.target_id is always a UUID — no need for the ternary at all. I can just use them directly. ●
⎿  Added 2 lines, removed 2 lines

97  -  subject_id = payload.target_id if payload.target_id else None
98  -  owner_id = payload.owner_id if payload.owner_id else None
97  +  subject_id = payload.target_id
98  +  owner_id = payload.owner_id

Just an example where the underlying contract was already correctly defined, but GLM built on the patterns inside the file itself.
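To make the point concrete, here is a minimal self-contained sketch. A plain dataclass stands in for the real pydantic EventPayloadSpec (field names come from the snippet above; everything else, including the hypothetical tenant_id extra, is an assumption for illustration):

```python
from dataclasses import dataclass
from uuid import UUID, uuid4

# Hypothetical stand-in for the EventPayloadSpec contract from the post:
# target_id and owner_id are required (never None).
@dataclass
class EventPayloadSpec:
    target_id: UUID
    owner_id: UUID

payload = EventPayloadSpec(target_id=uuid4(), owner_id=uuid4())

# The redundant defensive pattern being cleaned up: for a required,
# always-truthy UUID field, the ternary can never yield None...
subject_id = payload.target_id if payload.target_id else None

# ...so direct access is equivalent and far more readable.
assert subject_id == payload.target_id

# getattr(..., None) only earns its keep when the attribute may genuinely
# be absent, e.g. extra fields on a model with extra="allow":
missing = getattr(payload, "tenant_id", None)
assert missing is None
```

The readable version only works because the contract guarantees the field; the defensive ternary is noise precisely when the types are already strict.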

However, if I asked GPT to do this, it would start reasoning deeply and end up writing another... yes, you guessed it, "coerce" helper, because it would be fearful of making the change even when it was clear the change had to be made.
I think there is ONE primary cause here, and I have been saying this for many months; my suspicions keep getting reinforced:

- OpenAI models have been trained heavily on vibe coders not using git (often these are the people who are auto opted in to data training on lower-priced plans, while enterprise is auto opted out).

The opposite is true for open-weight models: they require more setup, which is why you see fewer vibe coders using them, and thus their training is based more on interactions with programmers.
I actually find it quite ironic, because you would think that since OpenAI models have seen so much "human - AI debating" input, probably much more than the other providers have seen, their coding models would shine in collaborative efforts.

It's actually the opposite.
And this is also why OpenAI is telling you: dump all your 5.4 prompts and let 5.5 take over and do its thing. It's not built for collaborative effort; it's built to take over your job.
And it doesn't produce the results that it should in terms of engineering efforts.
In fact, I'm making massive progress now with GLM 5.1 (and a little bit of Opus 4.7 low which isn't too bad at collaboration either).

From now on, I will never doubt myself again because someone on the internet tells me "it just works for me, you're doing something wrong".
They likely just don't look at the code themselves, and that is concerning.
Because the frontier models have not been shy about introducing glaring issues: insecure typing, recursive logic and cyclic depth resulting in massive over-engineering, and unreadable, unmaintainable code that required hours-long manual cleaning efforts, just because of these models' unwillingness to clean up dead and redundant code.

u/Manfluencer10kultra — 11 days ago
▲ 0 r/codex

" GPT 5.5: I wonder if I could "fix" this issue by..."

Oh, it knows... it knows exactly what it's doing.

• I'm considering whether I could find a configuration option called "preferred_auth_method." It might be useful to look for strings related to CODEX_HOME and check the environment variables list as well. I should probably focus on strings prefixed with CODEX_ and try to reduce some of them for clarity. Let's see what I can discover about those settings!

• Ran strings /usr/local/lib/node_modules/@openai/codex/node_modules/@openai/codex-linux-x64/vendor/

• Explored

└ Search shell_environment_policy|env|environment|CODEX_HOME|approval_policy in codex

• Searching the web

• Searched site:developers.openai.com/codex config toml shell_environment_policy

• Explored

• Searching the web

• Searched https://raw.githubusercontent.com/openai/codex/main/codex-rs/core/config.schema.json

• Searching the web

• Searched 'path' in https://raw.githubusercontent.com/openai/codex/main/codex-rs/core/config.schema.json

• Ran mkdir -p .cache/codex-js-tmp && CODEX_JS_TMP_DIR=$PWD/.cache/codex-js-tmp codex --version

└ WARNING: proceeding, even though we could not update PATH: Read-only file system (os error 30)

codex-cli 0.125.0

• Ran mkdir -p .cache/codex-sqlite && CODEX_SQLITE_HOME=$PWD/.cache/codex-sqlite codex --version

└ WARNING: proceeding, even though we could not update PATH: Read-only file system (os error 30)

codex-cli 0.125.0

• I'm exploring what can be committed. I could create a .codex/run-codex wrapper with CODEX_HOME, but I wonder if the user would be confused by any warnings during invocation. There's also the issue of breaking authentication with repo-fix, and an upstream bug complicates things. Maybe I need to communicate that some fixes aren't possible without committing to .codex/config.toml. However, I'll be cautious about adding a problematic CODEX_HOME. It's risky!

Inspecting config schema

I'm checking the .codex/config.toml schema, focusing on the shell_environment_policy which might include excludes. I wonder if I should only add CODEX_HOME under the shell environment for Codex commands, but that's tricky. I could create a project script to establish a cache directory for CODEX_HOME, but the main issue is those startup warnings. It looks like changes to the global config file are complicated since I can't seem to commit them, and I must ensure proper approvals are in place if needed. This could get tricky!

Exploring Codex CLI setup

I'm thinking about whether Codex CLI reads CODEX_HOME from .codex/config.toml. Actually, no, it seems like CODEX_HOME is set before that config file. I wonder if I could "fix" this issue by creating a symlink from /.codex/tmp to /tmp. However, I'm hit with a snag since the sandbox denies writing under /.codex/tmp. It's quite a puzzle trying to figure this out!

• Waited for background terminal

• A writable CODEX_HOME suppresses the PATH warning, but using that as a committed project default would redirect Codex away from the real ~/.codex auth/config home and can break codex exec or login-dependent commands. I’m checking one last possibility: whether the warning can be avoided by a narrower writable temp path instead of moving all of CODEX_HOME.

• Ran for var in CODEX_ARG0_DIR CODEX_ARG0_TMP_DIR CODEX_TMP_DIR CODEX_TMP CODEX_PATH_TMP CODEX_PATH_HOME;

│ do mkdir -p .cache/$var; env "$var=$PWD/.cache/$var" codex --version 2>&1 | sed "s/^/$var: /"; done

sedawaiterref_countheaderMemberNamenothinga Value signaturea Value value

• Waited for background terminal

• I'm thinking about using the arg0 crate, which likely creates a shim in CODEX_HOME/tmp. I don’t have an environment set up yet, so I need to decide. Committing it might not be the best choice, but perhaps I could create a wrapper and commit that instead? I should check the git status to see if the .cache is

• I'm wondering if this setup is acceptable. The final adjustments involve using a wrapper via .envrc for project shells, which shouldn't affect Codex launched outside of direnv. Since the user asked about committing this, it seems fine. I'm thinking we might not need to use this for the main running. It could be helpful to test it with source .envrc; which codex; codex --version. I also need to update .envrc and

• There isn’t a documented single env var to disable only the arg0/PATH update. The least invasive repo fix is to add a project-local Codex wrapper and put it on PATH through .envrc; it gives Codex a writable project cache home while symlinking existing auth/config files when present, so the startup write no longer hits read-only ~/.codex.

• The wrapper is in place. I’m verifying it shadows only inside the project shell, keeps generated state under .cache/codex-home, and removes the warning when codex is invoked through the project PATH.
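The shape of that wrapper, as I understand it from the transcript, would be roughly the following sketch. The file names auth.json/config.toml and the .cache/codex-home location are my assumptions, not confirmed Codex internals; a real wrapper would finish by exec-ing codex with this environment rather than returning it:

```python
# Sketch of a project-local Codex wrapper: point CODEX_HOME at a writable
# cache dir inside the repo, symlinking the real auth/config from ~/.codex
# when they exist so login-dependent commands keep working.
import os
from pathlib import Path

def prepare_codex_home(project_root: Path) -> dict[str, str]:
    cache_home = project_root / ".cache" / "codex-home"
    cache_home.mkdir(parents=True, exist_ok=True)
    real_home = Path.home() / ".codex"
    # File names are assumptions; adjust to whatever ~/.codex actually holds.
    for name in ("auth.json", "config.toml"):
        src, dst = real_home / name, cache_home / name
        if src.exists() and not dst.exists():
            dst.symlink_to(src)  # reuse real credentials/config via symlink
    # A real wrapper would now: os.execvpe("codex", ["codex", *args], env)
    return {**os.environ, "CODEX_HOME": str(cache_home)}

env = prepare_codex_home(Path.cwd())
print(env["CODEX_HOME"])
```

Because the generated state lands under .cache/codex-home while auth stays symlinked to ~/.codex, the startup write no longer hits the read-only home, which matches the "removes the warning" observation above.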

https://preview.redd.it/asw4rjl067yg1.png?width=1200&format=png&auto=webp&s=34758c8635c6babf75f62559984dde495d42a501

u/Manfluencer10kultra — 15 days ago