GitLab runners not running
Is anyone else facing this issue? Pipelines are stuck at "Created", or multiple pipelines are created automatically and then stay stuck at "Created". Layoffs yesterday, runners resigned today 👀
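For anyone hitting similar symptoms outside an outage: jobs that never leave "Created" or "Pending" are usually a scheduling problem rather than a job failure — e.g. an upstream stage or `needs:` dependency that never ran, or no active runner matching the job's tags. A minimal `.gitlab-ci.yml` sketch of the tag-matching case (the job name and tag here are made up):

```yaml
# Hypothetical job — it stays queued indefinitely if no registered,
# unpaused runner carries the "docker" tag.
build:
  stage: build
  tags:
    - docker          # must match a live runner's tags
  script:
    - echo "a runner picked this job up"
```

Registered runners and their tags are listed under Settings > CI/CD > Runners; if nothing there matches the job's tags, the job just sits in the queue.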
I asked both models to "Review the code for improvements, use graphify" on the small codebase of my hobby project, then asked Opus 4.7 (thinking, max effort) to review their answers. Here's the output.
| Dimension | Minimax m2.7 | Deepseek v4 Pro | Opus 4.7 |
|---|---|---|---|
| Bugs caught | drag undo, structuredClone, history singleton | none | both + verified mechanism |
| Architecture insight | store-slicing, subscription perf | community cohesion | union + concrete splits |
| Line counts | canvas.ts 554 (actual 561, close) | Popup.tsx 467 (actual 657), Toolbar.tsx 388 (actual 546) | verified all |
| Dead code | motion-path caught | missed | confirmed |
| Used graph data | no — manual review only | yes — cited cohesion + god nodes | yes |
| Hallucinations | minor (line numbers off by ~7) | major (line counts off by 30–40%) | none |
| Actionable fixes | yes, prioritized | partial (suggested split points but no specifics) | yes |
Minimax wins on substance. It found 3 real bugs Deepseek missed (drag undo, structuredClone, history singleton) and the dead motion-path tool. Its perf observations are concrete and correct.
Deepseek wins on graph utilization. It actually used cohesion scores and god-node analysis from graphify, which is the whole point of running it. But it invented line counts and missed every concrete bug.
Best play: combine Minimax's bug list with Deepseek's community-cohesion framing. Minimax did real code reading; Deepseek did graph reading. Opus combined both and verified the line numbers.
I have been using Minimax 2.7 with the tokens plan on Cursor, and I've noticed that it yaps for too long and talks too much. Is there a solution for faster development?