u/utnapistim99

"Complete your listing content" bug (Solved)

https://preview.redd.it/33atydt03ctg1.jpg?width=924&format=pjpg&auto=webp&s=cfb863d8159a5b820697076920ab7c0f37cac4b6

Yesterday I struggled a lot with the "You started working on your listing content, but it's still incomplete or has errors. Finish it up, or address the errors." message, but no matter what I did, I couldn't get rid of it. Someone in the Shopify community suggested switching the listing language to another language and then back to English. I tried it, and it really did fix the problem. If anyone else is running into this issue, that's the solution. Good luck, everyone.

I wrote "bug" in the title because I'm pretty sure it's not a feature :D

reddit.com
u/utnapistim99 — 7 days ago

Problems I encountered while publishing the application.

https://preview.redd.it/tnql0j26d9tg1.png?width=924&format=png&auto=webp&s=1bb6d2ebb9c2341562d2f22fcfc2473e8933dfe1

https://preview.redd.it/hktnzpu6d9tg1.png?width=922&format=png&auto=webp&s=9d1895ffc30e8f274923130fbb5d65cc30768d5e

I can't solve these two problems. The first one still says it's waiting for me to enter content, even though I've filled in all the required information. For the second one, I've done everything required, so I guess I have no choice but to wait. But the first one is giving me a lot of trouble.

"You started working on your listing content, but it's still incomplete or has errors. Finish it up, or address the errors."

Has anyone experienced this problem before?

u/utnapistim99 — 7 days ago
▲ 8 r/ollama

MLX Local LLM for M5 Pro 15C 16G 24GB Ram (coding)

Hi there!

I have an M5 Pro (15C CPU / 16C GPU) setup with 24 GB of RAM, and I need to find the best choice for it.

I think we can now run MLX versions with Ollama. That's great! I write code every day, and the agents I currently use are Opus 4.6, Sonnet, Flash, and Gemini 3.1 Pro. I'm looking for a local LLM recommendation that comes as close to those as possible (it won't match them, but ideally it gets close). I found a few candidates but couldn't get them to work. Could you please share a direct link to the ideal MLX (or non-MLX) versions for this setup? Or if there's another way to install one, please explain it.

I'm running the model through `continue` in VS Code. If there's a better method, please share it with me. I open Ollama in the terminal and run the model using `ollama run <localllm>`.
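For reference, the terminal workflow sketched above usually looks like this with the Ollama CLI (the model tag is only an illustrative placeholder, not a specific recommendation for this hardware):

```shell
# Download a model first; pick a tag whose size fits in 24 GB of RAM
ollama pull qwen2.5-coder:14b

# Chat with it interactively in the terminal
ollama run qwen2.5-coder:14b

# Or start the local API server (default port 11434) that editor
# integrations like Continue connect to instead of the terminal REPL
ollama serve
```

With `ollama serve` running, Continue can be pointed at the local Ollama endpoint rather than launching the model from the terminal each time.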

u/utnapistim99 — 9 days ago