u/Fovane

I trained a tiny 59M parameter GameDev coding model for Unity, Godot, and Unreal

Hello,

I wanted to share a small local LLM experiment and get feedback from people who run small models locally.

It is a lightweight 59M parameter decoder-only model trained specifically for direct game-development coding commands across Unity, Godot, and Unreal Engine.

The goal is not to compete with frontier models in general reasoning. The goal is to have a very small, self-hostable fallback model that can answer practical game-dev coding prompts such as:

- “add WASD movement logic to the player object”

- “create a capsule with collider and movement in Unity”

- “create a red cube in Godot”

- “add camera follow to player”

- “add a reusable health component”

I benchmarked it against:

- qwen2.5:0.5b

- a fine-tuned qwen2.5 0.5B LoRA

- qwen2.5 7B

On my direct game-command benchmark, the result was:

| Model | Score |
|---|---:|
| Yuspec GameDev AI 59M | 116/120 |
| Qwen2.5 7B | 102/120 |
| Qwen2.5 0.5B LoRA | 90/120 |
| Qwen2.5 0.5B | 74/120 |

This is a narrow benchmark, so I’m not claiming it is generally smarter than Qwen. The model is specialized for short Unity/Godot/Unreal coding commands, and it can still make mistakes, especially with more complex Unreal C++.

The interesting part for me is that it is tiny and fast. On my local benchmark it averaged around 2.1s per answer, and I’m planning to use it as the final fallback model for my website after Groq/Cerebras/Gemini fail or rate-limit.
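The fallback setup described above can be sketched roughly like this (a minimal sketch; the provider functions and the `ProviderError` interface are hypothetical placeholders, not the site's actual code):

```python
# Minimal sketch of a provider fallback chain: try hosted APIs in order,
# fall back to the small local model if they all fail or rate-limit.
# Provider callables and ProviderError are hypothetical placeholders.

class ProviderError(Exception):
    """Raised when a hosted provider fails or is rate-limited."""

def answer_with_fallback(prompt, providers, local_model):
    """Try each hosted provider in order; use the local model last."""
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError:
            continue  # rate-limited or down: try the next provider
    return local_model(prompt)  # tiny local model as the final fallback
```

The point of the chain is that the local 59M model never needs to be the best answerer, only the one that is always available.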

My website for game developers: yuspecai.com.tr

Repo:

https://github.com/Fovane/yuspec-gamedev-ai

Release:

v0.3.0 - Yuspec GameDev AI 59M

I’d love feedback, especially from Unity/Godot/Unreal developers. If anyone wants to try prompts or suggest benchmark cases, that would help a lot.

u/Fovane — 5 days ago, r/vibecoding (+1 crosspost)

Hey r/vibecoding,

Finally, it turned out the way I wanted. I had to change the concept many times, but this time it worked.

Been vibe coding my own game projects for a while and got frustrated that tools like Copilot and Cursor don't understand Godot scene trees or Unity prefabs — they just treat everything as generic code.

So I built YUSPEC AI: you connect your GitHub repo, browse your files in a Monaco editor, and chat with an AI that actually understands your engine context (.tscn hierarchy, .unity scenes, etc.). When it proposes changes you see a diff, click Apply, and it commits directly to GitHub. No git commands, no copy-paste.

Works in the browser — nothing to install.

Engines: Godot 4, Unity, Unreal Engine 5.

Free tier always available (lighter model). Credit packs for the heavy lifting, no subscription.

Would love feedback from anyone vibe coding games — what's your biggest pain point with current tools?

yuspecai.com.tr

u/Fovane — 6 days ago, r/GodotEngine (+2 crossposts)

Hello guys,

I have vibe coded this website: https://yuspecai.com.tr

It basically does prompt-to-game-prototype (for HTML5, Unity, Unreal, Godot). I used this pipeline to make prototypes: Prompt → JSON → .yuspec → Engine zip.
I also used this GitHub repo while building the flow: Fovane/yuspec, a text-based gameplay rule layer for Unity ("write gameplay rules, not script spaghetti"). Yes, it currently targets only Unity, but I tried to keep it engine-agnostic (if that's OK 😃)

The product's output is not perfect yet, but it is improving (I am still working on output quality).

I just wanted to share it here. What do you think?

I am going to sell AI credits to use the service, charged per export.

Here is the detailed explanation:

I built YUSPEC AI: a prompt-to-game-prototype system that exports playable HTML5 plus Unity/Godot/Unreal starter projects

I’ve been building YUSPEC AI, a web-based game prototype generator. The goal is not “AI makes a finished commercial game,” because that would be dishonest. The goal is more practical: turn a text idea into a working, inspectable, editable prototype package fast enough that a designer or solo developer can validate mechanics before spending days in an engine.

The current pipeline takes a game prompt, converts it into a structured game spec, validates that spec, generates a playable HTML5 preview, and can export engine starter projects for Unity, Godot, and Unreal.

The core idea is that the AI does not directly “write a random game.” It produces an intermediate format first.

That format is YUSPEC JSON, plus a readable .yuspec file. The spec describes:

  • game metadata
  • player, enemies, goals, hazards, collectibles
  • behavior cards
  • movement rules
  • combat/projectile rules
  • win/loss conditions
  • HUD requirements
  • uploaded asset bindings
  • engine export metadata
  • quality and validation reports

This intermediate layer is important because it makes the generation deterministic enough to debug. Instead of asking an LLM to generate a full Unity/Godot/Unreal project directly, the system asks the LLM/planner to create a constrained game design document that my exporters can compile into runtime outputs.
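To make the idea concrete, here is a hypothetical, heavily simplified shape of such a spec as a Python dict. The field names are my guesses for illustration, not the actual YUSPEC schema:

```python
# Hypothetical, simplified YUSPEC-style spec for illustration only;
# the real schema and its field names are not published in the post.
minimal_spec = {
    "meta": {"title": "Coin Runner", "genre": "platformer", "dimension": "2d"},
    "entities": {
        "player": {"movement": "wasd", "health": 3},
        "enemies": [{"type": "patroller", "count": 4}],
        "collectibles": [{"type": "coin", "count": 20}],
    },
    "rules": {
        "win": {"collect": {"type": "coin", "amount": 20}},
        "lose": {"health_below": 1},
    },
    "hud": ["health", "coin_counter"],
}

def required_sections_present(spec):
    """Exporters can rely on these top-level sections existing."""
    return all(k in spec for k in ("meta", "entities", "rules", "hud"))
```

Because the exporters only ever see this constrained structure, a bad generation fails a schema check instead of producing a broken project.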

Pipeline overview

  1. User writes a prompt.
  2. The backend analyzes the prompt and extracts intent: genre, dimension, perspective, mechanics, theme, objectives, hazards, enemies, etc.
  3. A structured YUSPEC spec is created.
  4. Quality gates validate the spec:
    • does it have a player?
    • does it have a goal?
    • does the prompt ask for shooting, and if so are projectiles/combat represented?
    • does the prompt ask for dark ambience, racing, platforming, maze logic, survival, etc.?
    • are required mechanics actually present?
  5. The system generates an HTML5 preview.
  6. It packages downloadable outputs:
    • JSON
    • .yuspec
    • HTML5 prototype
    • Unity project zip
    • Godot project zip
    • Unreal starter project zip

Why an intermediate spec?

Direct prompt-to-engine-code is fragile. Small prompt changes can break runtime logic, asset references, or scene structure.

With YUSPEC JSON, I can keep the LLM in a planning role and keep the engine exporters deterministic. The exporter can read the same spec and produce different targets.

For example, a “top-down shooter with enemy waves and coins” becomes a spec containing entities, spawn logic, projectile behavior, health/damage rules, collectible rules, HUD counters, and a win/loss condition. Then each exporter maps that into its own engine conventions.
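The "one spec, many targets" fan-out can be sketched as a dispatch table (the exporter bodies here are placeholders; real exporters would emit project files):

```python
# One spec, several deterministic export targets. The exporter bodies
# are placeholders; real exporters would write out engine project files.
def export_unity(spec):
    return f"UnityProject({spec['meta']['title']})"

def export_godot(spec):
    return f"GodotProject({spec['meta']['title']})"

EXPORTERS = {"unity": export_unity, "godot": export_godot}

def export(spec, target):
    try:
        return EXPORTERS[target](spec)
    except KeyError:
        raise ValueError(f"unsupported engine target: {target}")
```

Adding a new engine then means adding one exporter entry, not touching the LLM/planner side at all.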

Asset handling

Uploaded assets are not just stored as files. They go through an asset binding layer. The system tries to assign uploaded models/textures/audio to semantic roles like:

  • player
  • enemy
  • weapon
  • projectile
  • environment prop
  • ground/floor
  • goal/portal
  • collectible
  • UI/audio

There is also fallback logic. If no uploaded asset matches a role, the generator uses built-in procedural placeholder assets so the prototype still works.
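A minimal sketch of that binding-with-fallback logic (the role names come from the list above; the filename-matching heuristic is my assumption, not the actual matcher):

```python
# Sketch of the asset binding layer: map uploaded assets to semantic
# roles, falling back to procedural placeholders for unmatched roles.
# The naive substring heuristic is an assumption for illustration.
ROLES = ["player", "enemy", "projectile", "collectible", "ground"]

def bind_assets(uploaded_names):
    """Very naive role matching: the filename contains the role name."""
    bindings = {}
    for role in ROLES:
        match = next((n for n in uploaded_names if role in n.lower()), None)
        # Fallback: use a built-in placeholder so the prototype still runs.
        bindings[role] = match or f"placeholder:{role}"
    return bindings
```

The key property is that `bind_assets` always returns a complete role map, so the exporters never see a missing asset.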

A recent issue I fixed was that uploaded assets existed in storage but were not consistently used inside generated engine outputs. The fix was to make the asset binding map part of the spec/export path instead of treating upload handling as a separate side feature.

Backend stack

The backend is FastAPI with async SQLAlchemy and PostgreSQL. Redis is used for runtime support. The generation system is split into routers/services/exporters so that billing, auth, assets, projects, generation, and legal pages are separate modules.

Current backend responsibilities:

  • authentication
  • email verification
  • project storage
  • asset uploads
  • generator orchestration
  • usage/credit tracking
  • billing integration
  • Paddle webhook handling
  • export packaging
  • health/readiness endpoints
  • telemetry/quality reports

Billing is currently integrated with Paddle. It uses one-time credit packs, not subscriptions. Paddle acts as Merchant of Record. The backend creates Paddle transactions, receives transaction.completed webhooks, verifies signatures, deduplicates events, and grants credits to the user account.
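The verify-then-deduplicate pattern looks roughly like this. Note this sketch uses a generic HMAC-SHA256 signature; Paddle's actual signature header format and signing scheme differ and should be taken from Paddle's own documentation:

```python
import hashlib
import hmac

# Sketch of webhook handling: verify a signature, then deduplicate by
# event id before granting credits. Generic HMAC-SHA256 is used here;
# Paddle's real signature format differs (consult their docs).
_seen_event_ids = set()  # in production this would live in Postgres/Redis

def verify_signature(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def handle_webhook(event_id: str, payload: bytes, signature_hex: str,
                   secret: bytes, grant_credits) -> str:
    if not verify_signature(payload, signature_hex, secret):
        return "rejected"
    if event_id in _seen_event_ids:
        return "duplicate"   # already processed: never double-grant credits
    _seen_event_ids.add(event_id)
    grant_credits()
    return "granted"
```

Deduplicating by event id matters because payment providers routinely redeliver webhooks; without it, one purchase could grant credits twice.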

Frontend stack

The frontend is React + Vite + TypeScript. It includes:

  • generator UI
  • asset upload panel
  • pricing page
  • account page
  • billing success page
  • Paddle checkout page
  • projects page
  • public showcase/play pages
  • legal pages
  • bilingual Turkish/English support

I’m trying to keep the product positioning honest: HTML5 is the fastest playable validation target, while Unity/Godot/Unreal outputs are editable starter projects, not magically finished production games.

Deployment

The app is deployed with Docker Compose on a small Oracle server. The public web container runs Nginx and proxies /api to the FastAPI container. PostgreSQL and Redis are private compose services. Only the web container exposes public ports.

Current services:

  • web: Nginx + built React frontend
  • api: FastAPI backend
  • worker: background generation/export worker
  • postgres
  • redis
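A hypothetical compose layout matching that topology (service build paths, images, and commands are assumptions for illustration, not the actual deployment files):

```yaml
# Hypothetical docker-compose sketch of the described topology:
# only `web` publishes ports; everything else stays on the internal network.
services:
  web:
    build: ./frontend        # Nginx + built React app, proxies /api to api
    ports:
      - "80:80"
  api:
    build: ./backend         # FastAPI, reachable only inside the network
  worker:
    build: ./backend         # background generation/export worker
    command: python -m worker
  postgres:
    image: postgres:16
  redis:
    image: redis:7
```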

Deployment scripts rebuild and redeploy frontend/backend separately or together, run Alembic migrations, restart containers, and perform guard checks.

What works today

  • Prompt to structured YUSPEC spec
  • Playable HTML5 preview
  • JSON and .yuspec export
  • Unity/Godot/Unreal starter project zip export
  • Uploaded asset binding into generated outputs
  • Turkish/English site translation
  • Legal pages: pricing, terms, privacy, refund, EULA
  • Paddle one-time credit pack integration
  • Webhook-based credit granting
  • Usage and credit accounting
  • Public project play/share pages

What still needs work

The biggest remaining challenge is quality consistency. Some generations are surprisingly useful, others still need stronger prompt fidelity, better level design, better art direction, and more engine-specific polish.

The engine exports are intentionally starter projects. They need human work for:

  • final art
  • animations
  • level design
  • balancing
  • platform builds
  • performance optimization
  • QA
  • publishing

I’m also working on improving:

  • better procedural level layouts
  • stronger asset role inference
  • more reliable 3D scene composition
  • better Godot/Unity/Unreal idiomatic output
  • more transparent generation reports
  • stronger automated end-to-end testing

Why I built it this way

I don’t think the near-term useful tool is “AI replaces game developers.” I think the useful tool is “AI makes the first playable design draft cheap enough to throw away.”

A good prototype generator should help answer:

  • Is this mechanic fun?
  • Is the core loop understandable?
  • Is the prompt idea technically representable?
  • What would the project structure look like?
  • Which assets are missing?
  • What should I build manually next?

That is the niche I’m aiming for with YUSPEC AI.

u/Fovane — 9 days ago