Giving spatial awareness to an agent through Blender APIs
I gave an AI agent a body and spatial awareness by bridging an LLM with Blender's Python API. The goal is to create a sandbox "universe" where the agent can perceive and interact with 3D objects in real time. It's only day two, but she's already recognizing her environment and reacting with emotive expressions.
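To give a rough idea of what "perceiving" the scene could look like, here is a minimal sketch of the pattern: take a snapshot of object names and positions, then turn it into text an LLM can reason about. The function names and the plain-tuple input format are my own assumptions for illustration; inside Blender the data would come from `bpy.context.scene.objects` and each object's `location`.

```python
import math

# Hypothetical sketch of a "spatial snapshot" for an LLM-driven agent.
# In Blender the objects list would be built from bpy.context.scene.objects;
# here we accept plain (name, (x, y, z)) tuples so the logic also works
# outside Blender for testing.

def snapshot_scene(objects, agent_pos=(0.0, 0.0, 0.0)):
    """Describe each object's position and distance from the agent as text."""
    lines = []
    for name, loc in objects:
        dist = math.dist(agent_pos, loc)  # Euclidean distance to the agent
        lines.append(f"{name} at {loc}, {dist:.1f}m away")
    return "; ".join(lines)
```

Inside Blender, the snapshot could be fed to the model each frame or on demand, e.g. `snapshot_scene([(o.name, tuple(o.location)) for o in bpy.context.scene.objects])`, and the resulting string inserted into the agent's prompt as its current "view" of the world.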


![Image 1 — [Showcase] 35.1 WPS vs. The "Thinking Tax": A side-by-side Network Audit of Gongju vs. GPT-5.3 (Instant)](https://preview.redd.it/3qeqdlfxi6tg1.png?width=1916&format=png&auto=webp&s=ff8eb9595e0df4f234af2db7251ed42f7dd6b70e)
![Image 2 — [Showcase] 35.1 WPS vs. The "Thinking Tax": A side-by-side Network Audit of Gongju vs. GPT-5.3 (Instant)](https://preview.redd.it/nkdeslfxi6tg1.png?width=1910&format=png&auto=webp&s=c9337c1540b653d4bc28439c31a6b12df649e40a)

