
I've been working on a project that combines generative audio synthesis with space-themed visuals, and I wanted to share the result with this community because the process felt very much at home here.
The audio was built entirely in SuperCollider, with layered drones, low-frequency oscillators, and subtle noise textures designed to evoke the atmosphere of Mars: thin, cold, vast, and slightly hostile. No loops, no samples. Everything is synthesized from scratch.
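For anyone curious what "layered drones + LFOs + noise" means in practice, here's a minimal Python/NumPy sketch of the idea (my actual synthesis lives in SuperCollider, so every name and parameter here is illustrative, not my real code): a few slightly detuned sine partials, each with its own slow amplitude LFO, over a quiet noise bed.

```python
import numpy as np

def drone_layer(freq, duration, sr=44100, lfo_rate=0.1, lfo_depth=0.5):
    """One drone voice: a sine partial with a slow LFO shaping its amplitude."""
    t = np.arange(int(duration * sr)) / sr
    lfo = 1.0 - lfo_depth * (0.5 + 0.5 * np.sin(2 * np.pi * lfo_rate * t))
    return lfo * np.sin(2 * np.pi * freq * t)

def drone(fundamental=55.0, partials=(1.0, 1.5, 2.02, 3.01),
          duration=5.0, sr=44100):
    """Layer slightly detuned partials, each LFO at a different rate,
    plus a subtle noise texture, then normalize to [-1, 1]."""
    rng = np.random.default_rng(0)
    mix = sum(drone_layer(fundamental * ratio, duration, sr,
                          lfo_rate=0.05 * (i + 1))
              for i, ratio in enumerate(partials))
    mix += 0.02 * rng.standard_normal(int(duration * sr))  # noise bed
    return mix / np.max(np.abs(mix))
```

The inharmonic ratios (2.02, 3.01 instead of clean integers) are what give the beating, slightly unstable quality; in SuperCollider you'd express the same idea with detuned `SinOsc` layers modulated by low-rate LFOs.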
The Python side handled the structural composition: sequencing events, automating parameters over time, and shaping the overall arc of the 2-hour session so it never feels static. The goal was for it to breathe and evolve slowly, the way a real environment does.
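The core of the parameter-automation side is simple: define a handful of breakpoints across the session and interpolate between them, so every control value has a slow deterministic arc rather than drifting randomly. A sketch (the `cutoff_env` breakpoints are a made-up example, not values from the actual piece):

```python
import numpy as np

SESSION_SECONDS = 2 * 60 * 60  # the full 2-hour session

def automation(breakpoints, t):
    """Piecewise-linear automation: breakpoints are (time_sec, value) pairs."""
    times, values = zip(*breakpoints)
    return np.interp(t, times, values)

# Hypothetical arc for a filter cutoff: opens over the first 80 minutes,
# then settles back down toward the end of the session.
cutoff_env = [(0, 200.0), (2400, 800.0), (4800, 1200.0), (7200, 400.0)]
t = np.linspace(0, SESSION_SECONDS, 100)
cutoff = automation(cutoff_env, t)  # sampled curve to send to the synth
```

In the real setup these curves would be sampled at whatever rate you send OSC messages to SuperCollider; the point is that long-form evolution comes from sparse hand-placed breakpoints, not per-second randomness.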
The visual concept follows the same logic: a slow journey across a Martian landscape, designed to sit in your peripheral vision while you work. Not distracting — just present.
I built this specifically for Pomodoro study sessions (25 min focus / 5 min break), and that cycle is baked into the structure of the piece itself.
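"Baked in" just means the compositional timeline knows where the focus/break boundaries fall, so the music can shift character at those points. A sketch of the segmentation (function name and return shape are my illustration, not the project's actual API):

```python
def pomodoro_schedule(total_min=120, focus_min=25, break_min=5):
    """Split a session into alternating (start_min, end_min, phase) segments."""
    t, segments = 0, []
    while t < total_min:
        for phase, length in (("focus", focus_min), ("break", break_min)):
            if t >= total_min:
                break
            end = min(t + length, total_min)
            segments.append((t, end, phase))
            t = end
    return segments

# A 2-hour session divides evenly into four full 25/5 cycles.
schedule = pomodoro_schedule()
```

Each segment can then drive the automation curves above, e.g. thinning the texture during breaks so the boundary is audible without being jarring.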
Would love to hear thoughts from people who work with generative systems, especially around long-form synthesis and how you keep 2 hours of audio feeling intentional rather than random.