u/RoundBeach

4K Soundscapes with Vortessa® \ Drift, Feedback Networks (Max MSP + Node.js)
▲ 5 r/MaxMSP

Over the past few days, some users have asked me what I actually mean when I talk about “drift” inside Vortessa.

In many ways, it comes from an obsession I’ve carried for years: feedback, autopoietic networks, systems that are not simply programmed but allowed to evolve on their own. Vortessa was built entirely around this idea. Not a machine that executes events, but an unstable ecosystem that slowly organizes, destabilizes, collapses and regenerates itself over time.

With the right conditions, minimal intervention and very restrained control, the system gradually begins to drift. Internal relationships between the engines continuously reshape themselves, feedback paths accumulate memory, certain frequencies emerge while others dissolve. It’s not a behavior I compose linearly: I simply create the conditions for it to happen.
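Since Vortessa itself is a Max/MSP patch, here is only a toy Node.js sketch of the general principle (every name and constant below is invented, nothing is taken from the actual patch): two feedback-coupled nodes whose coupling weights slowly accumulate a trace of past activity, so the output drifts instead of settling into a fixed loop.

```javascript
// Toy illustration (not Vortessa's DSP): two saturating feedback nodes.
// The coupling weights leak toward recent activity, acting as the slow
// "memory" that makes the system drift rather than repeat.
function makeDriftNetwork() {
  let a = 0.3, b = -0.2;      // node states
  let wab = 0.7, wba = 0.5;   // feedback coupling weights
  return function step() {
    const na = Math.tanh(b * wba + 0.1);   // saturating feedback path
    const nb = Math.tanh(a * wab - 0.05);
    // weights accumulate a trace of co-activity, with a small leak
    wab += 0.001 * (na * nb - 0.01 * wab);
    wba += 0.001 * (nb * na - 0.01 * wba);
    a = na; b = nb;
    return a + b;             // mixed output, bounded by the tanh stages
  };
}

const step = makeDriftNetwork();
const out = [];
for (let i = 0; i < 1000; i++) out.push(step());
```

The tanh stages keep the signal bounded while the weight updates guarantee that the internal relationships never freeze, which is the "minimal intervention" idea in miniature.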

What I love in this video is how clearly you can perceive that continuous transformation. The soundscape could unfold for hours without falling into the kind of rigid or overly cerebral looping structure that often reveals the mechanics behind generative systems. The structures keep mutating, but in an almost organic way, like a living environment slowly reconfiguring itself over time.

A lot of people have been writing to me saying they leave it running for long periods and somehow experience a sort of perceptual or even mental benefit from it. And honestly, that’s probably what interests me the most: creating complex systems that are ultimately perceived first as timbre, presence and space.

Behind this timbral world, though, there were also months of benchmarking and extreme listening sessions. To develop these behaviors I spent an enormous amount of time exposed to highly stressful frequencies, high-energy feedback and systems left collapsing into themselves for hours, trying to understand how far I could push the sound without losing depth and listenability. A large part of the work was precisely this: finding a balance between real instability and a timbral language capable of breathing and evolving over long durations without turning into pure sonic aggression.

u/RoundBeach — 2 days ago
▲ 4 r/MaxMSP

Some ENDOGEN Timbral Explorations \ Electroacoustic Lowercase Synthesis Environment (4K Video)

ENDOGEN is a Max/MSP + SuperCollider environment for lowercase synthesis, microsound and deep drone work: very low levels by default, subtle dynamics, long fades, companding, organic modulation and a listening-first approach.

It is not a preset machine or an instant “wow” instrument. It is closer to a small electroacoustic ecosystem, where micro-events, fragile resonances, noise, feedback, corpus-based sampling, wavetable layers, resonant objects and slow modulation interact over time.

The idea is to work with sound at the threshold of perception: tiny details, unstable materials, physical behaviours, contact-mic-like resonances, tape/mechanical traces, near-silence and long-form evolution.

ENDOGEN includes synthesis modules, advanced sampling, corpus exploration, Reservoir sampler, nsight sequencer, Phase Garden spatial engine, live recording, MIDI performance and a multi-channel LFO modulation system.

Designed for lowercase, microsound, electroacoustic textures, drones, fragile soundscapes and slow listening.

No presets. No instant results. Just a system to inhabit slowly.

CDM's Article: https://cdm.link/endogen-lowercase-synthesis/

Website: https://www.peamarte.it/endogen/main.html

YouTube playlist:
https://youtube.com/playlist?list=PLLITukQh1_l5mNWO8_qe1XA6cPveYTWtT&si=Fwply0pt2U79_oi1

u/RoundBeach — 3 days ago
▲ 50 r/musiconcrete+1 crossposts

ASSEMBLY~7 \ A polymetric algorithmic drum synthesizer.

ASSEMBLY-7 is not a conventional drum machine or just another sequencer for Max/MSP.
It is an autonomous probabilistic machine built on a dual DSP architecture: Max/MSP handles sequencing and logic, while SuperCollider generates synthesis and sound processes via OSC.

The system includes 6 engines based on algorithmic synthesis and one dedicated advanced sampling engine, all operating inside an unstable polymetric environment with continuous drift and no true global reset. Each line runs with independent BPMs, lengths and phases: patterns slowly collapse, realign and deform over time, generating strange grooves, primitive rhythmic structures, but also textures, dense sonic masses and real percussive soundscapes.
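As a rough illustration of the polymetric idea (assumed structure, not ASSEMBLY-7's actual code), here is a Node.js sketch of three lines with independent pattern lengths and step periods. Because the cycle lengths share no short common multiple, the combined pattern takes a long time to repeat exactly.

```javascript
// Three hypothetical rhythm lines, each with its own pattern length and
// its own step period (in ticks). None of these values come from
// ASSEMBLY-7; they just demonstrate the phasing behavior.
const lines = [
  { pattern: [1, 0, 0, 1, 0], period: 3 },        // cycle: 5 * 3 = 15 ticks
  { pattern: [1, 0, 1, 0, 0, 0, 1], period: 4 },  // cycle: 7 * 4 = 28 ticks
  { pattern: [1, 1, 0], period: 5 },              // cycle: 3 * 5 = 15 ticks
];

// Which lines fire on a given global tick.
function triggersAt(tick) {
  return lines.map(l => {
    const step = Math.floor(tick / l.period) % l.pattern.length;
    return tick % l.period === 0 ? l.pattern[step] : 0;
  });
}

// The combined state repeats only after lcm(15, 28, 15) = 420 ticks,
// so short listening windows never hear an exact loop.
```

With non-trivial pattern lengths the common period grows quickly, which is why these structures read as "collapsing and realigning" rather than looping.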

With Tamburi Web you can load huge folders of WAV or AIFF files and the system will randomly distribute them across the available slots, transforming any sound archive into unstable rhythmic material. Field recordings, noise, concrete fragments, voices, metal: anything can become part of the machine’s rhythmic geometry.
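The slot-distribution idea can be sketched like this (a hypothetical Node.js version; `distribute` and its round-robin dealing are my assumptions, not Tamburi Web's code):

```javascript
// Shuffle a list of file paths (Fisher-Yates) and deal them round-robin
// across a fixed number of sample slots.
function distribute(files, slotCount) {
  const shuffled = files.slice();
  for (let i = shuffled.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [shuffled[i], shuffled[j]] = [shuffled[j], shuffled[i]];
  }
  const slots = Array.from({ length: slotCount }, () => []);
  shuffled.forEach((f, i) => slots[i % slotCount].push(f));
  return slots;
}

const slots = distribute(["a.wav", "b.wav", "c.aif", "d.wav", "e.wav"], 2);
```

Every file lands in exactly one slot, but which slot is unpredictable, which is what turns a static archive into "unstable rhythmic material".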

ASSEMBLY-7 can also record 10 stems simultaneously in a single pass: stereo master, 7 synth tracks and 2 separate sampler tracks, ready to be processed inside any DAW.

It is not a Max for Live device but more like a small autonomous generative DAW focused on rhythmic drift, stratification and continuous mutation.

If you are into unconventional polymeters, modular-style sequencing like Teletype/O_C, grooves emerging from controlled chaos and rhythmic systems that continuously evolve over time, take a look at ASSEMBLY-7.

This is only a brief description. To better understand how it works, explore all the features, watch the updated YouTube playlist, and read the user reviews on Gumroad; to see if it fits your practice, visit the website.

Discovery: https://www.peamarte.it/assembly_7/assembly_7.html

u/RoundBeach — 6 days ago
▲ 20 r/MaxMSP

Recording and Reamping a Feedback Soundscape from Vortessa 3.0 into Ableton and Two Modular Systems.
Download and discovery (URL in the comments)

u/RoundBeach — 6 days ago
▲ 28 r/MaxMSP

A collection of experimental instruments for sound design, field recording transformation, lowercase synthesis, feedback systems and polymetric rhythm generation. Designed for exploratory composition, emergent structures, textural manipulation and non-linear audio processes across studio, performance and experimental research contexts.

emilianopennisi.net

#sounddesign, #experimentalmusic, #fieldrecording, #lowercase, #noise, #electroacoustic, #musiqueconcrete, #feedbacksystems, #generativemusic, #maxmsp, #supercollider, #audiosynthesis, #soundart, #modularsynth, #polymeter, #glitchmusic, #ambientmusic, #computermusic, #algorithmiccomposition, #experimentaltools

u/RoundBeach — 8 days ago
▲ 21 r/musiconcrete+1 crossposts

I’m putting these three systems together in the same video because that’s exactly where something interesting happens: they’re not three separate tools, but three different ways of letting sound emerge.

Orbit is the only one living directly inside Ableton, as a Max for Live device. It’s compact but very dense, a feedback system built in gen~ where internal relationships constantly shift, reorganize, collapse and regenerate. It’s perfect when you want to stay inside a timeline but keep that unstable, evolving behavior.

The other two live outside, and that matters. Vortessa and Interfera run as standalone Max patches, so to bring them into Ableton I route them through BlackHole. But that’s not required: all three systems have their own internal recording engines, so they can be used completely standalone, without a DAW.

Interfera is the most open toward the world. At any moment you can geolocate field recordings by description, cities, environments, keywords like ocean, airport, wind. Or use its seeder connected to freesound.org to directly access the database and process sounds instantly inside the patch. There’s no separation between searching and composing, it’s a continuous flow. Inside Interfera there’s also a very powerful sampler based on Mutable Instruments Rings (Volker Böhm implementation), which allows you to turn any fragment into resonant textures and complex soundscapes.
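For the freesound.org side, Freesound does expose a public REST API (the apiv2 text search). How Interfera's seeder actually talks to it is not shown here, but a minimal query builder might look like this, with `YOUR_API_KEY` as a placeholder:

```javascript
// Build a Freesound apiv2 text-search URL from a list of keywords.
// The `fields` selection and helper name are illustrative choices,
// not anything taken from Interfera.
function freesoundSearchUrl(keywords, apiKey) {
  const params = new URLSearchParams({
    query: keywords.join(" "),
    fields: "id,name,previews,duration",
    token: apiKey,
  });
  return `https://freesound.org/apiv2/search/text/?${params}`;
}

const url = freesoundSearchUrl(["ocean", "wind"], "YOUR_API_KEY");
```

Fetching the resulting JSON and streaming a preview into the patch is where the "no separation between searching and composing" feeling comes from.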

Across the systems, sound can also be organized and explored through audio classification using FluCoMa and DataKnot, introducing a layer of machine learning that lets you navigate, cluster, and reshape material based on its intrinsic features rather than fixed categories. This adds another dimension where composition becomes interaction with a learned sonic space.
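Feature-based clustering of this kind can be sketched with a minimal k-means in Node.js (the FluCoMa/DataKnot pipeline itself is not reproduced here; the descriptor values below are invented, e.g. [spectral centroid in kHz, spectral flatness]):

```javascript
// Minimal k-means over descriptor vectors: assign each point to the
// nearest centroid, then move each centroid to the mean of its points.
function kmeans(points, k, iters = 20) {
  let centroids = points.slice(0, k).map(p => p.slice());
  let labels = new Array(points.length).fill(0);
  for (let it = 0; it < iters; it++) {
    labels = points.map(p => {
      let best = 0, bestD = Infinity;
      centroids.forEach((c, i) => {
        const d = c.reduce((s, v, j) => s + (v - p[j]) ** 2, 0);
        if (d < bestD) { bestD = d; best = i; }
      });
      return best;
    });
    centroids = centroids.map((c, i) => {
      const mine = points.filter((_, n) => labels[n] === i);
      if (!mine.length) return c;                     // keep empty clusters put
      return c.map((_, j) => mine.reduce((s, p) => s + p[j], 0) / mine.length);
    });
  }
  return labels;
}

// Two obvious groups: dark/tonal vs bright/noisy material (made-up values).
const feats = [[0.3, 0.1], [0.4, 0.15], [5.2, 0.8], [5.0, 0.9]];
const labels = kmeans(feats, 2);
```

Navigating material by cluster membership instead of filenames is the "learned sonic space" idea in its simplest form.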

Vortessa is a much larger and deeper environment, and it’s fully multichannel. It’s built around feedback networks, non-linear interactions and layering. You’re not writing sound, you’re setting a system in motion and observing how it evolves. On the recording side it can capture up to 40 separate mono channels, or render directly to stereo. This means you can preserve every internal component for later reconstruction, or just print the final result.

When used together, something very specific happens: Orbit keeps a foot inside Ableton, while Vortessa and Interfera inject continuously evolving material. Field recordings transforming in real time, feedback generating structures, systems reacting to each other. It stops being a linear chain and becomes an ecosystem.

If this approach resonates with you, you can find these tools and others from the series on my Gumroad (link in the comments), designed for cinematic sound design and contemporary musique concrète.

emilianopennisi.net

#sounddesign, #musiqueconcrete, #experimentalmusic, #generativesystems, #feedbacksystems, #maxmsp, #maxforlive, #abletonlive, #fieldrecording, #freesound, #granularsynthesis, #nonlinear, #soundart, #electronicmusic, #avantgarde, #soundscape, #modularmindset, #genpatch, #audioprocessing, #cinematicsound, #noiseart, #drone, #algorithmiccomposition, #digitalaudio

u/RoundBeach — 12 days ago
▲ 16 r/MaxMSP

Hi everyone. After working almost non-stop on Vortessa, today the latest and most important update is out: version 3.0. Basically, I found myself at a point where the patch had more than 40 complex sources, and this unexpectedly led me to rethink how to use those same sources.

To explain it a bit better: each source has its own envelopes, but I started thinking about how powerful it would be to route the sources through an ITB-modeled low pass gate (LPG). In the studio I’ve been experimenting for years with different kinds of LPG modules, each with its own peculiarities, from the Natural Gate by Rabid Elephant to the QMMG by Make Noise. I observed that the more complex the sources routed into these vactrol-based modules, the more organic the resulting sound becomes, and that’s exactly what happened here.
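For readers curious about the technique, a common way to model a vactrol LPG digitally (an assumption about the general approach, not the actual gen~ code in Vortessa, and all time constants below are invented) is a slewed control level with fast rise and slow fall that drives both the gain and the cutoff of a one-pole lowpass, so louder hits also open the filter:

```javascript
// Vactrol-style LPG sketch: one control signal shapes amplitude AND
// brightness, which is where the "organic" decay character comes from.
function makeLPG(sampleRate = 48000) {
  const rise = 1 - Math.exp(-1 / (0.002 * sampleRate)); // ~2 ms attack
  const fall = 1 - Math.exp(-1 / (0.200 * sampleRate)); // ~200 ms release
  let vactrol = 0;  // slewed control level, 0..1
  let lp = 0;       // one-pole lowpass state
  return function tick(input, gate) {
    vactrol += (gate - vactrol) * (gate > vactrol ? rise : fall);
    const cutoff = 20 + vactrol * 8000;                       // Hz, opens with level
    const coeff = 1 - Math.exp((-2 * Math.PI * cutoff) / sampleRate);
    lp += (input - lp) * coeff;
    return lp * vactrol;  // filter and gain share the same control, like a vactrol
  };
}

const lpg = makeLPG(48000);
```

Feeding this kind of gate with complex, spectrally rich sources is exactly where the coupled gain/cutoff behavior becomes audible: the tail darkens as it dies away.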

I ran hundreds of benchmarks in gen~ to find the right compromise. I listened to the hardware frequency response and compared it with the modeled version, always and only using the complex sources living inside the patch itself: chaotic attractors and dozens and dozens of feedback chains. This new major update came out of that work.

So the sources are routed through a patchbay that splits them (42 sources, effectively 42 complex oscillators) into 10 stereo groups, each feeding a pair of LPGs. Add probabilistic sequencers and the possibility of triggering the sources from gigantic corpora of sounds based on FluCoMa and DataKnot descriptors, and this is what came out. I’m satisfied with how it sounds now, and I can also say that the project is finished.

I also added a Retrospective Buffer:
A stereo circular buffer running continuously across the entire Vortessa ecosystem. Instead of recording fixed loops, it constantly accumulates and overwrites material in layers, creating a shifting memory of the system itself. The blend control crossfades between old and new material, allowing textures to gradually emerge, dissolve and stratify over time. More a living temporal memory than a conventional looper.
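The layering behavior described above can be sketched as a circular buffer that blends rather than overwrites (a mono Node.js sketch of assumed behavior, not Vortessa's implementation):

```javascript
// Circular buffer where each pass crossfades incoming audio with what
// is already stored, so old material fades through later layers instead
// of being cut off.
function makeRetroBuffer(length, blend) {
  const buf = new Float32Array(length);
  let pos = 0;
  return function tick(input) {
    // blend = 0 keeps the old layer, blend = 1 fully replaces it
    buf[pos] = buf[pos] * (1 - blend) + input * blend;
    const out = buf[pos];
    pos = (pos + 1) % length;
    return out;
  };
}
```

With a mid blend setting, each revolution of the buffer pushes older material further into the background, which is the "shifting memory" effect in its simplest form.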

Also, the circular buffer system remains fully integrated inside the architecture.

What's new in version 3
https://www.peamarte.it/luci.../vortessa/strike_landing.html

Explore the system here: https://www.peamarte.it/lucien_dargue_series/vortessa/vortessa_landing.html

u/RoundBeach — 17 days ago

A curated collection of tools for feedback, field recordings, microsound and polymetric synthesis

A collection of experimental instruments for sound design, field recording transformation, lowercase synthesis, feedback systems and polymetric rhythm generation.

This collection is not just a set of instruments, but a compositional ecosystem where sound is discovered rather than designed. Each tool operates as a system: interacting processes, unstable behaviors and emergent structures replace linear control and predictable outcomes.

From microscopic textures to dense sonic matter, the suite works across multiple scales. It allows you to extract and transform field recordings into evolving materials, generate polymetric rhythmic structures that drift over time, and build feedback networks that continuously reshape themselves.

Includes:

Endogen ~ a synthesis and advanced sampling environment for lowercase, microsound and drone

Interfera ~ a geosonic field-recording engine based on coordinates, archives and contextual listening

Assembly-7 ~ a polymetric algorithmic drum synthesizer built around drift, instability and raw synthetic percussion

Orbit ~ an esoteric feedback instrument for Max for Live, focused on self-organization, bifurcation and unstable sonic evolution

Vortessa® ~ a contemporary noise machine built on mathematics

Designed for experimental composition, electroacoustic music, film, installation, live performance and advanced sound design.

emilianopennisi.net

u/RoundBeach — 19 days ago