r/MaxMSP

How to make custom DSP guitar pedals with Max/MSP Gen~ and an Electrosmith Daisy Seed

Hey there everyone,

I just wanted to share a video breaking down how to flash and map custom Max/MSP Gen~ patches to an Electrosmith Daisy Seed so you can make custom Max-driven hardware. For this project I've been using my open-source Black Box pedal platform, originally derived from the Fun Box project.

Both the JSON and Max Msp files I mention in the video can be found on my GitHub repository: https://github.com/ZackBerw/CausticFX

If anyone has any questions or concerns please feel free to let me know.

Thanks!

u/6Guitarmetal6 — 4 days ago
▲ 50 r/MaxMSP+1 crossposts

ASSEMBLY-7: a polymetric algorithmic drum synthesizer.

ASSEMBLY-7 is not a conventional drum machine or just another sequencer for Max/MSP.
It is an autonomous probabilistic machine built on a dual DSP architecture: Max/MSP handles sequencing and logic, while SuperCollider generates synthesis and sound processes via OSC.

The system includes 6 engines based on algorithmic synthesis and one dedicated advanced sampling engine, all operating inside an unstable polymetric environment with continuous drift and no true global reset. Each line runs with independent BPMs, lengths and phases: patterns slowly collapse, realign and deform over time, generating strange grooves, primitive rhythmic structures, but also textures, dense sonic masses and real percussive soundscapes.
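The independent-BPM idea described above (not ASSEMBLY-7's actual code, just an illustration of the principle) boils down to each line tracking its own beat clock, with no shared transport:

```python
def step_at(t_sec, bpm, pattern_len):
    """Which step a line is on at time t, given its own tempo and length."""
    beats = t_sec * bpm / 60.0
    return int(beats) % pattern_len

# Three lines with independent BPMs and lengths: with no shared clock or
# global reset, they only realign at a very long common period, so the
# composite pattern keeps drifting and deforming.
lines = [(120, 16), (121.5, 13), (93, 7)]
for t in (0, 10, 60):
    print(t, [step_at(t, bpm, n) for bpm, n in lines])
```

Because 16, 13 and 7 share no common factor and the tempos differ, the three step positions only coincide again far in the future, which is exactly what produces the "collapse and realign" behavior.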

With Tamburi Web you can load huge folders of WAV or AIFF files and the system will randomly distribute them across the available slots, transforming any sound archive into unstable rhythmic material. Field recordings, noise, concrete fragments, voices, metal: anything can become part of the machine’s rhythmic geometry.

ASSEMBLY-7 can also record 10 stems simultaneously in a single pass: stereo master, 7 synth tracks and 2 separate sampler tracks, ready to be processed inside any DAW.

It is not a Max for Live device but more like a small autonomous generative DAW focused on rhythmic drift, stratification and continuous mutation.

If you are into unconventional polymeters, modular-style sequencing like Teletype/O_C, grooves emerging from controlled chaos and rhythmic systems that continuously evolve over time, take a look at ASSEMBLY-7.

This is only a brief description. To better understand how it works and see if it fits your practice, visit the website, explore all the features, watch the updated YouTube playlist, and read the user reviews on Gumroad.

Discovery: https://www.peamarte.it/assembly_7/assembly_7.html

u/RoundBeach — 6 days ago
▲ 1 r/MaxMSP

Migrating Max 6.1 patches to the latest version.

Hi all,
For theatre work I use a system where a Max patch (made with Max 6.1 and running on 6.1 on an old Mac) sends audio to Live 9 via ReWire.
I will soon acquire a new mac and I'm a bit scared of the whole migration.
I know I won't be able to run 6.1 on the new Silicon machine so I wanted to ask the following:

Should I upgrade to the latest Max version?

If yes, is there anything I should be aware of when opening my old patch in the new version? The patch is quite simple, as it just deals with cueing audio samples and automating their volume. It also controls lights through an Enttec USB DMX box, but that's a topic for another thread :-)
Also, the audio produced by Max is currently passed to Live (Live will need to be updated as well) via ReWire, and I'm pretty sure ReWire is dead and buried now. Is there any native way to pass audio from Max to Live? The only posts dealing with this are more than 3 years old, and I wonder if something has changed in the meantime.

Thank you

u/Scott_Korman — 23 hours ago
▲ 7 r/MaxMSP

Ghost clips in Ableton: somebody make this!! (Alias clips)

A while ago I started developing ghost/alias clips in Ableton (and was planning a larger idea of ghost sections, where you could make editable groups of clips and duplicate them across a song). I got a degree in computer science so this ain't my first rodeo coding something, and after understanding Max for Live I am 100% positive that a ghost-clips plugin could be created (and 90% positive about ghost sections). Someone just needs to develop it!!!!

I don't have the time to work on it right now, so I hope someone else can make this. Personally I'd pay $100 for it. It's an incredible feature that many DAWs have, and I wish Ableton did!!

If you want a head start working on this please lmk and I'll send you all the plans I had laid out for it! I figured out a pretty good way to architect the whole thing and have the software stack laid out clean enough.

Best of luck to anyone who wants to undertake this project!! 😄

u/StaffPotential4408 — 1 day ago
▲ 5 r/MaxMSP+1 crossposts

Would it be possible to create a Max4Live device that loads in a random preset from a specific folder?

I have a template that I use for my Push. It has tracks set up for Bass/Chords/Drums/Lead and on each of those tracks I have a default preset. Mainly the preset is there to make it easy to hotswap with other presets that are similar. It would be really cool if I could open Ableton, and each track has a randomized preset in it. Similar to how the Move works.

Any of you M4L magicians know if this is possible? I tried to create it myself with AI, but that shit kinda sucks and I hit a wall.

u/MovinToChicago — 3 days ago
▲ 136 r/MaxMSP+5 crossposts

Audio-reactive geometry TouchDesigner + AE patch I made some time ago. Hope you guys enjoy it!

If you're curious about my experiments, you can watch more [and even access its project files] through my YouTube, Instagram, or Tools Store.

u/TasTepeler — 10 days ago
▲ 1 r/MaxMSP

Tutorials for Syncing 3D Characters to Dance with BPM?

Hello,

I'm wondering what's the most efficient way to sync a dancing 3D character to the BPM of an Ableton Live project? Are there any tutorials outlining this process?

Thanks in advance!

u/SpeezioFunk — 2 days ago
▲ 1 r/MaxMSP

sequencer question

hi there,

so i'm trying to find the cleanest solution to this issue:

Let's say I have a steady pulse of a short sine-wave sound going at 120 BPM. Every now and then, randomly, I want the sine wave to be twice as long, meaning it takes up 2 pulses in the sequence, without it being retriggered while it's playing.

So far I've used a gate to accomplish this: every time the long impulse appears, the gate closes, so another trigger won't pass through while it's playing.

I feel like there has to be a better option than this. It gets especially complicated if I have e.g. 3 different sustain values for the sine wave, with the first taking up 1 pulse, the second 2 and the third 3.
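One common alternative to opening and closing a gate is a countdown: each incoming pulse either starts a note of some length, or is swallowed because a longer note is still counting down. In Max this might be built from a counter and a comparison, or a single history in gen~; here is just the logic, sketched in Python (durations and randomness are made up for illustration):

```python
import random

def run_sequence(n_pulses, durations=(1, 2, 3)):
    """For each pulse: if a note is still sounding, swallow the trigger;
    otherwise start a new note with a random duration (in pulses).
    Returns the duration at trigger points and None at swallowed pulses."""
    remaining = 0   # pulses left in the currently sounding note
    out = []
    for _ in range(n_pulses):
        if remaining > 0:
            remaining -= 1
            out.append(None)      # blocked: note still playing
        else:
            dur = random.choice(durations)
            remaining = dur - 1
            out.append(dur)       # trigger a note `dur` pulses long
    return out
```

The same counter generalizes to any number of sustain values: the note length itself sets how many subsequent triggers get ignored, so nothing ever needs to be explicitly un-gated.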

u/Ill_Significance6157 — 3 days ago
▲ 1 r/MaxMSP

MacBook M5: Pro or Air?

Hey y’all, I’m currently running a data-to-visual sequence in Max/Jitter and my M2 MacBook Air is starting to struggle with it. I’m looking into upgrading, but it’s hard to justify the price jump to a MacBook Pro on a student budget.

Is anyone here running Max/Jitter on an M5 MacBook Air? I’d really appreciate hearing about your experience, especially performance with real-time visuals/rendering. Thanks in advance!

u/Ok_Quote_3337 — 4 days ago
▲ 9 r/MaxMSP

A long time ago I tried Max/MSP and fell in love with it, but it was out of my price range at the time. Now that there is a subscription model I'm very interested in it. I tried to like Pure Data and just was disappointed in it.

But now I am confused about Max/MSP and RNBO.

I have a very specific use case. I would like to be able to export code for VST/AU plugins, as well as C++ code I could adapt for embedded platforms (Teensy, Raspberry Pi, ESP32).

I have around thirty years experience in C++, so I could adapt generic C++ code to various embedded platforms.

An absolute requirement is an HRTF function, for which there are several Max/MSP extensions.

Does RNBO have an HRTF or support Max/MSP extensions? Can I create patches in Max/MSP and easily move them to RNBO, or should I concentrate on just using RNBO?

For my idea I need multiple delays whose lengths can be modulated, all-pass filters, and control over phase offsets for multiple LFOs.
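None of that is exotic DSP, and it all exports cleanly from gen~/RNBO. A rough Python sketch of two of the building blocks, a phase-offset LFO and an LFO-modulated delay line (all parameter values are arbitrary; the all-pass is similar plumbing with a feedback/feedforward pair):

```python
import math

def lfo(t, freq, phase=0.0):
    """Unipolar sine LFO in [0, 1]; `phase` is an offset in cycles (0..1)."""
    return 0.5 + 0.5 * math.sin(2 * math.pi * (freq * t + phase))

def modulated_delay(x, sr=48000, base=0.020, depth=0.005, rate=0.3):
    """Delay line whose length is modulated by an LFO, reading at a
    fractional position with linear interpolation (chorus-style)."""
    buf = [0.0] * sr                 # one second of circular delay memory
    y = []
    for n, s in enumerate(x):
        d = (base + depth * lfo(n / sr, rate)) * sr   # delay in samples
        i, frac = int(d), d - int(d)
        a = buf[(n - i) % sr]
        b = buf[(n - i - 1) % sr]
        y.append(a + frac * (b - a))  # interpolated tap
        buf[n % sr] = s
    return y

# Three LFOs spread a third of a cycle apart, e.g. to decorrelate voices:
offsets = [lfo(0.0, 1.0, p) for p in (0.0, 1/3, 2/3)]
```

The fractional (interpolated) read position is the part that matters for modulation: without it, a moving delay length produces zipper noise instead of a smooth pitch shift.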

Any help would be appreciated.

u/MentionPleasant2635 — 8 days ago
▲ 20 r/MaxMSP

Recording and Reamping Feedback Soundscape from Vortessa 3.0 to Ableton and 2 Modular.
Download and Discovery (url in the comment)

u/RoundBeach — 6 days ago
▲ 28 r/MaxMSP

A collection of experimental instruments for sound design, field recording transformation, lowercase synthesis, feedback systems and polymetric rhythm generation. Designed for exploratory composition, emergent structures, textural manipulation and non-linear audio processes across studio, performance and experimental research contexts.

emilianopennisi.net

#sounddesign, #experimentalmusic, #fieldrecording, #lowercase, #noise, #electroacoustic, #musiqueconcrete, #feedbacksystems, #generativemusic, #maxmsp, #supercollider, #audiosynthesis, #soundart, #modularsynth, #polymeter, #glitchmusic, #ambientmusic, #computermusic, #algorithmiccomposition, #experimentaltools

u/RoundBeach — 7 days ago
▲ 2 r/MaxMSP

Hey guys

It would be cool if there were a way to press a key and have it drop a timestamp marker while you're recording. That marker would sit in the Arrangement view at the exact time code, so you can get back to that audio chunk once you've finished recording.

Right now the options are to stop recording or to add markers manually afterwards, neither of which really works.

Any ideas?

u/AquiverSOUND — 9 days ago
▲ 1 r/MaxMSP

Hello. I want to get started with Jitter in Max for Live, so I followed a tutorial on a simple audio-reactive patch using .jxs shaders on a PNG. As you can see from my very organized patch, I'm only using 3 shaders.

I have a humble laptop with an AMD Radeon 780M, and as soon as I try to read the image (4000x4000 px) my GPU use rockets to >90% and Live becomes unresponsive. My CPU doesn't max out, so I think it's only using the GPU? I know I have a crappy card, but I didn't think it was thiiis bad. I also know my image is quite big, but is that all?
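For scale, a back-of-envelope on what a 4000x4000 texture costs (assuming RGBA, and noting that each shader pass typically needs its own working texture of the same size, so three passes can multiply the footprint):

```python
def texture_mib(w, h, channels=4, bytes_per_channel=1):
    """Approximate GPU memory for one texture, in MiB."""
    return w * h * channels * bytes_per_channel / 2**20

print(texture_mib(4000, 4000))        # 8-bit RGBA: ~61 MiB
print(texture_mib(4000, 4000, 4, 4))  # float32 RGBA: ~244 MiB
print(texture_mib(1024, 1024))        # downscaled 8-bit RGBA: 4 MiB
```

On an integrated GPU like the 780M that memory comes out of shared system RAM, so downscaling the source image before it enters the shader chain is usually the first thing to try.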

Do you guys have any tips on improving this situation on my laptop, or am I cooked? I figure if I can't load this picture onto jitter I should forget about making more advanced stuff lol :( Let me know!

u/RedDevil3500 — 8 days ago
▲ 5 r/MaxMSP+1 crossposts

Nice to meet you. I'm a student studying performance production. I recently started a convergence art project: a performance combining music, dance, and interactive technology using Max. The plan is a piece where a pre-composed score is altered and the video changes according to the dancer's gestures. I thought about using MediaPipe with a webcam, but webcams can be unstable in real-time performance, so I'm looking at Kinect sensors instead. We're students without money, and we've each only worked in our own fields; convergence art is completely new to us!!!!! 🥹

So, a question about Kinect sensors: the Azure Kinect is really expensive, but there seem to be cheaper options, and a lot of different products come up when I search for "Kinect sensor". Can you tell me which ones you'd recommend? Are any of them cost-effective? I would also appreciate knowing which products to avoid.

u/According_Bluejay655 — 6 days ago
▲ 21 r/MaxMSP+1 crossposts

I’m putting these three systems together in the same video because that’s exactly where something interesting happens: they’re not three separate tools, but three different ways of letting sound emerge.

Orbit is the only one living directly inside Ableton, as a Max for Live device. It’s compact but very dense, a feedback system built in gen~ where internal relationships constantly shift, reorganize, collapse and regenerate. It’s perfect when you want to stay inside a timeline but keep that unstable, evolving behavior.

The other two live outside, and that matters. Vortessa and Interfera run as standalone Max patches, so to bring them into Ableton I route them through Blackhole. But that’s not required, all three systems have their own internal recording engines, so they can be used completely standalone without a DAW.

Interfera is the most open toward the world. At any moment you can geolocate field recordings by description, cities, environments, keywords like ocean, airport, wind. Or use its seeder connected to freesound.org to directly access the database and process sounds instantly inside the patch. There’s no separation between searching and composing, it’s a continuous flow. Inside Interfera there’s also a very powerful sampler based on Mutable Instruments Rings (Volker Böhm implementation), which allows you to turn any fragment into resonant textures and complex soundscapes.

Across the systems, sound can also be organized and explored through audio classification using FluCoMa and DataKnot, introducing a layer of machine learning that lets you navigate, cluster, and reshape material based on its intrinsic features rather than fixed categories. This adds another dimension where composition becomes interaction with a learned sonic space.

Vortessa is a much larger and deeper environment, and it’s fully multichannel. It’s built around feedback networks, non-linear interactions and layering. You’re not writing sound, you’re setting a system in motion and observing how it evolves. On the recording side it can capture up to 40 separate mono channels, or render directly to stereo. This means you can preserve every internal component for later reconstruction, or just print the final result.

When used together, something very specific happens: Orbit keeps a foot inside Ableton, while Vortessa and Interfera inject continuously evolving material. Field recordings transforming in real time, feedback generating structures, systems reacting to each other. It stops being a linear chain and becomes an ecosystem.

If this approach resonates with you, you can find these tools and others from the series on my Gumroad, designed for cinematic sound design and contemporary musique concrète.


emilianopennisi.net

#sounddesign, #musiqueconcrete, #experimentalmusic, #generativesystems, #feedbacksystems, #maxmsp, #maxforlive, #abletonlive, #fieldrecording, #freesound, #granularsynthesis, #nonlinear, #soundart, #electronicmusic, #avantgarde, #soundscape, #modularmindset, #genpatch, #audioprocessing, #cinematicsound, #noiseart, #drone, #algorithmiccomposition, #digitalaudio

u/RoundBeach — 11 days ago
▲ 15 r/MaxMSP+1 crossposts

Hey there everyone,

I just wanted to share a little jam featuring a Euclidean Tremolo pedal I built on an Electrosmith Daisy Seed with Max/MSP Gen~.

This tremolo pedal allows for rhythmic chopping based on the mathematics of Euclidean sequencing. The "Generating Sound and Organizing Time" book was a huge source of guidance and inspiration when it came to programming the effect.
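For anyone curious, the mathematics in question distributes k hits as evenly as possible across n steps. One compact formulation (not necessarily the pedal's actual gen~ code) is:

```python
def euclid(hits, steps, rotate=0):
    """Euclidean rhythm: spread `hits` onsets as evenly as possible
    across `steps` slots (Bresenham-style formulation)."""
    pat = [int((i * hits) % steps < hits) for i in range(steps)]
    return pat[rotate:] + pat[:rotate]

print(euclid(3, 8))  # [1, 0, 0, 1, 0, 0, 1, 0] -- the tresillo
```

A tremolo then just reads the pattern at the clock rate and uses each 1/0 to open or close the amplitude gate.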

So for this video I enabled random mode on the pedal and then generated rhythms on the fly with a synth loop running on the OP-1. 

Euclid is fully MIDI compatible, so you're able to keep things tightly in sync with MIDI clock as opposed to feeling it out with the Rate knob. In this particular instance I was clocking it with the Dirtywave M8 so I could set the pedal to 70 BPM.

If anyone has any questions, concerns or ideas please feel free to let me know.

Thanks!

u/6Guitarmetal6 — 11 days ago