That’s amazing and it’s going to change my life! It’ll be used in many future concerts and allow many extended possibilities. Thank you!
I will write a short orchestration with Atom and make a video about it when it’s available. When can I get my hands on it!?!?!
What should I go and read to learn about what I can script in this app?
I hope Atom performs up to your expectations. People using it "for real", in a live setting, is a milestone that I know has already been crossed, but I can't help but feel a little unnerved whenever I witness it – it could be someone's paid gig that I have a responsibility to not ruin!
I hope to release the new beta in the next day or two, then the final version to the App Store before or during next week. If you'd like to join the beta, DM me your TestFlight email.
Here is the documentation: http://github.com/victorporof/atom
Thanks! This is (the back end of) the codebase that I use to generate MIDI orchestration with code:
https://github.com/OscarSouth/theHarmonicAlgorithm (I post that link all over the place, so apologies if you've seen it before). I'll be using it heavily with Atom2 -- already got it working very smoothly, and this upcoming update makes it actually practical to use it live (I tried previously with 16 instances, which worked but wasn't really practical).
I'll DM you.
@_ki big velocity is just what I need at the moment, will be so useful as my first layer of triggering uses velocity.
@blueveek The new update looks like it will open up some new workflows, really like the visibility filters on the layers.
Regarding the API, I think being able to manipulate already recorded notes in the piano roll would be most useful to me as I could then run a filter operation, see the effects, then undo if necessary. I would need:
Not sure how we would trigger these scripts.
Probably of more value to the community would be the ability to filter notes as they arrive. At note-off you would be presented with the final note/duration/velocity and could choose whether to keep it or not. You could also, at this point, transform the note properties before they are passed through.
Having a similar API for notes being sent from the instance would be useful. For example, I could implement my StreamByter script, which converts notes to PC values, inside the sending Atom rather than introducing delay by going via a third-party AU. Once full automation control is available, this use case becomes less interesting, as I would be able to send PC natively.
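To make the two ideas concrete, here is a rough JavaScript sketch. The note shape and function names are assumptions for illustration, not Atom's actual API:

```javascript
// Idea 1: at note-off the full note is known, so a script could decide
// whether to keep the note, and transform it before it's passed through.
// The { pitch, channel, velocity, duration } shape is an assumption.
function processFinishedNote(note) {
  if (note.duration < 0.05) return null;                 // drop sub-50 ms ghosts
  return { ...note, velocity: Math.min(note.velocity, 110) };
}

// Idea 2: convert an outgoing note-on to a Program Change at the byte
// level (the StreamByter use case described above).
function noteToProgramChange(bytes) {
  const [status, pitch] = bytes;
  if ((status & 0xF0) !== 0x90) return null;             // note-ons only
  return [0xC0 | (status & 0x0F), pitch];                // PC number = note number
}

// Example: note-on C4 (60) on channel 1 becomes Program Change 60.
// noteToProgramChange([0x90, 60, 100]) -> [0xC0, 60]
```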
Yeah, I think it's incredible what @_ki managed to do with basically no documentation other than the examples!
Right. I'm thinking a getData that gives you an array of all MIDI data, and an uploadData that is used to overwrite everything, could work great. The downside is that editing single notes or events is rather inefficient, but if that ends up being a common use case, I'll think about optimizing it then.
Yeah, this is actually the hard bit. UI needs to be built for all of this, but perhaps starting simple with just a list of "processing scripts", as a separate category from "controller scripts", and a way to run each one manually, is a good start.
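For illustration, a sketch of how such a processing script could look against that proposed pair of calls. The getData/uploadData names come from the post above; the event shape is my assumption:

```javascript
// Sketch of a "processing script" built on the proposed API.
// getData()/uploadData() are the names floated above; the event shape
// ({ pitch, start, duration, velocity }) is an assumption.
function transposeUp(semitones) {
  const events = getData();                    // read all MIDI data
  for (const ev of events) {
    if (typeof ev.pitch === "number") {
      ev.pitch = Math.min(127, ev.pitch + semitones);
    }
  }
  uploadData(events);                          // overwrite everything
}

transposeUp(12);  // shift the whole pattern up an octave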
Yes, both of the above represent scripts that would run for incoming and outgoing data.
Note that processing incoming data, and emitting it for recording into Atom or passing it through to your instruments or other AUs in the chain, is something you can already do with a combination of the MIDI callbacks and the "receive*" functions in the sandbox, as long as you're working with virtual MIDI ports.
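For instance, a sketch under the assumption that the callback is named onMIDI and the pass-through function is one of the receive* family; both names are placeholders, not confirmed API:

```javascript
// Pass-through sketch using the sandbox's MIDI callback plus a
// "receive*" function. onMIDI and receiveMIDI are placeholder names;
// the real callback and receive* function names may differ.
function onMIDI(bytes) {
  const status = bytes[0] & 0xF0;
  // Example transform: clamp note-on velocities before re-emitting.
  if (status === 0x90 && bytes[2] > 0) {
    bytes[2] = Math.min(bytes[2], 100);
  }
  // Re-emit for recording into Atom or forwarding down the chain;
  // as noted above, this relies on working with virtual MIDI ports.
  receiveMIDI(bytes);
}
```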
However, generally, I don't want to overlap Mozaic here in terms of functionality. What makes the iOS music-making scene so great is the modularity and the plethora of specialized apps, and I want to continue encouraging that. Atom might seem like a MIDI monolith, but I'm growing more and more wary of unintentionally turning it into a walled garden with features that already exist in other AUs. Yes, it's sometimes a nicer UX having everything under one roof, but this can often be fixed by better AU management and switching in hosts.
Still working on my first Atom 2 piece and really enjoying it but my current way of working (which might be bonkers) makes me ask for a couple of enhancement requests from @blueveek …
First: can I have one or more of… brighter room lights, new eyes, new glasses or a bigger iPad 😊
Second: what I have is multiple Atoms (being triggered by a master Atom) driving multiple instruments, all playing at once, and when I hear something I don't like I want to edit particular notes in one instance of Atom. I can hear the notes in that instance by switching on Listen, but what I can't do (at all easily) is hear the new notes in relation to the other instances. I can loop the instance I am editing, but again the other instances are off doing their own thing.
So I think what I am after is some way of defining a set of instances (the Layers facility seems the obvious way) and then within one instance, defining a loop that the other instances follow, allowing me to keep editing the notes until I am happy. Once the loop is removed then all instances continue playing as per the master clock in e.g. AUM.
Hope this makes sense and is not just a bizarre use case! Thanks
@GeoTony in your setup are you able to set a loop on the master instance over the parts you want looped? This is how I do it, but the loop needs to start before or at a trigger note for each of the instances you want to play at the same time.
Like this:
Each note triggers a different Atom instance and will loop through the section of music at that point in the master instance timeline.
Edit: I am using Hold mode, which may be a difference between our setups. The advantage with Hold mode is that everything stops when the note-off or loop point is hit.
I see what you mean @MisplacedDevelopment. I am using Hold mode but had it set to START AT NOTE ON with only a single note, i.e. not continuous. Because of this I had switched off STOP AT NOTE OFF, as per the screenshot. I can see the advantage of doing it how you suggest though, so will give it a go later… many thanks
Hi @blueveek, is this a bug? I imported a long (70 kB) MIDI file (with 22 'tracks/voices' divided over 16 MIDI channels) into Atom2 in AUM; the import went well, but when I then tried to MERGE these 22 patterns into one pattern by clicking the MERGE ALL icon, Atom2 'worked' for 10 minutes (!!!) and then crashed... Am I asking too much of my iPad Pro 10.5 (iOS 14.4.2)? Or is this indeed a bug?
@Harro Working with files that huge isn't something that Atom is currently capable of handling well. I hope to be able to optimize for that particular type of workload at some point in the future.
However, since you've mentioned that importing went well, consider using multiple instances (one for each track) instead of attempting to merge everything into a single pattern in a single instance.
Thank you for your quick reply. So what's the max size (in kB) that Atom2 can handle without crashing? Or is it about exceeding some max number of 'tracks/voices' in a MIDI file?
NB. Photon AU has no problem importing such a long multi-track MIDI file, but I very much prefer to use Atom2...
Also: would it be possible, for the time being, to pop up a warning message with a can't-proceed button when a MIDI file is too long and/or too complicated to merge?
Well, I might need to go start a new "MPE help needed" thread, as I've still had no luck with this whole business of using my Launchpad Pro Mk3 to control the GeoSWAM sax in GeoShred and have it all recorded in an Atom 2 instance.
I’m dying to get this working so I can record my sleazy sax solos on top of my 1980s crime tv/drama sounding soundtracks (😉), so I’ll try asking for help one more time here. Might start that new thread anyway.. but here goes...
I bought @blueveek's MIDI tools (and also have StreamByter and Mozaic if needed).
Does anyone have, or know of, a working chain to make this work? ——> Launchpad Pro MK3 controlling the GeoSWAM tenor sax in GeoShred, with everything, expressions included, recorded into Atom 2?
Could someone please “ELI5” it to me?
Things I know:
Questions:
I have all the stuff (I think!) to make it work... just need help wiring it all up!
I’ll get close! But then it’ll start squealing and the GeoShred presets start switching all over the place, or if I do get some stable sounding sax sounds, it’ll start being riddled with stuck notes, either that or random notes hitting all over the place. Basically everything goes haywire.. 😢
@Intrepolicious: as far as I know the Launchpad doesn’t do MPE.
Have you tried with an MPE controller/app? Have you tried with the Launchpad and a non-MPE synth?
@Intrepolicious do you have the LP to GeoSWAM working without Atom? If not, I’d focus on that first. Then I’d ask if you tried LP -> Atom -> GeoSWAM?
It's not so much about the file size, but more about the drawing of all that data. 70 kB is massive, and that means there's a lot in there to draw.
I actually think merging was probably very quick, but then it was the rendering itself that slowed down or crashed the UI. Atom looks good, but that comes at a cost (one that I'm now slowly starting to think is a little unreasonable), and in hindsight some of the technical decisions I made early on (which are very different from Atom 1's approach) aren't suitable for dealing with massive amounts of data that need to be drawn on screen. It should, of course, be possible to render all of this very efficiently, and I know how; it's just quite a bit of work to force this into the current architecture. This is why I don't think it's your device at fault here, but rather my current approach to rendering.
EDIT: For the nerdy: here's the exact same issue faced by Google with Docs, and their approach is exactly what I'm planning to do as well: https://workspaceupdates.googleblog.com/2021/05/Google-Docs-Canvas-Based-Rendering-Update.html
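The same idea in miniature, as a web-flavored sketch of the linked canvas approach (illustrative only; Atom's renderer is native, not web canvas):

```javascript
// One draw pass over visible notes on a single canvas, instead of one
// retained view object per note.
function drawNotes(ctx, notes, viewStart, viewEnd) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  for (const n of notes) {
    // Skip everything outside the viewport: cost scales with what's
    // visible, not with the size of the file.
    if (n.start > viewEnd || n.start + n.duration < viewStart) continue;
    ctx.fillRect(
      (n.start - viewStart) * 4,  // x: 4 px per beat (arbitrary scale)
      (127 - n.pitch) * 6,        // y: 6 px per semitone
      n.duration * 4,             // width
      5                           // height
    );
  }
}
```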
@Intrepolicious: why don't you start a separate thread? Your issue has to do with setting up the Launchpad Pro to GeoShred chain, not Atom.
Tip: you should put the MIDI filtering/cloning between the Launchpad and GeoShred. To start, forget about Atom and focus on Launchpad -> MIDI filter of some sort -> GeoSWAM.
I do know that (I’ll add it to my “things I know” list). This is about the “workaround” for just this thing.
Like "Velocity Keyboard"? I have. Same results; even GeoShred's own control surface gives me the same results when I try to record it into Atom. I don't think my problem is my controller, but more this MPE workaround / "funneling MIDI" thing. I can get notes to show up in Atom, but they're not the sweet expressive notes that I intended to record.
Yes. Works perfectly, just like all my other MIDI controllers.
Looks amazing! Sorry to bother you, I’ve read your doc and it seems that in my iCloud Drive, I’ve got 3 folders for PianoRoll 2.0.1 to 2.0.3 (despite being at 2.0.8). Furthermore, all of them seem « locked » meaning that whenever I try to drag files to their Styles folders, I’m stuck. I’ve checked my iCloud Drive settings and Atom2 is correctly enabled. Is there some workaround to this issue? Thank you!
While the Launchpad doesn't do MPE, setting up some filtering/cloning between it and GeoSwam will let you play the GeoSwam instruments from a Launchpad. How to do that has been documented elsewhere on the AB forum -- and seems appropriate to address in a different thread.
The version numbers on iCloud Drive correspond to the API version number, not the app's version number. I know this is confusing, I'll try to unify things moving forward.
Simply drag and drop your files onto an Atom instance in your favorite host. That's it, Atom will take care of the rest.
Amazing, that worked! I remember you mentioning this drag-and-drop function, but I was so focused on following the guide that I didn't think of trying that! 👍
Thank you very much!
This takes me back to my original thinking that there could be special "event" markers in the timeline. These would be like notes but graphically different somehow, or in a separate section at the bottom of the screen. These events could do things like trigger CCs, PCs, Sysex, or... trigger script functions.
Long-pressing on the event would bring up a popup allowing you to select the event type. If one of the three-byte MIDI message types was selected, you could select the other two bytes from drop-downs. If Sysex was selected, you could enter a string of bytes into a text box and it would automatically be wrapped in F0 .. F7. If scripting was selected, you could enter a function call into a text box.
This could be a terrifically powerful thing, and at least in my mind, of relatively low UI impact.
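A sketch of what such an "event" marker could carry, the proposal above expressed as data; nothing here is an existing Atom feature, and sendMIDI is a hypothetical output function:

```javascript
const events = [
  { beat: 0,  type: "cc",    bytes: [0xB0, 74, 64] },           // CC 74 = 64
  { beat: 8,  type: "pc",    bytes: [0xC0, 12] },               // Program Change 12
  { beat: 16, type: "sysex", bytes: [0xF0, 0x7E, 0x09, 0xF7] }, // auto-wrapped F0..F7
  { beat: 24, type: "script", call: () => console.log("cue!") },
];

// On playback, the host would fire whichever marker falls on the
// current beat.
function fireEvent(ev, sendMIDI) {
  if (ev.type === "script") ev.call();
  else sendMIDI(ev.bytes);
}
```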
I'm confused about how one could use something like this to process data through, live, without introducing delay equal to the length of the note. If you have to wait for the note-off to arrive to decide what to do, you can't send anything until the end of the note.
I can see deferring saving the note to the piano roll to the end. I can see processing already recorded notes by looking ahead. But I don't see how passing notes coming through could work.
On the other hand, there's the MIDI Tools note filter, which does work live. So maybe there's some magic available that I'm not seeing?
I’m 100% sure this has been answered before, so excuse me in advance.
Does Atom2 support the Launchkey Mini Mk3? I mean, is there already a script for it? I can't see it in the app. Thanks!
You're correct. But not all algorithms need to wait for a note-off to make a decision, and I'm going to hazard a conjecture and say that most won't. Waiting for the professionals to rip my bogus conjecture to shreds on this one!
If you don't need to wait for a note-off, then it's all fine and dandy, right?
Yeah, Filter and others like De-Ghost / Gate don't care about waiting for note-off.
Not yet, but @sinosoidal was kind enough to link me to its programmer's manual, so I hope to add support for it pretty soon.
Interesting. I can't think of how De-Ghost can work without waiting for the next note to arrive to decide whether or not to send it, but I'll take your word for it.
@MisplacedDevelopment, it nearly works, but not quite, because of the 'the loop needs to start before or at a trigger note for each of the instances you want to play at the same time' bit… so if I have a 64-bar piece and I want to loop every 8 bars, and then at my next attempt every 3 or 4 or whatever bars, I can't see an easy way of doing it. You have to keep moving the master note triggers, as far as I can see, which makes it incredibly difficult. I think somewhere there is a simple solution, but I can't see it at the moment…
Oh, it does wait for the next note to arrive, but that's a note-on message, not a note-off.
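A minimal sketch of that wait-for-the-next-note-on approach (hypothetical logic, not MIDI Tools' actual implementation):

```javascript
// Hold each note-on for a short window; if another note-on lands inside
// the window, drop the quieter one as a "ghost". Every note that
// survives is therefore delayed by up to windowMs.
function makeDeGhoster(windowMs, emit) {
  let pending = null;
  return function onNoteOn(bytes) {
    if (pending !== null) {
      // A second note-on landed inside the window: keep the louder
      // of the two and drop the quieter one as a ghost.
      clearTimeout(pending.timer);
      const keep = pending.bytes[2] >= bytes[2] ? pending.bytes : bytes;
      emit(keep);
      pending = null;
      return;
    }
    // Hold this note-on for up to windowMs before letting it through.
    pending = {
      bytes,
      timer: setTimeout(() => { emit(bytes); pending = null; }, windowMs),
    };
  };
}

// Usage: a 20 ms window; every surviving note is delayed by up to 20 ms.
const deGhost = makeDeGhoster(20, (b) => console.log("note-on", b));
deGhost([0x90, 60, 10]);  // quiet hit: held as a possible ghost
deGhost([0x90, 60, 90]);  // louder hit inside the window: only this survives
```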
But it still has to introduce a delay to any note that you do want to let through, right? Because you have to wait some time to decide if the note is long enough.
Sorry, I know you have better things to do than answer my off topic questions.