That kind of thing keeps getting more and more convincing, but for a live performer to sound convincingly like the person they're emulating, it still requires a measure of mimicry on their part.
I just want an overdub on/off function that works EXACTLY as it does in Drambo, in every single MIDI recording app, as a standard. If it's on, it overdubs; if it's off, the new notes replace the old, but ONLY where the two coincide. This lets you blend takes in real time, and it's a rare gem that should be standard in every applicable app. As standard as tap tempo.
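A minimal sketch of that replace-on-coincide behavior, using a toy note representation of my own (pitch, start, end); the exact overlap rule Drambo uses may differ:

```python
# Sketch of Drambo-style "replace on coincide" overdub merging.
# Notes are (pitch, start, end) tuples in beats. The representation
# and overlap rule are assumptions, not Drambo's actual internals.

def coincides(a, b):
    """True if two notes share a pitch and overlap in time."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def merge_take(existing, new_take, overdub):
    if overdub:
        # Plain overdub: keep everything, layer the new notes on top.
        return existing + new_take
    # Replace mode: an old note survives only if no new note coincides with it.
    kept = [old for old in existing
            if not any(coincides(old, new) for new in new_take)]
    return kept + new_take

old = [(36, 0.0, 0.5), (38, 1.0, 1.5)]   # kick on beat 1, snare on beat 2
new = [(36, 0.25, 0.75)]                 # new kick overlapping the old one

print(merge_take(old, new, overdub=False))  # snare kept, old kick replaced
```

Notes that don't coincide with anything in the new take pass through untouched, which is what makes real-time take blending possible.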
Yes, having to dig through menus to change this is a bit of a pain. Would be a nice top-level option while recording.
Yes, that one looks like it's on a similar track. Thanks.
I can't see how that can be done for live audio, as it would require a huge amount of latency. With great singers like Billie Holiday, the artistry is in the phrasing. AI can't transform a phrase or even a word until it has heard it.
You can do basic things like pitch correction, formant filtering, changing head resonance, harmonizing, etc. in near-realtime, but a true and convincing singing transformation into another person really needs to be done after the fact.
That was true until recently. Saw this from a performance artist singer who has been working with transforming live vocals and it appears they are really doing it live, not via post-processing:
(Go to about 6 minutes into the video)
If it’s not a fake…
https://www.youtube.com/shorts/KCBJF5dDklk
Sounds pretty convincing to me…
I hear latency there. Sure it's not a lot, but then there's not a whole lot of transformation going on there. I was thinking more like full phrase character such as a Billie Holiday or Frank Sinatra would do.
But, point made.
And of course, these things will continue to improve as real-time processing power improves.
(Sorry ... this turned out to be a really long read.)
In / Out universal midi note translator intended for drums:
The app would allow free swapping of input app and target app and would translate freely between them.
This is meant to address the difficulties of each app having only so many pads, and those not always corresponding to a single sound.
Take the example of trying to drive an eight-pad drum app from a GM midi file. Even if the app has fixed sounds per pad (kick, snare, tom, hats, etc.), the GM file can have four different notes for a kick, three or more for "snares", several for different toms, hats and crashes.
Typically when you try to play a GM midi file even into an app such as Ruismaker that has a GM option, you can still end up with entirely missing parts, or parts that play very different sounds than expected. This is further complicated by the fact that any pad in Ruismaker can be any sound.
Additionally, perhaps an app puts out MIDI and you want to translate that output over to another app. Rozeta XOX and Rhythm are nice in that they have customizable mapping, but most apps don't. So how do we conveniently get from something that outputs one note set to something that accepts another? And we'd have to start the mapping over from scratch whenever we switch the input or output app.
So, we set up an app mapping that is something like:

Ruismaker Map:
- 47 (B1) → Kick (hard alt)
- etc...
Now let's say we have an app that we've mapped note C3 as a Kick (meaning we've defined all of the GM "kick" notes as mapped to note C3 for this app). An incoming note 49 finds its mapping to note C3 from the internal lookup table. But then we decide to swap out a different app for the output. This one has note C2 mapped to kick notes. It works with no further mapping needed because of the internal mapping table. Likewise, if we sent a GM midi file to the input, as long as there's a GM map (which in this case would be the same as the internal lookup table), again we don't have to do a thing to keep correct routing.
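The internal lookup idea could be sketched like this, with both sides of the translation going through a canonical drum role, so swapping either the input or output app needs no re-mapping. The maps and note numbers below are made-up examples, not real app defaults:

```python
# Two-sided translation via canonical drum roles: the input map goes
# note -> role, the output map goes role -> note. Swap either map and
# the other side keeps working. All numbers here are illustrative.

GM_IN = {35: "kick", 36: "kick", 38: "snare", 40: "snare", 42: "hat_closed"}

APP_A_OUT = {"kick": 48, "snare": 50, "hat_closed": 52}   # e.g. pads from C3
APP_B_OUT = {"kick": 36, "snare": 38, "hat_closed": 42}   # e.g. pads from C2

def translate(note, in_map, out_map):
    role = in_map.get(note)
    if role is None:
        return None          # unmapped note: drop it (or pass it through)
    return out_map.get(role)

print(translate(36, GM_IN, APP_A_OUT))  # both GM kicks land on one pad
print(translate(35, GM_IN, APP_B_OUT))  # swap the output app, no re-map
```

The key design point is that neither app ever maps directly to the other; each maps only to the shared role table, so the number of maps grows linearly with the number of apps rather than with every input/output pair.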
Yet another idea I put a lot of thought into but couldn't get past the AUv3 learning hump to implement. 🤨
@wim, these detailed posts are like gold for any developer who chooses to jump on these ideas.
Well yeh ... but to really, really do the job realtime, you'd need time travel.
Still, I'm sure things will get good enough to convince most people.
It's nice to think that perhaps all that thought might not be completely wasted. 😂
Menus aren't the problem. Most apps straight up lack this functionality. They may have an overdub switch, but it won't function like Drambo's. Oh well, I guess people don't get how to make use of it.
I’d love an app like Apple Music Memos…
Very cool tool!
Would’ve been nice if Apple had open sourced it instead of killing it entirely.
@wim said:
In / Out universal midi note translator intended for drums
It wouldn't be a pure translator anymore, but it might be cool to have buttons that remap things temporarily, like play rides instead of hats or add a crash to the kick. Maybe knobs for scaling velocity for each element, too. Jamstix 4 might be a good inspiration for things to add.
Yeh, @tyslothrop1, lots could be added, such as round robin or randomization in the case of multiple mappings for a single note, groove templates, etc., but the idea was hard enough to get across as it is. 😉
Anyway what’s the point of a developer trying to load in features when no matter how many they add, 50 more will be requested by the AB forum within hours of the app being released? 😂
Yes!
As I do house/electro, I focus on kicks, so nothing natural, but I need to use AUv3 apps in GB or Cubasis. Even the drummer in LP isn't suited to industrial house, so I use samples that I create or record myself.
Yup, it’s in MiRack, and MiRack can have multiple ins and outs.
@bygjohn thanks, good to know.
Yes, the guy in the TED video is amazing, but this just seems like a great harmonizer effect (a male-to-female translation). If the male singer weren't that good, the processed female voice would probably be terrible.
I was thinking of an app/plugin capable of grabbing an artist's personality and making it available to sing new songs (imagine how much all deceased artists could still give to humanity with this algorithm). I feel that with AI we are close to having these results, but I haven't heard anything really convincing yet.
In other words: I would like to sing into a mic with my off-key voice and hear Bob Marley's voice, in all his glory.
I know we're almost there
I'd like to see a Tab Print option in Riffler.
I'd like reverse MIDI control automation. You connect the controller to the instrument, get it to learn the MIDI numbers, then you turn the knob on the instrument and the automation sequencer performs it. Too much to hope for, I know… but why?
I wonder if that's even possible with iOS/iPadOS?
I don't understand what you're saying. 🤷🏼‍♂️
😀 Yes I see your point.
I meant that you would be able to tweak the knobs on a synth and they would automate, because they're connected via MIDI learn to a sequencer, instead of turning the knobs on the sequencer to automate the synth.
So you'd have the MIDI notes coming through via the sequencer, but all the tweaking is done on the actual synth, and the sequencer remembers it and plays back the automation. It would give your apps more of a groovebox, hands-on feel.
Ahh. What you're describing is dependent on each individual app.
If a host made in-app AUv3 parameter movement able to be routed to other apps, this would be huge. You could then effectively use any AUv3 app as a controller for any other AUv3 app at many times higher resolution than midi allows. That would open the way for high-resolution AUv3 parameter recording, editing, and playback.
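To illustrate the resolution point, here's a toy comparison of recording parameter movement as timestamped floats versus quantizing it to 7-bit MIDI CC. All names are hypothetical; no real AUv3 host API is involved:

```python
# Toy comparison: host-side float parameter recording vs 7-bit MIDI CC.
# AUv3 parameters are continuous floats; a CC collapses them to 128 steps.

def to_cc(value):
    """Quantize a normalized 0.0-1.0 value to a 7-bit MIDI CC value."""
    return round(value * 127)

def record(events):
    """events: list of (time_sec, param_value) floats from the host."""
    return list(events)   # full float resolution is kept as-is

# Three tiny filter-cutoff movements, closer together than one CC step:
moves = [(0.00, 0.5000), (0.01, 0.5003), (0.02, 0.5007)]
recorded = record(moves)

ccs = [to_cc(v) for _, v in moves]
print(ccs)                               # all three collapse to CC step 64
print(len({v for _, v in recorded}))     # 3 distinct values survive as floats
```

The same movement that MIDI flattens into a single repeated CC value survives intact as floats, which is why host-level parameter routing would allow much finer recording and editing than CC automation.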
So, I think there are really two app ideas here:
Idea that came up in a different thread:
An app that can take appropriate-length waveforms from live input or imported files, sliced at adjustable or manually triggered points, then assemble those slices into Serum-compatible wavetable files.
These files could then be exported for use in compatible wavetable synthesizers. Optionally include a wavetable synth in the app.