
Attention Developers: Free App Ideas From This Community of Musicians

Comments

  • @recycle said:
    I've been waiting for it all my life: an app that, when I speak, transforms my voice into that of Morgan Freeman in real time; when I sing, it brings out the voice of Billie Holiday; when I shout, I hear a scream by James Brown. We can probably get there with the use of AI, but unfortunately what I've already heard is quite disappointing (e.g. Uberduck), and still nothing in real time.

    That kind of thing keeps getting more and more convincing, but for a live performer to sound convincingly like the person they're emulating still requires a measure of mimicry on their part.

  • edited May 2023

    I just want an overdub on/off function that works EXACTLY as it does in Drambo, as a standard in every single midi recording app. If it's on, it overdubs; if it's off, the new notes replace the old, but ONLY where the two coincide. This lets you blend takes in real time, and it's a rare gem that should be standard in every applicable app. As standard as tap tempo.
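
The replace-only-where-notes-coincide behavior described above could be sketched like this (a hypothetical model that assumes a take is a list of `(step, pitch, velocity)` tuples; it's an illustration of the idea, not Drambo's actual implementation):

```python
def merge_take(old, new, overdub):
    """Blend a new MIDI take into an existing one.

    overdub=True  -> layer everything from both takes.
    overdub=False -> new notes replace old ones, but only where they
                     coincide (same step and pitch); old notes that
                     don't collide with the new take are kept.
    """
    if overdub:
        return sorted(old + new)
    taken = {(step, pitch) for step, pitch, _vel in new}
    kept = [n for n in old if (n[0], n[1]) not in taken]
    return sorted(kept + new)

old = [(0, 36, 100), (4, 38, 90)]   # kick at step 0, snare at step 4
new = [(0, 36, 60)]                 # re-played kick, softer
# overdub off: the new kick replaces the old one, the snare survives
print(merge_take(old, new, overdub=False))  # [(0, 36, 60), (4, 38, 90)]
```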

  • @db909 said:
    I just want an overdub on/off function that works EXACTLY as it does in Drambo, as a standard in every single midi recording app. If it's on, it overdubs; if it's off, the new notes replace the old, but ONLY where the two coincide. This lets you blend takes in real time, and it's a rare gem that should be standard in every applicable app. As standard as tap tempo.

    Yes, having to dig through menus to change this is a bit of a pain. Would be a nice top-level option while recording.

  • Yes, that one looks like it's on a similar track. Thanks.

  • wim
    edited May 2023

    @recycle said:
    I've been waiting for it all my life: an app that, when I speak, transforms my voice into that of Morgan Freeman in real time; when I sing, it brings out the voice of Billie Holiday; when I shout, I hear a scream by James Brown. We can probably get there with the use of AI, but unfortunately what I've already heard is quite disappointing (e.g. Uberduck), and still nothing in real time.

    I can't see how that can be done for live audio, as it would require a huge amount of latency. With great singers like Billie Holiday the artistry is in the phrasing. AI can't transform a phrase or even a word until it has heard it.

    You can do basic things like pitch correction, formant filtering, changing head resonance, harmonizing, etc. in near-realtime, but a true, convincing transformation of your singing into another person's voice really needs to be done after the fact.

  • edited May 2023

    @wim said:

    @recycle said:
    I've been waiting for it all my life: an app that, when I speak, transforms my voice into that of Morgan Freeman in real time; when I sing, it brings out the voice of Billie Holiday; when I shout, I hear a scream by James Brown. We can probably get there with the use of AI, but unfortunately what I've already heard is quite disappointing (e.g. Uberduck), and still nothing in real time.

    I can't see how that can be done for live audio, as it would require a huge amount of latency. With great singers like Billie Holiday the artistry is in the phrasing. AI can't transform a phrase or even a word until it has heard it.

    You can do basic things like pitch correction, formant filtering, changing head resonance, harmonizing, etc. in near-realtime, but a true, convincing transformation of your singing into another person's voice really needs to be done after the fact.

    That was true until recently. I saw this from a performance artist and singer who has been working on transforming live vocals, and it appears they are really doing it live, not via post-processing:

    (Go to about 6 minutes into the video)

  • edited May 2023

    @wim said:

    @recycle said:
    I've been waiting for it all my life: an app that, when I speak, transforms my voice into that of Morgan Freeman in real time; when I sing, it brings out the voice of Billie Holiday; when I shout, I hear a scream by James Brown. We can probably get there with the use of AI, but unfortunately what I've already heard is quite disappointing (e.g. Uberduck), and still nothing in real time.

    I can't see how that can be done for live audio, as it would require a huge amount of latency. With great singers like Billie Holiday the artistry is in the phrasing. AI can't transform a phrase or even a word until it has heard it.

    You can do basic things like pitch correction, formant filtering, changing head resonance, harmonizing, etc. in near-realtime, but a true, convincing transformation of your singing into another person's voice really needs to be done after the fact.

    If it’s not a fake…

    https://www.youtube.com/shorts/KCBJF5dDklk

    Sounds pretty convincing to me…

  • wim
    edited May 2023

    @NeuM said:

    @wim said:

    @recycle said:
    I've been waiting for it all my life: an app that, when I speak, transforms my voice into that of Morgan Freeman in real time; when I sing, it brings out the voice of Billie Holiday; when I shout, I hear a scream by James Brown. We can probably get there with the use of AI, but unfortunately what I've already heard is quite disappointing (e.g. Uberduck), and still nothing in real time.

    I can't see how that can be done for live audio, as it would require a huge amount of latency. With great singers like Billie Holiday the artistry is in the phrasing. AI can't transform a phrase or even a word until it has heard it.

    You can do basic things like pitch correction, formant filtering, changing head resonance, harmonizing, etc. in near-realtime, but a true, convincing transformation of your singing into another person's voice really needs to be done after the fact.

    That was true until recently. I saw this from a performance artist and singer who has been working on transforming live vocals, and it appears they are really doing it live, not via post-processing:

    (Go to about 6 minutes into the video)

    I hear latency there. Sure, it's not a lot, but then there's not a whole lot of transformation going on there either. I was thinking more of full-phrase character, such as what a Billie Holiday or a Frank Sinatra would do.

    But, point made.

  • @wim said:

    @NeuM said:

    @wim said:

    @recycle said:
    I've been waiting for it all my life: an app that, when I speak, transforms my voice into that of Morgan Freeman in real time; when I sing, it brings out the voice of Billie Holiday; when I shout, I hear a scream by James Brown. We can probably get there with the use of AI, but unfortunately what I've already heard is quite disappointing (e.g. Uberduck), and still nothing in real time.

    I can't see how that can be done for live audio, as it would require a huge amount of latency. With great singers like Billie Holiday the artistry is in the phrasing. AI can't transform a phrase or even a word until it has heard it.

    You can do basic things like pitch correction, formant filtering, changing head resonance, harmonizing, etc. in near-realtime, but a true, convincing transformation of your singing into another person's voice really needs to be done after the fact.

    That was true until recently. I saw this from a performance artist and singer who has been working on transforming live vocals, and it appears they are really doing it live, not via post-processing:

    (Go to about 6 minutes into the video)

    I hear latency there. Sure, it's not a lot, but then there's not a whole lot of transformation going on there either. I was thinking more of full-phrase character, such as what a Billie Holiday or a Frank Sinatra would do.

    But, point made.

    And of course, these things will continue to improve as real-time processing power improves.

  • wim
    edited May 2023

    (Sorry ... this turned out to be a really long read.)

    In / Out universal midi note translator intended for drums:

    The app would allow free swapping of input app and target app and would translate freely between them.

    • Internally there is a "neutral" mapping table - which I envision as being the General Midi (GM) Drum map.
    • One or more maps could be set up for each app one wants to use with the translator.
    • Each map would specify the note the app uses and the internal map note(s) it corresponds to.
    • A map can be selected for the input. Another map can be selected for the output.
    • Incoming notes get sent to the internal table, then are translated to the output.
    • Comes with a substantial set of pre-made maps.
    • Allows easy creation, editing, export, and sharing of user maps.

    This is meant to address the difficulties of each app having only so many pads, and those not always corresponding to a single sound.

    Take the example of trying to drive an eight-pad drum app from a GM midi file. Even if the app has fixed sounds per pad (kick, snare, tom, hats, etc.), the GM file can have four different notes for a kick, three or more for "snares", several for different toms, hats and crashes.

    Typically when you try to play a GM midi file even into an app such as Ruismaker that has a GM option, you can still end up with entirely missing parts, or parts that play very different sounds than expected. This is further complicated by the fact that any pad in Ruismaker can be any sound.

    Additionally, perhaps an app puts out midi and you want to translate that output over to another app. Rozeta XOX and Rhythm are nice in that they have customizable mapping, but most apps don't. So how do we conveniently get from something that outputs one note set to something that accepts another? And we'd need to start over with the mapping if we decided to switch the input or output app.

    So, we set up an app mapping that is something like:

    Ruismaker Map

    Note #      Name      GM Note(s)
    49 (C#2)    Kick      48 (C2)  Kick
                          47 (B1)  Kick (hard)
                          etc...   Kick (hard Alt)
    51 (D#2)    etc...

    Now let's say we have an app that we've mapped note C3 as a Kick (meaning we've defined all of the GM "kick" notes as mapped to note C3 for this app). An incoming note 49 finds its mapping to note C3 from the internal lookup table. But then we decide to swap out a different app for the output. This one has note C2 mapped to kick notes. It works with no further mapping needed because of the internal mapping table. Likewise, if we sent a GM midi file to the input, as long as there's a GM map (which in this case would be the same as the internal lookup table), again we don't have to do a thing to keep correct routing.
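
The lookup described above could be sketched in a few lines (a hypothetical illustration; the map contents and note numbers here are examples, not actual Ruismaker assignments):

```python
# Each app map says which internal (GM) notes an app note stands for.
GM_KICK, GM_KICK_HARD = 36, 35  # example GM drum-map numbers

# Input map: what each input-app note means in internal/GM terms.
in_map = {49: [GM_KICK, GM_KICK_HARD]}   # e.g. note 49 = "any kick"

# Output map: which target-app note covers each internal/GM note.
out_map = {48: [GM_KICK, GM_KICK_HARD]}  # target app plays all kicks on 48

def translate(note, in_map, out_map):
    """Input note -> internal GM meaning(s) -> target-app note(s)."""
    gm_notes = in_map.get(note, [note])   # unmapped notes pass through
    out = set()
    for gm in gm_notes:
        for target, covered in out_map.items():
            if gm in covered:
                out.add(target)
    return sorted(out) or [note]

print(translate(49, in_map, out_map))  # [48]
```

Swapping the input or output app is then just plugging in a different map; the internal GM table in the middle stays fixed, which is what makes the re-mapping free.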

    Yet another idea I put a lot of thought into but couldn't get past the AUv3 learning hump to implement. 🤨

  • @wim, these detailed posts are like gold for any developer who chooses to jump on these ideas.

  • @NeuM said:

    @wim said:

    @NeuM said:

    @wim said:

    @recycle said:
    I've been waiting for it all my life: an app that, when I speak, transforms my voice into that of Morgan Freeman in real time; when I sing, it brings out the voice of Billie Holiday; when I shout, I hear a scream by James Brown. We can probably get there with the use of AI, but unfortunately what I've already heard is quite disappointing (e.g. Uberduck), and still nothing in real time.

    I can't see how that can be done for live audio, as it would require a huge amount of latency. With great singers like Billie Holiday the artistry is in the phrasing. AI can't transform a phrase or even a word until it has heard it.

    You can do basic things like pitch correction, formant filtering, changing head resonance, harmonizing, etc. in near-realtime, but a true, convincing transformation of your singing into another person's voice really needs to be done after the fact.

    That was true until recently. I saw this from a performance artist and singer who has been working on transforming live vocals, and it appears they are really doing it live, not via post-processing:

    (Go to about 6 minutes into the video)

    I hear latency there. Sure, it's not a lot, but then there's not a whole lot of transformation going on there either. I was thinking more of full-phrase character, such as what a Billie Holiday or a Frank Sinatra would do.

    But, point made.

    And of course, these things will continue to improve as real-time processing power improves.

    Well yeh ... but to really, really do the job realtime, you'd need time travel.
    Still, I'm sure things will get good enough to convince most people.

  • @NeuM said:
    @wim, these detailed posts are like gold for any developer who chooses to jump on these ideas.

    It's nice to think that perhaps all that thought might not be completely wasted. 😂

  • @NeuM said:

    @db909 said:
    I just want an overdub on/off function that works EXACTLY as it does in Drambo, as a standard in every single midi recording app. If it's on, it overdubs; if it's off, the new notes replace the old, but ONLY where the two coincide. This lets you blend takes in real time, and it's a rare gem that should be standard in every applicable app. As standard as tap tempo.

    Yes, having to dig through menus to change this is a bit of a pain. Would be a nice top-level option while recording.

    Menus aren't the problem. Most apps straight up lack this functionality. They may have an overdub switch, but it won't function like Drambo's. Oh well, I guess people don't get how to make use of it.

  • I’d love an app like Apple Music Memos…
    Very cool tool!

  • @Stuntman_mike said:
    I’d love an app like Apple Music Memos…
    Very cool tool!

    Would’ve been nice if Apple had open sourced it instead of killing it entirely.

  • edited May 2023

    @wim said:

    In / Out universal midi note translator intended for drums

    It wouldn't be a pure translator anymore, but it might be cool to have buttons that remap things temporarily, like play rides instead of hats or add a crash to the kick. Maybe knobs for scaling velocity for each element, too. Jamstix 4 might be a good inspiration for things to add.

  • wim
    edited May 2023

    Yeh, @tyslothrop1, lots could be added, such as round robin or randomization in the case of multiple mappings for a single note, groove templates, etc., but the idea was hard enough to get across as it is. 😉

    Anyway what’s the point of a developer trying to load in features when no matter how many they add, 50 more will be requested by the AB forum within hours of the app being released? 😂

  • @NeuM said:

    @Stuntman_mike said:
    I’d love an app like Apple Music Memos…
    Very cool tool!

    Would’ve been nice if Apple had open sourced it instead of killing it entirely.

    Yes!

  • As I do house electro I focus on kicks, so nothing natural, but I need to use AUv3 apps in GB or Cubasis. Even the drummer in LP isn't a fit for industrial house, so I use samples that I create or record myself.

  • @tyslothrop1 said:
    https://library.vcvrack.com/AudibleInstruments/Warps

    The second one is part of miRack, too, if I'm not mistaken, although I'm not sure if you can route two audio tracks into miRack.

    Yup, it’s in MiRack, and MiRack can have multiple ins and outs.

  • @bygjohn thanks, good to know.

  • edited May 2023

    @wim said:

    @NeuM said:

    @wim said:

    @recycle said:
    I've been waiting for it all my life: an app that, when I speak, transforms my voice into that of Morgan Freeman in real time; when I sing, it brings out the voice of Billie Holiday; when I shout, I hear a scream by James Brown. We can probably get there with the use of AI, but unfortunately what I've already heard is quite disappointing (e.g. Uberduck), and still nothing in real time.

    I can't see how that can be done for live audio, as it would require a huge amount of latency. With great singers like Billie Holiday the artistry is in the phrasing. AI can't transform a phrase or even a word until it has heard it.

    You can do basic things like pitch correction, formant filtering, changing head resonance, harmonizing, etc. in near-realtime, but a true, convincing transformation of your singing into another person's voice really needs to be done after the fact.

    That was true until recently. I saw this from a performance artist and singer who has been working on transforming live vocals, and it appears they are really doing it live, not via post-processing:

    (Go to about 6 minutes into the video)

    I hear latency there. Sure, it's not a lot, but then there's not a whole lot of transformation going on there either. I was thinking more of full-phrase character, such as what a Billie Holiday or a Frank Sinatra would do.

    But, point made.

    Yes, the guy in the TED video is amazing, but this just seems like a great harmonizer effect (male-to-female translation). If the male singer weren't that good, the processed female voice would probably be terrible.
    I was thinking of an app/plugin capable of capturing an artist's vocal personality and making it available to sing new songs (imagine how much all the deceased artists could still give to humanity with such an algorithm). I feel that with AI we are close to having these results, but I haven't heard anything really convincing yet.
    In other words: I would like to sing into a mic with my off-key voice and hear Bob Marley's voice, in all its glory.
    I know we're almost there.

  • I'd like to see a Tab Print option in Riffler.

  • I'd like reverse midi control automation. You connect the controller to the instrument, get it to learn the midi numbers, then you turn the knob on the instrument and the automation sequencer performs it. Too much to hope for, I know... but why?

  • @robosardine said:
    I'd like reverse midi control automation. You connect the controller to the instrument, get it to learn the midi numbers, then you turn the knob on the instrument and the automation sequencer performs it. Too much to hope for, I know... but why?

    I wonder if that's even possible with iOS/iPadOS?

  • @robosardine said:
    I'd like reverse midi control automation. You connect the controller to the instrument, get it to learn the midi numbers, then you turn the knob on the instrument and the automation sequencer performs it. Too much to hope for, I know... but why?

    I don't understand what you're saying. 🤷🏼‍♂️

  • @wim said:

    @robosardine said:
    I'd like reverse midi control automation. You connect the controller to the instrument, get it to learn the midi numbers, then you turn the knob on the instrument and the automation sequencer performs it. Too much to hope for, I know... but why?

    I don't understand what you're saying. 🤷🏼‍♂️

    😀 Yes, I see your point.
    I meant that you'd be able to tweak the knobs on a synth and they would automate, because they're connected via midi learn to a sequencer, instead of turning the knobs on the sequencer to automate the synth.
    So you'd have the midi notes coming through from the sequencer, but all the tweaking is done on the actual synth, and the sequencer remembers it and plays back the automation, giving your apps more of a groovebox, hands-on feel.

  • wim
    edited May 2023

    @robosardine said:

    @wim said:

    @robosardine said:
    I'd like reverse midi control automation. You connect the controller to the instrument, get it to learn the midi numbers, then you turn the knob on the instrument and the automation sequencer performs it. Too much to hope for, I know... but why?

    I don't understand what you're saying. 🤷🏼‍♂️

    😀 Yes, I see your point.
    I meant that you'd be able to tweak the knobs on a synth and they would automate, because they're connected via midi learn to a sequencer, instead of turning the knobs on the sequencer to automate the synth.
    So you'd have the midi notes coming through from the sequencer, but all the tweaking is done on the actual synth, and the sequencer remembers it and plays back the automation, giving your apps more of a groovebox, hands-on feel.

    Ahh. What you're describing is dependent on each individual app.

    • If an app reports its AUv3 parameter movements to the host, then the host can record them or make other use of them. Other apps cannot because there's no way for them to get the parameter movements except from the host. Many but not all hosts are able to record and play back such automation.
    • If an app outputs midi CCs from its controls, the host can, and usually does, provide routing to other apps. But most apps, if they output CCs at all, don't output them for all or even most of their controls.

    If a host made in-app AUv3 parameter movement able to be routed to other apps, this would be huge. You could then effectively use any AUv3 app as a controller for any other AUv3 app at many times higher resolution than midi allows. That would open the way for high-resolution AUv3 parameter recording, editing, and playback.

    So, I think there are really two app ideas here:

    1. A host that provides routing of AUv3 parameter data between apps.
    2. A recorder / editor / playback AUv3 app that could use that data. This can only work via idea #1.
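
As a rough model of idea #2, a lane that records timestamped parameter (or CC) values and plays them back could look like this (a hypothetical, host-agnostic sketch; in practice the host from idea #1 would feed it events and poll it each render cycle):

```python
import bisect

class AutomationLane:
    """Records knob movements as (time, value) points and plays them back."""

    def __init__(self):
        self.points = []  # kept sorted by time

    def record(self, t, value):
        """Called whenever the learned control moves."""
        bisect.insort(self.points, (t, value))

    def value_at(self, t):
        """Playback: the last recorded value at or before time t."""
        i = bisect.bisect_right(self.points, (t, float("inf")))
        return self.points[i - 1][1] if i else None

lane = AutomationLane()
lane.record(0.0, 0.20)     # e.g. filter cutoff nudged at t=0s
lane.record(1.0, 0.80)     # opened up at t=1s
print(lane.value_at(0.5))  # 0.2
```
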
  • Idea that came up in a different thread:

    An app that can take appropriate length waveforms from live input or imported files at adjustable or manually triggered points, then assemble those slices into Serum compatible wavetable format files.

    These files could then be exported for use in compatible wavetable synthesizers. Optionally include a wavetable synth in the app.
