
VS - Visual Synthesizer by Imaginando Lda


Comments

  • @sinosoidal thank you for filling a space I had on iOS and something I've wanted for a long time. Loving it

  • @sinosoidal said:

    @lukesleepwalker said:

    @sinosoidal said:

    @lukesleepwalker said:

    @Identor said:
    I have watched the tutorial from @Gavinski, and although it's very cool to manipulate the visuals through LFO, MIDI and Audio, and to have 8 layers + Background, I wonder if I could add content, like short video clips. What I hoped to see is more "traditional" video manipulation, like effect layers à la Photoshop (posterise, negative, outline, etc.), plus mirroring, kaleido effects and so on. Or even video scrubbing.
    As of yet, this is not my cup of tea, due to the fact that you are bound to a set of shaders, with no room for your own content.
    This is an excellent program for people who want a visual for their music, but less so for people who want to be more creative and expressive.
    All in all, this is an excellent piece of software, and I do like the Trigger options, but less so the visuals.

    I watched the tutorial and read the manual and I stopped at the fact that I have to pick from the pre-loaded content in the Materials section. Even if I could load my own photos to a layer and then manipulate the image with MIDI or audio-based LFOs then I'd buy instantly. If I could load short video clips or, ideally, GIFs from something like giphy (the Glitch Clip app is SO GREAT but is not AUv3) or my own collection of GIFs, then I'd be over the moon. I may still buy this just to support Imaginando with hopes that they expand the feature set in this direction--they have certainly proven with LK that they develop apps for the long haul.

    For now you can load videos and images into the background layer. Most of the factory presets have videos in the background that bring a lot of texture and personality to the presets.

    We might be able to expand the ability to load images into layers. But a video in each layer is a CPU and memory smasher.

    Animated GIF support is definitely possible. I will open an issue right away.

    This is why I love the way you work. THANK YOU for listening to your customers!

    This is the way! :blush:

    As a product manager in my professional life it makes me so happy to see human-centered design fueling great products in my hobbies!

  • @sinosoidal said:

    Can you send me a screenshot? I have never seen such a white bar, and we have the latest iPad Pro 2021 11" (M1).

    The same thing is happening with a 12.9-inch 2018 iPad Pro.

  • edited June 2021

    @muzka said:
    @sinosoidal thank you for filling a space I had on iOS and something I've wanted for a long time. Loving it

    Same here -

    Just one question …

    Is there a way to record the video? Or to export it?

  • @DavidEnglish said:

    @sinosoidal said:

    Can you send me a screenshot? I have never seen such a white bar, and we have the latest iPad Pro 2021 11" (M1).

    The same thing is happening with a 12.9-inch 2018 iPad Pro.

    We need to investigate. I think we are not able to reproduce this on any of our devices, but we will check. The fact that we can see the handle to push the dock leads me to think that this is just a matter of setting a property. Not sure about it. We will see what we can do.

  • @Bon_Tempi said:

    @muzka said:
    @sinosoidal thank you for filling a space I had on iOS and something I've wanted for a long time. Loving it

    Same here -

    Just one question …

    Is there a way to record the video? Or to export it?

    Right now you need to rely on iOS screen capture. We don't have that feature yet.

  • @sinosoidal said:

    @Bon_Tempi said:

    @muzka said:
    @sinosoidal thank you for filling a space I had on iOS and something I've wanted for a long time. Loving it

    Same here -

    Just one question …

    Is there a way to record the video? Or to export it?

    Right now you need to rely on iOS screen capture. We don't have that feature yet.

    But is it planned for a future update?

  • @sinosoidal

    The screenshots have piqued my interest.

    You've mentioned being able to use video files?

    How long can these files be?
    Can we set markers on the video clips so that they can be triggered using MIDI?
    What's the file length limit?
    Are the files imported into VS or are they streamed?

  • @Gravitas said:
    @sinosoidal

    The screenshots have piqued my interest.

    You’ve mentioned being able to use video files?

    Correct! In the background layer.

    How long can these files be?
    What’s the file length limit?

    We are not quite sure yet. We have only tested with clips shorter than 2 minutes. We should have more info soon.

    @Gavinski told us he was having problems with a 300 MB file. We still need to investigate whether this is an isolated issue or a hard limit.

    Can we set markers on the video clips so that they can be triggered using MIDI?

    No, but I can already see the utility of doing so. You could set notes to trigger the video at a given point. I think that would be awesome. Anything else you have in mind?

    Are the files imported into VS or are they streamed?

    You reference the files from the hard drive. But theoretically it could also work from a remote endpoint. We are not supporting that at the moment, though.
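
    As a purely hypothetical illustration of the marker idea discussed in this exchange (VS has no such feature or API today; VideoMarker and MarkerMap below are invented names), mapping MIDI note numbers to positions in a clip could look roughly like this in Swift:

        import Foundation

        // Hypothetical sketch only - VS exposes no such API today.
        // Each MIDI note number maps to a position in the background clip.
        struct VideoMarker {
            let note: UInt8          // MIDI note that triggers the jump
            let time: TimeInterval   // position in the clip, in seconds
        }

        struct MarkerMap {
            private let markers: [UInt8: TimeInterval]

            init(_ markers: [VideoMarker]) {
                self.markers = Dictionary(uniqueKeysWithValues: markers.map { ($0.note, $0.time) })
            }

            // Returns the seek position for an incoming note-on, if a marker exists.
            func seekTime(forNote note: UInt8) -> TimeInterval? {
                markers[note]
            }
        }

        // Example: C3 (note 48) jumps to the clip start, D3 (note 50) to the 30-second mark.
        let map = MarkerMap([
            VideoMarker(note: 48, time: 0),
            VideoMarker(note: 50, time: 30),
        ])

        if let position = map.seekTime(forNote: 50) {
            print("Seek the background video to \(position) s")   // 30.0 s
        }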

  • @sinosoidal said:

    We need to investigate. I think we are not able to reproduce this on any of our devices, but we will check. The fact that we can see the handle to push the dock leads me to think that this is just a matter of setting a property. Not sure about it. We will see what we can do.

    Thanks

  • @sinosoidal said:

    @Gravitas said:
    @sinosoidal

    The screenshots have piqued my interest.

    You’ve mentioned being able to use video files?

    Correct! In the background layer.

    How long can these files be?
    What’s the file length limit?

    We are not quite sure yet. We have only tested with clips shorter than 2 minutes. We should have more info soon.

    Cool.

    @Gavinski told us he was having problems with a 300 MB file. We still need to investigate whether this is an isolated issue or a hard limit.

    Can we set markers on the video clips so that they can be triggered using MIDI?

    No, but I can already see the utility of doing so. You could set notes to trigger the video at a given point. I think that would be awesome. Anything else you have in mind?

    Possibly video blending per marker could be quite useful. There are quite a few things that can be done with layers reacting to each other, i.e. difference, alpha channel, screen overlay, etc., standard in any video editor. Automated transitions, for instance fades, layered swipes, etc. Being able to draw in the curves would be .........
    Obviously this would be heavy CPU stuff when done live.

    Are the files imported into VS or are they streamed?

    You reference the files from the hard drive. But theoretically it could also work from a remote endpoint. We are not supporting that at the moment, though.

    Okay cool, good to know.

    I've just seen a video of VS in another thread and I'm very impressed.

    My rig consists of two iPads, so one could easily be dedicated to visuals, something I've been waiting to do. I had tried STAELLA but it doesn't have support for external audio interfaces with multiple inputs.

    Looks like you're going to have another happy customer by tomorrow.

    Thank you.

  • This is gonna be so much fun.

  • @sinosoidal said:

    @Jumpercollins said:
    @sinosoidal Is there any way of saving content out into LumaFusion with this version, or is that coming in an update?

    Do you mean render to a video file? Not yet. But there will be. It is a matter of time.

    @sinosoidal Yes, render to video file. That's great news that it's coming in time. Thanks for being top developers.

  • First off, I have to say that this is totally cool. I love the idea of visualizing synthesis.

    That said, if I am trying to create trippy visuals to go with music, I think there are better options out there.

    Not trying to take away from this app, it’s really fantastic, but the visuals seem limited. Faithful, but limited.

    I really like the visuals and all the options you have for creative input, but without a really powerful iPad (M1) there's not much you can do. I have just a Beathawk piano and Klevgrand Pipa playing the same notes, via LK, and 5 layers of visuals in VS, and the framerate is 10 fps = choppy. I'm on an Air 3, which I think should be able to handle that at 20-30 fps at least. So I guess it will have to wait until I save up for an M1 :D Also curious about plans to be able to render to video, if there are any?
    Cheers!

  • I really, really hope there's a plan to render video. I know they have said it's planned, but I hope they follow through.

  • @xor said:
    This is gonna be so much fun.

    Nice one! :wink:

  • @Pxlhg said:
    I really like the visuals and all the options you have for creative input, but without a really powerful iPad (M1) there's not much you can do. I have just a Beathawk piano and Klevgrand Pipa playing the same notes, via LK, and 5 layers of visuals in VS, and the framerate is 10 fps = choppy. I'm on an Air 3, which I think should be able to handle that at 20-30 fps at least. So I guess it will have to wait until I save up for an M1 :D Also curious about plans to be able to render to video, if there are any?
    Cheers!

    Even the M1 doesn't have the performance I was expecting. But it is much faster.

    Try lowering the quality setting (Menu -> Settings) until you meet the FPS. By default it starts on medium. You might need to lower it to low.

    This will always depend on the polyphony being played and, mostly, on the material being used. Some materials are heavier than others.
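
    As a generic illustration of the advice above, and not of VS's internals (the names and numbers below are made up), "lower the quality setting until you meet the FPS" is essentially a step-down search over the quality tiers:

        // Generic sketch of "lower the quality setting until you meet the FPS".
        // Nothing here mirrors VS's actual implementation.
        enum RenderQuality: Int, CaseIterable {
            case low, medium, high
        }

        // measuredFPS is a hypothetical callback that benchmarks one quality tier.
        func chooseQuality(targetFPS: Double, measuredFPS: (RenderQuality) -> Double) -> RenderQuality {
            // Walk from high to low and keep the first tier that holds the target.
            for quality in RenderQuality.allCases.reversed() where measuredFPS(quality) >= targetFPS {
                return quality
            }
            return .low   // nothing meets the target; settle for the cheapest tier
        }

        // Made-up measurements for a mid-range iPad running several busy layers:
        let benchmark: [RenderQuality: Double] = [.high: 10, .medium: 18, .low: 31]
        let pick = chooseQuality(targetFPS: 30) { benchmark[$0] ?? 0 }
        print(pick)   // low - matching the "lower it to low" advice above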

  • @shinyisshiny said:
    I really, really hope there's a plan to render video. I know they have said it's planned, but I hope they follow through.

    It's planned. It is a matter of time.

  • Brilliant, thank you very much for such a cool tool !!!

    Greetings

  • edited June 2021

    @sinosoidal said:

    @lukesleepwalker said:

    @Identor said:
    I have watched the tutorial from @Gavinski, and although it's very cool to manipulate the visuals through LFO, MIDI and Audio, and to have 8 layers + Background, I wonder if I could add content, like short video clips. What I hoped to see is more "traditional" video manipulation, like effect layers à la Photoshop (posterise, negative, outline, etc.), plus mirroring, kaleido effects and so on. Or even video scrubbing.
    As of yet, this is not my cup of tea, due to the fact that you are bound to a set of shaders, with no room for your own content.
    This is an excellent program for people who want a visual for their music, but less so for people who want to be more creative and expressive.
    All in all, this is an excellent piece of software, and I do like the Trigger options, but less so the visuals.

    I watched the tutorial and read the manual and I stopped at the fact that I have to pick from the pre-loaded content in the Materials section. Even if I could load my own photos to a layer and then manipulate the image with MIDI or audio-based LFOs then I'd buy instantly. If I could load short video clips or, ideally, GIFs from something like giphy (the Glitch Clip app is SO GREAT but is not AUv3) or my own collection of GIFs, then I'd be over the moon. I may still buy this just to support Imaginando with hopes that they expand the feature set in this direction--they have certainly proven with LK that they develop apps for the long haul.

    For now you can load videos and images into the background layer. Most of the factory presets have videos in the background that bring a lot of texture and personality to the presets.

    We might be able to expand the ability to load images into layers. But a video in each layer is a CPU and memory smasher.

    Animated GIF support is definitely possible. I will open an issue right away.

    This is really interesting 👏
    Can you manipulate existing videos or photos via MIDI? I think that's what @Identor and @lukesleepwalker are mentioning. I'd be a lot more interested in changing levels, colorizing, blurring and so on with my own pictures or videos. It'd render more original material.

  • Lotta potential here - I have been wanting a really good visualizer thing - I am getting a lot of crashes though...

  • edited June 2021

    So am I supposed to run this as an audio effect after a synth and then also make sure it's receiving MIDI from my keyboard? It seems cool and I love Imaginando.

  • @pantsofdeath said:
    Lotta potential here - I have been wanting a really good visualizer thing - I am getting a lot of crashes though...

    Are you using AUM? Can you send us the AUM session that replicates the crashes?

  • @oat_phipps said:
    So am I supposed to run this as an audio effect after a synth and then also make sure it's receiving MIDI from my keyboard? It seems cool and I love Imaginando.

    Oat, it can receive MIDI from anything. But yeah, if you're playing the synth with your keyboard, also pipe the keyboard MIDI into the hamburger menu on the left of the VS icon.

    @pantsofdeath yes, also getting occasional problems with crashes in AUM. Sometimes quitting AUM and retrying works. There's definitely some kind of bug that needs squashing.

  • @sinosoidal

    You remember I mentioned the whole screen overlay thingies etc, etc, etc???

    Yeah, well...
    I’ve just had a little look.

    It’s (playing it cool) awesome. 😁

    I do have a question though.

    Is it supposed to have a MIDI input in AUM and Drambo?
    When I first instantiated it I couldn't play it using my LP X.
    I closed it and ran it again and it started reacting to MIDI.
    Which was fun.

    I couldn’t have asked for more.

    It’s wow. 🙌🏾

  • @sinosoidal

    multi-input would be useful to enable track-specific audio modulations (and triggers).

    The documentation mentions multiple visual voices enabling up to four materials per layer. There’s nothing in the documentation that describes how to add additional materials to a layer though and I haven’t been able to empirically derive it either.

    Voices - number of simultaneous polyphonic visual voices. Ex: With 4 polyphonic voices, each layer can display 4 simultaneous materials at once

    What is the up/down arrow at the bottom-left of the materials browser supposed to do?

    What are the up/down buttons at the top-right supposed to do? I just caused VS to hang by triple-or-quadruple tapping on the down button. (Not reproducible)

    Ah, those buttons aren’t part of the material browser, I really couldn’t tell.

    Oh, I’m on an iPad on 14.6 in AUM.

  • edited June 2021

    @xor said:
    @sinosoidal

    multi-input would be useful to enable track-specific audio modulations (and triggers).

    The documentation mentions multiple visual voices enabling up to four materials per layer. There’s nothing in the documentation that describes how to add additional materials to a layer though and I haven’t been able to empirically derive it either.

    Voices - number of simultaneous polyphonic visual voices. Ex: With 4 polyphonic voices, each layer can display 4 simultaneous materials at once

    What is the up/down arrow at the bottom-left of the materials browser supposed to do?

    What are the up/down buttons at the top-right supposed to do? I just caused VS to hang by triple-or-quadruple tapping on the down button. (Not reproducible)

    Ah, those buttons aren’t part of the material browser, I really couldn’t tell.

    Oh, I’m on an iPad on 14.6 in AUM.

    Start with the default patch. Connect your keyboard input to VS. Set layer TRIGGER MODE to MIDI. Ensure that TRIGGER channel is set to the same channel as the keyboard input. Press more than one key at the same time. You will observe polyphony.
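
    To make the channel-matching step above concrete: in raw MIDI, the channel lives in the low nibble of the note-on status byte, so the channel your keyboard sends on must equal the layer's TRIGGER channel. A small hypothetical helper (not part of VS or AUM) for inspecting what a controller actually sends:

        import Foundation

        // Hypothetical helper: builds a raw MIDI note-on message so you can see
        // which channel a controller is sending on.
        func noteOn(channel: UInt8, note: UInt8, velocity: UInt8 = 100) -> [UInt8] {
            // Status 0x90 means "note on"; the low nibble carries the channel
            // (0-15 on the wire, usually shown as 1-16 in apps).
            [0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F]
        }

        // Holding a C major triad sends three simultaneous note-ons on one channel,
        // which is what drives the polyphonic visual voices described above.
        let chord = [60, 64, 67].map { noteOn(channel: 0, note: UInt8($0)) }
        for message in chord {
            print(message.map { String(format: "0x%02X", $0) }.joined(separator: " "))
        }
        // 0x90 0x3C 0x64
        // 0x90 0x40 0x64
        // 0x90 0x43 0x64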

  • @sinosoidal said:

    @xor said:
    @sinosoidal

    multi-input would be useful to enable track-specific audio modulations (and triggers).

    The documentation mentions multiple visual voices enabling up to four materials per layer. There’s nothing in the documentation that describes how to add additional materials to a layer though and I haven’t been able to empirically derive it either.

    Voices - number of simultaneous polyphonic visual voices. Ex: With 4 polyphonic voices, each layer can display 4 simultaneous materials at once

    What is the up/down arrow at the bottom-left of the materials browser supposed to do?

    What are the up/down buttons at the top-right supposed to do? I just caused VS to hang by triple-or-quadruple tapping on the down button. (Not reproducible)

    Ah, those buttons aren’t part of the material browser, I really couldn’t tell.

    Oh, I’m on an iPad on 14.6 in AUM.

    Start with the default patch. Connect your keyboard input to VS. Set layer TRIGGER MODE to MIDI. Ensure that TRIGGER channel is set to the same channel as the keyboard input. Press more than one key at the same time. You will observe polyphony.

    Oh, I knew that. I thought it meant I could add multiple materials to the same layer, not multiple instances of the same material. D’oh.

  • @xor said:

    @sinosoidal said:

    @xor said:
    @sinosoidal

    multi-input would be useful to enable track-specific audio modulations (and triggers).

    The documentation mentions multiple visual voices enabling up to four materials per layer. There’s nothing in the documentation that describes how to add additional materials to a layer though and I haven’t been able to empirically derive it either.

    Voices - number of simultaneous polyphonic visual voices. Ex: With 4 polyphonic voices, each layer can display 4 simultaneous materials at once

    What is the up/down arrow at the bottom-left of the materials browser supposed to do?

    What are the up/down buttons at the top-right supposed to do? I just caused VS to hang by triple-or-quadruple tapping on the down button. (Not reproducible)

    Ah, those buttons aren’t part of the material browser, I really couldn’t tell.

    Oh, I’m on an iPad on 14.6 in AUM.

    Start with the default patch. Connect your keyboard input to VS. Set layer TRIGGER MODE to MIDI. Ensure that TRIGGER channel is set to the same channel as the keyboard input. Press more than one key at the same time. You will observe polyphony.

    Oh, I knew that. I thought it meant I could add multiple materials to the same layer, not multiple instances of the same material. D’oh.

    If you want multiple materials, set a new material on another layer! :blush:
