
VS - Visual Synthesizer by Imaginando Lda


Comments

  • @Gravitas said:
    How do we delete presets?

    Use the trash icon to the right of the preset name in the patch manager.

  • @EvilClivE said:

    @Gravitas said:
    How do we delete presets?

    Use the trash icon to the right of the preset name in the patch manager.

    One moment...this is my screenshot.

    I have two defaults and I would like to get rid of one of them.

  • @Gravitas said:
    One moment...this is my screenshot.

    I have two defaults and I would like to get rid of one of them.

    I think that's a bug currently affecting the factory bank. Presets should not be able to be saved to (or deleted from) the Factory bank; they should go into the 'Local' bank, hence the lack of a trash icon on Factory presets.

  • @EvilClivE said:
    I think that's a bug currently affecting the factory bank. Presets should not be able to be saved to (or deleted from) the Factory bank; they should go into the 'Local' bank, hence the lack of a trash icon on Factory presets.

    I thought so, hence my question.
    Thanks for clarifying.

  • @sinosoidal this app is brilliant. You knew exactly what audio-visual enthusiasts needed.

  • Great app!! Really impressed by how smoothly it runs on a previous-gen iPad Pro.
    Being able to import or edit the shaders would be wonderful, and if it supported Vidvox ISF shaders, making it compatible with VDMX, TouchDesigner, etc., it would be just perfect, even more perfect than it already is ;)

  • Is there a way to make the background layer white, and the shader layers black or coloured but still visible on the white background? For example, to get it looking like this:

  • @NimboStratus said:

    @janpieter said:

    @sinosoidal said:

    @janpieter said:
    Anyone know how to MIDI ‘unlearn’ in VS?
    I manage to assign MIDI to functions using MIDI learn, but once parameters are mapped I have no idea how to ‘unassign’ them.

    Enable MIDI mode and double-tap the control. It should clear the mapping.

    Thank you! But I'm not sure what you mean by MIDI mode:

    I was controlling a background layer parameter with MIDI (which works fine!), but there's no MIDI trigger mode there, and simply double-tapping does nothing.
    If I try this with a regular layer (setting the trigger to MIDI and double-tapping a MIDI-learned parameter there), it doesn't seem to work (the MIDI signal keeps triggering the parameter).
    In the MIDI settings menu it says 'no input devices' (I am using VS in AUM, iOS 14.6, Air 2021).

    I hope I am missing something obvious. :)

    Edit: I was. Your advice works fine now. (I didn't realize you meant the MIDI learn mode.) Sorry!

    Furthermore: really liking your app.

    Is the face a background video taken from something else?

    Very cool!

  • Rozeta LFO scrubbing a Pexels vid in VS (full screen) and Pixel Nodes

  • Sorry, what is scrubbing and pixels mode?

  • @gritcorp said:
    Is there a way to make the background layer white, and the shader layers black or coloured but still visible on the white background?

    In addition to setting a white background layer, any material layer can use the 'Plain color' material to render a solid fill colour.

    Then for any shader layers above the white layer, change the blend mode to something like 'subtract'.
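
    A minimal sketch of what that 'subtract' blend does per pixel, assuming normalized RGB in [0, 1] (NumPy here purely for illustration; VS does this on the GPU):

    ```python
    import numpy as np

    def subtract_blend(base: np.ndarray, layer: np.ndarray) -> np.ndarray:
        """Subtractive blend: bright shader content darkens the base layer.

        base, layer: float arrays of shape (H, W, 3) with values in [0, 1].
        On a white base, black layer pixels leave the white untouched, while
        coloured/bright layer pixels remain visible as darker marks.
        """
        return np.clip(base - layer, 0.0, 1.0)
    ```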

  • @EvilClivE said:
    In addition to setting a white background layer, any material layer can use the 'Plain color' material to render a solid fill colour.

    Then for any shader layers above the white layer, change the blend mode to something like 'subtract'.

    Thank you, Clive! ;)

  • @NimboStratus said:
    Sorry, what is scrubbing and pixels mode?

    https://apps.apple.com/us/app/pixel-nodes/id1313351782 (gives you a lot of freedom to build your own effects and control things by MIDI)

    And by scrubbing (hope I'm using the term correctly) I mean that you control the position (timeline) of the video via the MIDI CC sent by Rozeta LFO (the arms going up and down is the video being moved forwards and backwards by the sine-wave CCs).
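
    In other words, the LFO's CC value is simply remapped onto the video timeline. A minimal sketch of that idea (the function and parameter names are hypothetical, not Pixel Nodes' actual API):

    ```python
    def cc_to_playhead(cc_value: int, video_duration_s: float) -> float:
        """Map a MIDI CC value (0-127) onto a position in the video timeline."""
        return (cc_value / 127.0) * video_duration_s

    # A sine-wave LFO sweeping CC 0 -> 127 -> 0 moves the playhead smoothly
    # forwards and then backwards, which produces the "scrubbing" effect.
    ```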

  • So render that, then pull it into VS?

  • edited June 2021

    I was only comparing them. They're doing the same job in the screenshot and they're doing it fine...

    (I am not at all sure, but I think Pixel Nodes is capable of doing some of the things with shaders that people are asking about in this thread.)

  • Very cool app, good job. My iPad Pro 10.5” is choking to death though. This definitely needs a renderer in the standalone version (i.e. import an audio file and render at a consistent frame rate when you're done), and obviously in the AUv3 too if possible.

  • edited June 2021

    Reposted below with a GIF instead of a Twitter link

  • @janpieter Thanks for sharing that Pixel Nodes app. Looks interesting.

  • edited June 2021

    @sinosoidal Found a little bug: layer objects' transform positions and rotations are not kept precisely when saving and reloading a preset. (iOS)

  • I think I'd know how to set it up for live use, even though I'd likely only use it for clip launching, and I doubt I'd set up my TV behind my gear and forsake my PS4. It could have been cool on an iPhone SE, if visual tweaking from a phone were workable at phone size.

    But I should be able to overlay the app's transparent effects on footage of me tweaking gear live, after recording?

    When does the sale run until?

  • @janpieter said:
    https://apps.apple.com/us/app/pixel-nodes/id1313351782 (gives you a lot of freedom to build your own effects and control things by MIDI)

    Tried Pixel Nodes before and was baffled. However, it looks like it can do some interesting things. I click on MIDI In and it shows no MIDI connections when a number are available. Odd? It is loaded as a standalone.

  • _ki
    edited June 2021

    @sinosoidal In order to achieve certain visual effects, I would like to have additional blend modes such as multiply, color, or color-burn. And I am missing a collection of simple shaders like different blurs (for instance, a radial blur with center and width parameters, or a motion blur with direction and size parameters), simple single-color or gradient fades, or color wipes in different directions.

    Some of these requests (like color wipes and color fills) could be covered by the existing materials (like the Simple Shapes shader with a 4-sided ngon) if their size parameter range were larger and they offered an additional aspect-ratio parameter for the height. Such aspect/rotation parameters could also be useful for several of the existing shapes, where they have 'empty/unused' modulation slots.

    If custom shaders were supported, a PatchStorage community would probably quickly emerge, not only for VS shaders but also for full VS presets. Since the shaders are not part of the preset's JSON (it only contains shader UUIDs), each shared VS preset would need to state which shaders have to be installed (see the sketch at the end of this comment). Currently you don't have this problem with the fixed set of shaders, but you do for user-imported background videos used in shared presets.

    These comments are absolutely not meant as complaining! I have a lot of fun experimenting and playing around with the VS shaders and feeding MIDI and audio into the AUv3 to then affect the visuals :)
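
    To make the dependency problem concrete: a shared preset bundle might need a small manifest along these lines (a purely hypothetical structure; all this thread establishes is that the preset JSON references shaders by UUID):

    ```python
    import json

    # Hypothetical manifest for a shared VS preset. The preset itself only
    # stores shader UUIDs, so a sharing site would also have to list which
    # shaders (and which user media) must be installed for it to load.
    manifest = {
        "preset": "MyPreset.json",
        "required_shaders": [
            {"uuid": "0b9d4e6a-...", "name": "Simple Shapes"},   # made-up UUIDs
            {"uuid": "7f3c21d0-...", "name": "Plain color"},
        ],
        "user_media": ["background_video.mp4"],
    }
    print(json.dumps(manifest, indent=2))
    ```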

  • @_ki said:
    @sinosoidal In order to achieve certain visual effects, I would like to have additional blend modes such as multiply, color, or color-burn.

    We are listening! :blush:

  • I'm really unsettled by some statements about performance, the achievable FPS, and lowering the quality of some shaders until they get blurry and pixelated: “If it is below 30 FPS, try to change the quality settings to a lower setting until the FPS is met.“ Hmm, not exactly something that would be desirable.

    I don't get it: there is a YT video in which the M1 Mac Mini plays 24 virtual instruments in Logic Pro X with 960 (!) plugins in 96 audio channels at 128 samples and doesn't work up a sweat in the slightest! LumaFusion simultaneously plays 4 videos in a 4K timeline without effort, and even an 8K file from a Canon R5, which the highest-specced notebooks and desktops struggle with, plays absolutely smoothly in LF without any dropped frames. The M1 in an iPad Pro can decode 8K 25 fps (= 166 MB/sec) in real time but struggles with a comparatively less complex shader computation task?

    Sinosoidal stated “Even the M1, has not the performance I was expecting.” ☺️ Compared to the above examples, that's hard to believe. Maybe I'm pushing too far ahead and might get a shitstorm for it, but I rather think Imaginando needs to give the VS code a proper rework at this point. Hopefully this issue will be resolved soon with a render-to-file update, although that doesn't take the need for a reasonable 30 fps preview in AUM, for an adequate assessment of the visuals, off the table.

    I understand that I am comparing two probably fundamentally different kinds of computation. But the sheer and extremely drastic difference between the examples above and the VS shader rendering tells me that the M1's performance doesn't seem to be the problem. Either way, I love you guys at Imaginando, you're the best! 🤗

  • _ki
    edited June 2021

    @Polyphonix For some tasks (like video encoding and decoding) there is specialized hardware inside the graphics cores; they are not computed on the CPU. The shaders of VS depend on the speed of a different part of the graphics core, the one able to run user-defined shader code. It depends on how much effort the Apple devs have put into that part of the graphics chip. As a hardware dev, you can build a super-fast graphics chip rendering tons of shaded, textured triangles and still have poor, slow GLSL shader support.

    But we are lucky that many of the newer game engines use shaders, and games are a big market for the Apple platform. But I don't know whether these games use Metal shaders (which are an Apple development and probably have direct hardware support) or the more common, device-independent GLSL ES (OpenGL Shading Language for Embedded Systems) that VS uses.

  • @Polyphonix said:
    I'm really unsettled by some statements about performance, the achievable FPS, and lowering the quality of some shaders until they get blurry and pixelated.

    Shader computation is done on the GPU, not the CPU. So you can exclude the comparison with the M1's ability to have 960 plugins loaded, because those are all computed on the CPU.

    CPU-wise, the M1 is really fast. I have a laptop with an Intel i5 quad-core and it takes 2:33 minutes to compile all the VS code. It takes only 1:33 minutes to compile the very same code on the M1, roughly 1.65× quicker (153 s vs 93 s)!

    One of the M1's biggest advantages is having dedicated subsystems for certain tasks. Hardware video decoding is definitely one of them. All programs developed specifically for Apple platforms using Apple's APIs take advantage of these subsystems. That's why you see a lot of improvement in video handling. So you can exclude this direct comparison as well.

    For shaders to get the most out of the M1 GPU, they would need to be highly optimized and written in Metal. Since we want a system that is cross-platform, we are using GLSL shaders. Therefore, this will never get the most out of the system.

    Try running VS on a computer with an NVIDIA card and you won't have any performance issues at 60 fps... Can we squeeze more out of it? Maybe we can, with time! Right now it is important to choose the shaders carefully, and whether they should use polyphony or not.
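
    The advice quoted earlier ("change the quality settings to a lower setting until the FPS is met") amounts to a simple step-down loop. A minimal sketch of that idea (the setting names and FPS probe are hypothetical, not VS's actual API):

    ```python
    QUALITY_LEVELS = ["high", "medium", "low"]  # hypothetical setting names
    TARGET_FPS = 30.0

    def pick_quality(measure_fps) -> str:
        """Step the render quality down until measured FPS meets the target.

        measure_fps: callable taking a quality name and returning the FPS
        observed while rendering the current patch at that quality.
        """
        for quality in QUALITY_LEVELS:
            if measure_fps(quality) >= TARGET_FPS:
                return quality
        # Even 'low' misses 30 fps: the patch itself needs simplifying
        # (fewer layers, cheaper shaders, polyphony off).
        return QUALITY_LEVELS[-1]
    ```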


  • @sinosoidal Is this a known bug? Layer transformations are not properly saved; every time I reload the same preset, the positions and rotations are off. I am using the Imaginando Simple Shapes shader on each layer, and there is no modulation applied to any. (Looks like a GL transform bug to me; the center point of the transform is not saved properly?)

  • @sinosoidal said:
    Shader computation is done on the GPU, not the CPU.

    Thanks @_ki and @sinosoidal for the extensive and clear explanations. A native Metal shader would be really cool, but it makes sense from an economic point of view to stay compatible with the Windows version as far as possible. In addition, it preserves the option of integrating the many freely available OpenGL shaders, or your own. I personally don't put the highest value on real-time 60 fps optimization as long as there is the possibility of rendering to file. I'm happy to wait for the optimal result, no problem.

  • @Toastedghost said:
    Tried Pixel Nodes before and was baffled. However, it looks like it can do some interesting things. I click on MIDI In and it shows no MIDI connections when a number are available. Odd? It is loaded as a standalone.

    MIDI can be a bit tricky. The dev told me he'll try to expand the MIDI mode in the next version of Pixel Nodes. I have the impression that working with Audiobus' virtual MIDI bridge has made it more usable for me.
