Audio Evolution Mobile DAW

Comments

  • edited February 2023

    @dwrae said:

    Damn man thanks for your hard work.

    If you could add an option to use it like the FX section, which allows an endless FX grid instead of 4 inserts, I would gladly pay for it, $5-10 easily.
    And as it caters to specific users, I think it would be worth it.
    I know I would; such a low price for something like this shouldn’t be a problem, especially when AEM already starts at such a low price point.

    P.S. Let me explain why it might be beneficial.
    We could load the Atom 2 MIDI sequencer, with each instance mapped to a different clip number for every channel.
    This might allow an Ableton Live-style workflow in AEM without you needing to build it, and it would be possible to record from the clips straight to the timeline.

  • @dwrae said:

    What?.. A developer of the year! Thank you!

  • Audio Evolution Mobile Studio v5.6.6 for iOS is rolling out. Unfortunately, the release notes that are displayed in the App Store were not updated, but they are displayed in the release notes dialog in the app. Here they are:

    • Added 2 more AU MIDI effect slots.
    • Solved an issue with pasting from the iOS clipboard introduced in the last version.
    • Reduced pops when restarting playback when AUv3 effects or instruments are used.
    • Improved timing of AU MIDI effects.
    • Added an option 'Convert to MIDI track' to the pop-up menu of instrument tracks.
    • When connecting a USB MIDI device, the output of a (pure) MIDI track is now set to the new device automatically.
  • @dwrae said:
    Audio Evolution Mobile Studio v5.6.6 for iOS is rolling out. Unfortunately, the release notes that are displayed in the App Store were not updated, but they are displayed in the release notes dialog in the app. Here they are:

    • Added 2 more AU MIDI effect slots.
    • Solved an issue with pasting from the iOS clipboard introduced in the last version.
    • Reduced pops when restarting playback when AUv3 effects or instruments are used.
    • Improved timing of AU MIDI effects.
    • Added an option 'Convert to MIDI track' to the pop-up menu of instrument tracks.
    • When connecting a USB MIDI device, the output of a (pure) MIDI track is now set to the new device automatically.

    BIG NEWS!
    Great update as always.

  • It's hard to use midi piano roll because it freezes and my eyes start to hurt. :'(

  • @Artem said:
    It's hard to use midi piano roll because it freezes and my eyes start to hurt. :'(

    Can you please explain what you mean by freezes? Perhaps you can tell the sequence of steps to reproduce it?

  • @dwrae said:

    @Artem said:
    It's hard to use midi piano roll because it freezes and my eyes start to hurt. :'(

    Can you please explain what you mean by freezes? Perhaps you can tell the sequence of steps to reproduce it?

    Apparently I said it wrong, I meant low FPS, for example, compared to Cubasis.

  • @Artem said:

    @dwrae said:

    @Artem said:
    It's hard to use midi piano roll because it freezes and my eyes start to hurt. :'(

    Can you please explain what you mean by freezes? Perhaps you can tell the sequence of steps to reproduce it?

    Apparently I said it wrong, I meant low FPS, for example, compared to Cubasis.

    True, the piano roll uses CoreGraphics which is slow on iOS unfortunately. This cannot be easily fixed but is planned in 6 months or so.

  • @dwrae said:

    @Artem said:

    @dwrae said:

    @Artem said:
    It's hard to use midi piano roll because it freezes and my eyes start to hurt. :'(

    Can you please explain what you mean by freezes? Perhaps you can tell the sequence of steps to reproduce it?

    Apparently I said it wrong, I meant low FPS, for example, compared to Cubasis.

    True, the piano roll uses CoreGraphics which is slow on iOS unfortunately. This cannot be easily fixed but is planned in 6 months or so.

    Thanks :)

  • @dwrae said:

    @Artem said:

    @dwrae said:

    @Artem said:
    It's hard to use midi piano roll because it freezes and my eyes start to hurt. :'(

    Can you please explain what you mean by freezes? Perhaps you can tell the sequence of steps to reproduce it?

    Apparently I said it wrong, I meant low FPS, for example, compared to Cubasis.

    True, the piano roll uses CoreGraphics which is slow on iOS unfortunately. This cannot be easily fixed but is planned in 6 months or so.

    This will make a lot of difference in how professional the app "feels". It's great to see AEM evolve so consistently.

  • edited February 2023

    I wonder if there is a hidden option somewhere to click once and select all MIDI. It would be really time-saving. P.S. After the last update, I’m not going to miss Auria Pro. :) Thanks once again!

  • Just a small update this time, v5.6.7:

    • AU MIDI was not displayed when using a pure MIDI track on phones. Solved.
    • Tapping outside the Settings dialog will now store the changes as well.
  • @dwrae said:
    Just a small update this time, v5.6.7:

    • AU MIDI was not displayed when using a pure MIDI track on phones. Solved.
    • Tapping outside the Settings dialog will now store the changes as well.

    Hello! Are you planning to add multicore processing? I think it should be a priority. 🤔

  • @Artem said:

    @dwrae said:
    Just a small update this time, v5.6.7:

    • AU MIDI was not displayed when using a pure MIDI track on phones. Solved.
    • Tapping outside the Settings dialog will now store the changes as well.

    Hello! Are you planning to add multicore processing? I think it should be a priority. 🤔

    Yes.

  • @dwrae said:

    @Artem said:

    @dwrae said:
    Just a small update this time, v5.6.7:

    • AU MIDI was not displayed when using a pure MIDI track on phones. Solved.
    • Tapping outside the Settings dialog will now store the changes as well.

    Hello! Are you planning to add multicore processing? I think it should be a priority. 🤔

    Yes.

    Could you please let me know when approximately you plan to do this?

  • @Artem said:

    @dwrae said:

    @Artem said:

    @dwrae said:
    Just a small update this time, v5.6.7:

    • AU MIDI was not displayed when using a pure MIDI track on phones. Solved.
    • Tapping outside the Settings dialog will now store the changes as well.

    Hello! Are you planning to add multicore processing? I think it should be a priority. 🤔

    Yes.

    Could you please let me know when approximately you plan to do this?

    No :)

  • @dwrae said:

    @Artem said:

    @dwrae said:

    @Artem said:

    @dwrae said:
    Just a small update this time, v5.6.7:

    • AU MIDI was not displayed when using a pure MIDI track on phones. Solved.
    • Tapping outside the Settings dialog will now store the changes as well.

    Hello! Are you planning to add multicore processing? I think it should be a priority. 🤔

    Yes.

    Could you please let me know when approximately you plan to do this?

    No :)

    Good answer… if you took a guess I’d feel sorry for you. No one gets these guesses right, and it just creates frustrated users who take your guess as something they should plan on. Good luck opening the multi-processing box of snakes.

  • @McD said:

    Could you please let me know when approximately you plan to do this?

    No :)

    Good answer… if you took a guess I’d feel sorry for you. No one gets these guesses right and it just creates frustrated
    Users that took your guess as something they should plan on. Good luck opening the multi-processing box of snakes.

    Yes, indeed. It's not really promising to read that in Cubasis you need to increase the number of buffers to avoid glitches, by a factor of 8 even on Android. So there goes your latency. It's a fun project nevertheless, though. The audio system inside AEMS is prepared for it, so it shouldn't be that hard, but since only one thread can run in real time and the threads for the other cores don't, it's a recipe for trouble!
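
To make the "only one real-time thread" point above concrete: the system schedules the Core Audio render callback on a time-constraint ("real-time") thread, but any worker threads a DAW spawns for the other cores start out as ordinary threads unless they are promoted. Below is a minimal sketch of such a promotion on iOS/macOS; the helper name promote_to_realtime and the specific timing numbers are illustrative assumptions, not AEM's actual code.

```cpp
// Sketch only: promoting a DSP worker thread to Mach time-constraint
// ("real-time") scheduling on iOS/macOS. Names and values are illustrative.
#include <mach/mach.h>
#include <mach/mach_time.h>
#include <mach/thread_policy.h>
#include <pthread.h>
#include <cstdint>

// Ask the scheduler for real-time behaviour: the thread runs every
// `period_ns` nanoseconds, needs roughly `computation_ns` of CPU,
// and must be finished within `constraint_ns`.
static bool promote_to_realtime(uint64_t period_ns,
                                uint64_t computation_ns,
                                uint64_t constraint_ns) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);                        // convert ns -> Mach absolute time units
    auto to_abs = [&](uint64_t ns) {
        return static_cast<uint32_t>(ns * tb.denom / tb.numer);
    };

    thread_time_constraint_policy_data_t policy;
    policy.period      = to_abs(period_ns);
    policy.computation = to_abs(computation_ns);
    policy.constraint  = to_abs(constraint_ns);
    policy.preemptible = TRUE;

    kern_return_t kr = thread_policy_set(
        pthread_mach_thread_np(pthread_self()),     // this worker thread
        THREAD_TIME_CONSTRAINT_POLICY,
        reinterpret_cast<thread_policy_t>(&policy),
        THREAD_TIME_CONSTRAINT_POLICY_COUNT);
    return kr == KERN_SUCCESS;
}
```

For a 512-frame buffer at 48 kHz the period would be roughly 10.7 ms. Even when promoted, the extra workers still have to be fed without locks or priority inversions, which is where the trouble the posters describe tends to start.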

  • @dwrae said:

    @McD said:

    Could you please let me know when approximately you plan to do this?

    No :)

    Good answer… if you took a guess I’d feel sorry for you. No one gets these guesses right and it just creates frustrated
    Users that took your guess as something they should plan on. Good luck opening the multi-processing box of snakes.

    Yes, indeed. It's not really promising to read that in Cubasis, you need to increase the number of buffers to not get it to glitch, with a factor of 8 even on Android. So there goes your latency. It's a fun project nevertheless though. The audio system inside AEMS is prepared for it, so it shouldn't be that hard, but since only one thread can run in real-time and the other threads for other cores don't, it's a recipe for trouble!

    Some problems lend themselves to multi-tasking… realtime audio is NOT one of those problems. Rendering MIDI tracks to frozen audio tracks might be.

    I admire the history of your DAW and the features you continue to implement across so many platform targets. I hope your iOS work has justified the effort, since you face little competition on Android, I think.

  • Agreed. No competition from where I stand (meaning that without AEM on Android, I wouldn't be able to DIY-master my tracks).

  • A first tutorial video on using Vocal Tune Studio inside Audio Evolution Mobile Studio for Android and iOS is now available:

    Another tutorial will follow explaining how to use time correction.

  • edited March 2023

    @Artem said:
    Hello! Are you planning to add multicore processing? I think it should be a priority. 🤔

    I want to check the experiences you guys have with multi-core processing. I have an implementation now and I'm running a test case. The test involves 4 audio tracks, each with 10 or so heavy Toneboosters effects. I verified that maximum parallelization is done, meaning that up to the bus mixer the effects are processed in parallel and do not wait unnecessarily. Furthermore, the threads are real-time threads and have joined the audio workgroup.

    On my iPad Pro with an M2 processor (4 performance cores and 4 efficiency cores), I get a 93% speed-up when using 2 threads. Using more threads results in worse performance. In Cubasis, I also see around 2x better performance. There is no way to set thread affinity as on Android, so I don't get to choose which core each thread runs on. On an old 2-core iPad Pro 1st gen, the performance improvement is similar (with 2 threads, of course). The difference from Cubasis is that no additional buffers are needed, so latency remains the same.

    Is there anyone with any multi-processing enabled DAW that can achieve better than 2x performance compared to serial/single-core processing?
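
For readers curious what "processed in parallel up to the bus mixer" looks like structurally, here is a minimal fan-out/join sketch: once per audio buffer, the render callback hands per-track effect-chain jobs to a small worker pool and waits for all of them before the bus mixer sums the results. It deliberately uses a plain mutex/condition-variable barrier for clarity; a real-time engine would replace that with lock-free or semaphore-based signalling, and its workers would also typically join the output device's OS workgroup (os_workgroup_join from <os/workgroup.h>, iOS 14+) so they share the render deadline, which is omitted here. The names TrackJob and ParallelRenderer are hypothetical, not AEM's API.

```cpp
// Sketch only: process each track's effect chain in parallel, then join
// before the bus mixer sums the results. Not real-time safe as written.
#include <condition_variable>
#include <cstddef>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

struct TrackJob { std::function<void()> process; };  // one track's FX chain

class ParallelRenderer {
public:
    explicit ParallelRenderer(unsigned workers) {
        for (unsigned i = 0; i < workers; ++i)
            threads_.emplace_back([this] { worker(); });
    }
    ~ParallelRenderer() {
        { std::lock_guard<std::mutex> l(m_); quit_ = true; }
        cv_.notify_all();
        for (auto& t : threads_) t.join();
    }

    // Called once per audio buffer, before the bus mixer runs.
    void render(std::vector<TrackJob>& jobs) {
        {
            std::lock_guard<std::mutex> l(m_);
            queue_ = &jobs;
            next_ = 0;
            pending_ = jobs.size();
        }
        cv_.notify_all();                                   // fan out
        std::unique_lock<std::mutex> l(m_);
        done_.wait(l, [this] { return pending_ == 0; });    // join point
        queue_ = nullptr;
    }

private:
    void worker() {
        std::unique_lock<std::mutex> l(m_);
        for (;;) {
            cv_.wait(l, [this] {
                return quit_ || (queue_ && next_ < queue_->size());
            });
            if (quit_) return;
            TrackJob& job = (*queue_)[next_++];             // claim one track
            l.unlock();
            job.process();                                  // runs concurrently
            l.lock();
            if (--pending_ == 0) done_.notify_one();        // last one wakes render()
        }
    }

    std::vector<std::thread> threads_;
    std::mutex m_;
    std::condition_variable cv_, done_;
    std::vector<TrackJob>* queue_ = nullptr;
    std::size_t next_ = 0, pending_ = 0;
    bool quit_ = false;
};
```

Usage would be a single call per buffer from the render callback, e.g. renderer.render(trackJobs), after which the bus mixer sums each track's output buffer.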

  • @dwrae said:

    @Artem said:
    Hello! Are you planning to add multicore processing? I think it should be a priority. 🤔

    I want to check the experiences you guys have with multi-core processing. I have an implementation now and I'm running some test case. The test involves 4 audio tracks, each one with 10 or so heavy Toneboosters effects. I verified that maximum parallellization is done, meaning that up to the bus mixer, the effects are processed in parallel and do not wait unnecessarily. Furthermore, the threads are real-time threads and joined the audio workgroup.

    On my iPad Pro with M2 processor with 4 performance cores and 4 efficiency cores, I get a 93% speed up when using 2 threads. Using more threads will result in worse performance. In Cubasis, I also see around 2x better performance. There is no way to set the thread affinity as with Android, so I don't get to choose on which core each thread runs on. On an old iPad Pro 1st gen 2-core, performance improvement is similar (2 threads of course). The difference to Cubasis is that there are no additional buffers needed, so latency remains the same.

    Is there anyone with any multi-processing enabled DAW that can achieve better than 2x performance compared to serial/single-core processing?

    Now I've done a little test. I'm using an iPad Pro 2020. I managed to add 17 identical tracks of the synthesizer SynthMaster 2 to Cubasis 3 before the application became overloaded.
    Then I added exactly the same tracks to AEM, but I only managed to add 7 of them.

    To be honest, I know of only one other DAW with support for multi-core processing, N-Track, but it works so unstably for me that I no longer see the point in testing it.

  • @dwrae said:

    @Artem said:
    Hello! Are you planning to add multicore processing? I think it should be a priority. 🤔

    I want to check the experiences you guys have with multi-core processing. I have an implementation now and I'm running some test case. The test involves 4 audio tracks, each one with 10 or so heavy Toneboosters effects. I verified that maximum parallellization is done, meaning that up to the bus mixer, the effects are processed in parallel and do not wait unnecessarily. Furthermore, the threads are real-time threads and joined the audio workgroup.

    On my iPad Pro with M2 processor with 4 performance cores and 4 efficiency cores, I get a 93% speed up when using 2 threads. Using more threads will result in worse performance. In Cubasis, I also see around 2x better performance. There is no way to set the thread affinity as with Android, so I don't get to choose on which core each thread runs on. On an old iPad Pro 1st gen 2-core, performance improvement is similar (2 threads of course). The difference to Cubasis is that there are no additional buffers needed, so latency remains the same.

    Is there anyone with any multi-processing enabled DAW that can achieve better than 2x performance compared to serial/single-core processing?

    I think the best way to test that is to try a synth that uses a lot of CPU, something like the Moog Model D.
    Results in Cubasis:
    Sample rate 48 kHz, bit depth 32
    Single thread: 5 instances
    Multi thread, with the lowest guard buffer: 18 instances, not 100% stable but good enough, and I’m not going to use more than 5 instances any time soon.

    Didn’t test latency, but in this quick experiment I didn’t feel it changing much using a synth; it could be more noticeable with an acoustic drum kit, but I haven’t tried it.

  • I’d love to get AEM working well enough to use it. Time stretching is important to me. I’m following the instructions in the manual, but at the last step (setting the new endpoint), nothing happens. It’s supposed to calculate a new stretch factor and return to the dialog, but it doesn’t. I emailed the dev. Has anyone gotten this to work?

  • Never mind. I just discovered it only works in expert mode. I got it to work. Also, unless you expand the track with the clip vertically, you can’t see all the clip handles. This stuff is kinda fiddly. I can handle “fiddly” as long as you know what you’re working with.

  • @bargale said:
    I think the best way to do that is to try a synth that is using a lot of cpu, something in the likes of moog model d
    Results in cubasis -
    Sample rate 48khz, Bit depth 32
    Single thread - 5 instances
    Multi thread - with lowest guard buffer - 18 instances, not 100% stable but good enough and I’m not going to use more than 5 instances any time soon.

    Ok, my tests:

    • Cubasis, buffer size 512 (10.7 msec), single core: 9 tracks of Model D
    • AEMS, buffer size 512, single core: 9 tracks of Model D
    • Cubasis, multi-core, buffer size gets increased by 10.7 msec (so 1024 frames in total, I suppose): 26 tracks
    • AEMS, 2 threads, buffer size set to 1024 frames: 13 tracks
    • AEMS, 3 threads, buffer size set to 1024 frames: 20 tracks
    • AEMS, 4 threads, buffer size set to 1024 frames: 23 tracks

    So it scales better with AUv3 instruments than in the earlier tests with the effects chain.
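
As a sanity check on the millisecond figures quoted in these tests, buffer latency is simply frames divided by sample rate. The snippet below (buffer sizes and sample rate taken from the posts, everything else illustrative) reproduces the 10.7 ms figure and the roughly doubled ~21.3 ms figure at 48 kHz.

```cpp
// Buffer latency in milliseconds = frames / sample_rate * 1000.
#include <cstdio>

int main() {
    const double sample_rate = 48000.0;                                      // 48 kHz
    std::printf("512 frames  -> %.1f ms\n", 512.0  / sample_rate * 1000.0);  // ~10.7 ms
    std::printf("1024 frames -> %.1f ms\n", 1024.0 / sample_rate * 1000.0);  // ~21.3 ms
}
```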

  • edited March 2023

    @dwrae said:

    @bargale said:
    I think the best way to do that is to try a synth that is using a lot of cpu, something in the likes of moog model d
    Results in cubasis -
    Sample rate 48khz, Bit depth 32
    Single thread - 5 instances
    Multi thread - with lowest guard buffer - 18 instances, not 100% stable but good enough and I’m not going to use more than 5 instances any time soon.

    Ok, my tests:

    • Cubasis, buffer size 512 (10.7 msec), single core: 9 tracks of Model D
    • AEMS, buffer size 512, single core; 9 tracks of Model D
    • Cubasis, multi-core, buffer size gets increased by 10.7 msec (so 1024 frames I suppose in total), 26 tracks
    • AEMS 2 threads, buffer size set to 1024f, 13 tracks
    • AEMS 3 threads, buffer size set to 1024f, 20 tracks
    • AEMS 4 threads, buffer size set to 1024f, 23 tracks

    So it scales better with AUv3 instruments than the tests made with the effects chain.

    Forgot to mention, I’m using the lowest buffer, which is an RTL of 10.6 ms, so yeah 512, pretty bad I’d say.
    I always try to start a project at the lowest latency I can; if a DAW allows me a buffer of 64 samples, I’ll use it, as I feel it’s essential when trying to record a drum track, be it MIDI or audio.
    So, as AEM allows freezing, I’m fine with or without multi-core, although I support the endeavor.

    P.S
    As the latest iPadOS allows you to implement drivers in the app for controllers or audio interfaces, have you thought about implementing it for lower latency/extra features if the interface api allows it?
    Just a random thought that I had, not pushing in any direction.

  • @bargale said:
    As the latest iPadOS allows you to implement drivers in the app for controllers or audio interfaces, have you thought about implementing it for lower latency/extra features if the interface api allows it?
    Just a random thought that I had, not pushing in any direction.

    Haven't heard of that, but lower latency than Core Audio is not really possible I think, nor necessary IMHO.

  • @bargale said:

    @dwrae said:

    @bargale said:
    I think the best way to do that is to try a synth that is using a lot of cpu, something in the likes of moog model d
    Results in cubasis -
    Sample rate 48khz, Bit depth 32
    Single thread - 5 instances
    Multi thread - with lowest guard buffer - 18 instances, not 100% stable but good enough and I’m not going to use more than 5 instances any time soon.

    Ok, my tests:

    • Cubasis, buffer size 512 (10.7 msec), single core: 9 tracks of Model D
    • AEMS, buffer size 512, single core; 9 tracks of Model D
    • Cubasis, multi-core, buffer size gets increased by 10.7 msec (so 1024 frames I suppose in total), 26 tracks
    • AEMS 2 threads, buffer size set to 1024f, 13 tracks
    • AEMS 3 threads, buffer size set to 1024f, 20 tracks
    • AEMS 4 threads, buffer size set to 1024f, 23 tracks

    So it scales better with AUv3 instruments than the tests made with the effects chain.

    Forgot to mention, I’m using the lowest buffer, which is an rtl of 10.6, so yeah 512, pretty bad I’d say.
    I always try to start a project at the lowest latency I can, if a daw allows me a buffer of 64 samples, I’ll use it as I feel it’s essential when trying to record a drum track, be it midi or audio.
    So, as aem allows freezing, I’m fine with or without multi-core, although, I support the endeavor.

    P.S
    As the latest iPadOS allows you to implement drivers in the app for controllers or audio interfaces, have you thought about implementing it for lower latency/extra features if the interface api allows it?
    Just a random thought that I had, not pushing in any direction.

    Writing drivers is there for device manufacturers to provide custom drivers for the hardware they manufacture, particularly for hardware (like my Boss Katana amp) that is not class compliant.

    Writing a custom driver won’t result in less latency than the minimum that results from the buffer size; it can reduce additional latency in cases where the hardware adds extra latency when using class-compliant drivers.
