
New iPad Pro / A14X M1 processor


Comments

  • Too bad very few apps use multi-core at this point.

  • Cubasis 3 is still the only host that supports multi-core, right?

  • @richardyot said:
    That's a bit disappointing, I was hoping to see a bigger leap.

    That’s the performance of the A14 in the iPad Air and iPhone 12 compared to the M1 in a MacBook Air.

    The reason single-core performance isn’t much higher is that the M1 basically is an A14. The difference in single-core performance is down to the CPU in the Mac being clocked higher (3.2 vs 3.0 GHz).

    These numbers have been out since the M1 Macs first shipped.

    There will be a fairly big jump in performance from the A12-powered iPad Pros to the M1 Pros, as Apple skipped a generation.

    Single-core performance of the M1 iPad Pros is going to be somewhere between those two numbers: just a tad over the A14 in the Air 4 if they clock it a bit higher in the Pro, or pretty much the same as an Air 4 if they clock them the same.

  • @NoiseFloored said:
    Cubasis 3 is still the only host that supports multi-core, right?

    Honestly, I’ve never seen anyone (in this AB forum) saying “hey, I get better performance in multi-core mode”. I’ve only heard the opposite.
    Or if I’m wrong, guys with the latest zillionth-generation iPad Pros, raise your hands and shout :)

  • @klownshed said:

    @richardyot said:
    That's a bit disappointing, I was hoping to see a bigger leap.

    That’s the performance of the A14 in the iPad Air and iPhone 12 compared to the M1 in a MacBook Air.

    The reason single-core performance isn’t much higher is that the M1 basically is an A14. The difference in single-core performance is down to the CPU in the Mac being clocked higher (3.2 vs 3.0 GHz).

    These numbers have been out since the M1 Macs first shipped.

    There will be a fairly big jump in performance from the A12-powered iPad Pros to the M1 Pros, as Apple skipped a generation.

    Single-core performance of the M1 iPad Pros is going to be somewhere between those two numbers: just a tad over the A14 in the Air 4 if they clock it a bit higher in the Pro, or pretty much the same as an Air 4 if they clock them the same.

    Yes, I'm sure you're right. The M1 designation has more to do with Thunderbolt support etc. compared to the A14; in terms of performance they are similar. So the new Pros are not going to be more powerful than the current-gen Air. The only real reason to go for the Pros is if you need more storage or want the 8GB of RAM and Thunderbolt support.

  • Here we go again, there is a big difference between cores and threads. I blame Intel for the confusion. Kinda like Fender and the eternal confusion brought about by their use of the terms vibrato and tremolo (how do you get them both wrong?).

    There is no such thing as a single-threaded iOS application. There's no such thing as a single-core iOS application. Even for an AUv3 that uses one audio processing thread, the OS will move that thread from core to core to optimize performance.

    On my simplest AU, where I definitely only use one audio thread, there are a minimum of seven threads running. All of these help the performance of the DSP because they offload everything else from the core the DSP is running on.

    I don't know this for sure, but I would be very doubtful of GarageBand iOS only using one audio thread. I posted a link in another thread recently to what Apple does with audio threading in Logic. It is massive. They use almost as many threads to run Logic as I used to use to do galactic simulations on supercomputers.
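    The point above can be sketched in plain Python (illustrative names only, nothing Apple-specific): even an app designed around exactly one render thread is a multi-threaded program, because the render thread does only DSP while other threads carry everything else.

```python
# Sketch: a "one audio thread" design is still a multi-threaded program.
# Plain Python threads with made-up names; not Apple's audio APIs.
import threading
import queue

FRAMES, BLOCKS = 64, 8
rendered = queue.Queue()

def dsp_thread():
    """The single render thread: produce fixed-size audio blocks."""
    for _ in range(BLOCKS):
        block = [0.0] * FRAMES          # stand-in for real DSP work
        rendered.put(block)

t = threading.Thread(target=dsp_thread, name="dsp")
t.start()
# The main thread plays "everything else" (UI, file I/O, MIDI...),
# which keeps that work off the render thread.
blocks = [rendered.get() for _ in range(BLOCKS)]
t.join()
print(f"{len(blocks)} blocks of {FRAMES} frames rendered")
```

    Even this toy program has two threads; a real app stacks UI, I/O and housekeeping threads on top, which is why a truly single-threaded iOS app doesn't exist.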

  • edited April 2021

    @richardyot said:

    The only real reason to go for the Pros is if you need more storage or want the 8GB of RAM and Thunderbolt support.

    My reason is I would really like to try a 12.9 once. But if the price of the 2020 Pro should drop, it might be a better deal for that.

    Wonder how much work that if is doing in there... :)

  • wim
    edited April 2021

    By way of clarification, it's not multi-core, it's multi-thread audio processing.

    Developers do not get to control which cores are used, how many, or whether multiple cores are used at all. The OS decides that. Adding multi-thread support to audio apps gives the OS the potential to divide the audio processing between cores. iOS can, and likely will, still prioritize lower heat and battery consumption over audio performance, at least until it's really called upon heavily. Only hosts can benefit from it (at least as far as I can see). It's great that they can, but it's not the magic overall audio performance boost that people often think it is.

    Putting a whole lot of weight on multi-core, multi-thread audio's impact on iOS is probably going to be misleading, at least for now.

  • @richardyot said:
    The only real reason to go for the Pros is if you need more storage or want the 8GB of RAM and Thunderbolt support.

    There’s more. Personally I’m considering a 12.9 and I expect to appreciate some of the differences: four speakers (the Air has two, stereo only in portrait), a better screen, and ProMotion.

  • @NeonSilicon said:
    Here we go again, there is a big difference between cores and threads. I blame Intel for the confusion. Kinda like Fender and the eternal confusion brought about by their use of the terms vibrato and tremolo (how do you get them both wrong?).

    There is no such thing as a single-threaded iOS application. There's no such thing as a single-core iOS application. Even for an AUv3 that uses one audio processing thread, the OS will move that thread from core to core to optimize performance.

    On my simplest AU, where I definitely only use one audio thread, there are a minimum of seven threads running. All of these help the performance of the DSP because they offload everything else from the core the DSP is running on.

    I don't know this for sure, but I would be very doubtful of GarageBand iOS only using one audio thread. I posted a link in another thread recently to what Apple does with audio threading in Logic. It is massive. They use almost as many threads to run Logic as I used to use to do galactic simulations on supercomputers.

    Someone on here (possibly you) once posted a really good supermarket analogy with multiple checkout lines and stuff.

  • @drcongo said:

    @NeonSilicon said:
    Here we go again, there is a big difference between cores and threads. I blame Intel for the confusion. Kinda like Fender and the eternal confusion brought about by their use of the terms vibrato and tremolo (how do you get them both wrong?).

    There is no such thing as a single-threaded iOS application. There's no such thing as a single-core iOS application. Even for an AUv3 that uses one audio processing thread, the OS will move that thread from core to core to optimize performance.

    On my simplest AU, where I definitely only use one audio thread, there are a minimum of seven threads running. All of these help the performance of the DSP because they offload everything else from the core the DSP is running on.

    I don't know this for sure, but I would be very doubtful of GarageBand iOS only using one audio thread. I posted a link in another thread recently to what Apple does with audio threading in Logic. It is massive. They use almost as many threads to run Logic as I used to use to do galactic simulations on supercomputers.

    Someone on here (possibly you) once posted a really good supermarket analogy with multiple checkout lines and stuff.

    Not me, but I did once have a class in systems engineering that covered optimization of supermarket checkouts. It's way more complicated than I ever would have thought. The solution was not what I would have expected. And, no one implements the solution because of the psychological response of their customers. Of course, that was long ago before the advent of fast-lanes and self-checkout. So, I'm pretty sure I'm out-of-date now.

  • @CaelumAudio said:

    @wim said:

    @CaelumAudio said:
    With the newer "touch" looking features in Logic and also the way it had to be rebuilt from scratch for silicon...

    That's the second time I've read someone saying that. I don't believe that is true. Do you have a reference I can check out? I'm always happy to be wrong. B)

    About it being rebuilt from scratch? That is my belief from personal experience (I may be completely wrong) but sorry, NDA stuff...

    David

    Bowie?

  • wim
    edited April 2021

    @drcongo said:
    Someone on here (possibly you) once posted a really good supermarket analogy with multiple checkout lines and stuff.

    That was me, and yes, it was brilliant! (as all my posts are) :D :p

  • @wim said:

    @drcongo said:
    Someone on here (possibly you) once posted a really good supermarket analogy with multiple checkout lines and stuff.

    That was me, and yes, it was brilliant! (as all my posts are) :D :p

    Was it something about self-organizing systems?

  • @NeuM said:

    @wim said:

    @drcongo said:
    Someone on here (possibly you) once posted a really good supermarket analogy with multiple checkout lines and stuff.

    That was me, and yes, it was brilliant! (as all my posts are) :D :p

    Was it something about self-organizing systems?

    Nah. Just an analogy to describe the difference between multi-core and multi-thread processing. I'd go look it up, but it was some dude acting like he knows everything. I hate people like that.

  • @wim said:

    @NeuM said:

    @wim said:

    @drcongo said:
    Someone on here (possibly you) once posted a really good supermarket analogy with multiple checkout lines and stuff.

    That was me, and yes, it was brilliant! (as all my posts are) :D :p

    Was it something about self-organizing systems?

    Nah. Just an analogy to describe the difference between multi-core and multi-thread processing. I'd go look it up, but it was some dude acting like he knows everything. I hate people like that.

  • @AlmostAnonymous said:

    @wim said:

    @NeuM said:

    @wim said:

    @drcongo said:
    Someone on here (possibly you) once posted a really good supermarket analogy with multiple checkout lines and stuff.

    That was me, and yes, it was brilliant! (as all my posts are) :D :p

    Was it something about self-organizing systems?

    Nah. Just an analogy to describe the difference between multi-core and multi-thread processing. I'd go look it up, but it was some dude acting like he knows everything. I hate people like that.

    :D

  • It’s crazy but despite all the upgrades I’ve never been less excited about an iPad. Pro 2020 totally covers all of my needs and more. I can finally say that I’m completely satisfied. ;)

  • @supadom said:
    It’s crazy but despite all the upgrades I’ve never been less excited about an iPad. Pro 2020 totally covers all of my needs and more. I can finally say that I’m completely satisfied. ;)

    I have one word: “M1”.

  • @NeuM said:

    @supadom said:
    It’s crazy but despite all the upgrades I’ve never been less excited about an iPad. Pro 2020 totally covers all of my needs and more. I can finally say that I’m completely satisfied. ;)

    I have one word: “M1”.

    Yeah, but M1 is sooo much slower than M2, and M3 will have M2 for breakfast!

  • Aha... A BMW fan, eh? :)

  • edited April 2021

    I do believe the M1 will be a great leap for mankind... ehhh, iPad musicians. Two more cores and faster memory will make a significant difference. Still wondering about the mythical multi-core rendering of Cubasis 3: how does that match with the OS-managed distribution of threads to the cores? As a recap, here is the post from Steinberg's Lars:

    @LFS said:
    Hi all,

    We've received several questions regarding the performance optimizations in Cubasis 3.2.

    Below please find further information about the new multicore rendering, and how to configure the latency settings, provided by our lead engineer, Alex.

    Multicore-rendering overview
    Multicore-rendering has been implemented in Cubasis 3.1 for Android and in Cubasis 3.2 for iOS.
    This means that on devices with more than two CPU cores*, the rendering of multiple tracks during playback and mixdown is simultaneously performed on multiple cores.
    On devices with 2 CPU cores*, multi-core rendering is disabled in order to keep one of the cores available for all the non-audio stuff (UI, touch, the operating system…)
    Note that a project must contain at least 2 tracks in order for multi-core rendering to kick in.

    Latency setup options
    In the Setup under Audio, “Audio Engine Latency” must be enabled in order for Cubasis to perform multi-core rendering. Note that this setup option is only available on devices with more than two CPU cores*. In most cases, the sweet spot for rendering performance is with Audio Engine Latency set to twice the Device Latency (on iOS), or 16-32ms (on Android).
    However, this introduces additional latency to monitoring and live keyboard input, since the engine uses additional internal buffers to render into, which also prevents drop-outs.
    Multi-core rendering yields the most performance benefit when playing projects with many tracks and effects on devices with many CPU cores, where Audio Engine Latency is enabled.
    On some devices (possibly those with 3 cores) and in certain situations (monitoring or when using specific AU plug-ins), it might be beneficial to turn off Audio Engine Latency.

    DSP meter
    The DSP level in the Inspector’s “System Info” tab measures the time a rendering cycle takes, divided by the buffer duration (the time available to perform rendering).
    With Audio Engine Latency disabled, rendering is performed on the system’s single ultra high priority audio thread, which means that a DSP peak of 100% always results in a drop-out (crackling).
    When Audio Engine Latency is enabled, rendering is performed in engine threads and a short peak of 100% doesn’t always mean that there is a drop-out, because the engine's buffers might have been able to prevent it. A dropout will only occur if DSP is 100% for longer than “Audio Engine Latency” is set to. Note that the DSP usage might be higher than with Audio Engine Latency disabled, which is normal because engine threads don’t get the same priority as the system's audio thread, but that's not a problem because multi-core rendering more than makes up for it.

    *Note that on some devices with more than 2 cores, only 2 cores are considered by Cubasis because the other ones are energy efficient (lower performance) cores and might be unsuited for real-time audio rendering.

    Best wishes,
    Lars

    https://forum.audiob.us/discussion/42746/cubasis-3-2-multicore-rendering-latency-settings

    So to me that sounds like they explicitly utilize the cores and do not rely on the OS’s distribution of threads, right? The statement about how they calculate the DSP load is also interesting.
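    For what it's worth, the DSP-meter arithmetic Lars describes can be sketched like this (function names are mine for illustration, not Cubasis code):

```python
# Sketch of the DSP meter described in the quoted post; names are mine.
def buffer_duration(frames, sample_rate):
    """Time available to render one buffer, in seconds."""
    return frames / sample_rate

def dsp_load(render_time, frames, sample_rate):
    """Render time divided by the time available to render."""
    return render_time / buffer_duration(frames, sample_rate)

def drops_out(load, overload_time, engine_latency):
    """With Audio Engine Latency enabled, a 100% peak only causes a
    drop-out if it lasts longer than the engine latency setting."""
    return load >= 1.0 and overload_time > engine_latency

# 256 frames at 48 kHz leave ~5.33 ms per render cycle;
# rendering that buffer in 4 ms is 75% DSP load.
load = dsp_load(0.004, 256, 48000)
print(round(load * 100))              # -> 75
# A brief 100% spike survives if engine buffers (say 10 ms) cover it:
print(drops_out(1.0, 0.006, 0.010))   # -> False
```

    This also shows why the meter reads differently in the two modes: on the single system audio thread any cycle that hits 100% misses its deadline, while with engine buffering only a sustained overload does.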

  • The way they calculate DSP % does make some sense. It's going to be approximate though just because they don't get to control what the plugins do at each callback to process a buffer.

    macOS and iOS both have several threading models available. I can't really understand what is meant in that quote without knowing what they are using.

    iOS doesn't allow forking processes, so concurrency has to use one of the threading APIs. As far as I know, Apple doesn't allow setting processor (core) affinity in any of the APIs on either macOS or iOS. (On some systems, you can ask the OS to put your thread on a core and keep it there.) You can use some of the QoS requests to get some control over some of the concurrency models.

    Here are a couple of links to the low level guides from Apple about concurrency and threading:
    https://developer.apple.com/library/archive/documentation/General/Conceptual/ConcurrencyProgrammingGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40008091
    https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/Multithreading/Introduction/Introduction.html#//apple_ref/doc/uid/10000057i

    Most of the higher-level iOS/macOS APIs aren't going to be usable for audio, so something like Dispatch isn't going to work. Apple added specific APIs to support multithreading for audio applications on macOS and iOS. Here are the main links:
    https://developer.apple.com/documentation/audiotoolbox/workgroup_management/understanding_audio_workgroups
    https://developer.apple.com/documentation/audiotoolbox/workgroup_management/adding_parallel_real-time_threads_to_audio_workgroups/
    https://developer.apple.com/documentation/audiotoolbox/workgroup_management/adding_asynchronous_real-time_threads_to_audio_workgroups/
    https://developer.apple.com/documentation/audiotoolbox/workgroup_management/adding_audio_unit_auxiliary_real-time_threads_to_audio_workgroups/

    The last link is the API specifically for supporting threading in AUv3s, so that isn't going to be something Cubasis is using in the host, but it is interesting.

    Since Cubasis runs on both Android and iOS, I'd think it's likely they are using pthreads. I don't know much about pthreads on Android other than that they are supposed to be more limited than on iOS. I doubt you'll be able to tell the OS what core to run your thread on there either, though it may be possible.

    You can use the parallel real-time thread API (the second of the new API links above) to tell the OS that these threads need to execute within the same time constraints, and that is really going to help with the structure of your program. I haven't made any test apps or AUs yet with the new APIs to trace the execution on the cores. I should do that. Actually, I really should do something with the AUv3-specific one and see if it implies anything about what Apple may be planning for hosts.
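    To make the "rendering of multiple tracks on multiple cores" idea concrete, here's a toy fan-out-and-mix in Python (illustrative names; note that which core each worker thread lands on is entirely up to the OS scheduler, which is the whole point):

```python
# Toy version of per-track parallel rendering: one worker per track,
# then a mix-down. Names are illustrative, not Cubasis internals.
from concurrent.futures import ThreadPoolExecutor

FRAMES = 64

def render_track(gain):
    """Stand-in for one track's per-buffer DSP."""
    return [gain] * FRAMES

def render_buffer(track_gains, workers=4):
    # Fan out: each track is rendered on a pool thread. The program
    # never picks a core; the OS scheduler places the threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        tracks = list(pool.map(render_track, track_gains))
    # Mix down: sum the tracks sample by sample.
    return [sum(samples) for samples in zip(*tracks)]

mix = render_buffer([0.25, 0.25, 0.5])
print(len(mix), mix[0])    # -> 64 1.0
```

    A real engine would do this with real-time-priority threads and lock-free hand-off rather than a generic pool, but the shape of the work split is the same.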

  • edited April 2021

    I'm rather a backend Java guy and don't have much knowledge about mobile development, but in some projects I've been part of, the iOS and Android mobile apps had a shared C++ code base. Could it be that Steinberg did the same for the iOS and Android versions of Cubasis and therefore has rather low-level access to threading, CPU affinity and the like? A quick search showed me that there was a thread-affinity API for macOS, but that was for an old version.

    Anyway, it's probably too much guesswork, but CB3's multi-core performance is very impressive. This guy managed 70 audio tracks with more than a hundred FX applied (MixBox, FabFilter, Steinberg stuff) on a 2018 iPad Pro 11 with 4GB. I think that clearly demonstrates how well their approach works, and I expect it can be exceeded on an M1 iPad.

    https://forum.audiob.us/discussion/43054/cubasis-3-running-over-50-tracks-and-45-effects-on-medium-buffer-ipad-pro-11-2018

    Here's the 70-track experiment:

  • @krassmann said:
    I'm rather a backend Java guy and don't have much knowledge about mobile development, but in some projects I've been part of, the iOS and Android mobile apps had a shared C++ code base. Could it be that Steinberg did the same for the iOS and Android versions of Cubasis and therefore has rather low-level access to threading, CPU affinity and the like? A quick search showed me that there was a thread-affinity API for macOS, but that was for an old version.

    Anyway, it's probably too much guesswork, but CB3's multi-core performance is very impressive. This guy managed 70 audio tracks with more than a hundred FX applied (MixBox, FabFilter, Steinberg stuff) on a 2018 iPad Pro 11 with 4GB. I think that clearly demonstrates how well their approach works, and I expect it can be exceeded on an M1 iPad.

    https://forum.audiob.us/discussion/43054/cubasis-3-running-over-50-tracks-and-45-effects-on-medium-buffer-ipad-pro-11-2018

    Here's the 70-track experiment:

    I'd be very surprised if they didn't have a bunch of shared C++ code.

    macOS goes back a long way, and the older threading model was primarily Mach threads. So I would guess there may well have been a time when it supported assigning threads to particular cores/CPUs.

    I don't think you can actually get more low-level than the pthreads interface, and from what I can see now, it doesn't support thread affinity on iOS or macOS. This could have changed, or I could be missing something. But it makes sense to me for Apple not to support it. Being able to pin threads to a particular CPU or core in a fixed cluster makes perfect sense, but it could be a big problem when you're running on many different models of device with all sorts of core configurations and the added complexity of big.LITTLE. The OS is going to do a better job of this than anyone else is likely to.

    I'd expect that the Steinberg devs are pretty adept at multithreaded programming. It doesn't really matter which APIs they've used, or even whether the threads run on a particular core. These cores are certainly fast enough to run multiple RT audio threads on a single core.

    If I were going to do a host on either macOS or iOS today, I'd use the new APIs from Apple. But not using them doesn't mean you can't write perfectly performant multithreaded audio on iOS. It would probably be harder, though.
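    A quick way to see how platform-specific affinity is (generic Python, not anything Apple ships): Linux exposes process affinity through the `os` module, while on macOS the same functions simply don't exist, so the check below takes the fallback branch there.

```python
# Affinity APIs are platform-specific. Linux exposes them; Apple's
# pthreads don't -- there the scheduler alone decides core placement.
import os
import sys

if hasattr(os, "sched_getaffinity"):      # Linux
    allowed = os.sched_getaffinity(0)     # CPUs this process may run on
    print(f"schedulable on {len(allowed)} CPU(s)")
    # os.sched_setaffinity(0, {0}) would pin us to CPU 0 here;
    # there is no macOS/iOS equivalent, only QoS hints.
else:                                     # macOS and others
    print(f"no affinity API on {sys.platform}")
```

    Which is consistent with the point above: on Apple platforms you can describe your thread's needs (QoS, workgroups), but the OS keeps the final say on placement.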

  • Watched the Apple release video... loads of news for visual artists... where was the news for music artists?
    Come on, Apple!

  • M1 is a lot of hype from Apple, as usual. :)
    The M1 was already a rebranded and improved iPad chip, so putting it "back" in the iPad and calling it an M1 rather than an A15 or whatever is all marketing spin and just part of the usual upgrade cycle.

    It's nice to see iPad hardware updates, but they're not delivering on iPadOS yet, which is the main thing holding the iPad back, not the chip.

    One way they could save the iPad from the mediocrity of no pro apps would be to allow Mac apps to run when attached to an external display, with the iPad screen becoming a touch input interface and trackpad. Otherwise it's going to be many years before the software matches the hardware, because iPadOS has a long way to go yet.

  • I was really hoping to pick up a pair of iPad mini Pros to install in my Eurorack system.
