
Multi-core iOS DAWs besides Cubasis 3?

Every now and then I consider using another DAW besides Cubasis 3 (especially when its lack of Plugin Delay Compensation really begins to bug me - like when trying out the new Unfiltered Audio plugins recently), and then I remember that none of the other DAWs yet support multi-core processing (unless there have been new developments I'm unaware of?)

Looking at my iPad Pro's single core performance on Geekbench (1703) versus its multi-core score (7203) makes me very reluctant to switch to any other DAW, especially since I usually max out the processor towards the end of a project as it is (please see attached images for my crude comparison).

Do you think we'll see any other iOS DAWs taking advantage of all this extra potential processing power we have lying around any time soon?! I think I'll stick to Cubasis for the time being and take advantage of over four times the performance versus the single-core DAWs. I just wish Steinberg would hurry up and implement PDC to stop my drum bus percussion becoming horribly out of sync...!



Comments

  • edited January 2023

    @DanSebPage keep in mind that these benchmarks will not necessarily be representative of performance in actual use. If Cubasis' multi-core implementation is anything like Ableton Live's (article here), then it's highly dependent on how the signal path is set up in each project. This is not to say that multi-core is not beneficial; it definitely is, it just doesn't mean you'll get 4.2x performance from it.

    Edit: I'm not sure if the Ableton link is failing to load for me only, so here's a snapshot from the Internet Archive.
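
    To put a rough number on that ceiling, Amdahl's law is the usual back-of-envelope tool. If a fraction p of the render work can actually run in parallel across n cores, the best-case speedup is (illustrative values only - the p = 0.9 below is made up for the example, not measured from Cubasis):

        S(n) = \frac{1}{(1 - p) + p/n}, \qquad S(3)\Big|_{p = 0.9} = \frac{1}{0.1 + 0.3} = 2.5

    So even with 3 fully usable cores and 90% of the work parallelizable, you'd top out around 2.5x, not 3x - which is why the signal path matters so much.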

  • @DanSebPage I just received an update for n-Track Studio which coincidentally appears to have added some level of multi-threaded exploitation. I don’t use this DAW as it was too unstable when I last tested it, but it does otherwise tick a lot of my boxes:

  • FLSM also supports multicore

  • It's on the roadmap for Loopy Pro :)

  • Looking forward to Loopy Pro!

  • @Littlewoodg said:
    FLSM also supports multicore

    +1 for FLSM! Simple, easy, but deep app. :)

  • edited January 2023

    @jwmmakerofmusic said:

    @Littlewoodg said:
    FLSM also supports multicore

    +1 for FLSM! Simple, easy, but deep app. :)

    the trifecta

  • @MisplacedDevelopment said:
    @DanSebPage I just received an update for n-Track Studio which coincidentally appears to have added some level of multi-threaded exploitation. I don’t use this DAW as it was too unstable when I last tested it, but it does otherwise tick a lot of my boxes:

    @MisplacedDevelopment: Thanks, I must have missed this. I'm dabbling with n-Track, no crashes so far on my M1 12.9 iPad. When did you last test it? They update it constantly. I really like it so far. The 10 gig/400 instrument collection is awesome on the iOS Pro Suite version.

  • @Telstar5 said:

    @MisplacedDevelopment said:
    @DanSebPage I just received an update for n-Track Studio which coincidentally appears to have added some level of multi-threaded exploitation. I don’t use this DAW as it was too unstable when I last tested it, but it does otherwise tick a lot of my boxes:

    @MisplacedDevelopment: Thanks, I must have missed this. I'm dabbling with n-Track, no crashes so far on my M1 12.9 iPad. When did you last test it? They update it constantly. I really like it so far. The 10 gig/400 instrument collection is awesome on the iOS Pro Suite version.

    I last had a play a couple of months ago. They do indeed push regular updates though, and most of the recent ones look like bug fixes so perhaps it is time to take it for another spin when I get a chance. Good to hear you have not had any crashes so far, that is encouraging!

  • edited January 2023

    @Grandbear said:
    @DanSebPage keep in mind that these benchmarks will not necessarily be representative of performance in actual use. If Cubasis' multi-core implementation is anything like Ableton Live's (article here), then it's highly dependent on how the signal path is set up in each project. This is not to say that multi-core is not beneficial; it definitely is, it just doesn't mean you'll get 4.2x performance from it.

    Edit: I'm not sure if the Ableton link is failing to load for me only, so here's a snapshot from the Internet Archive.

    Yeah, I appreciate these are all very much 'ball-park' figures, but even if it's only, say, double the performance (e.g. twice the plugins before crackling), it's well worth taking advantage of, I'd have thought.

    You're right, there's very little information on exactly how Cubasis uses multi-core, and looking at the article you linked to reminds me a lot of the FL Studio multi-core situation.

    I come from an FL Studio background, and that could only use 1 core per 'track instance' to begin with - e.g. if you created a 'synth bus' (regardless of how many synths it contained), it could only use 1 core for that whole bus, as the synths were all 'linked'. If I remember correctly, though, a later update allowed better use of the cores even in this situation...

  • edited January 2023

    There was a lengthy explanation from Steinberg's engineering posted by @LFS that might answer your questions; it's also a good guide to setting Cubasis up and interpreting the DSP meter.

    https://forum.audiob.us/discussion/42746/cubasis-3-2-multicore-rendering-latency-settings

    There have been several real-world tests that clearly showed how superior the multi-core support is. But I think it's nothing trivial, and the developers that have cracked it for their desktop DAWs could probably reuse that know-how for their iOS products.

    https://forum.audiob.us/discussion/43054/cubasis-3-running-over-50-tracks-and-45-effects-on-medium-buffer-ipad-pro-11-2018

    I also did a test myself where I duplicated a track with Model D and a clip of one bar containing eight 8th notes until glitches occurred. I got up to 28 tracks with Model D, versus not even 10 without multi-core. With Synthmaster 2 I could even run 70 instances.

  • @MisplacedDevelopment said:
    @DanSebPage I just received an update for n-Track Studio which coincidentally appears to have added some level of multi-threaded exploitation. I don’t use this DAW as it was too unstable when I last tested it, but it does otherwise tick a lot of my boxes:

    Aha! Thanks for that. I've not used this particular DAW, but it's great to see multi-core support is making an appearance. I wonder if it has PDC...?

  • @Littlewoodg said:
    FLSM also supports multicore

    I wasn't aware of this, thanks. As much as I loved FL Studio on PC, I've struggled to get into the mobile version of this DAW, might have to give it another try!

  • @Icoustik said:
    It's on the roadmap for Loopy Pro :)

    Good to hear thanks!

  • edited January 2023

    @krassmann said:
    There was a lengthy explanation from Steinberg's engineering posted by @LFS that might answer your questions; it's also a good guide to setting Cubasis up and interpreting the DSP meter.

    https://forum.audiob.us/discussion/42746/cubasis-3-2-multicore-rendering-latency-settings

    There have been several real-world tests that clearly showed how superior the multi-core support is. But I think it's nothing trivial, and the developers that have cracked it for their desktop DAWs could probably reuse that know-how for their iOS products.

    https://forum.audiob.us/discussion/43054/cubasis-3-running-over-50-tracks-and-45-effects-on-medium-buffer-ipad-pro-11-2018

    I also did a test myself where I duplicated a track with Model D and a clip of one bar containing eight 8th notes until glitches occurred. I got up to 28 tracks with Model D, versus not even 10 without multi-core. With Synthmaster 2 I could even run 70 instances.

    Thanks Krassmann, I've read most topics I could find on the forum relating to Cubasis and multi-core processing (including that post from LFS) and must admit I've been massively impressed with the number of plugins Cubasis can handle. On PC, I'd have to render to WAV before even thinking about the mixing/mastering stage, but with Cubasis on iPad, I've been able to add all the mastering plugins after the track is complete, whilst the track is still 'live', which is great when you suddenly hear a note/element that needs tweaking.

    I don't doubt it's a complex process to get multi-core processing working efficiently (I know very little about coding or app development, I must admit), but given how beneficial more computing power can be to music-based apps, I've been surprised to discover how many DAWs aren't taking advantage of all this extra available processing power. (70 instances of Synthmaster 2 for example - that's incredible!)

  • I assume GarageBand supports multi-core; I can run a project with more instances of Model D there.

    Also, what if you use 2 DAWs/hosts alongside each other - will iOS handle that setup in 'multicore' mode…

  • edited January 2023

    @DanSebPage said:

    @krassmann said:
    There was a lengthy explanation from Steinberg's engineering posted by @LFS that might answer your questions; it's also a good guide to setting Cubasis up and interpreting the DSP meter.

    https://forum.audiob.us/discussion/42746/cubasis-3-2-multicore-rendering-latency-settings

    There have been several real-world tests that clearly showed how superior the multi-core support is. But I think it's nothing trivial, and the developers that have cracked it for their desktop DAWs could probably reuse that know-how for their iOS products.

    https://forum.audiob.us/discussion/43054/cubasis-3-running-over-50-tracks-and-45-effects-on-medium-buffer-ipad-pro-11-2018

    I also did a test myself where I duplicated a track with Model D and a clip of one bar containing eight 8th notes until glitches occurred. I got up to 28 tracks with Model D, versus not even 10 without multi-core. With Synthmaster 2 I could even run 70 instances.

    Thanks Krassmann, I've read everything I could find on the forum relating to Cubasis and multi-core processing (including that post from LFS) and must admit I've been massively impressed with the number of plugins Cubasis can handle. On PC, I'd have to render to WAV before even thinking about the mixing/mastering stage, but with Cubasis on iPad, I've been able to add all the mastering plugins after the track is complete, whilst the track is still 'live', which is great when you suddenly hear a note/element that needs tweaking.

    I don't doubt it's a complex process to get multi-core processing working efficiently (I know very little about coding or app development, I must admit), but given how beneficial more computing power can be to music-based apps, I've been surprised to discover how many DAWs aren't taking advantage of all this extra available processing power. (70 instances of Synthmaster 2 for example - that's incredible!)

    I’m quite sure that doing multi-core audio rendering right is difficult. You cannot just use the convenient multi-threading APIs that modern languages, frameworks and OSs offer. AFAIK you need to replace the standard processing on the system audio thread with your own multi-threaded solution that maps threads to the available cores while ensuring there is enough CPU headroom for everything other than DSP. This also adds some latency, which you need to find ways to keep as small as possible. The big guns like Steinberg and Image-Line have the resources and the experience from their desktop products to do this properly, but the indie devs do not. Maybe they could join forces and create a common open-source reference solution that could be reused in their AU hosts.
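
    To make that concrete, here's a deliberately naive sketch of the basic idea (my own illustration, not how any shipping DAW does it - and not real-time-safe, since a real engine would use pre-spawned, lock-free workers rather than spawning threads per buffer):

        // Toy example: render independent tracks in parallel, then mix serially.
        #include <cstddef>
        #include <thread>
        #include <vector>

        struct Track {
            float value = 0.1f;                          // stand-in for real DSP state
            void render(float* out, std::size_t frames) {
                for (std::size_t n = 0; n < frames; ++n)
                    out[n] = value;                      // placeholder DSP
            }
        };

        void renderBlock(std::vector<Track>& tracks, float* mix, std::size_t frames) {
            std::vector<std::vector<float>> bufs(tracks.size(),
                                                 std::vector<float>(frames, 0.f));
            std::vector<std::thread> workers;
            for (std::size_t i = 0; i < tracks.size(); ++i)  // one worker per track
                workers.emplace_back([&, i] { tracks[i].render(bufs[i].data(), frames); });
            for (auto& w : workers) w.join();                // wait for all tracks
            for (auto& b : bufs)                             // serial mixdown
                for (std::size_t n = 0; n < frames; ++n) mix[n] += b[n];
        }

    The whole difficulty described above lives in replacing those std::thread spawns, joins and allocations with something that never locks, never allocates, and always finishes within the buffer deadline.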

  • @Agatha_aga said:
    I assume GarageBand supports multi-core; I can run a project with more instances of Model D there.

    Also, what if you use 2 DAWs/hosts alongside each other - will iOS handle that setup in 'multicore' mode…

    I'm struggling to find any evidence to support this... but I could be wrong!

  • @DanSebPage said:

    @MisplacedDevelopment said:
    @DanSebPage I just received an update for n-Track Studio which coincidentally appears to have added some level of multi-threaded exploitation. I don’t use this DAW as it was too unstable when I last tested it, but it does otherwise tick a lot of my boxes:

    Aha! Thanks for that. I've not used this particular DAW, but it's great to see multi-core support is making an appearance. I wonder if it has PDC...?

    To answer my own question, the n-Track Studio (desktop) changelog mentions 'Plugin Delay Compensation' in numerous places, so I'd assume the iOS version might also have it. I'll have to check out this DAW in a bit more detail...

  • @krassmann said:

    @DanSebPage said:

    @krassmann said:
    There was a lengthy explanation from Steinberg's engineering posted by @LFS that might answer your questions; it's also a good guide to setting Cubasis up and interpreting the DSP meter.

    https://forum.audiob.us/discussion/42746/cubasis-3-2-multicore-rendering-latency-settings

    There have been several real-world tests that clearly showed how superior the multi-core support is. But I think it's nothing trivial, and the developers that have cracked it for their desktop DAWs could probably reuse that know-how for their iOS products.

    https://forum.audiob.us/discussion/43054/cubasis-3-running-over-50-tracks-and-45-effects-on-medium-buffer-ipad-pro-11-2018

    I also did a test myself where I duplicated a track with Model D and a clip of one bar containing eight 8th notes until glitches occurred. I got up to 28 tracks with Model D, versus not even 10 without multi-core. With Synthmaster 2 I could even run 70 instances.

    Thanks Krassmann, I've read everything I could find on the forum relating to Cubasis and multi-core processing (including that post from LFS) and must admit I've been massively impressed with the number of plugins Cubasis can handle. On PC, I'd have to render to WAV before even thinking about the mixing/mastering stage, but with Cubasis on iPad, I've been able to add all the mastering plugins after the track is complete, whilst the track is still 'live', which is great when you suddenly hear a note/element that needs tweaking.

    I don't doubt it's a complex process to get multi-core processing working efficiently (I know very little about coding or app development, I must admit), but given how beneficial more computing power can be to music-based apps, I've been surprised to discover how many DAWs aren't taking advantage of all this extra available processing power. (70 instances of Synthmaster 2 for example - that's incredible!)

    I’m quite sure that doing multi-core audio rendering right is difficult. You cannot just use the convenient multi-threading APIs that modern languages, frameworks and OSs offer. AFAIK you need to replace the standard processing on the system audio thread with your own multi-threaded solution that maps threads to the available cores while ensuring there is enough CPU headroom for everything other than DSP. This also adds some latency, which you need to find ways to keep as small as possible. The big guns like Steinberg and Image-Line have the resources and the experience from their desktop products to do this properly, but the indie devs do not. Maybe they could join forces and create a common open-source reference solution that could be reused in their AU hosts.

    Cubasis definitely seem to be on top of their game in this department. I've just come across this post (from Lars) in another 'multi-core' discussion thread:

    Multicore-rendering overview
    Multicore-rendering has been implemented in Cubasis 3.1 for Android and in Cubasis 3.2 for iOS.
    This means that on devices with more than two CPU cores*, the rendering of multiple tracks during playback and mixdown is simultaneously performed on multiple cores.
    On devices with 2 CPU cores*, multi-core rendering is disabled in order to keep one of the cores available for all the non-audio stuff (UI, touch, the operating system…)
    Note that a project must contain at least 2 tracks in order for multi-core rendering to kick in.

    The post also mentions that the 'energy efficiency' cores are not used by Cubasis (they're unsuitable for audio rendering), and that 1 of the 'performance' cores is always reserved for non-audio duties. All of this leads me to believe that in my particular case (4 performance cores, 4 efficiency cores) the latter are disregarded completely and 1 of the performance cores is unavailable, but that still means I should be getting about three times the performance using Cubasis versus the 'single core' DAWs.

  • @DanSebPage said:

    @krassmann said:

    @DanSebPage said:

    @krassmann said:
    There was a lengthy explanation from Steinberg's engineering posted by @LFS that might answer your questions; it's also a good guide to setting Cubasis up and interpreting the DSP meter.

    https://forum.audiob.us/discussion/42746/cubasis-3-2-multicore-rendering-latency-settings

    There have been several real-world tests that clearly showed how superior the multi-core support is. But I think it's nothing trivial, and the developers that have cracked it for their desktop DAWs could probably reuse that know-how for their iOS products.

    https://forum.audiob.us/discussion/43054/cubasis-3-running-over-50-tracks-and-45-effects-on-medium-buffer-ipad-pro-11-2018

    I also did a test myself where I duplicated a track with Model D and a clip of one bar containing eight 8th notes until glitches occurred. I got up to 28 tracks with Model D, versus not even 10 without multi-core. With Synthmaster 2 I could even run 70 instances.

    Thanks Krassmann, I've read everything I could find on the forum relating to Cubasis and multi-core processing (including that post from LFS) and must admit I've been massively impressed with the number of plugins Cubasis can handle. On PC, I'd have to render to WAV before even thinking about the mixing/mastering stage, but with Cubasis on iPad, I've been able to add all the mastering plugins after the track is complete, whilst the track is still 'live', which is great when you suddenly hear a note/element that needs tweaking.

    I don't doubt it's a complex process to get multi-core processing working efficiently (I know very little about coding or app development, I must admit), but given how beneficial more computing power can be to music-based apps, I've been surprised to discover how many DAWs aren't taking advantage of all this extra available processing power. (70 instances of Synthmaster 2 for example - that's incredible!)

    I’m quite sure that doing multi-core audio rendering right is difficult. You cannot just use the convenient multi-threading APIs that modern languages, frameworks and OSs offer. AFAIK you need to replace the standard processing on the system audio thread with your own multi-threaded solution that maps threads to the available cores while ensuring there is enough CPU headroom for everything other than DSP. This also adds some latency, which you need to find ways to keep as small as possible. The big guns like Steinberg and Image-Line have the resources and the experience from their desktop products to do this properly, but the indie devs do not. Maybe they could join forces and create a common open-source reference solution that could be reused in their AU hosts.

    Cubasis definitely seem to be on top of their game in this department. I've just come across this post (from Lars) in another 'multi-core' discussion thread:

    Multicore-rendering overview
    Multicore-rendering has been implemented in Cubasis 3.1 for Android and in Cubasis 3.2 for iOS.
    This means that on devices with more than two CPU cores*, the rendering of multiple tracks during playback and mixdown is simultaneously performed on multiple cores.
    On devices with 2 CPU cores*, multi-core rendering is disabled in order to keep one of the cores available for all the non-audio stuff (UI, touch, the operating system…)
    Note that a project must contain at least 2 tracks in order for multi-core rendering to kick in.

    The post also mentions that the 'energy efficiency' cores are not used by Cubasis (they're unsuitable for audio rendering), and that 1 of the 'performance' cores is always reserved for non-audio duties. All of this leads me to believe that in my particular case (4 performance cores, 4 efficiency cores) the latter are disregarded completely and 1 of the performance cores is unavailable, but that still means I should be getting about three times the performance using Cubasis versus the 'single core' DAWs.

    That totally matches my own experiments with Model D. 28:9 is roughly triple the single-core performance.

  • LFS
    edited January 2023

    Happy and healthy new year wishes to all of you!

    Please let me share the feedback from our engineering below.

    Stay safe
    & warm greetings,
    Lars

    Yes, Cubasis requests the CPU’s performance cores for rendering, and it creates one render thread per core, except for one that it leaves for other tasks (UI, touch detection, iOS background tasks…)
    However, Cubasis can only indicate its CPU core related wishes to iOS / iPadOS. Ultimately, the system decides which thread of which app gets executed on which core. If multiple apps run multiple threads in parallel (for example 2 DAWs rendering audio in the background, another app shows its UI in the foreground, and iOS is checking for emails at the same time), iOS will distribute each app’s workload optimally over the CPU cores. In other words, multiple apps doing multicore rendering in parallel don’t fight over cores, but get assigned only as much CPU time by iOS as there is available. Let iOS worry about slicing the CPU cake ;)

    Grandbear is right about multicore performance being dependent on the signal path. Only things that are independent of each other can be rendered in parallel. This is why you won’t see any difference in DSP performance with just 1 track, even if it has 8 insert effects (which cannot be parallelized). Multicore starts with 2 tracks. Send effects are also rendered in parallel, in contrast to inserts. There are also special cases where tracks cannot be rendered in parallel, for example if one track is the source for another track’s side-chaining effect, or if tracks point to an AU instrument with multiple outputs. Cubasis analyzes your project’s signal path and tries to make the most of it in terms of parallelization. Then it renders away, trusting iOS to assign each render thread to a free performance core, if possible.
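
    A minimal sketch of the kind of signal-path analysis described above - the 'wave' (topological level) scheduling below is my own simplification for illustration, not Cubasis's actual scheduler. Here deps[t] lists the tracks that must finish before track t can render (e.g. a sidechain source):

        #include <thread>
        #include <vector>

        // Render a DAG of tracks: each wave of dependency-free tracks runs in
        // parallel; waves themselves run one after another. Assumes no cycles.
        void renderWaves(const std::vector<std::vector<int>>& deps,
                         void (*renderTrack)(int)) {
            const int n = static_cast<int>(deps.size());
            std::vector<int> unmet(n);
            for (int t = 0; t < n; ++t) unmet[t] = static_cast<int>(deps[t].size());
            std::vector<bool> done(n, false);
            int finished = 0;
            while (finished < n) {
                std::vector<int> wave;                       // tracks ready now
                for (int t = 0; t < n; ++t)
                    if (!done[t] && unmet[t] == 0) wave.push_back(t);
                std::vector<std::thread> workers;
                for (int t : wave)                           // independent -> parallel
                    workers.emplace_back(renderTrack, t);
                for (auto& w : workers) w.join();
                for (int t : wave) {                         // release dependents
                    done[t] = true; ++finished;
                    for (int u = 0; u < n; ++u)
                        for (int d : deps[u]) if (d == t) --unmet[u];
                }
            }
        }

    In this picture, a single track with 8 inserts is one node in the graph, which is why it gains nothing from extra cores; a sidechain edge forces its source into an earlier wave, serialising the pair exactly as described above.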

  • I have quite a bit of frustration with all of this. I recently upgraded to a new iPad Pro 2022 (M2); before that I had an iPad Pro 2020 (A12Z). And unfortunately, I don't see much in the way of performance gains at all. The same large projects in AUM and Drambo show a gain of around 30%; in BM3 there is almost no gain at all. That is, if the load used to be around 80-90%, now it's around 60%. So in essence you could say it was a waste of money. A measly 30%, and this is the third processor generation after my old iPad. Apparently iOS music software just isn't optimized for the new processors. I don't know about Cubasis 3 - I don't use it because of its huge number of limitations (no way to use a MIDI LFO, no effect bus routing and things like that) - but if we go on like this without any active movement from the big developers, all these new processors will be useless, and so will buying new devices.

  • @soundsgoodbro said:
    I have quite a bit of frustration with all of this. I recently upgraded to a new iPad Pro 2022 (M2); before that I had an iPad Pro 2020 (A12Z). And unfortunately, I don't see much in the way of performance gains at all. The same large projects in AUM and Drambo show a gain of around 30%; in BM3 there is almost no gain at all. That is, if the load used to be around 80-90%, now it's around 60%. So in essence you could say it was a waste of money. A measly 30%, and this is the third processor generation after my old iPad. Apparently iOS music software just isn't optimized for the new processors. I don't know about Cubasis 3 - I don't use it because of its huge number of limitations (no way to use a MIDI LFO, no effect bus routing and things like that) - but if we go on like this without any active movement from the big developers, all these new processors will be useless, and so will buying new devices.

    I think it is only fairly recently that audio devs have started to utilize multi-core. I think there was some significant SDK (or something?) that came out relatively recently that made this accessible for most audio devs. That, plus the lack of financial motivation, probably doesn't help. But yeah, it is a bummer. I think LumaFusion really benefits though - my 4K renders fly now.
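
    For anyone digging further: the 'significant SDK' here may be Apple's Audio Workgroups API (os/workgroup.h, added around iOS 14), which lets an app's own DSP worker threads join the system audio thread's deadline group so the scheduler treats them as real-time work. A heavily simplified sketch - how you obtain the workgroup from the host (e.g. via an AURenderContextObserver) is host-specific and omitted, and renderNextBlock is a made-up placeholder:

        #include <os/workgroup.h>

        // Worker joins the host's audio workgroup for the lifetime of rendering.
        // `wg` must come from the audio host; obtaining it is not shown here.
        void dspWorker(os_workgroup_t wg, bool (*renderNextBlock)()) {
            os_workgroup_join_token_s token;
            if (os_workgroup_join(wg, &token) != 0)
                return;                      // join can fail, e.g. workgroup cancelled
            while (renderNextBlock()) {}     // DSP happens inside the workgroup
            os_workgroup_leave(wg, &token);
        }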

  • @LFS said:
    Happy and healthy new year wishes to all of you!

    Please let me share the feedback from our engineering below.

    Stay safe
    & warm greetings,
    Lars

    Yes, Cubasis requests the CPU’s performance cores for rendering, and it creates one render thread per core, except for one that it leaves for other tasks (UI, touch detection, iOS background tasks…)
    However, Cubasis can only indicate its CPU core related wishes to iOS / iPadOS. Ultimately, the system decides which thread of which app gets executed on which core. If multiple apps run multiple threads in parallel (for example 2 DAWs rendering audio in the background, another app shows its UI in the foreground, and iOS is checking for emails at the same time), iOS will distribute each app’s workload optimally over the CPU cores. In other words, multiple apps doing multicore rendering in parallel don’t fight over cores, but get assigned only as much CPU time by iOS as there is available. Let iOS worry about slicing the CPU cake ;)

    Grandbear is right about multicore performance being dependent on the signal path. Only things that are independent of each other can be rendered in parallel. This is why you won’t see any difference in DSP performance with just 1 track, even if it has 8 insert effects (which cannot be parallelized). Multicore starts with 2 tracks. Send effects are also rendered in parallel, in contrast to inserts. There are also special cases where tracks cannot be rendered in parallel, for example if one track is the source for another track’s side-chaining effect, or if tracks point to an AU instrument with multiple outputs. Cubasis analyzes your project’s signal path and tries to make the most of it in terms of parallelization. Then it renders away, trusting iOS to assign each render thread to a free performance core, if possible.


    Thanks a lot for a great first-hand summary!
    One main question I've always had about Cubasis multi-core rendering: by 'rendering', do you mean only the process of exporting an audio file, or is multi-core also used in general for outputting any audio, i.e. also while casually working and playing any sound from Cubasis?
    In other words: is multicore used in Cubasis only for speeding up the audio export, or also for playing back live - so basically for most of the work?
    I've heard many different opinions on that, so this would be the place and time to settle it once and for all 🙂
    Another discussion I had regarding multicore usage was that some people are convinced that iOS by default (i.e. in any host) can delegate Audio Unit processing to multiple cores. While this may be true for e.g. UI and all the non-audio stuff, I believe audio is always processed on a single thread for all the AUs within a host. Can you please confirm or refute this assumption?
    Thank you a lot!

  • @soundsgoodbro said:
    I have quite a bit of frustration with all of this. I recently upgraded to a new iPad Pro 2022 (M2); before that I had an iPad Pro 2020 (A12Z). And unfortunately, I don't see much in the way of performance gains at all. The same large projects in AUM and Drambo show a gain of around 30%; in BM3 there is almost no gain at all. That is, if the load used to be around 80-90%, now it's around 60%. So in essence you could say it was a waste of money. A measly 30%, and this is the third processor generation after my old iPad. Apparently iOS music software just isn't optimized for the new processors. I don't know about Cubasis 3 - I don't use it because of its huge number of limitations (no way to use a MIDI LFO, no effect bus routing and things like that) - but if we go on like this without any active movement from the big developers, all these new processors will be useless, and so will buying new devices.

    As far as I'm aware, AUM, Drambo and BM3 are all single-core applications (not multi-core like Cubasis). In which case, you are only seeing the performance increase between the single-core scores from the table in the first post (1872 vs 1141).

    I created this post to highlight how few DAWs actually make use of the multi-core processors within iPads. I previously thought Cubasis was the only one, although I've learnt from the replies in this thread that FL Studio Mobile and n-Track Studio also use multi-core. If you switched to Cubasis, therefore, you'd be able to take advantage of all that extra processing power your M2 iPad has (i.e. the '8416' figure from the multi-core table - although in reality it's more like '6312' (75% of 8416), as you can only use 3 of the 4 performance cores - 1 is reserved for the touch UI etc.)

  • @skrat said:

    @LFS said:
    Happy and healthy new year wishes to all of you!

    Please let me share the feedback from our engineering below.

    Stay safe
    & warm greetings,
    Lars

    Yes, Cubasis requests the CPU’s performance cores for rendering, and it creates one render thread per core, except for one that it leaves for other tasks (UI, touch detection, iOS background tasks…)
    However, Cubasis can only indicate its CPU core related wishes to iOS / iPadOS. Ultimately, the system decides which thread of which app gets executed on which core. If multiple apps run multiple threads in parallel (for example 2 DAWs rendering audio in the background, another app shows its UI in the foreground, and iOS is checking for emails at the same time), iOS will distribute each app’s workload optimally over the CPU cores. In other words, multiple apps doing multicore rendering in parallel don’t fight over cores, but get assigned only as much CPU time by iOS as there is available. Let iOS worry about slicing the CPU cake ;)

    Grandbear is right about multicore performance being dependent on the signal path. Only things that are independent of each other can be rendered in parallel. This is why you won’t see any difference in DSP performance with just 1 track, even if it has 8 insert effects (which cannot be parallelized). Multicore starts with 2 tracks. Send effects are also rendered in parallel, in contrast to inserts. There are also special cases where tracks cannot be rendered in parallel, for example if one track is the source for another track’s side-chaining effect, or if tracks point to an AU instrument with multiple outputs. Cubasis analyzes your project’s signal path and tries to make the most of it in terms of parallelization. Then it renders away, trusting iOS to assign each render thread to a free performance core, if possible.


    Thanks a lot for a great first-hand summary!
    One main question I've always had about Cubasis multi-core rendering: by 'rendering', do you mean only the process of exporting an audio file, or is multi-core also used in general for outputting any audio, i.e. also while casually working and playing any sound from Cubasis?
    In other words: is multicore used in Cubasis only for speeding up the audio export, or also for playing back live - so basically for most of the work?
    I've heard many different opinions on that, so this would be the place and time to settle it once and for all 🙂
    Another discussion I had regarding multicore usage was that some people are convinced that iOS by default (i.e. in any host) can delegate Audio Unit processing to multiple cores. While this may be true for e.g. UI and all the non-audio stuff, I believe audio is always processed on a single thread for all the AUs within a host. Can you please confirm or refute this assumption?
    Thank you a lot!

    Further up the thread I've posted a quote from Lars which clarifies the confusion around the word 'rendering'. It does in fact refer to both playback and mixdown:

    "Multicore-rendering has been implemented in Cubasis 3.1 for Android and in Cubasis 3.2 for iOS.
    This means that on devices with more than two CPU cores*, the rendering of multiple tracks during playback and mixdown is simultaneously performed on multiple cores.
    On devices with 2 CPU cores*, multi-core rendering is disabled in order to keep one of the cores available for all the non-audio stuff (UI, touch, the operating system…)"

    This should also answer the query about audio only being able to use a single core - this is not the case: Cubasis makes use of all but one of your performance cores to handle audio duties. In my case (four performance cores), this means 3 are utilised by Cubasis.

  • @LFS said:
    Happy and healthy new year wishes to all of you!

    Please let me share the feedback from our engineering below.

    Stay safe
    & warm greetings,
    Lars

    Yes, Cubasis requests the CPU’s performance cores for rendering, and it creates one render thread per core, except for one that it leaves for other tasks (UI, touch detection, iOS background tasks…)
    However, Cubasis can only indicate its CPU core related wishes to iOS / iPadOS. Ultimately, the system decides which thread of which app gets executed on which core. If multiple apps run multiple threads in parallel (for example 2 DAWs rendering audio in the background, another app shows its UI in the foreground, and iOS is checking for emails at the same time), iOS will distribute each app’s workload optimally over the CPU cores. In other words, multiple apps doing multicore rendering in parallel don’t fight over cores, but get assigned only as much CPU time by iOS as there is available. Let iOS worry about slicing the CPU cake ;)

    Grandbear is right about multicore performance being dependent on the signal path. Only things that are independent of each other can be rendered in parallel. This is why you won’t see any difference in DSP performance with just 1 track, even if it has 8 insert effects (which cannot be parallelized). Multicore starts with 2 tracks. Send effects are also rendered in parallel, in contrast to inserts. There are also special cases where tracks cannot be rendered in parallel, for example if one track is the source for another track’s side-chaining effect, or if tracks point to an AU instrument with multiple outputs. Cubasis analyzes your project’s signal path and tries to make the most of it in terms of parallelization. Then it renders away, trusting iOS to assign each render thread to a free performance core, if possible.


    Thanks very much Lars, this is really interesting & useful info. I hadn't realised sidechaining potentially had an effect on the ability to use multiple cores - I'll have to watch out for that, as I often sidechain my kick to a lot of other elements! Perhaps I'll switch to a volume-ducking plugin (BLEASS Sidechain etc.) in future to 'unlock' more performance...

  • edited January 2023

    @DanSebPage said:

    @soundsgoodbro said:
    I have quite a bit of frustration with all of this. I recently upgraded to a new iPad Pro 2022 (M2); before that I had an iPad Pro 2020 (A12Z). And unfortunately, I don't see much in the way of performance gains at all. The same large projects in AUM and Drambo show a gain of around 30%; in BM3 there is almost no gain at all. That is, if the load used to be around 80-90%, now it's around 60%. So in essence you could say it was a waste of money. A measly 30%, and this is the third processor generation after my old iPad. Apparently iOS music software just isn't optimized for the new processors. I don't know about Cubasis 3 - I don't use it because of its huge number of limitations (no way to use a MIDI LFO, no effect bus routing and things like that) - but if we go on like this without any active movement from the big developers, all these new processors will be useless, and so will buying new devices.

    As far as I'm aware, AUM, Drambo and BM3 are all single-core applications (not multi-core like Cubasis). In which case, you are only seeing the performance increase between the single-core scores from the table in the first post (1872 vs 1141).

    I created this post to highlight how few DAWs actually make use of the multi-core processors within iPads. I previously thought Cubasis was the only one, although I've learnt from the replies in this thread that FL Studio Mobile and n-Track Studio also use multi-core. If you switched to Cubasis, therefore, you'd be able to take advantage of all that extra processing power your M2 iPad has (i.e. the '8416' figure from the multi-core table - although in reality it's more like '6312' (75% of 8416), as you can only use 3 of the 4 performance cores - 1 is reserved for the touch UI etc.)

    Yes, thanks a lot for this information. Of course I will try Cubasis 3, but I miss features in terms of routing, MIDI learn for AUv3 and other things. If I use AUM via IAA I can't record its audio into Cubasis, though it records perfectly into BM3, for example. Although, as I understand it, the point of using AUM via IAA is lost with multi-core anyway. I apologise for the off-topic. In any case, thanks for the information.

  • edited January 2023

    Multi-core won't make you a better producer. Sure, it'll improve what you can do in one session, but there are other creative ways to solve things.

    A lot of the posts about performance and apps I've seen over the past few years make me wonder why people haven't just switched to a Mac.

    iOS is always going to lack a lot of macOS features. Maybe iPadOS will see some upgrades, but with the way Apple is positioning the iPad as a companion device, even if it did come with a full-featured Logic Pro DAW, we would still get complaints about how it doesn't have some Ableton, FL Studio, or Cubase feature. Or how a feature like IAA or AUv3 has been deprecated. Or how someone's favourite app disappeared with the latest update.

    And that's okay. Why don't we use both devices we have? A hammer can't do the job of a saw, and a saw can't do the job of a hammer. Most people don't need both. Some need one, some the other, some neither. And all options are fine.

    Because my guess is this: if Steinberg goes all-in on chasing the elitist devices, they will lose a huge portion of their audience - the new users of Cubasis, who can't afford to spend $2000 on a "pro level device" to use 1 app. Cubasis does not generate significant revenue from adding performance improvements. It will make revenue by casting a wide net, and by providing features that sustain that wide net.
