
Nanostudio 2


Comments

  • edited April 2019

    @jonmoore
    The one thing that apeMatrix and AUM provide that NS2 doesn't is greater flexibility with FX routing. apeMatrix is particularly strong in this regard, as you can easily set up FX chains that use parallel as well as series I/O.

    I've read this multiple times over the last few days... Maybe I'm missing something, but to me it looks like NS2's MIDI/audio routing is capable of the same complexity as AUM/apeMatrix, and more... It looks like people really don't know about the true routing possibilities in NS2...

    Ok - there are currently bugs in MIDI AUfx routing (MIDI is not passed from one plugin to another) - but this is just a bug which will be fixed in an upcoming update... Regarding pure MIDI/audio routing between channels, there is literally NO LIMIT - NS2 is the most capable host app in terms of MIDI/audio routing available at the moment.

    Of course, maybe I'm missing something in AUM/apeMatrix...

  • @dendy said:
    Of course, maybe I'm missing something in AUM/apeMatrix...

    Possibly. Though I think I may be missing it too. The audio routing in Nanostudio is very good, I think, once you wrap your head around it. I like the Reaper model, and Nanostudio does a good job implementing it. The only limitation I can really think of is that you're limited on send/aux channels. Oh, and automation for effects doesn't seem to work.

    There are also a couple of things that I found confusing. It took me a while to realize that if I wanted to add AU3 plugins, I had to add them in the audio strip, rather than in Obsidian/Slate. It makes sense - but it wasn't intuitive.

    ApeMatrix has some interesting tricks for midi that I don't think any other app can match (BM3 is a possible exception here). LFOs for parameters being a big one. It also has an interface that makes it far easier to visualize what you're doing and switch between apps. And the interface for midi routing in both ApeMatrix and AUM is superior to anything in Nanostudio - though I'm not sure that Nanostudio really needs that complexity.

  • edited April 2019

    LFOs for parameters being a big one.

    Hm, that sounds interesting - what is this? I haven't tested apeM that deeply... I guess you can control any plugin parameter with a dedicated LFO? If that's it, then this is pretty cool!

    Oh and automation for effects doesn't seem to work.

    Ok, ok, of course. I was talking just about pure MIDI/audio routing. Btw, you can automate the built-in FX...

    The only limitation I can really think of is that you are limited on send/aux channels

    Yeah... basically just sends and aux (group) channels. But don't forget there isn't any limit on the number of sends or on the number of group channels (and no limit on their nesting either - you can have groups of groups of groups of groups...). Another thing is that you also have MIDI sends, not just audio sends.

    And every channel can propagate audio not just to its parent channel but also directly to the main device HW output. And you can set MIDI to be sent to the parent channel too, if you want.

    In terms of routing, there is almost nothing you can't do :)
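    To make that concrete for anyone trying to picture it: every channel feeds its parent group, can have any number of extra sends, and can optionally feed the hardware output directly. Here's a rough sketch of that topology - illustrative Python only, the class and channel names are invented and not anything from NS2 itself:

```python
# Toy model of NS2-style routing: every channel feeds its parent group,
# can have any number of extra sends, and can optionally feed the main
# hardware output directly. Class and names are invented for illustration.

class Channel:
    def __init__(self, name, parent=None, direct_to_hw=False):
        self.name = name
        self.parent = parent            # group channel this one feeds
        self.sends = []                 # extra sends - no fixed limit
        self.direct_to_hw = direct_to_hw

    def add_send(self, target):
        self.sends.append(target)

    def destinations(self):
        """Everywhere this channel's audio goes in one hop."""
        dests = list(self.sends)
        if self.parent is not None:
            dests.append(self.parent)
        if self.direct_to_hw:
            dests.append("HW OUT")
        return dests

# Groups of groups of groups...
master = Channel("Master")
delay_group = Channel("Delay Group", parent=master)  # a filter here hits all delays
delay_a = Channel("Delay A", parent=delay_group)
synth = Channel("Synth", parent=master, direct_to_hw=True)
synth.add_send(delay_a)

names = [d.name if isinstance(d, Channel) else d for d in synth.destinations()]
print(names)   # -> ['Delay A', 'Master', 'HW OUT']
```

    The point of the sketch is just that the routing forms an arbitrarily deep tree of groups plus any number of extra send edges, which is why there's practically no route you can't build.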

    But I understand that, especially for simpler projects and live jams, the matrix view of AUM/apeMatrix is a lot more intuitive - no doubt about that. Actually, we discussed some kind of "matrix" overview of all mixer routings during development, but obviously there were a lot of other, more important tasks to do, so the idea faded away.

  • Hm, that sounds interesting - what is this? I haven't tested apeM that deeply... I guess you can control any plugin parameter with a dedicated LFO? If that's it, then this is pretty cool!

    Yeah, the developer is definitely of the school that if you have a control, that control should have an LFO. And to be clear, I don't expect Nanostudio to have those features (AUM and Audiobus don't), just giving you an idea of quite how insanely flexible that thing is. It's basically a plugin host reinvented as a modular synth.

    Yeah... basically just sends and aux (group) channels. But don't forget there isn't any limit on the number of sends or on the number of group channels (and no limit on their nesting either - you can have groups of groups of groups of groups...). Another thing is that you also have MIDI sends, not just audio sends.

    Sure, but isn't there a limit on how many you can have? Or am I misremembering this? If aux and sends are unlimited then yeah - it's basically Reaper minus audio lanes. Which is certainly my preferred paradigm.

  • @cian said:

    Hm, that sounds interesting - what is this? I haven't tested apeM that deeply... I guess you can control any plugin parameter with a dedicated LFO? If that's it, then this is pretty cool!

    Yeah, the developer is definitely of the school that if you have a control, that control should have an LFO. And to be clear, I don't expect Nanostudio to have those features (AUM and Audiobus don't), just giving you an idea of quite how insanely flexible that thing is. It's basically a plugin host reinvented as a modular synth.

    Yeah... basically just sends and aux (group) channels. But don't forget there isn't any limit on the number of sends or on the number of group channels (and no limit on their nesting either - you can have groups of groups of groups of groups...). Another thing is that you also have MIDI sends, not just audio sends.

    Sure, but isn't there a limit on how many you can have? Or am I misremembering this? If aux and sends are unlimited then yeah - it's basically Reaper minus audio lanes. Which is certainly my preferred paradigm.

    It's unlimited. For example, I have 3 different delay sends that are grouped so I can apply a single filter to all of them. And that delay send group is a member of the master send group. Then I can do stuff like automate the volume of the master send and get instant all-send muting for interesting breakdown effects.

    I had 20 sends on this tune before I scaled back and realized I was just getting out of hand :smiley:

  • What is the general consensus of the best all-around beginners orientation of NS2? Not one that gets into the nitty gritty, but a solid one that goes through most of the main stuff? Hopefully something under an hour? ;)

  • @skiphunt said:
    What is the general consensus of the best all-around beginners orientation of NS2? Not one that gets into the nitty gritty, but a solid one that goes through most of the main stuff? Hopefully something under an hour? ;)

    Maybe start here?

  • edited April 2019

    @cian sounds pretty interesting, going to give apeMatrix another round :)

    @skiphunt there's nothing better than the Platinumaudiolab videos. They're thematic, so you can watch just some based on what interests you... They're also relatively short - one hour to watch all the important parts is more than enough...

    https://www.blipinteractive.co.uk/learn/

  • @drez said:

    @skiphunt said:
    What is the general consensus of the best all-around beginners orientation of NS2? Not one that gets into the nitty gritty, but a solid one that goes through most of the main stuff? Hopefully something under an hour? ;)

    Maybe start here?

    Clear, concise, doesn't meander. Doesn't sound like he's wingin' it while trying to figure it out as he goes. Perfect.

    Thanks!

  • Just btw, Steven, who made all those tutorial videos, is also the author of all the IAP packs ;-) That guy knows how to work hard.

  • @dendy said:
    Just btw, Steven, who made all those tutorial videos, is also the author of all the IAP packs ;-) That guy knows how to work hard.

    I wish he'd pump out some more IAPs (he says like it's nuttin :)). I like them, but I also liked buying them as a small payment for the excellent video series.

  • edited April 2019

    @skiphunt said:

    @syrupcore said:

    @skiphunt said:

    @ExAsperis99 said:
    For a couple of days I was proud of myself for not buying NS2 just because the sale price (said to never be coming) was so cheap. I have enough DAWs, I am fluid in AUM, which is being updated. This is just a distraction, I told myself, even if it's cheaper than a Midtown sandwich. (And it is; how the hell did $14 lunches become the standard??)

    Anyway, I caved.
    And NS2 is really nice to work in. Just dabbled, but it's everything they said, though I don't get the love for Obsidian just yet.
    Glad I have it.

    I felt that even if I didn't get on with it as a "DAW", or get off on all the other stuff the ardent fans love... at the minimum it's worth the price for making loops and pieces to export out and use in other stuff.

    And/or as a super low resource multi-timbral synth/sampler with effects. Ignore the sequencer and run it from something external (say from Quantum or a set of ApeMatrix hosted AU MIDI doodads or...) and suddenly it's the most powerful/least expensive Roland JV/XP/XV series type multi-timbral synth ever. I mean, Obsidian is considerably more powerful than the synth engine in the JV2080 or the XV5080 in pretty much every way (except for poly-aftertouch support, I guess). The JV2080 goes for $400+ on ebay to this day. Obsidian still doesn't have that Roland JV/XP/XV sound library yet but that's just a question of time.

    Decided to give that a spin, i.e. sending MIDI to Obsidian from external MIDI doodads. Tried Quantum and had it working with minimal effort. Tried sending MIDI from Fugue Machine, Riffer, and Rozeta via apeMatrix and AUM. Can't figure out the routing. Can you or anyone else offer a little guidance? I checked the NS2 manual, but there's not much detail there at all. And, can't I just host those AU sequencers in NS2 like I can with all the other AU hosts?

    Was able to do it easy enough from AB3 by sending to virtual midi. Still stumped with apeMatrix & AUM though

    Right. The old who-does-and-who-doesn't-publish-a-virtual-MIDI-port thing. Wish all apps did! I use MIDI Flow's custom virtual ports for this. They're dead simple to set up and you can have as many as you want. You then, for example, point AUM MIDI OUT to your new port and NS MIDI IN to the same.

  • @dendy said:

    @skiphunt
    I’m going to put this back into the back burner for now I think. Maybe later it’ll start making sense.

    yeah, if controlling NS from the outside with various MIDI sequencers is what you want to do, then at the moment it's not ideal - you need to wait until NS2 registers itself as a "virtual MIDI in" port. Then it will be easy, the same way as in other hosts... definitely on the todo list...

    If the MIDI-sending app also doesn't publish a virtual port, a workaround is required (or just use AB3 for hosting). Otherwise, I reckon NS2 is quite ideal for controlling from the outside world. I use it with my hardware sequencers and Quantum regularly like this—Q and my MIDI interface both just show up in NS, so no workarounds required.

    I mean, I totally agree that NS should expose a virtual port but using it with apps that do publish a virtual port or creating your own with MIDI Flow (or MIDI Fire or MIDI Bridge or...) is so quick and simple it never feels 'in the way' or kludgy to me.
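    For anyone wondering what a virtual port actually buys you: conceptually it's just a named mailbox that every app on the device can see, so a sender and a receiver that know nothing about each other can meet at the name. A tiny pub/sub sketch of the idea (pure illustration - class and port names are invented, this is not Core MIDI or MIDI Flow's API):

```python
# Conceptual model of a published virtual MIDI port: a named mailbox
# visible to all apps. An app that doesn't publish one can't be addressed
# directly, which is why a bridge app creates the named port for you.

class VirtualPort:
    registry = {}                     # ports visible "system-wide"

    def __init__(self, name):
        self.name = name
        self.subscribers = []         # receiving apps' callbacks
        VirtualPort.registry[name] = self

    def send(self, message):
        for receive in self.subscribers:
            receive(message)

# The bridge app publishes a custom port:
bridge = VirtualPort("Bridge Custom 1")

# Receiver side: point MIDI IN at that port name.
received = []
VirtualPort.registry["Bridge Custom 1"].subscribers.append(received.append)

# Sender side: point MIDI OUT at the same name and play a note.
VirtualPort.registry["Bridge Custom 1"].send(("note_on", 60, 100))
print(received)   # -> [('note_on', 60, 100)]
```

    The real mechanism (Core MIDI endpoints) is more involved, but the "both apps just agree on a port name" part is exactly why the workaround feels so painless.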

  • @syrupcore said:

    @skiphunt said:

    @syrupcore said:

    @skiphunt said:

    @ExAsperis99 said:
    For a couple of days I was proud of myself for not buying NS2 just because the sale price (said to never be coming) was so cheap. I have enough DAWs, I am fluid in AUM, which is being updated. This is just a distraction, I told myself, even if it's cheaper than a Midtown sandwich. (And it is; how the hell did $14 lunches become the standard??)

    Anyway, I caved.
    And NS2 is really nice to work in. Just dabbled, but it's everything they said, though I don't get the love for Obsidian just yet.
    Glad I have it.

    I felt that even if I didn't get on with it as a "DAW", or get off on all the other stuff the ardent fans love... at the minimum it's worth the price for making loops and pieces to export out and use in other stuff.

    And/or as a super low resource multi-timbral synth/sampler with effects. Ignore the sequencer and run it from something external (say from Quantum or a set of ApeMatrix hosted AU MIDI doodads or...) and suddenly it's the most powerful/least expensive Roland JV/XP/XV series type multi-timbral synth ever. I mean, Obsidian is considerably more powerful than the synth engine in the JV2080 or the XV5080 in pretty much every way (except for poly-aftertouch support, I guess). The JV2080 goes for $400+ on ebay to this day. Obsidian still doesn't have that Roland JV/XP/XV sound library yet but that's just a question of time.

    Decided to give that a spin, i.e. sending MIDI to Obsidian from external MIDI doodads. Tried Quantum and had it working with minimal effort. Tried sending MIDI from Fugue Machine, Riffer, and Rozeta via apeMatrix and AUM. Can't figure out the routing. Can you or anyone else offer a little guidance? I checked the NS2 manual, but there's not much detail there at all. And, can't I just host those AU sequencers in NS2 like I can with all the other AU hosts?

    Was able to do it easy enough from AB3 by sending to virtual midi. Still stumped with apeMatrix & AUM though

    Right. The old who-does-and-who-doesn't-publish-a-virtual-MIDI-port thing. Wish all apps did! I use MIDI Flow's custom virtual ports for this. They're dead simple to set up and you can have as many as you want. You then, for example, point AUM MIDI OUT to your new port and NS MIDI IN to the same.

    Yes. You must’ve missed my follow up post. Using midiflow is exactly what I did to get external stuff working.

    Moving on to the sampler after finishing up the vids on send fx, etc.

  • @skiphunt said:
    Ok, I got it set up so that I can send/receive AU midi sequences from external hosts. I had to set up a quick local to local network on the iPad using midiflow. Send out my midi to my “local” midiflow network, and then receive from the “local” network from within NS2.

    Works fine, but kind of a PITA compared to other hosts... not really that bad, though. Still, it's a way I can use all my MIDI sequence toys from external hosts to drive and record NS2's Obsidian synth.

    Or, just use AB3 with its virtual midi bridge instead. :)

    Ha. I guess I should have read the rest of the thread before replying. :) Glad you got it sorted.

  • @syrupcore said:

    @skiphunt said:
    Ok, I got it set up so that I can send/receive AU midi sequences from external hosts. I had to set up a quick local to local network on the iPad using midiflow. Send out my midi to my “local” midiflow network, and then receive from the “local” network from within NS2.

    Works fine, but kind of a PITA compared to other hosts... not really that bad, though. Still, it's a way I can use all my MIDI sequence toys from external hosts to drive and record NS2's Obsidian synth.

    Or, just use AB3 with its virtual midi bridge instead. :)

    Ha. I guess I should have read the rest of the thread before replying. :) Glad you got it sorted.

    HA 2x. I even missed your follow up about your follow up to my follow up. Closing the browser...

  • edited April 2019

    @syrupcore said:

    @skiphunt said:
    Ok, I got it set up so that I can send/receive AU midi sequences from external hosts. I had to set up a quick local to local network on the iPad using midiflow. Send out my midi to my “local” midiflow network, and then receive from the “local” network from within NS2.

    Works fine, but kind of a PITA compared to other hosts... not really that bad, though. Still, it's a way I can use all my MIDI sequence toys from external hosts to drive and record NS2's Obsidian synth.

    Or, just use AB3 with its virtual midi bridge instead. :)

    Ha. I guess I should have read the rest of the thread before replying. :) Glad you got it sorted.

    Well, it was your original post about driving Obsidian with apeMatrix mini doodads that set me off on the wild goose chase. ;)

    Playing with the obsidian sampler now... pretty cool.

  • edited April 2019

    @skiphunt said:
    Well, it was your original post about driving Obsidian with apeMatrix mini doodads that set me off on the wild goose chase. ;)

    Playing with the obsidian sampler now... pretty cool.

    Cool. Sampler OSC + FM OSC is my favorite.

    One fun thing that you prolly already know, but I'll say it aloud anyway because it's part of its strengths as a standalone multi-timbral synth: you can set multiple Obsidian tracks to the same MIDI channel to layer stuff.

    You can also set a key range per track. Obvious use case for this is splitting a keyboard but it can be used for fun stuff too. For instance, adding some oomph to the bass notes in an otherwise thin sound. Or making a 3 octave arpeggio play different sounds. Or, copying the same sound, having entirely different FX chains for different note ranges...

    And beyond... Some rabbit holes:

    1. Assign pan on each 'layer' (track) to a macro knob and assign different CCs to that knob per layer (in the mixer). Then you can automate them with some CC sequencer or MIDI LFOs for swirly madness.
    2. Or just use an internal LFO. If you set them all to the same speed but with slightly different start phases, they won't line up but will sort of chase each other. Adjust the start phase of an LFO by dragging on the screen. For this, you'll want them to be using the 'global' sync option. Note: remember that you can copy and paste Obsidian sub-panels between tracks. So set up the first, paste it into your other layers, and then adjust the phase. I love this feature.
    3. Or use one of the random LFO shapes and set the LFO sync to KEY. Now each time you hit a key each of the layers will be at a different place in the stereo field.
    4. With two LFOs assigned to PAN you can sorta mix these two ideas. They start at a random place in the stereo field and then swirl from there (while a key is held). With the mod matrix, there are even more ways to do this sort of stuff.
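    If the "chasing" in trick 2 is hard to picture, here's the arithmetic: identical-rate LFOs whose start phases are staggered trace the same curve shifted in time. (Illustrative only - the rate and phase values are made up, not Obsidian defaults.)

```python
import math

# Trick 2 in numbers: same-rate LFOs with offset start phases.
# Each layer's pan traces the same sine, shifted, so the layers
# "chase" each other instead of panning in lockstep.

def lfo(t, rate_hz=0.5, phase=0.0):
    """Pan position in -1..1 at time t (seconds); phase in cycles."""
    return math.sin(2 * math.pi * (rate_hz * t + phase))

layers = [0.0, 0.25, 0.5]   # three layers, staggered by a quarter cycle
for t in (0.0, 0.5, 1.0):
    print(t, [round(lfo(t, phase=p), 2) for p in layers])
```

    At any instant the three layers sit at different pan positions, but each one follows exactly the path its neighbour took a quarter cycle earlier - that's the swirl.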

    Another multi-timbral rabbit hole... on the OSC VEL panel, you can set up velocity switching for each of the three oscillators. A non-obvious thing you might try:

    1. Create a track with whatever 'main' sound you want to use. Let's say a piano-ish sound
    2. Create another track with a boom-like sound on OSC2.
    3. Put them both on the same MIDI channel
    4. On the boom patch, turn off OSC1 or set its volume to zero.
    5. On the boom patch, set the velocity up so that the boom OSC is only triggered by velocities over, say, 120.
    6. Now in your remote sequencer, any time you send a velocity over 120 you'll hear both the piano and the boom.
    7. Could also use something like this with the same sound on two tracks but a different mixer FX chain (add tons of delay to notes of a certain velocity)
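    Boiled down, the velocity switch in those steps is just a threshold gate on which layers respond to a note. A minimal sketch of that logic (the names and the 120 threshold mirror the steps above; this is just the idea, not Obsidian's engine):

```python
# Velocity-switch layering: two "tracks" share a MIDI channel, but the
# boom layer only responds to velocities above a threshold.

BOOM_THRESHOLD = 120

def layers_triggered(velocity):
    layers = ["piano"]                 # main sound always plays
    if velocity > BOOM_THRESHOLD:      # boom OSC gated to hard hits only
        layers.append("boom")
    return layers

print(layers_triggered(90))    # -> ['piano']
print(layers_triggered(127))   # -> ['piano', 'boom']
```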

    Velocity switching is somewhat explained here: https://www.blipinteractive.co.uk/nanostudio2/user-manual/Obsidian.html#osc-panel

    Of course, if the sounds aren't too complex, most of the above can be done within a single instance of Obsidian since it has three oscillators.

  • edited April 2019

    @syrupcore said:

    @skiphunt said:
    Well, it was your original post about driving Obsidian with apeMatrix mini doodads that set me off on the wild goose chase. ;)

    Playing with the obsidian sampler now... pretty cool.

    Cool. Sampler OSC + FM OSC is my favorite.

    One fun thing that you prolly already know, but I'll say it aloud anyway because it's part of its strengths as a standalone multi-timbral synth: you can set multiple Obsidian tracks to the same MIDI channel to layer stuff.

    You can also set a key range per track. Obvious use case for this is splitting a keyboard but it can be used for fun stuff too. For instance, adding some oomph to the bass notes in an otherwise thin sound. Or making a 3 octave arpeggio play different sounds. Or, copying the same sound, having entirely different FX chains for different note ranges...

    And beyond... Some rabbit holes:

    1. Assign pan on each 'layer' (track) to a macro knob and assign different CCs to that knob per layer (in the mixer). Then you can automate them with some CC sequencer or MIDI LFOs for swirly madness.
    2. Or just use an internal LFO. If you set them all to the same speed but with slightly different start phases, they won't line up but will sort of chase each other. Adjust the start phase of an LFO by dragging on the screen. For this, you'll want them to be using the 'global' sync option. Note: remember that you can copy and paste Obsidian sub-panels between tracks. So set up the first, paste it into your other layers, and then adjust the phase. I love this feature.
    3. Or use one of the random LFO shapes and set the LFO sync to KEY. Now each time you hit a key each of the layers will be at a different place in the stereo field.
    4. With two LFOs assigned to PAN you can sorta mix these two ideas. They start at a random place in the stereo field and then swirl from there (while a key is held). With the mod matrix, there are even more ways to do this sort of stuff.

    Another multi-timbral rabbit hole... on the OSC VEL panel, you can set up velocity switching for each of the three oscillators. A non-obvious thing you might try:
    1. Create a track with whatever 'main' sound you want to use. Let's say a piano-ish sound
    2. Create another track with a boom-like sound on OSC2.
    3. Put them both on the same MIDI channel
    4. On the boom patch, turn off OSC1 or set its volume to zero.
    5. On the boom patch, set the velocity up so that the boom OSC is only triggered by velocities over, say, 120.
    6. Now in your remote sequencer, any time you send a velocity over 120 you'll hear both the piano and the boom.
    7. Could also use something like this with the same sound on two tracks but a different mixer FX chain (add tons of delay to notes of a certain velocity)

    Velocity switching is somewhat explained here: https://www.blipinteractive.co.uk/nanostudio2/user-manual/Obsidian.html#osc-panel

    Of course, if the sounds aren't too complex, most of the above can be done within a single instance of Obsidian since it has three oscillators.

    Oh yes! I'll dive into these rabbit holes later as I become more familiar with NS2, but I just tried setting up 5 instances and driving them all externally via KB-1 (instant connection). Awesome! Does Obsidian support MPE?

    EDIT: I don't see it listed anywhere on the NS2 site specs.. so I'm guessing it doesn't. No matter, was mostly just curious.

  • @skiphunt No MPE at the moment.

  • Another layering thing to consider—you can set the AMP envelope's level to track velocity in reverse (Env->Scaling). So you could have one sound track velocity in the 'normal' way and another sound do the opposite, in order to sort of cross-fade them with playing dynamics (or seq vel)
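    The cross-fade arithmetic behind this trick, as a quick sketch (the linear scaling is an assumption for clarity - Obsidian's actual Env->Scaling curve may differ):

```python
# Reverse-velocity cross-fade: layer A's amp level follows velocity
# normally, layer B follows it inverted, so playing harder fades from
# B toward A. Linear scaling assumed for illustration.

def crossfade_gains(velocity):
    """Return (normal_layer_gain, reversed_layer_gain) for a MIDI velocity."""
    v = velocity / 127.0
    return round(v, 2), round(1.0 - v, 2)

for vel in (20, 64, 120):
    print(vel, crossfade_gains(vel))
```

    Soft notes are mostly layer B, hard notes mostly layer A, and the two gains always sum to (roughly) one, which is what makes it feel like a cross-fade rather than a volume change.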

  • @syrupcore said:
    Another layering thing to consider—you can set the AMP envelope's level to track velocity in reverse (Env->Scaling). So you could have one sound track velocity in the 'normal' way and another sound do the opposite, in order to sort of cross-fade them with playing dynamics (or seq vel)

    You need a shed in the middle of the forest, with a blackboard, we shall come and bring cushions and tea.

  • @syrupcore said:
    Another layering thing to consider—you can set the AMP envelope's level to track velocity in reverse (Env->Scaling). So you could have one sound track velocity in the 'normal' way and another sound do the opposite in order sort of cross-fade them with playing dynamics (or seq

    It took me half an hour to figure out how to set this up... and yes, that’s a spectacular trick. Thanks!

  • edited April 2019

    That's a pretty cool list of tips and tricks!

    Or use one of the random LFO shapes and set the LFO sync to KEY.

    You can also use another method for randomisation, which saves the LFO for other, more fancy stuff :) You can use the "Rand1" or "Rand2" modulation sources. These mod sources basically generate, for every voice, a random value from 0 to x, where x is the modulation depth:


    This is deeply connected to one of my most favourite tricks. By default, Obsidian's oscillators are phase-synced (every voice starts with the same phase). This is cool in many cases (especially for percussive sounds or leads where you want to start with a significant transient). In combination with unison it produces a "phasing" effect (especially with just a small detune) - which again is great in many cases.

    BUT.

    In case you want to reproduce a more "analog-like" sound, you need so-called "free-running" oscillators - they start with a different (semi-random) phase for every voice.

    To obtain this, just set, for example, RAND1 > Osc1 in the mod matrix (or Rand1 > All Osc - it depends on what character of sound you want to get, how much randomness you want).

    How much does it affect the resulting sound? Significantly. The example below is exactly the same patch (2 oscillators, 4-voice unison with a small detune and stereo spread, playing 4-note chords). In the second round, Rand1 > Osc1 (40%) and Rand2 > Osc2 (40%) modulation is added. The difference is very noticeable - the first round is synced: there is a noticeable phasing effect and the sound is sharp and cutting. The second round is nice and smooth: the voices are more blended and it sounds more "analog-like phat".

    Such a significant change in sound character, obtained just by changing the oscillators from phase-synced to phase-random.
    https://www.dropbox.com/s/2b19j3o1t4pz3on/SyncedAndUnsynced.wav?dl=0

    Note: For the "Sample" oscillator, if you use "single cycle" waveforms, you can get the same effect by modulating "Rand > Oscillator sample start" instead of "Phase".

    Note 2: For long samples (not "single cycle" waveforms), use just a very small amount of modulation (1-5). Try putting the same sample on Osc1 and Osc2, panning one left and the other right, and then applying Rand1 > Osc1 sample start (3%) and Rand2 > Osc2 sample start (2%) - instant huge stereo sound :)
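    For the curious, the synced vs free-running difference is easy to see in a toy model: sum a few detuned sine "voices" with identical start phases and they all hit the same transient at t=0 (the sharp, phasey sound), while random phases spread them out from the first sample. (Illustrative math only - the detune amount and voice count are made up, not Obsidian's parameters.)

```python
import math
import random

# Toy model of phase-synced vs free-running unison voices.

def unison_sample(t, freq, detune_cents, phases):
    """One output sample of a detuned unison stack at time t (seconds)."""
    total = 0.0
    for i, phase in enumerate(phases):
        cents = detune_cents * (i - (len(phases) - 1) / 2)  # spread detune
        voice_freq = freq * 2 ** (cents / 1200)
        total += math.sin(2 * math.pi * voice_freq * t + phase)
    return total / len(phases)

synced = [0.0] * 4                                         # phase-synced voices
free = [random.uniform(0, 2 * math.pi) for _ in range(4)]  # free-running voices

print(unison_sample(0.0, 220.0, 7, synced))   # -> 0.0 (all voices aligned)
print(unison_sample(0.0, 220.0, 7, free))     # already spread across the cycle
```

    With synced phases the voices drift in and out of alignment together (the comb-filter "phasing"); with random phases each note lands somewhere different, which is the blended, analog-ish result in the audio example above.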

  • edited April 2019

    @skiphunt said:

    @ExAsperis99 said:
    For a couple of days I was proud of myself for not buying NS2 just because the sale price (said to never be coming) was so cheap. I have enough DAWs, I am fluid in AUM, which is being updated. This is just a distraction, I told myself, even if it's cheaper than a Midtown sandwich. (And it is; how the hell did $14 lunches become the standard??)

    Anyway, I caved.
    And NS2 is really nice to work in. Just dabbled, but it's everything they said, though I don't get the love for Obsidian just yet.
    Glad I have it.

    I felt that even if I didn't get on with it as a "DAW", or get off on all the other stuff the ardent fans love... at the minimum it's worth the price for making loops and pieces to export out and use in other stuff.

    Also, if you don't already know this (I didn't) you can import from AudioShare into the Slate beat section. You can import a very long sample if you want, then play that into a track by holding down the pad while recording... as kind of a workaround for getting audio tracks in.

    I'd take it one step further: if you have Blocs Wave, use it to time-stretch the recording you have in AudioShare to the target tempo and THEN import into Slate.

    Also, to answer an earlier question you had: it actually is possible to send MIDI from AUM hosting MIDI AUs and record it in NS2. You just need an intermediary MIDI port like mf adapter. Since I already owned it, I just used it. Here are some example pictures:-

    @ExAsperis99 this might interest you too. Since both apps have Ableton Link the midi is recorded in perfect time.

    Now I just need to figure out if there is a way to make each track listen to a particular midi port and channel..... any help with this @dendy ?

    Edit:- nevermind. Found it :D

  • @dendy said:

    @jonmoore
    The one thing that apeMatrix and AUM provide that NS2 doesn't is greater flexibility with FX routing. apeMatrix is particularly strong in this regard, as you can easily set up FX chains that use parallel as well as series I/O.

    I've read this multiple times over the last few days... Maybe I'm missing something, but to me it looks like NS2's MIDI/audio routing is capable of the same complexity as AUM/apeMatrix, and more... It looks like people really don't know about the true routing possibilities in NS2...

    I think @cian covered a lot of the differences in apeMatrix, but another key aspect is that FX react differently when they're set up as channel FX compared to when they're set up on send/return busses. Both of those aspects of routing can be matched in most audio hosts. Where things change in apeMatrix is that you can then set up both parallel and series FX paths for both insert FX and send/return busses.

    Add to that the same level of routing flexibility with regard to MIDI devices, and the result is routing flexibility I've not seen outside of high-end Eventide devices. And that's the way I view apeMatrix: it's the hub that enables me to roll my own sophisticated multi-FX where each discrete element is powered by an app of my choosing.

    Once you get your head around the separate matrices, it's a walk in the park to use.

    apeMatrix is so good I wish I had it available to me in Ableton on the desktop. That's not to say Ableton can't do everything apeMatrix can; it's just more of a headache to achieve it. I use FL Studio as a VST within Ableton to achieve similar flexibility, as the workflow is an improvement on Ableton's, but it's still not as good as apeMatrix.

    But it's worth saying here, Nanostudio 2 is very close to being my favourite DAW on iOS. It has a few critical things that don't work for me as yet which stop me from using it, but I'm very much looking forward to the next update.

  • edited April 2019

    Where things change in apeMatrix is that you can then set up both parallel and series FX paths for both insert FX and send/return busses.
    Add to that the same level of routing flexibility with regards to MIDI devices and that results in routing

    hm, but still - all of this you can also do in NS2... You can send audio/MIDI from one channel to any number of other channels; every target channel can again send to any number of other channels, etc. etc. I really can't imagine any thinkable audio/MIDI route which would not be possible in NS2... Don't get me wrong, I'm not saying "NS2 is better" - I'm just curious and trying to understand if there is some routing (just pure routing) which is not possible in NS... (maybe then I can talk with Matt about improving NS routing capabilities :)))

    Give me some real example and I'll try to reproduce it in NS :)

    Still - the main advantages of apeMatrix to me are:

    • the "matrix" view of all routings - very clean, intuitive, straightforward - that's really powerful, especially for live performance.
    • the LFO feature for every AU parameter is fantastic!
    • routing scenes

    Those 3 things are really, really cool... but just in terms of pure audio/MIDI routing possibilities, I don't see anything which would be possible in apeMatrix and not in NS2.

  • @dendy said:

    Those 3 things are really, really cool... but just in terms of pure audio/MIDI routing possibilities, I don't see anything which would be possible in apeMatrix and not in NS2.

    As I mentioned with regards to Ableton and FL Studio, it's not just a matter of what is and isn't possible; most hosts will get you there in the end. It's a matter of the workflow and the resulting visual representation of the complex routing network.

    I've got a lot on my plate at the moment (and I've promised to write some bits for the Audiobus wiki) but multi-fx routing is part of what I'll be covering on the AB wiki so you'll see some real world examples when that gets posted.

    The main reason I use aM, AUM, and AB is that I like to write my FX chains once and then have them available in all iOS DAWs. And one of my issues with Nanostudio 2 at the moment is that it's too much of an island that's attempting to be all things to all people. One of the main reasons I use iOS devices is the modular communication between apps. Back in the day when @Michael created AB, modularity was a necessity, but it's grown to become a flexible platform USP. Being an island in a networked world is most definitely a limitation (one that Gadget suffers from too). But the new update brings AB compatibility, and AB can host AUM/aM, so that should open things up a fair bit.

    As I've mentioned elsewhere, I use iOS devices differently to most here on the AB forum, in the sense that I have 4-5 permanently hooked up to my desktop DAW as external sound modules. I own all of the iOS DAWs (as I like to keep on top of their developments) and none of them excel in all areas, but Nanostudio 2 is the closest to being the rounded package suited to my workflows when I'm mobile. The reason it's head and shoulders above Cubasis and Auria is that it has a refined UX that's built for iOS. Both Cubasis and Auria suffer from desktop-style UXs badly translated to iOS. But Cubasis is still my DAW of choice when mobile, as it will host most things you throw at it without problems. And while its automation workflow isn't very good for non-native 'plugins', it will automate any parameter the plugin in question exposes.

    ModStep/Studiomux is what I use day in, day out, but only to stream audio to my desktop DAW. When ModStep 2 comes, maybe that will be my iOS DAW of choice - who knows (I am an Ableton user, after all). If it has fully functioning Plugin Delay Compensation from day one, that will be a major plus point. At the moment it appears that only Auria Pro has implemented glitch-free PDC (this came with the last update). I'm really hoping that NS2 isn't far behind AP, as PDC is another of those things that will bring iOS DAWs closer to desktop DAW performance and flexibility.

  • edited April 2019

    @jonmoore
    It's a matter of the workflow and the resulting visual representation of the complex routing network

    Yeah... I understand this factor... I mentioned it above... the matrix view of routing (same in AUM) is very cool.

    Being an island in a networked world is most definitely a limitation

    I'm glad I have some great info about this issue, but first I need to get my hands on the latest beta and check that everything works as planned - then I'll be back with some great news :))

    If it has fully functioning Plugin Delay Compensation from day one, that will be a major plus point. At the moment it appears that only Auria Pro has implemented glitch-free PDC (this came with the last update). Really hoping that NS2 isn't far behind AP as PDC is another of those things that will bring iOS DAWs closer to desktop DAW performance and flexibility.

    NS2 implements sample-accurate plugin delay compensation across all parts of the audio graph. It even displays a warning when an AUfx plugin introduces significant latency into the audio graph (FAC Transient does, for example, which is unsurprising given what that plugin does).
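
For anyone unfamiliar with what PDC actually does: the host sums each parallel path's reported plugin latency and pads the faster paths so everything arrives at the mix point in phase. A minimal sketch, with made-up plugin names and latency figures purely for illustration:

```python
# Toy plugin delay compensation: pad every path up to the slowest one.
def compensation(paths):
    """Given {path_name: [plugin latencies in samples]}, return the
    extra delay (in samples) the host must insert on each path so
    all paths share the same total latency at the mix point."""
    totals = {name: sum(latencies) for name, latencies in paths.items()}
    worst = max(totals.values())
    return {name: worst - total for name, total in totals.items()}

# Hypothetical mixer with three parallel paths:
paths = {
    "dry":       [],        # no inserts, so no inherent latency
    "transient": [512],     # e.g. a lookahead transient shaper
    "eq_chain":  [64, 64],  # two latency-reporting EQs in series
}
print(compensation(paths))  # {'dry': 512, 'transient': 0, 'eq_chain': 384}
```

Without that padding, the dry path would land 512 samples early and comb-filter against the processed paths, which is the "glitch" that proper PDC removes.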

  • I'm in agreement with @jonmoore here.

    One of the benefits of the MIDI routing flexibility of apeMatrix is that it is agnostic about what type of plugin it is. That's why there have never been any compatibility issues with MIDI plugins in apeMatrix. All the DAWs have had problems because they have to categorise everything - which has its advantages in a DAW, but in modular routing platforms the openness means that you can easily send MIDI from one app to another without any worry about what category it is.

    It's also about the number of taps to do things; apeMatrix has its advantages here - it's just much faster to experiment with things. I wish there was a quicker way in Ableton Live as well.

    That's why I like doing my mucking about in the iOS modulars: it's more fun to try things out quickly, and most of the time I have Ableton hooked up to my iPad via AUM/apeMatrix or AB3, so I can send stuff between them easily, using the power of a desktop DAW when I need it. The iPad DAWs are mainly designed to be worked in as standalones, and that idea is less appealing to me.

    If I want to sketch out a full track on iOS, though, then NS2 is a great option. But I like a workflow between platforms, so I'd like to see Ableton Live export in NS2. One nice thing it does have is the WebDAV connection to the desktop, which is a great feature :)

    I'd like to see that in the modulars too.
