
Random rants about the "Files" app and MIDI timing (no, really!)


Comments

  • edited January 2023

    jason said:
    Last time I tried, for instance, to load a larger MIDI file into Xequence 2, it just failed terribly, because it was actually cutting off the tracks at a certain point in time.

    Can you try disabling "Split by markers" in the MIDI import settings?

    I therefore never touched it again. Is that fixed now? I mean, the same thing did not happen with Helium, a fully featured sequencer.

    From what I understand, Helium is a pianoroll and controller editor only; it doesn't have arrangement features. So while Helium is probably a good choice for "noodling", Xequence is more geared towards arranging full songs.

    I guess, in general, everyone defines "professional" slightly differently, and not all tools will fit everyone's needs.

    That's true! Although I'd say that both Xequence and Helium are objectively "professional" tools. Just with different feature sets 🙂

    A "linear" MIDI sequencer, as you mentioned, MUST at least be able to sequence hours (hundreds of thousands) of MIDI events without any problems and with perfect timing, as this is actually a very simple iteration process and MIDI events are just a few bytes of data each.

    Yes, Xequence should handle thousands of clips with tens of thousands of events each without issues.

    Let me know if disabling the above option helps.

  • jason said:
    So not all the 'dirty bomb' AUv3 sequencers?
    (Atom 1, Atom 2, Helium, ...)

    Well, to start, the Atom doesn't provide editing of anything but notes. It also isn't able to accurately record and play back Animoog Z performances.

    As far as I know, MIDI Tape Recorder (a great recorder/player, but with no editing) is the only highly reliable AUv3 for capturing and playing back MPE.

  • jason said:

    @espiegel123 said:

    jason said:
    So not all the 'dirty bomb' AUv3 sequencers?
    (Atom 1, Atom 2, Helium, ...)

    Well, to start, the Atom doesn't provide editing of anything but notes. It also isn't able to accurately record and play back Animoog Z performances.

    As far as I know, MIDI Tape Recorder (a great recorder/player, but with no editing) is the only highly reliable AUv3 for capturing and playing back MPE.

    I think this is all caused by a very simple fact:
    MPE is nothing other than MULTI-CHANNEL MIDI. No more, no less.

    So you just need a recorder or sequencer that supports all 16 MIDI channels, and it should actually work.
    As I see it, the heavily hyped Atom series is a conceptual disaster, as it is apparently just a pattern-based, single-channel thing. But if so, then its design is completely inconsistent. Hardly the "game changer" this kind of Audio Unit gets hyped as. A personal opinion.

    MPE also needs very accurate timing to play back accurately. Until MTR, nothing on iOS was accurate enough in its timing to capture and play back correctly, including apps that do multi-channel recording.

    Many iOS MIDI sequencers do not have very high-resolution timing.

  • @espiegel123 said:

    jason said:
    So not all the 'dirty bomb' AUv3 sequencers?
    (Atom 1, Atom 2, Helium, ...)

    Well, to start, the Atom doesn't provide editing of anything but notes. It also isn't able to accurately record and play back Animoog Z performances.

    As far as I know, MIDI Tape Recorder (a great recorder/player, but with no editing) is the only highly reliable AUv3 for capturing and playing back MPE.

    That's my experience. Atom 2 can't loop MPE performances accurately. MIDI Tape Recorder is my current go-to for recording MPE.

  • MTR is open source. You can take a look and see what it does.

  • jason said:

    @espiegel123 said:
    MTR is open source. You can take a look and see what it does.

    Not necessary, because I know how it works.

    It doesn’t have the issue you mention of quantizing to buffer cycles.

  • jason said:
    So your perfect tool is already there. That's just fantastic.
    But I am quite sure a host can easily confuse this tool too.

    Unfortunately, MTR has no editing. The issue is that there is no AUv3 MIDI recorder and editor with that precision.

  • edited January 2023

    jason said:
    Example: AUM, for instance, will display zero offsets if you use the built-in keyboard for playing… so it is definitively a HOST issue! In fact, any touchscreen keyboard will most likely have this issue of delivering zero offsets…

    Live events cannot be scheduled in the current buffer because that buffer has already been dispatched (is already on its way to the output / next processor / etc.). So they always have to be scheduled in the next buffer, that's why the timestamp will always be zero (as early as possible, because they're already too late anyway).

  • _ki
    edited January 2023

    jason said:

    AUM's keyboard and all the AUv3 keyboard and pad apps I have tested (*) produce notes with zero sample offsets. In that thread I only tested MIDI sequencers. If you record something like Rozeta X0X, you'll notice the sample offsets.

    The test method was mentioned quite early in the thread, and I also published the AUM session I am using. With this session and a sample buffer setting of 2048, a missing sample offset is also audible. But just looking at the MIDI monitor is usually enough.

    (*) AUM’s keyboard, Xequence key, Xequence Pads, KB-1, Velocity KB, Mononoke Pads, the keyboard and chord pads integrated in LK, Tonality ChordPads, ChordPadX

  • _ki
    edited January 2023

    @SevenSystems said:
    Live events cannot be scheduled in the current buffer because that buffer has already been dispatched (is already on its way to the output / next processor / etc.). So they always have to be scheduled in the next buffer, that's why the timestamp will always be zero (as early as possible, because they're already too late anyway).

    Thanks for the explanation - I already suspected there was a technical reason why none of the keyboard apps issue a sample offset.

  • edited January 2023

    @_ki said:

    @SevenSystems said:
    Live events cannot be scheduled in the current buffer because that buffer has already been dispatched (is already on its way to the output / next processor / etc.). So they always have to be scheduled in the next buffer, that's why the timestamp will always be zero (as early as possible, because they're already too late anyway).

    Thanks for the explanation - I already suspected there was a technical reason why none of the keyboard apps issue a sample offset.

    Yes. It's not even a programming thing, it's more of a generic spacetime thing. 😉 When a live event arrives, what's currently being output is the previous buffer you prepared, and that cannot be modified anymore. So your earliest chance for output is sample 0 of the next buffer.

    jason said:
    Yes, logical. But not entirely correct.

    Because you can actually schedule the touch event into the next buffer with a concrete offset, which finally gives much more precise timing with touch screens. In fact, I think some touchscreen keyboards do this. It works the same way that real-time MIDI events from Core MIDI are (or should be) processed. With short sample buffers the small delay is simply negligible, but the input timing is much better.

    I think I get what you mean -- you essentially want to introduce additional latency in order to eliminate jitter? i.e. make the delay between the input event and the output constant? Interesting. I guess input apps could have both modes as an option. Though I guess in most situations, minimal latency is more important than jitter for live input.
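
    To make the trade-off concrete, here's a quick numeric sketch in Swift (purely illustrative; the buffer numbers and the two "schemes" are my labels, not anything from AUM or Xequence):

        import Foundation

        let sampleRate = 48_000.0
        let bufferSize = 2_048.0
        let bufferDur  = bufferSize / sampleRate   // ≈ 42.7 ms per buffer

        // Five touches arriving at random moments inside buffer N.
        for touch in (0..<5).map({ _ in Double.random(in: 0..<bufferDur) }) {
            // Scheme A (what on-screen keyboards do today): play at sample 0 of
            // buffer N+1. Latency varies from ~0 up to a full buffer, i.e. jitter.
            let latencyA = bufferDur - touch
            // Scheme B (the suggestion above): keep the touch's offset within the
            // buffer and play it one whole buffer later. Constant latency, no jitter.
            let latencyB = bufferDur
            print(String(format: "touch at %4.1f ms -> zero-offset latency: %4.1f ms, fixed-delay latency: %4.1f ms",
                         touch * 1_000, latencyA * 1_000, latencyB * 1_000))
        }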

  • _ki
    edited January 2023

    jason said:
    I mean, do you guys think you are the "inquisition" or the "prefecture of the holy audio congregation" or some such??? This is like a witch hunt carried out by fools.

    Missing sample offsets from a sequencer are quite audible for percussive sounds. At a buffer size of 2048 and 120 bpm the error can be worse than a 1/64th note (*). And then there's also the problem of drift - which started the whole investigation in the "let's talk about sequencer timing" thread.

    (*) At a 48,000 samples-per-second sample rate, a 2048-sample buffer lasts 2048 / 48000 ≈ 42.7 ms - the maximum offset error if the sample offset is always zero. 120 bpm means 2 seconds per 4/4 bar, 62.5 ms for a 1/32 note and 31.25 ms for a 1/64.
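
    The same arithmetic as a few lines of Swift, for anyone who wants to check the numbers:

        let sampleRate = 48_000.0                          // samples per second
        let bufferSize = 2_048.0                           // samples per buffer
        let maxOffsetMs = bufferSize / sampleRate * 1_000  // ≈ 42.7 ms worst case

        let bpm = 120.0
        let barMs = 4 * 60_000.0 / bpm                     // 2000 ms per 4/4 bar
        let ms32 = barMs / 32                              // 62.5 ms per 1/32 note
        let ms64 = barMs / 64                              // 31.25 ms per 1/64 note
        // maxOffsetMs (42.7) exceeds ms64 (31.25): a zero-offset hit can land
        // more than a 1/64 note late.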

  • edited January 2023

    jason said:
    The larger the buffer size is, the worse this careless approach of just setting everything to zero becomes.
    And doing it correctly does not add any "latency" at all; it is a correct representation of the input timing (slightly delayed).

    Because latency is always there with any digital system; it's simply unavoidable. Even with that wrong zero timing there is latency, but on top of it a completely wrong jitter of the input timing. Period.

    A real-time drum machine with touch input will actually quantize all of the real-time input to the buffer size this way! And this is the wrong approach. In a high-precision medical instrument, or a space shuttle en route, such sloppiness could actually get people killed. ^^

    It's an interesting point, maybe @j_liljedahl also wants to drop a short comment.

    Fortunately I'm not designing the realtime control system for the Space Shuttle ;) (I did design realtime control systems for industrial applications, but timing fortunately wasn't that critical in that case 😎)

  • I suspect one of the reasons on-screen keyboards don't send any sample offsets is the rate at which touch interactions are collected and reported by the OS: 60Hz for regular iPads, 120Hz for Pros (unless screen refresh is capped to 60Hz), or 240Hz when using the Apple Pencil...

  • _ki
    edited January 2023

    jason said:
    So what about real-time keyboard input then???
    Is that also all quantized to zero?

    I didn't check that yet, but I will do so with the two MIDI keyboards that I own. I assume these will get the offset from the iOS system reporting the MIDI input.

    Who wants to record the output of a sequencer???
    I am totally lost here. This makes no sense to me at all.

    No one wants to record sequencers, that's correct.

    But we want sequencers that are tight and don't drift. We found out that zero sample offsets are one part of the problem, as this is audible for drums (around a 1/64 note at 48kHz/2048). The sequencers that implement sample offsets rarely show drift in our experience; maybe that's a side effect of the devs having taken a closer look at their code.

    To check Atom/Helium or other sequencers, just input notes by tapping in their UI, set a 2048 buffer size and have a look at the MIDI in the MIDI monitor. BUT to check MidiTapeRecorder, which does not offer editing or note input via its UI, one obviously needs to supply an input with sample offsets to check whether they are recorded and replayed. And since keyboards don't send sample offsets, the MTR test requires recording one of the sample-offset-producing sequencers.

    I understand that a sequencer is required to produce perfect sample offsets.

    But the discussion here was about that recorder, which supposedly had to be sample-perfect to be MPE compatible - which is obviously nonsense, because the on-screen keyboards all do not produce correct offsets.

    In my understanding it isn't the sample offset that's needed; rather, the order of the recorded events needs to be exactly the same when recording and later playing back the MPE for Animoog/Animoog Z. It's a 'dance' of MIDI CCs (74 and 11) and aftertouch, some issued before and some after the note-on (see the sketch at the end of this comment).

    Atom 2 records multichannel MIDI and controller data, so in theory it could play back the MIDI that Animoog sends - but IIRC that doesn't work. If you want, I can check. One could also compare the initial MIDI stream from Animoog with the replayed stream from Atom 2.
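
    For illustration, the per-note "dance" of a single MPE note might look like the following (generic MPE on a hypothetical member channel 2, written out as raw MIDI bytes - not a capture of Animoog Z):

        // Each entry is one raw MIDI message, in the order it is sent.
        let mpeNote: [[UInt8]] = [
            [0xE1, 0x00, 0x40],  // ch 2: pitch-bend reset, sent before the note-on
            [0xB1, 0x4A, 0x40],  // ch 2: CC74 (timbre), initial value
            [0xD1, 0x00],        // ch 2: channel pressure, initial value
            [0x91, 0x3C, 0x64],  // ch 2: note-on C4
            [0xD1, 0x23],        // ch 2: pressure updates while the note sounds…
            [0xE1, 0x12, 0x41],  // ch 2: …and per-note pitch slides
            [0x81, 0x3C, 0x40],  // ch 2: note-off
        ]
        // Replaying these in any other order (e.g. the pitch-bend reset after
        // the note-on) changes how the synth voices the note.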

  • jason said:

    Do you think that the screen refresh rate is equal to the touch input rate?

    There's a connection there at least...
    I don't remember the exact name of the WWDC session where response time was discussed in detail.

    There's a difference when using the Apple Pencil on a 60Hz device compared to a 120Hz device.

  • _ki
    edited January 2023

    I just checked recording a single MPE note of Animoog Z in Atom 2:

    In my test, Animoog Z's on-screen keyboard sends the note-ons and note-offs with a sample offset of zero; some of the aftertouch interactions get a sample offset, but most of them are also zero. The pitch-bend reset before the note-on has a sample offset of zero.

    Midi Monitor of input note:

    The recorded version has an identical order of events - but all the CCs get a sample-offset timestamp before the note-on… maybe Animoog Z interprets that offset when processing its input MIDI buffer and re-orders them.

    Midi Monitor of Atom 2 playing back the recorded events:

    I also can't explain the 'System' messages that were logged.

  • If I recall, one of the issues is that Animoog Z makes use of portamento events that trip up some sequencers. @SevenSystems got it working; he can fill us in on what he did.

  • edited January 2023

    There is no CC data in that log.

    @espiegel123 said:

    If I recall, one of the issues is that Animoog Z makes use of portamento events that trip up some sequencers. @SevenSystems got it working. He can fill us in about the things he did to get it working.

    Yes, Animoog Z was a bitch to get fully working, especially because Xequence converts the recorded MIDI into an internal, completely different representation, and then back to MIDI on output, and all this has to be fully transparent. (i.e. the task is much more difficult than with something like MIDI Tape Recorder).

    Here's a page from another thread where this whole thing got fixed:

    https://forum.audiob.us/discussion/48946/xequence-2-3-public-beta-mpe/p2

  • edited January 2023

    jason said:
    => Any MPE recorder should actually just record what comes in via MIDI,
    but NEVER modify it, as long as it is not potentially invalid data.

    Yes, that should be the case, but it's often difficult in practice. If you have a pianoroll editor with a ton of features, for example, you just cannot work with the raw MIDI data; it would increase the development effort by an order of magnitude. So you have to convert the incoming MIDI data to some internal representation that is easier to work with, and then back to MIDI on playback.

    The difficult part is to keep this whole process transparent so that playback creates exactly the same MIDI stream as what was recorded, even though the data went through two transformations.
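
    As a toy sketch of that invariant in Swift (RawEvent and the identity codec are illustrative placeholders, not Xequence's internals):

        struct RawEvent: Equatable {
            var time: Int        // sample time within the recording
            var bytes: [UInt8]   // the complete raw MIDI message
        }

        /// True if a codec reproduces the recorded stream exactly:
        /// same bytes, same timestamps, same order.
        func roundTripsTransparently<Internal>(
            _ stream: [RawEvent],
            encode: ([RawEvent]) -> Internal,
            decode: (Internal) -> [RawEvent]
        ) -> Bool {
            decode(encode(stream)) == stream
        }

        let recorded = [
            RawEvent(time: 0,   bytes: [0xE0, 0x00, 0x40]),  // pitch-bend reset
            RawEvent(time: 0,   bytes: [0x90, 0x3C, 0x64]),  // note-on C4
            RawEvent(time: 960, bytes: [0x80, 0x3C, 0x40]),  // note-off C4
        ]
        // An identity codec passes trivially; a notes-and-curves representation
        // has to earn this property for every recording it can ever see.
        assert(roundTripsTransparently(recorded, encode: { $0 }, decode: { $0 }))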

  • jason said:
    Yes, I understand this perfectly.
    Therefore, for recording MPE, either an extra recording mode should be chosen or a completely separate MPE recorder should be used. I think making a perfect MPE editor is a challenge that nobody can master with 100% success.

    In the end, a special MPE editing mode will be required; otherwise it will disturb all traditional sequencer editing paradigms to the point of incompatibility.

    That's what I tried to do in Xequence, and I think with reasonable success. But I'm biased 😄

  • @SevenSystems said:

    jason said:
    Yes, I understand this perfectly.
    Therefore, for recording MPE, either an extra recording mode should be chosen or a completely separate MPE recorder should be used. I think making a perfect MPE editor is a challenge that nobody can master with 100% success.

    In the end, a special MPE editing mode will be required; otherwise it will disturb all traditional sequencer editing paradigms to the point of incompatibility.

    That's what I tried to do in Xequence, and I think with reasonable success. But I'm biased 😄

    You did a great job!

  • edited January 2023

    jason said:

    @SevenSystems said:

    jason said:
    Yes, I understand this perfectly.
    Therefore, for recording MPE, either an extra recording mode should be chosen or a completely separate MPE recorder should be used. I think making a perfect MPE editor is a challenge that nobody can master with 100% success.

    In the end, a special MPE editing mode will be required; otherwise it will disturb all traditional sequencer editing paradigms to the point of incompatibility.

    That's what I tried to do in Xequence, and I think with reasonable success. But I'm biased 😄

    And generally I think MPE is more of a real-time thing, well suited for recording. It is just not meant for editing in the traditional way, because the MPE implementation details are subject to highly specific manufacturer implementations. I mean, have you ever dealt with the NRPN, data entry, RPN and SysEx messages of certain devices? Even with the more advanced MSB/LSB implementations?

    Yes that's true... I've looked at all of that and actually added comprehensive (N)RPN support to Xequence mostly so that MPE works (well, some synths, especially older hardware, still use (N)RPNs instead of CCs for a lot of stuff so it's a bonus anyway!).

    I agree that the implementations differ a lot, partly because the MPE specification is either inaccurate or a bit loose ("you can do it either way")... that's why Xequence has a bazillion settings for MPE recording, editing, and playback (settings! The stuff you hate! 😜)


  • @SevenSystems said:

    jason said:
    The larger the buffer size is, the worse this careless approach of just setting everything to zero becomes.
    And doing it correctly does not add any "latency" at all; it is a correct representation of the input timing (slightly delayed).

    Because latency is always there with any digital system; it's simply unavoidable. Even with that wrong zero timing there is latency, but on top of it a completely wrong jitter of the input timing. Period.

    A real-time drum machine with touch input will actually quantize all of the real-time input to the buffer size this way! And this is the wrong approach. In a high-precision medical instrument, or a space shuttle en route, such sloppiness could actually get people killed. ^^

    It's an interesting point, maybe @j_liljedahl also wants to drop a short comment.

    I can confirm that the on-screen keyboard in AUM does not provide any sample offsets, because it's really mostly meant for testing out stuff. It's also non-trivial to fix: the MIDI timestamps are sample offsets within the current audio buffer (internally in AUM, and in AUv3 in general), and that buffer is accessed on the audio thread, while the on-screen keyboard is handled on the separate UI thread. So one would need to put the note events in a lock-free queue, figure out the minimum delay needed to dispatch them in the next (but not the current) audio buffer, and convert their UI event timestamps into sample offsets within that audio buffer.
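
    A minimal sketch of that approach in Swift (all names are made up, and the NSLock stands in for a real lock-free queue purely to keep the sketch short - this is not AUM's actual code):

        import Foundation

        struct QueuedNote {
            let bytes: [UInt8]       // complete MIDI message from the on-screen key
            let hostSeconds: Double  // touch timestamp on the host clock
        }

        final class NoteQueue {
            private var pending: [QueuedNote] = []
            private let lock = NSLock()  // placeholder only: a lock is NOT realtime-safe
            func push(_ n: QueuedNote) { lock.lock(); pending.append(n); lock.unlock() }
            func drain() -> [QueuedNote] {
                lock.lock(); defer { lock.unlock() }
                let out = pending
                pending.removeAll()
                return out
            }
        }

        // Audio thread, once per render cycle. `bufferStartSeconds` is the host
        // time of this buffer's first sample; `emit` delivers one MIDI event at
        // a sample offset within the buffer.
        func render(queue: NoteQueue,
                    bufferStartSeconds: Double,
                    sampleRate: Double,
                    frameCount: Int,
                    emit: (_ sampleOffset: Int, _ bytes: [UInt8]) -> Void) {
            let bufferDur = Double(frameCount) / sampleRate
            for note in queue.drain() {
                // Delay every event by one full buffer so it keeps its sub-buffer
                // position but lands in a buffer that hasn't been dispatched yet.
                let target = note.hostSeconds + bufferDur
                let raw = Int((target - bufferStartSeconds) * sampleRate)
                let offset = min(max(raw, 0), frameCount - 1)
                emit(offset, note.bytes)
            }
        }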

  • @Samu said:
    I suspect one of the reasons on-screen keyboards don't send any sample offsets is the rate at which touch interactions are collected and reported by the OS: 60Hz for regular iPads, 120Hz for Pros (unless screen refresh is capped to 60Hz), or 240Hz when using the Apple Pencil...

    Oh yes, this is also an issue of course.
    There's actually no point in trying to use the UI timestamps. At 60 Hz, that would give the same jitter as using no timestamps with a buffer size of 735 samples at a 44.1kHz sample rate (44,100 / 60 = 735).
