
Let’s talk about MIDI sequencer timing


Comments

  • Hi there. I'm a developer from the Audiomodern team. Thanks to everyone here for your involvement and for making this world a better place! :smile:

    @_ki I tested both sessions today and the drift doesn't seem to appear on my end at all. On your screenshots I can see an exact 1/16 offset - perhaps I know why this grid offset may appear in some cases. But that slow drift that gets worse over time, mentioned by @ocelot, is still kind of a mystery to me.
    I'm on an iPad 5th gen 2017, iPadOS 14.7.1. Do you think it could be something related to the device or OS, btw?

    @ocelot said:
    Later this week: I'll use a MIDI monitor and also record the audio output from AUM's metronome and Riffer>Spectrum.

    Yes please; I hope an in-depth inspection of the recorded audio will help me catch it.

    @j_liljedahl said:
    I see, ok! That’s weird. Does Riffer have any hardcoded assumption of 44.1k, or buffer size, @Audiomodern ?

    And I’d like to understand why it happens only in AUM, but only with some sequencers. @Audiomodern Are you using current beatPosition from the host sync struct to determine time, or something else?

    Sample rate is not hardcoded, of course. I'm using the sample position provided by the host to calculate the current step at playback start. Further callbacks rely on the same variable to calculate the position inside the sequence, but we use internal linear incrementation of steps, because we have an adjustable range bar and a backward direction. This approach leads to some issues. For example: in one of the projects provided by @_ki, connecting/disconnecting a headset during playback breaks the audio callbacks for a moment, and after this break AUM and X0X stay in sync, but Riffer starts to play with a grid offset until playback is restarted.
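The failure mode described here (sync once from the host's sample position, then increment the step counter internally) can be sketched in a few lines of Python. This is an illustration with invented numbers, not Riffer's actual code: once the host's timeline jumps while callbacks are suspended, the internally incremented counter keeps a permanent grid offset, while recomputing the step from the host position every callback self-corrects.

```python
# Illustration only (not Riffer's code): why internal step incrementation
# keeps a permanent grid offset after an audio-callback gap.

STEP_SAMPLES = 5512   # ~one 1/16 step at 120 BPM, 44.1 kHz (truncated)
BUFFER = 512          # frames per render callback

def host_step(sample_time):
    # Step index recomputed from the host's sample position every callback:
    # always correct, no matter what happened between callbacks.
    return sample_time // STEP_SAMPLES

t = 0
internal = host_step(t)   # internal counter, synced to the host once at start

for cb in range(300):
    if cb == 150:
        # headset (dis)connect: host time runs on, but no callbacks arrive
        t += 20 * BUFFER
    start, t = t, t + BUFFER
    # internal approach: count only step boundaries inside delivered buffers
    internal += host_step(t) - host_step(start)

offset = host_step(t) - internal   # permanent grid offset after the gap
```

The two step boundaries that fall inside the 20-buffer gap are never counted by the internal approach, so `internal` stays two steps behind until playback is restarted, exactly the symptom described in the post.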

  • @lunelalune said:
    Hi there. I'm developer from Audiomodern team. Thanks everyone here for involvement and making this world a better place! :smile:

    @_ki Tested both sessions today and drift doesn't seem to appear on my end at all. On your screenshots I can see exact 1/16 offset - perhaps I know why this grid-offset may appear in some cases. But that slow drift, that's getting worse in time mentioned by @ocelot is still kind of mystery to me.
    I'm on iPad 5th gen 2017, iPadOS 14.7.1. Do you think it is something related to device or OS btw?

    @ocelot said:
    Later this week: I'll use a MIDI monitor and also record the audio output from AUM's metronome and Riffer>Spectrum.

    Yes please, I hope an in-depth inspection of recorded audio will help me to catch it

    @j_liljedahl said:
    I see, ok! That’s weird. Does Riffer have any hardcoded assumption of 44.1k, or buffer size, @Audiomodern ?

    And I’d like to understand why it happens only in AUM, but only with some sequencers. @Audiomodern Are you using current beatPosition from the host sync struct to determine time, or something else?

    Sample rate is not hardcoded of course. I'm using sample position provided by the host to calculate current step at playback start. Further callbacks rely on the same variable to calculate position inside the sequence, but we are using internal linear incrementation of steps, because we have adjustable range bar and backward direction. This approach leads to some issues. For example: in one of the projects provided by @_ki connecting/disconnecting headset during playback is breaking audio callbacks for a moment, and after this break AUM and X0X are staying in sync, but Riffer is starting to play with some grid-offset until playback is restarted.

    I think someone mentioned that there were issues at 44.1k and not 48k. A developer recently mentioned to me that some iOS devices that can run at 44.1k and 48k are really 48k native and return some surprising values (timestamps? buffer windows? I can’t recall) that have him scratching his head.

  • @espiegel123 said:

    @lunelalune said:
    Hi there. I'm developer from Audiomodern team. Thanks everyone here for involvement and making this world a better place! :smile:

    @_ki Tested both sessions today and drift doesn't seem to appear on my end at all. On your screenshots I can see exact 1/16 offset - perhaps I know why this grid-offset may appear in some cases. But that slow drift, that's getting worse in time mentioned by @ocelot is still kind of mystery to me.
    I'm on iPad 5th gen 2017, iPadOS 14.7.1. Do you think it is something related to device or OS btw?

    @ocelot said:
    Later this week: I'll use a MIDI monitor and also record the audio output from AUM's metronome and Riffer>Spectrum.

    Yes please, I hope an in-depth inspection of recorded audio will help me to catch it

    @j_liljedahl said:
    I see, ok! That’s weird. Does Riffer have any hardcoded assumption of 44.1k, or buffer size, @Audiomodern ?

    And I’d like to understand why it happens only in AUM, but only with some sequencers. @Audiomodern Are you using current beatPosition from the host sync struct to determine time, or something else?

    Sample rate is not hardcoded of course. I'm using sample position provided by the host to calculate current step at playback start. Further callbacks rely on the same variable to calculate position inside the sequence, but we are using internal linear incrementation of steps, because we have adjustable range bar and backward direction. This approach leads to some issues. For example: in one of the projects provided by @_ki connecting/disconnecting headset during playback is breaking audio callbacks for a moment, and after this break AUM and X0X are staying in sync, but Riffer is starting to play with some grid-offset until playback is restarted.

    I think someone mentioned that there were issues at 44.1k and not 48k. A developer recently mentioned to me that some iOS devices that can run at 44.1k and 48k are really 48k native and return some surprising values (timestamps? buffer windows? I can’t recall) that have him scratching his head.

    Yep, I've read this entire thread and tested both 44.1 and 48. @ocelot mentioned that the drifting appears on iPads that run 44.1 natively, so this makes the whole thing more confusing. Another interesting fact: @_ki and I are testing on iPadOS 14.x, but @ocelot is on iPadOS 15.2.

  • @lunelalune said:

    Hi there. I'm developer from Audiomodern team.

    Welcome to the Audiobus forum, and thanks for your work on enhancing the Audiomodern apps!

    .

    @_ki Tested both sessions today and drift doesn't seem to appear on my end at all.

    Hmm, the AUM dev also couldn't reproduce it, but he used a device at 48kHz, and ocelot states it only happens at 44.1kHz on his side.

    .

    On your screenshots I can see exact 1/16 offset - perhaps I know why this grid-offset may appear in some cases. But that slow drift, that's getting worse in time mentioned by @ocelot is still kind of mystery to me.

    It's a small drift on my side also; it took 130 bars until 1/16 was reached. The drift is linear: at bar 65 it's about 1/32.

    .

    My current takeaway is that this drift only happens if I load a session based on ocelot's AUM session (the drift glitch also 'survives' an AUM 'clear' operation, but not a session load). All the AUM sessions that I built from scratch didn't show drift (i.e. after a fresh AUM start, or before loading a session with drift). The 'drift' and 'no drift' behaviors are both 100% reproducible on my iPad (Pro 10.5 / iOS 14.8 / 44.1kHz / all buffer sizes / Link or not), depending on the session I load.

    The last few days I was not after the question of whether Riffer is in sync now (which it seems to be), but under which circumstances this mysterious drift happens.

    When I made that timing script, I used sessions based on ocelot's session, and it seemed that Riffer sent its notes every 499ms instead of 500ms - just an off-by-one. That's why I asked the AUM dev how the tempo is stored and submitted to the AUv3 plugins. One assumption was that it's 119.999 (but shown as 120, and perhaps resulting in the 499ms on Riffer's end). But setting AUM's tempo to 120.0 using keyboard input didn't help in the case of an ocelot-based session.
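A quick back-of-the-envelope check (my arithmetic, not from the thread) of the 119.999 hypothesis: a 0.001 BPM rounding error accumulates far too slowly to explain a 1/16 offset after 130 bars, which fits the observation that explicitly typing 120.0 didn't change anything.

```python
# Sanity check of the 119.999-vs-120 BPM hypothesis: how many bars until
# that rounding error accumulates into a full 1/16-note offset?

def bars_until_offset(true_bpm, wrong_bpm, offset_beats=0.25, beats_per_bar=4):
    """Bars until the accumulated drift equals offset_beats (a 1/16 = 0.25 beat)."""
    drift_per_beat = abs(60.0 / wrong_bpm - 60.0 / true_bpm)   # seconds per beat
    offset_seconds = offset_beats * 60.0 / true_bpm            # 1/16 at 120 = 125 ms
    return offset_seconds / drift_per_beat / beats_per_bar

bars = bars_until_offset(120.0, 119.999)   # on the order of 7500 bars, not 130
```

Running the same formula backwards, reaching 1/16 in 130 bars would need a tempo error of roughly 0.06 BPM, far larger than any display rounding, so the cause is plausibly elsewhere.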

  • Welcome @lunelalune always great to have a knowledgeable developer join the fold.

    _ki
    edited December 2021

    As @cem_olcay sent me a promo code (thanks!), I was able to test SnakeBud's sync.

    It applies a sample offset (seen in MIDI Monitor) and stays in sync with X0X in the most problematic setting (2048 buffer size at 44.1kHz). I checked whether SnakeBud starts to drift in longer sessions; everything was still fine after 100 bars.

    Keep up the good work!

  • @lunelalune said:
    Hi there. I'm developer from Audiomodern team. Thanks everyone here for involvement and making this world a better place! :smile:

    @_ki Tested both sessions today and drift doesn't seem to appear on my end at all. On your screenshots I can see exact 1/16 offset - perhaps I know why this grid-offset may appear in some cases. But that slow drift, that's getting worse in time mentioned by @ocelot is still kind of mystery to me.
    I'm on iPad 5th gen 2017, iPadOS 14.7.1. Do you think it is something related to device or OS btw?

    @ocelot said:
    Later this week: I'll use a MIDI monitor and also record the audio output from AUM's metronome and Riffer>Spectrum.

    Yes please, I hope an in-depth inspection of recorded audio will help me to catch it

    @j_liljedahl said:
    I see, ok! That’s weird. Does Riffer have any hardcoded assumption of 44.1k, or buffer size, @Audiomodern ?

    And I’d like to understand why it happens only in AUM, but only with some sequencers. @Audiomodern Are you using current beatPosition from the host sync struct to determine time, or something else?

    Sample rate is not hardcoded of course. I'm using sample position provided by the host to calculate current step at playback start. Further callbacks rely on the same variable to calculate position inside the sequence, but we are using internal linear incrementation of steps, because we have adjustable range bar and backward direction. This approach leads to some issues. For example: in one of the projects provided by @_ki connecting/disconnecting headset during playback is breaking audio callbacks for a moment, and after this break AUM and X0X are staying in sync, but Riffer is starting to play with some grid-offset until playback is restarted.

    Ok, here we have the problem I think. AUM's sample position timestamp is steadily incrementing, it shows the number of actual samples rendered since time zero. (The sensible thing to do IMHO but apparently other devs disagree!) This is not guaranteed to be in sync with currentBeatPosition. Especially if Link sync is enabled, but also because of internal jitter between sample time and host time on various devices and depending on sample rate (and iOS internal sample rate conversion).

    Sequencers should only use the beat position and tempo for musical synchronisation with the host, nothing else. To calculate the frame offsets for MIDI events, grab the sample rate from your output bus in AllocateRenderResources and cache it for later use.

    http://devnotes.kymatica.com/ios_midi_timestamps.html
    http://devnotes.kymatica.com/ios_audio_sync.html

    (Regarding jitter, from one of the above links):

    On most 32-bit devices there’s jitter between mSampleTime and mHostTime of the timestamp passed to your render callback. Since Ableton Link is based on mHostTime, you’ll see fluctuations in the incrementations of the beat time, and thus also in the calculated precise tempo for each buffer. If Link is also connected to other Link-enabled apps, the fluctuations might be larger and incorporate adjustments made by Link to keep all peers in sync.

    One question I had regarding Link: Does the jittery beat time average out in the long run to stay in sync with a theoretical ideal clock source? Ableton responded that yes, this should be true but with the caveat that the hosttime clock of one of the devices in a session will be used as reference. Clocks can have slightly different frequency from each other, so it’s not true that it will match up with a theoretical “ideal” clock - it will match up with the actual physical clock of one of the devices on the network and the others will make slight adjustments to stay in sync with that.
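The scheduling approach recommended above can be sketched with invented names (this is not AUM's or any plugin's actual code): musical position comes only from the host's beat position and tempo, and the cached output sample rate is used only to convert beat distances into frame offsets inside the current render buffer.

```python
import math

# Sketch (invented names): schedule 1/16-step MIDI events from the host's
# beat position and tempo; the sample rate only converts beats to frames.

def frame_offset(event_beat, buffer_start_beat, tempo_bpm, sample_rate):
    """Frame offset of an event inside the buffer starting at buffer_start_beat."""
    seconds = (event_beat - buffer_start_beat) * 60.0 / tempo_bpm
    return int(seconds * sample_rate)   # truncate to a whole frame

def steps_in_buffer(buffer_start_beat, frames, tempo_bpm, sample_rate):
    """Yield (frame_offset, step_index) for 1/16 steps falling in this buffer."""
    step = 0.25   # one 1/16 note, in beats
    buffer_beats = frames / sample_rate * tempo_bpm / 60.0
    k = math.ceil(buffer_start_beat / step)   # first step at/after buffer start
    while k * step < buffer_start_beat + buffer_beats:
        yield frame_offset(k * step, buffer_start_beat, tempo_bpm, sample_rate), k
        k += 1
```

Because each event's frame offset is recomputed from the beat position every buffer, the plugin stays locked to the host's musical grid even if the sample counter and beat time diverge, which is the whole point of the advice above.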

    _ki
    edited December 2021

    And while doing my sync tests, I noticed that Helium no longer has the 'loop wrap-around' bug. It must have been quietly fixed in one of the recent updates :)

  • edited December 2021

    @j_liljedahl @lunelalune @_ki

    My mistake - Today, with these basic tests on my iPad, the issue may be with AUM's metronome, not Riffer>Plectrum.

    2021/12/20: Three 5 minute videos, each in AUM, apeMatrix, and Audiobus on a 2017 iPad Pro 10.5. All hosts running at 44.1kHz with 512 sample buffer size, at 120 BPM.

    Summary: The timing drift is only apparent in AUM at 44.1kHz, not at 48kHz, on a 2017 iPad Pro 10.5 on iPadOS 15.2 using internal iPad audio.

    Update 2021/12/22: Tested a 2013 iPad Mini 2, 2015 iPhone 6S Plus, the 2017 iPad Pro 10.5 again, and 2018 iPad 6th Generation, at both 44.1kHz and 48kHz. The timing drift is only an issue in AUM at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Generation. This is with the latest version of AUM from the App Store.

    • Tested with Riffer>Plectrum, polyBeat>Ruismaker FM, & Rozeta X0X>Ruismaker. AUM's metronome, polyBeat>Ruismaker FM, & Rozeta X0X>Ruismaker start to drift after >20 bars. Riffer>Plectrum doesn't, and is still on the grid after >100 bars. Again, this is only in AUM at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Generation. At 48kHz, those 2 iPads don't drift, and neither do the iPad Mini 2 and iPhone at either 44.1kHz or 48kHz.

    2021/12/20 Video: AUM at 44.1kHz (AUM Metronome=Left Channel; Riffer>Plectrum=Right Channel)(AUM metronome starts to drift after >20 bars; Riffer>Plectrum doesn't):

    2021/12/20 Video: apeMatrix at 44.1kHz (apeMatrix's Metronome, Riffer>Plectrum, & Ruismaker Noir (using its internal sequencer & preset 'TEST CLICK 2021'))(No drift):

    2021/12/20 Video: Audiobus at 44.1kHz (Riffer>Plectrum & Ruismaker Noir (using its internal sequencer & preset 'TEST CLICK 2021'))(No drift):


    Here's the 120 BPM stereo WAV that was extracted from the first video above (AUM video):

    • Download the WAV here (42MB): https://www.dropbox.com/s/t0lokbtmfekj4m0/20211220 AUM-trimmed_1.wav?dl=0
    • Open your preferred DAW or audio editor, set it to 120 BPM, set its timeline to display Bars/Beats/Ticks, and if required, turn on its metronome and vertical guidelines.
    • Load the WAV. Notice that both the left and right channels start simultaneously.
    • Push play.
    • Compare the left and right channels, both audibly and visually after 50+ bars. Notice that the left channel is no longer on the downbeat, but the right channel is.
      • Left channel (top) waveform is the audio output from AUM's metronome.
      • Right channel (bottom) is the audio output from Riffer driving Plectrum.

    Visual of the above WAV at start (AUM Metronome=Left Channel (top); Riffer>Plectrum=Right Channel (bottom)):

    Visual of the above WAV after 50 bars (AUM Metronome=Left Channel (top); Riffer>Plectrum=Right Channel (bottom)):

    Visual of the above WAV after 100 bars (AUM Metronome=Left Channel (top); Riffer>Plectrum=Right Channel (bottom)):


    Here's another AUM test at 120 BPM, 44.1kHz, 512 sample buffer:

    At start (AUM Metronome=Left Channel (top); Rozeta X0X>Ruismaker=Right Channel (bottom)):

    After 50 bars (AUM Metronome=Left Channel (top); Rozeta X0X>Ruismaker=Right Channel (bottom)):

    After 100 bars (AUM Metronome=Left Channel (top); Rozeta X0X>Ruismaker=Right Channel (bottom)):


    2021/12/20 Basic Test Setup: 2017 iPad Pro 10.5; iPadOS 15.2; Storage Space: 45GB+ remaining; Audio: Internal iPad audio, Sample rate: 44.1kHz; no external peripherals used while testing; Airplane Mode=On; Do Not Disturb=On, etc.; no other apps open except for the AU hosts.

    2021/12/22 Basic Test Setup: 2013 iPad Mini 2 (iOS 12.5.5), 2015 iPhone 6S Plus (iOS 15.2), the 2017 iPad Pro 10.5 again (iPadOS 15.2), and 2018 iPad 6th Generation (iPadOS 15.2); Audio: Internal iPad audio, Sample rate: 44.1kHz and 48kHz; latest version of AUM from the App Store.

  • @ocelot said:
    @j_liljedahl @lunelalune @_ki

    My mistake - Today, with these basic tests on my iPad, the issue may be AUM's metronome, not Riffer>Plectrum.

    I certainly hope not; I spent days making that metronome sample-accurate :) (but of course there could be bugs).

    How would you decide the issue is the metronome if you don't compare it with a third test object? I bet the metronome will be in sync with X0X if you add that to the mix.

    I'm pretty sure the issue is that Riffer uses the sampleTime from the host sync callbacks instead of current beat position.

    The reason it's different between hosts is that different devs had different ideas of what the mSampleTime field should represent.

    The reason it depends on sample rate is because the iOS jitter/drift between sampletime and hosttime depends on internal (and sometimes invisible for the dev) sample rate conversion happening deep down in iOS/CoreAudio.

  • @j_liljedahl Please review the photos and video posted above. The apeMatrix and Audiobus videos are uploading, each with a third test object.

  • @ocelot said:
    @j_liljedahl Please review the photos and video posted above. The apeMatrix and Audiobus videos are uploading, each with a third test object.

    Sorry if I’m not understanding something, but I see no third test object in the AUM video, so what proves the metronome is off instead of Riffer? If you play the metronome + X0X + Riffer in AUM, will two of them stay in sync, and if so, which one is not?

    As I mentioned above, since Riffer is not using beatPosition but sampleTime, it will drift out of sync on some devices.

  • @j_liljedahl said:

    @ocelot said:
    @j_liljedahl Please review the photos and video posted above. The apeMatrix and Audiobus videos are uploading, each with a third test object.

    Sorry if I’m not understanding something, but I see no third test object in the AUM video, so what proves the metronome is off instead of Riffer? If you play the metronome + XOX + Riffer in AUM, will two of them stay in sync and if so, which one is not?

    As I mentioned above, since Riffer is not using beatPosition but sampleTime, it will drift out of sync on some devices.

    In the three photos above, does the Bars/Beats/Ticks view in the audio editor's timeline not suffice as the baseline?

  • @ocelot said:

    @j_liljedahl said:

    @ocelot said:
    @j_liljedahl Please review the photos and video posted above. The apeMatrix and Audiobus videos are uploading, each with a third test object.

    Sorry if I’m not understanding something, but I see no third test object in the AUM video, so what proves the metronome is off instead of Riffer? If you play the metronome + XOX + Riffer in AUM, will two of them stay in sync and if so, which one is not?

    As I mentioned above, since Riffer is not using beatPosition but sampleTime, it will drift out of sync on some devices.

    In the three photos above, does the Bars/Beats/Ticks view in the audio editor's timeline not suffice as the baseline?

    No, because the audio editor timeline shows the sample time, not the host sync beat time. So it only proves again that Riffer uses sample time and everything else (including AUM's metronome and AUM's Ableton Link sync) is using beatPosition.

  • Alrighty, it's just strange to my ears (and eyes) that after 50 or 100 bars, Riffer>Plectrum is still on the downbeat, whereas AUM's metronome isn't.

  • @j_liljedahl said:

    @lunelalune said:
    Hi there. I'm developer from Audiomodern team. Thanks everyone here for involvement and making this world a better place! :smile:

    @_ki Tested both sessions today and drift doesn't seem to appear on my end at all. On your screenshots I can see exact 1/16 offset - perhaps I know why this grid-offset may appear in some cases. But that slow drift, that's getting worse in time mentioned by @ocelot is still kind of mystery to me.
    I'm on iPad 5th gen 2017, iPadOS 14.7.1. Do you think it is something related to device or OS btw?

    @ocelot said:
    Later this week: I'll use a MIDI monitor and also record the audio output from AUM's metronome and Riffer>Spectrum.

    Yes please, I hope an in-depth inspection of recorded audio will help me to catch it

    @j_liljedahl said:
    I see, ok! That’s weird. Does Riffer have any hardcoded assumption of 44.1k, or buffer size, @Audiomodern ?

    And I’d like to understand why it happens only in AUM, but only with some sequencers. @Audiomodern Are you using current beatPosition from the host sync struct to determine time, or something else?

    Sample rate is not hardcoded of course. I'm using sample position provided by the host to calculate current step at playback start. Further callbacks rely on the same variable to calculate position inside the sequence, but we are using internal linear incrementation of steps, because we have adjustable range bar and backward direction. This approach leads to some issues. For example: in one of the projects provided by @_ki connecting/disconnecting headset during playback is breaking audio callbacks for a moment, and after this break AUM and X0X are staying in sync, but Riffer is starting to play with some grid-offset until playback is restarted.

    Ok, here we have the problem I think. AUM's sample position timestamp is steadily incrementing, it shows the number of actual samples rendered since time zero. (The sensible thing to do IMHO but apparently other devs disagree!) This is not guaranteed to be in sync with currentBeatPosition. Especially if Link sync is enabled, but also because of internal jitter between sample time and host time on various devices and depending on sample rate (and iOS internal sample rate conversion).

    Sequencers should only use beat pos and tempo for musical synchronisation with host, nothing else. To calculate the frame offsets for midi events, grab the sample rate from your output bus in AllocateRenderResources and cache it for later use.

    I see. During my tests yesterday I saw some inconsistency between mSampleTime and currentBeatPosition after 100+ bars, so that was my guess too. Since our plugin originally comes from the non-mobile world, where the sample position is more reliable and general (while the beat position is not provided by many hosts), this approach worked very well. If I recall correctly, AUM (or maybe it was Audiobus) didn't provide mSampleTime a few years ago - the value was always 0, though currentBeatPosition was incremented. So I adapted my algorithm to that situation, and it's easy for me to hardcode some condition for AUM to ignore mSampleTime. However, @ocelot's experiments show that recording audio is still sample-based (and I can't imagine how it could be done any other way), so if the kind of jitter you mention appears in the real world (@ocelot's testing environment, for example), recorded audio could fall off-beat.

    Anyway, I do agree that currentBeatPosition seems to be more reliable for a sequencer plugin, especially in a Link situation. So I think we're going to release a new build in a few days and see if this jitter issue vanishes forever and never bothers us again :wink:
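The fix described here can be sketched as a pure function of the host's beat position, with hypothetical names (the adjustable range and backward direction from the post are modeled, but this is not Audiomodern's actual code). Because no internal counter survives between callbacks, a callback gap can't leave a lasting grid offset.

```python
# Hypothetical sketch: derive the current step purely from the host's
# beat position, even with an adjustable range and backward playback.

def current_step(beat_position, range_start, range_len, backward=False,
                 steps_per_beat=4):
    """Map the host beat position onto a step inside the selected range."""
    total = int(beat_position * steps_per_beat)   # 1/16 steps since beat 0
    pos = total % range_len                       # wrap into the range
    if backward:
        pos = range_len - 1 - pos                 # mirror for reverse playback
    return range_start + pos
```

Since the result depends only on what the host reports right now, a broken callback (e.g. the headset case above) merely skips some steps audibly; the next callback lands back on the correct grid position automatically.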

  • @lunelalune said:

    @j_liljedahl said:

    @lunelalune said:
    Hi there. I'm developer from Audiomodern team. Thanks everyone here for involvement and making this world a better place! :smile:

    @_ki Tested both sessions today and drift doesn't seem to appear on my end at all. On your screenshots I can see exact 1/16 offset - perhaps I know why this grid-offset may appear in some cases. But that slow drift, that's getting worse in time mentioned by @ocelot is still kind of mystery to me.
    I'm on iPad 5th gen 2017, iPadOS 14.7.1. Do you think it is something related to device or OS btw?

    @ocelot said:
    Later this week: I'll use a MIDI monitor and also record the audio output from AUM's metronome and Riffer>Spectrum.

    Yes please, I hope an in-depth inspection of recorded audio will help me to catch it

    @j_liljedahl said:
    I see, ok! That’s weird. Does Riffer have any hardcoded assumption of 44.1k, or buffer size, @Audiomodern ?

    And I’d like to understand why it happens only in AUM, but only with some sequencers. @Audiomodern Are you using current beatPosition from the host sync struct to determine time, or something else?

    Sample rate is not hardcoded of course. I'm using sample position provided by the host to calculate current step at playback start. Further callbacks rely on the same variable to calculate position inside the sequence, but we are using internal linear incrementation of steps, because we have adjustable range bar and backward direction. This approach leads to some issues. For example: in one of the projects provided by @_ki connecting/disconnecting headset during playback is breaking audio callbacks for a moment, and after this break AUM and X0X are staying in sync, but Riffer is starting to play with some grid-offset until playback is restarted.

    Ok, here we have the problem I think. AUM's sample position timestamp is steadily incrementing, it shows the number of actual samples rendered since time zero. (The sensible thing to do IMHO but apparently other devs disagree!) This is not guaranteed to be in sync with currentBeatPosition. Especially if Link sync is enabled, but also because of internal jitter between sample time and host time on various devices and depending on sample rate (and iOS internal sample rate conversion).

    Sequencers should only use beat pos and tempo for musical synchronisation with host, nothing else. To calculate the frame offsets for midi events, grab the sample rate from your output bus in AllocateRenderResources and cache it for later use.

    I see. During my tests yesterday I've seen some inconsistence between mSampleTime and currentBeatPosition after 100+ bars, so that was my guess too. Since our plugin is originally from non-mobile world, where sample position is more reliable and general (while beat position is not provided by many host) this approach worked very well. If I recall correctly AUM (or maybe it was Audiobus) didn't provide mSampleTime few years ago - the value was always 0, though currentBeatPosition was incremented. So I advanced my algo to adapt to this situation, so it's easy for me to hardcode some condition for AUM to ignore mSampleTime. However, @ocelot 's experiments show that recording audio is still sample-based (and I can't imagine situation how it can be done any other way), so if some kind of jitter you mention appear in the real world (@ocelot 's testing environment for example) recorded audio could fall off-beat.

    Anyway, I do agree that currentBeatPosition seems to be more reliable for sequencer plugin, especially in Link situation. So I think we're going to release a new build in few days and see if this jitter issue will vanish forever and never bother us again :wink:

    Yes, I believe it's more important to stay in sync with the rest of the world (and Ableton Link) than to get sample accurate recordings.

    AUM's clock is based on Ableton Link. It would be interesting to make an alternative implementation that does not use Link at all, and uses a sample time based clock instead of mHostTime.

    I have a piece of debug code in AUM that shows the drift between sample time and beat time. On my iPhone 7, iPad mini retina 2, and iPad Pro 1st gen, it stays within +/- 1 frame when running at 48kHz, and some larger jitter when running at 44.1kHz. This jitter is self correcting though, averaging around 0, so it does not drift over time. So in this case, even relying on sampleTime for sync, it will not drift.

    However, I just now tried it on my iPad Air 4th gen, which is locked at 48kHz, and there it seems to build up very slowly, so sampleTime will drift away from the beat time. Weirdly, it was on this device I tested your drift test AUM sessions and did not see any drift!

    Some more information regarding jitter, sampleTime vs hostTime, and Link:

    https://github.com/Ableton/LinkKit/issues/20
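The drift instrumentation described above can be approximated like this (structure invented, not AUM's actual debug code): convert the reported beat position back to seconds via the tempo and compare it with the sample counter, in frames. A value hovering around zero is jitter; a value that grows steadily over time is drift.

```python
# Sketch of drift instrumentation: how far (in frames) has the host's
# sample counter diverged from its reported beat position?

def drift_frames(sample_time, beat_position, tempo_bpm, sample_rate):
    beat_seconds = beat_position * 60.0 / tempo_bpm   # musical time in seconds
    return sample_time - beat_seconds * sample_rate   # positive: samples ahead

# One second at 120 BPM is exactly 2 beats: locked clocks show zero drift.
locked = drift_frames(44100, 2.0, 120.0, 44100)
ahead = drift_frames(44110, 2.0, 120.0, 44100)   # sample clock 10 frames ahead
```

Logging this value once per render callback would distinguish the self-correcting jitter seen on the 48kHz devices from the slowly accumulating drift seen on the iPad Air 4th gen.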

  • @j_liljedahl said:

    @lunelalune said:

    @j_liljedahl said:

    @lunelalune said:
    Hi there. I'm developer from Audiomodern team. Thanks everyone here for involvement and making this world a better place! :smile:

    @_ki Tested both sessions today and drift doesn't seem to appear on my end at all. On your screenshots I can see exact 1/16 offset - perhaps I know why this grid-offset may appear in some cases. But that slow drift, that's getting worse in time mentioned by @ocelot is still kind of mystery to me.
    I'm on iPad 5th gen 2017, iPadOS 14.7.1. Do you think it is something related to device or OS btw?

    @ocelot said:
    Later this week: I'll use a MIDI monitor and also record the audio output from AUM's metronome and Riffer>Spectrum.

    Yes please, I hope an in-depth inspection of recorded audio will help me to catch it

    @j_liljedahl said:
    I see, ok! That’s weird. Does Riffer have any hardcoded assumption of 44.1k, or buffer size, @Audiomodern ?

    And I’d like to understand why it happens only in AUM, but only with some sequencers. @Audiomodern Are you using current beatPosition from the host sync struct to determine time, or something else?

    Sample rate is not hardcoded of course. I'm using sample position provided by the host to calculate current step at playback start. Further callbacks rely on the same variable to calculate position inside the sequence, but we are using internal linear incrementation of steps, because we have adjustable range bar and backward direction. This approach leads to some issues. For example: in one of the projects provided by @_ki connecting/disconnecting headset during playback is breaking audio callbacks for a moment, and after this break AUM and X0X are staying in sync, but Riffer is starting to play with some grid-offset until playback is restarted.

    Ok, here we have the problem I think. AUM's sample position timestamp is steadily incrementing, it shows the number of actual samples rendered since time zero. (The sensible thing to do IMHO but apparently other devs disagree!) This is not guaranteed to be in sync with currentBeatPosition. Especially if Link sync is enabled, but also because of internal jitter between sample time and host time on various devices and depending on sample rate (and iOS internal sample rate conversion).

    Sequencers should only use beat pos and tempo for musical synchronisation with host, nothing else. To calculate the frame offsets for midi events, grab the sample rate from your output bus in AllocateRenderResources and cache it for later use.

    I see. During my tests yesterday I saw some inconsistency between mSampleTime and currentBeatPosition after 100+ bars, so that was my guess too. Since our plugin originally comes from the non-mobile world, where sample position is more reliable and universal (while beat position is not provided by many hosts), this approach worked very well. If I recall correctly, AUM (or maybe it was Audiobus) didn't provide mSampleTime a few years ago: the value was always 0, though currentBeatPosition was incremented. So I adapted my algorithm to that situation, which makes it easy for me to hardcode a condition for AUM to ignore mSampleTime. However, @ocelot's experiments show that audio recording is still sample-based (and I can't imagine how it could be done any other way), so if the kind of jitter you mention appears in the real world (in @ocelot's testing environment, for example), recorded audio could fall off-beat.

    Anyway, I do agree that currentBeatPosition seems to be more reliable for a sequencer plugin, especially in a Link situation. So I think we're going to release a new build in a few days and see if this jitter issue vanishes forever and never bothers us again :wink:

    Yes, I believe it's more important to stay in sync with the rest of the world (and Ableton Link) than to get sample accurate recordings.

    AUM's clock is based on Ableton Link. It would be interesting to make an alternative implementation that does not use Link at all, and uses a sample time based clock instead of mHostTime.

    I have a piece of debug code in AUM that shows the drift between sample time and beat time. On my iPhone 7, iPad mini retina 2, and iPad Pro 1st gen, it stays within +/- 1 frame when running at 48kHz, and some larger jitter when running at 44.1kHz. This jitter is self correcting though, averaging around 0, so it does not drift over time. So in this case, even relying on sampleTime for sync, it will not drift.

    However, I just now tried it on my iPad Air 4th gen, which is locked at 48kHz, and there it seems to build up very slowly, so sampleTime will drift away from the beat time. Weirdly, it was on this device I tested your drift test AUM sessions and did not see any drift!

    Some more information regarding jitter, sampleTime vs hostTime, and Link:

    https://github.com/Ableton/LinkKit/issues/20

    I would love the option (in all hosts) whether sample vs host time has precedence.

  • I found a way to get rid of the drift between sampleTime vs hostTime. Check the next AUM beta!

    Please note however that using sampleTime for musical sync will still not work, because it can jump out of sync if there is any discontinuity in the clock stream, for example because of a minor audio drop-out that you might not even hear. Or because Link adjusts beat time to keep peers in sync. Etc..

  • @j_liljedahl said:
    I found a way to get rid of the drift between sampleTime vs hostTime. Check the next AUM beta!

    Please note however that using sampleTime for musical sync will still not work, because it can jump out of sync if there is any discontinuity in the clock stream, for example because of a minor audio drop-out that you might not even hear. Or because Link adjusts beat time to keep peers in sync. Etc..

    Good news!

    Looks like the Steinberg devs think the other way. I've tested an algorithm that uses currentBeatPosition instead of sample time across different hosts, and I'm getting very weird out-of-sync playback in Cubasis, with its more old-school arrangement-DAW architecture. The other problem: I can't find a way to determine which DAW the user is running, as AUv3 is heavily sandboxed. Does anyone among the devs here know a workaround?

    If there is no way to hardcode conditions for a particular DAW, and if this new AUM beta solves the problems on @ocelot's and @_ki's devices, we'd better stay where we are for now (I mean using sample time).

  • @lunelalune said:

    @j_liljedahl said:
    I found a way to get rid of the drift between sampleTime vs hostTime. Check the next AUM beta!

    Please note however that using sampleTime for musical sync will still not work, because it can jump out of sync if there is any discontinuity in the clock stream, for example because of a minor audio drop-out that you might not even hear. Or because Link adjusts beat time to keep peers in sync. Etc..

    Good news!

    I spoke too soon. It fixed the drift on my iPad Air 4th gen, but resulted in beat time ramping up to "infinite" speed for another user :)

    Looks like the Steinberg devs think the other way. I've tested an algorithm that uses currentBeatPosition instead of sample time across different hosts, and I'm getting very weird out-of-sync playback in Cubasis, with its more old-school arrangement-DAW architecture.

    Maybe something else is going on there? I'd be surprised if Cubasis didn't send a proper and valid beatTime! In what way is it invalid? Try logging the currentBeatPosition at each buffer cycle, your calculated buffer duration in beats, and thus your predicted beat time for the first frame of the next buffer. Are there gaps? Jitter? How much, if so?

    The other problem: I can't find a way to determine which DAW the user is running, as AUv3 is heavily sandboxed. Does anyone among the devs here know a workaround?

    No, it's not possible for an AUv3 in general to know who the host is, unless the host has some special agreement/handshake with the plugin (via fullState or sysex or some custom IPC mechanism).

    If there is no way to hardcode conditions for a particular DAW, and if this new AUM beta solves the problems on @ocelot's and @_ki's devices, we'd better stay where we are for now (I mean using sample time).

    No, even if the AUM beta did solve the beatTime vs sampleTime drift (which it hasn't, yet), it will not solve the problem for Riffer because beatTime and sampleTime are two very different things! beatTime tells you the musical time position, so that (combined with current tempo) plugins can synchronize with the host and the other plugins. sampleTime tells you only the number of sample frames rendered since time 0.

    So, even on devices where there is no drift between these two clocks, your plugin would lose sync if, for example:

    • AUM is synced via Link, and Link decides to adjust the beat time slightly to make all peers align.
    • There's an audio dropout: AUM's beatTime detects and compensates for it, while sampleTime does not.
    • Tempo changes might be much harder to track correctly.
  • edited December 2021

    @espiegel123 said:
    I don’t know if this is true, but a developer recently told me that some of the 44.1kHz-capable iPads are really running at 48k … and the OS returns some strange buffer timestamps when they run at 44.1k but not at 48k

    Seems to be the case? -

    Today, I tested a 2013 iPad Mini 2, 2015 iPhone 6S Plus, the 2017 iPad Pro 10.5 again, and 2018 iPad 6th Generation, at both 44.1kHz and 48kHz. The timing drift is only an issue in AUM at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Generation. This is with the latest version of AUM from the App Store.

    Tested with Riffer>Plectrum, polyBeat>Ruismaker FM, & Rozeta X0X>Ruismaker. AUM's metronome, polyBeat>Ruismaker FM, & Rozeta X0X>Ruismaker start to drift after >20 bars. Riffer>Plectrum doesn't, and is still on the grid after >100 bars. Again, this is only in AUM at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Generation. At 48kHz, those 2 iPads don't drift, and neither do the iPad Mini 2 and iPhone at either 44.1kHz or 48kHz.

  • edited December 2021

    @ocelot said:

    @espiegel123 said:
    I don’t know if this is true, but a developer recently told me that some of the 44.1kHz-capable iPads are really running at 48k … and the OS returns some strange buffer timestamps when they run at 44.1k but not at 48k

    Seems to be the case? -

    Today, I tested a 2013 iPad Mini 2, 2015 iPhone 6S Plus, the 2017 iPad Pro 10.5 again, and 2018 iPad 6th Generation, at both 44.1kHz and 48kHz. The timing drift is only an issue in AUM at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Generation. This is with the latest version of AUM from the App Store.

    Tested with Riffer>Plectrum, polyBeat>Ruismaker FM, & Rozeta X0X>Ruismaker. AUM's metronome, polyBeat>Ruismaker FM, & Rozeta X0X>Ruismaker start to drift after >20 bars. Riffer>Plectrum doesn't, and is still on the grid after >100 bars.
    Again, this is only in AUM at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Generation. At 48kHz, those 2 iPads don't drift, and neither do the iPad Mini 2 and iPhone at either 44.1kHz or 48kHz.

    It's Riffer>Plectrum that does drift; the others do not. When you say "still on the grid" you're comparing it to sample time, which is not what defines the musical beat-sync grid.

    In the world of Ableton Link, it's simply not possible to keep beat time and sample time in sync. Even if I adjusted AUM's beat time to keep it in sync with the mathematically ideal beat time as calculated from the perfect sample time, it would break and make Link sync unusable, since AUM would in effect follow the sample time instead of the Link sync time.

    Sample time is only perfect on a single device. If you have two devices, their sample times will drift in relation to each other, unless their actual sample clocks are synchronized (by wordclock cable, etc).

    In theory I could implement a sample-time based clock mode in AUM, which disables following any kind of external sync (Ableton Link, and in the future MIDI clock), and I probably will. But as soon as the user wants to use Link, you're out of luck.

    It feels like I'm repeating myself endlessly here without getting any confirmation that either you or @lunelalune hear me: you can't rely on sample time for musical (beat and tempo) synchronization.

  • @j_liljedahl said:
    because beatTime and sampleTime are two very different things! beatTime tells you the musical time position, so that (combined with current tempo) plugins can synchronize with the host and the other plugins. sampleTime tells you only the number of sample frames rendered since time 0.

    Well, I have to disagree here. Tempo changes are crucial. What happens to the mSampleTime provided by AUM if I suddenly switch the tempo from 120 to 240? Right, it drops by a factor of two. In all hosts. Changing the time signature breaks the linear incrementation of mSampleTime too, in most hosts. So let me sum it up: mSampleTime is not continuous, and it doesn't show the number of frames rendered from 0.

    Let's assume I'm getting ST (sample time) and BP (beat position) from the host, so I can compute CST (calculated sample time). Here's the simple formula I'm using in my code:
    CST = (int64)(60. * sampleRate * BP / bpm + .5);

    So I can easily log all 3 variables in the form BP | ST | CST and get results like: 15.2169 | 335533 | 335533. I'm going even further:
    if (CST != ST) assertion

    Here's the list of tested DAWs

    • Audiobus 3
    • apeMatrix
    • Cubasis 3
    • AUM
    • Nanostudio 2

    I've tested 44.1 and 48 kHz, different buffer sizes, with and without Link, changing tempo on the fly, and even emulated a kind of drop-out by repeatedly connecting/disconnecting a headset. I got no assertions in the first 3 hosts in the list. Their ST and BP seem to be perfectly correlated, meaning I'm not getting even 1 sample of drift. Perhaps they use some reverse formula to calculate BP from ST, or something. AUM fires the assertion almost immediately after starting playback; Nanostudio plays just fine until the tempo is changed (but the drift between the 2 variables only occurs while I'm dragging the tempo).

    So my unpretentious assumption is: perhaps beat position and sample time are not that different, and both show the same thing in general, a position on the timeline.

    Anyway, I'm really thankful for the discussion. I've managed to fix the Cubasis issue that I mentioned in my previous post, so we're moving to using beat position instead of sample time in our AUv3 plugins. I'm just looking for the truth and some advice from the pros, and I hope the results of this debug session might be helpful for other devs in case of sync issues with AUM or Nanostudio.

  • I'm not a pro, but this

    @j_liljedahl said:
    Sample time is only perfect on a single device. If you have two devices, their sample times will drift in relation to each other, unless their actual sample clocks are synchronized (by wordclock cable, etc).

    is based on the laws of physics... and is the reason for dedicated precision wordclock distribution from a central location in studios when more than one DAW is involved.
    Unfortunately, it's in general contradiction to what mobile devices are supposed to be.
    (Mentioned just to prevent further flogging of a dead horse...)

    Imho it's more efficient to optimize the precision of actions on the musical grid, instead of hunting for an absolute timeline over extended periods.

  • @lunelalune said:

    @j_liljedahl said:
    because beatTime and sampleTime are two very different things! beatTime tells you the musical time position, so that (combined with current tempo) plugins can synchronize with the host and the other plugins. sampleTime tells you only the number of sample frames rendered since time 0.

    Well, I have to disagree here. Tempo changes are crucial. What happens to the mSampleTime provided by AUM if I suddenly switch the tempo from 120 to 240? Right, it drops by a factor of two. In all hosts. Changing the time signature breaks the linear incrementation of mSampleTime too, in most hosts. So let me sum it up: mSampleTime is not continuous, and it doesn't show the number of frames rendered from 0.

    Really? I would expect sample time to be fully independent of tempo. What does tempo have to do with sample time? There's also a reason they are in two different host blocks.

    (To clarify, I'm talking about currentSampleTime from AUHostTransportStateBlock and not mSampleTime from AudioTimestamp passed to your render block. I guess you are too even if you wrote mSampleTime.)

    Let's assume I'm getting ST (sample time) and BP (beat position) from the host, so I can compute CST (calculated sample time). Here's the simple formula I'm using in my code:
    CST = (int64)(60. * sampleRate * BP / bpm + .5);

    So I can easily log all 3 variables in the form BP | ST | CST and get results like: 15.2169 | 335533 | 335533. I'm going even further:
    if (CST != ST) assertion

    Here's the list of tested DAWs

    • Audiobus 3
    • apeMatrix
    • Cubasis 3
    • AUM
    • Nanostudio 2

    I've tested 44.1 and 48 kHz, different buffer sizes, with and without Link, changing tempo on the fly, and even emulated a kind of drop-out by repeatedly connecting/disconnecting a headset. I got no assertions in the first 3 hosts in the list. Their ST and BP seem to be perfectly correlated, meaning I'm not getting even 1 sample of drift. Perhaps they use some reverse formula to calculate BP from ST, or something. AUM fires the assertion almost immediately after starting playback; Nanostudio plays just fine until the tempo is changed (but the drift between the 2 variables only occurs while I'm dragging the tempo).

    If so, they probably calculate ST from BP, not the other way around. If you log their ST and buffer sizes, I assume there will be gaps or overlaps between ST + bufferSize and the next ST, especially when synced to other Link peers or when changing tempo.

    So my unpretentious assumption is: perhaps beat position and sample time are not that different, and both show the same thing in general, a position on the timeline.

    I honestly don't believe that was the intention with currentSampleTime. If so, you could just have one or the other and convert between them with the formula you just mentioned. Perhaps it's time to take this to the CoreAudio dev mailing list to see what the old foxes say :)

    BTW, this is a nice example of how Apple's minimal documentation really can mess things up!

    Anyway, I'm really thankful for the discussion. I've managed to fix the Cubasis issue that I mentioned in my previous post, so we're moving to using beat position instead of sample time in our AUv3 plugins. I'm just looking for the truth and some advice from the pros, and I hope the results of this debug session might be helpful for other devs in case of sync issues with AUM or Nanostudio.

    Good to hear!

    Indeed, I'm also interested in the truth here and am happy to be proven wrong. We need to hear from some authorized source what the intention and purpose of currentSampleTime is.

    I'm also thankful; this made me implement a bypass of Link when it's disabled in AUM, so the beat clock will be sample-perfect with guaranteed no jitter or drift.

  • AUM Beta 1.4.0 (262):
    AUM's clock no longer slowly drifts out of time after >20 bars at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Gen.

    Only thang that needs a fixin now is some folks attitude.

  • @ocelot said:
    AUM Beta 1.4.0 (262):
    AUM's clock no longer slowly drifts out of time after >20 bars at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Gen.

    AUM's beat clock didn't drift, only currentSampleTime (which should not be used for musical sync purposes anyway). But it now has zero jitter (if Link is disabled), which is nice :)

    Only thang that needs a fixin now is some folks attitude.

    ?

  • @j_liljedahl said:

    @ocelot said:
    AUM Beta 1.4.0 (262):
    AUM's clock no longer slowly drifts out of time after >20 bars at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Gen.

    AUM's beat clock didn't drift, only currentSampleTime (which should not be used for musical sync purposes anyway). But it now has zero jitter (if Link is disabled), which is nice :)

    It most certainly did. Have a gander at my long post above - or better yet, I'll recap it for you -

    The timing drift was an issue only on certain iPad models using internal iPad audio at 44.1kHz, in AUM only (2017 iPad Pro 10.5, 2018 iPad 6th Generation). Solution for these models is to use 48kHz in AUM, or use Audiobus or apeMatrix at either 44.1kHz or 48kHz.

    AUM, and long audio recordings made in AUM would slowly drift out of sync with other equipment (Octatrack, Akai Force, Model:Cycles, etc.) after >20 bars.

    This is now fixed. Thank you.

    And about that last bit - I'll message you.

  • @ocelot said:

    @j_liljedahl said:

    @ocelot said:
    AUM Beta 1.4.0 (262):
    AUM's clock no longer slowly drifts out of time after >20 bars at 44.1kHz on the 2017 iPad Pro 10.5 and 2018 iPad 6th Gen.

    AUM's beat clock didn't drift, only currentSampleTime (which should not be used for musical sync purposes anyway). But it now has zero jitter (if Link is disabled), which is nice :)

    It most certainly did. Have a gander at my long post above - or better yet, I'll recap it for you -

    The timing drift was an issue only on certain iPad models using internal iPad audio at 44.1kHz, in AUM only (2017 iPad Pro 10.5, 2018 iPad 6th Generation). Solution for these models is to use 48kHz in AUM, or use Audiobus or apeMatrix at either 44.1kHz or 48kHz.

    AUM, and long audio recordings made in AUM would slowly drift out of sync with other equipment (Octatrack, Akai Force, Model:Cycles, etc.) after >20 bars.

    This is now fixed. Thank you.

    But I didn't fix it. :) I only decoupled the beat time clock generation from Ableton Link (which is based on mHostTime) and instead use an exact sample based clock when Link is disabled. As soon as you enable Link, you will see the same drift again.

    This is because the beat time according to Link can't and won't be aligned with the precise sample time. On some devices and sample rates it's only minor jitter, on others it's large jitter, and on some it even drifts. This jitter and/or drift is present deep down in the CoreAudio timestamps, between mHostTime and mSampleTime, and the reason it becomes visible with Link is that Link is based on mHostTime. Looking at these two clock sources individually, both are correct; they just measure time differently when compared to each other, since their timing comes from different sources in the device hardware.

    The same thing would happen if the beat clock is derived by following a MIDI clock or any other external sync source (apart from wordclock, where the actual sample time is synchronized).

    So, a sample accurate beat time can only be achieved when there is no external clock source (including Link).

    I thought I did read all your posts above, but I must have missed the part about external equipment. How were those synchronized with AUM in these cases?
