Phase Adjustment/Phase Correction/Phase Shifting AU?
So, here's the dilemma. Maybe I overlooked something, but I am looking for a plugin that can solve phase cancellation not only between two separate recordings but also between the right and left stereo channels. This is especially important when recording acoustic genres, when your overhead miking is f-cking up your direct miking. Or if you're doubling a vocal and it's causing phasing issues and you need a way to get rid of any phase cancelling. Etc. Is there a phase adjustment AU out there I'm not aware of? Thanks in advance.
Comments
I've been thinking about the problem and did a bit of research. I found some recommendations to invert one of the channels to reduce the loss of signal from frequencies being cancelled. I also saw, for the multi-microphone case, the suggestion to nudge a track by 1-2 milliseconds to improve the output.
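For anyone curious what those two moves actually do to the signal, here's a rough NumPy sketch. The sample rate and nudge amount are just stand-in values, not anything app-specific:

```python
import numpy as np

SR = 44100  # assumed sample rate

def invert_polarity(x):
    """Flip the sign of every sample (what the mixer's ø button does)."""
    return -x

def nudge(x, ms):
    """Delay a track by `ms` milliseconds by prepending silence."""
    n = int(round(ms * SR / 1000.0))
    return np.concatenate([np.zeros(n), x])

# Two identical signals summed with opposite polarity cancel completely:
tone = np.sin(2 * np.pi * 440 * np.arange(SR) / SR)
mix = tone + invert_polarity(tone)
print(np.max(np.abs(mix)))  # → 0.0
```

With two real mic signals the cancellation is never total, which is why the invert button sometimes helps and sometimes doesn't.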
Regarding AUs, they tend to expect 1-2 channels of input, so just routing 2 mics into 1 AU might be tricky. Maybe someone has some advice on this.
I think a great DAW would offer the most assistance by inverting one of the channels in the stereo case and offering small nudges for the multi-channel cases. I'm sure this is where Auria Pro shines, since it's so focused on the techniques of classic studio engineering combined with precise MIDI controls.
I'd like to see the question hang about until we learn more about the nature of the problems and potential solutions for iOS music nerds. There were also some comments re: oscillator sync'ing to ensure they didn't work to cancel out frequencies through phase misalignment.
Not aware of any AU, but in Auria Pro bring up the channel strip and look for the phase icon ø; it may solve your issue.
Not phase shift, but...
I wonder if Discord4, with no pitch shift but a buffer setting, would do it?

Yes, that's the proper one in Auria to invert signal polarity.
(a more meaningful expression than 'phase inversion')
It's also present in AUM's channel effects (last item of 'stereo processing').
In a more general context it would be useful to have an adjustable single sample delay to compensate runtime differences between microphones (or arbitrary processes).
There are allpass filters in AUM which alter phase/shift the signal in time.
(dunno details, but maybe worth a try)
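For the curious, a first-order allpass is simple enough to sketch. This is the generic textbook filter in Python, not AUM's actual implementation; the coefficient `a` (between -1 and 1) sets where the phase shift happens, while the gain stays at unity for every frequency:

```python
import numpy as np

def allpass1(x, a):
    """First-order allpass: y[n] = a*x[n] + x[n-1] - a*y[n-1].
    Magnitude response is 1 everywhere; only the phase changes."""
    y = np.zeros(len(x))
    x1 = y1 = 0.0
    for n in range(len(x)):
        y[n] = a * x[n] + x1 - a * y1
        x1, y1 = x[n], y[n]
    return y

# A sine passes through with its level intact but its phase shifted:
sr = 44100
t = np.arange(4096) / sr
x = np.sin(2 * np.pi * 1000 * t)
y = allpass1(x, 0.5)
# skip the initial transient, then compare levels (they match closely)
print(np.std(x[500:]), np.std(y[500:]))
```

That's why they're a candidate here: they move a signal in time (frequency-dependently) without touching its level.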
IMO, moving the mics before recording is the correct way to deal with phase cancellation (i.e. avoid the issue in the first place).
If you are doubling a vocal, phase won't be the issue because the two vocal takes are different waveforms.
The phasing-like effect is due to the two waveforms being similar but not the same, with the difference varying over time even for a singer who can lock on tight.
Phase is primarily an issue when you have an identical signal making its way from source to "tape" via different pathways.
That being said, maybe there is a plugin that can adjust phase and help when there is no other choice. Waves makes a plugin for phase adjustment, but it seems to be desktop only.
Does anyone know about "nudging" audio by small increments (1-2 milliseconds) in the major DAWs? I saw this feature in Logic Pro, where I could select the nudge amount and then just request nudges, but I was using it to get a drum track to fit better into the pocket using multiples of 10 ms per nudge. Seems like a 1 ms nudge setting would help fine-tune track alignment, but I'm just guessing without any real experience with the problem.
Lazy way to deal with it is to pan the two tracks apart from one another, and nudge one a few ms.
Best option is to experiment with different mic placements, and test for phase issues using the 'invert' button on your preamp or mixer. Flip it back when your mic placement is good, then record.
I don't know of any such tools on iOS, though they're free in every serious Mac/PC DAW. Auria Pro?
Auria Pro manual shows:
Record Latency Adjustment – Enter a time value, in samples, to shift recorded audio earlier during recording.
Man, Auria Pro has more solutions than I ever thought.
Thank you all, and happy New Year.
(I do wonder if @FredAntonCorvest could invent such a plugin, but I have a few good solutions now.)
That's a global setting applied to all recorded audio, but only while tracking.
You can't set it for individual tracks later.
Audio travels about 13.5 inches per millisecond in air, so you need finer resolution than milliseconds to set values properly. Usually single samples, as it takes only a couple of samples to have a noticeable effect.
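Turning that rule of thumb into numbers is a one-liner. A sketch assuming a 44.1 kHz sample rate and ~343 m/s speed of sound (both are just stand-in values):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C
SR = 44100              # assumed sample rate

def distance_to_samples(inches):
    """How many samples late a signal arrives for each extra
    inch of distance between microphone and source."""
    seconds = (inches * 0.0254) / SPEED_OF_SOUND
    return seconds * SR

# ~13.5 inches of extra mic distance is about 1 ms, i.e. ~44 samples:
print(round(distance_to_samples(13.5)))  # → 44
# even 1 inch of mismatch is already more than 3 samples:
print(round(distance_to_samples(1.0), 2))  # → 3.27
```

So a millisecond-resolution nudge is far too coarse for mic alignment; sample-resolution is the useful unit.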
You have a built-in solution inside NS2 as well.
I use that effect all the time on the master channel in NS2. How the hell did I miss that? Lol. Thanks Dendy.
Is there an app that offers high-precision shifting between various audio tracks?
I think I'm beginning to understand why most of us just skip recording anything with mics and just use synths and samplers and get to work. It's just easier than solving the mismatch between reality and making perfect recordings. Why bother?
Oo-oh look... new App.
It is not related to microphones only but to any digital processing path, and even analog ones.
The latter is just an order of magnitude lower.
Just because you can adjust signal runtime doesn't mean you have to.
But it's important to know what's going on to adjust the setup IF something comes out strange or unpleasant.
Internal latency adjustment has been a feature of professional audio systems for two decades. It's part of recording 101.
Early Pro Tools documentation had a couple of pages listing the processing delay in samples for several types of plugins and signal routings. If needed, you could enter those values into a delay plugin inserted in the respective channel(s).
Later this was extended to an automatic system, but plugins had to communicate exact values to the host, which isn't always the case.
Iirc it was more or less ignored in the original VST design, on the assumption that the host would get it right by itself. But I've never been a big VST fan and lack hands-on experience from that time. At least (to my ears) most VSTs 15 years ago just sucked.
I was using a Sharc DSP-based system, and my guess is supported by UAD's later success, which is based on the very same DSP architecture.
The Analog Devices Sharc DSP is the only DSP that was explicitly designed with audio processing as its main application, and it is found today in most high-end pedals, lots of Roland/Boss gear, and high-end car and home audio.
The main aspect of running DSP code on a dedicated board is its separation from the OS.
On a typical desktop you have at best 15-20% of the CPU power available for audio, the vast majority goes into OS 'services'.
(you may check Merging Technologies 'Mass Core' strategy, which hides CPU cores entirely from Windoze and uses them for audio processing only)
It's not just about processing power, but the audio streams are completely undisturbed from OS service interrupts.
Anyway, imho this is a strong aspect of iOS audio processing, too.
It has a much more application-focused design, so if you choose an audio app there's a lot of stuff NOT present which is active on a typical desktop system.
It's fairly easy to disconnect an iDevice from the internet temporarily and/or disable certain features like Siri and social media. The lack of global file access is sometimes annoying, but it reduces the CPU resources otherwise needed to a significant degree.
The OS or CPU architecture has absolutely nothing at all to do with audio quality or phase separation; it is comparable to saying my forum posts are better quality on a Sharc-based internet box. Sharc was not designed for audio specifically; it is a single-chip floating-point processor.
For phase separation all you need is phase invert and a sample-based delay with zero feedback, neither of which would take a developer of any note more than ten minutes to code (most audio development tutorials start with a sample-based delay). The fact that an AU that does this doesn't exist is more to do with the tiny iOS userbase than anything else; it will happen when a capable developer needs it for themselves.
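To illustrate how little is involved, here's the whole thing sketched in Python: a fixed sample delay with zero feedback plus a polarity switch, processed block by block the way an audio render callback would. (A real AU would of course be C/C++ against Apple's AudioUnit API; this is just the DSP core.)

```python
import numpy as np

class PhaseTool:
    """Sketch of the tool described above: a fixed sample-accurate
    delay line (no feedback) plus a polarity-invert switch."""

    def __init__(self, delay_samples, invert=False):
        self.buf = np.zeros(delay_samples)  # delay-line state between blocks
        self.invert = invert

    def process(self, block):
        """Return `block` delayed by len(self.buf) samples."""
        block = np.asarray(block, dtype=float)
        joined = np.concatenate([self.buf, block])
        out, self.buf = joined[:len(block)], joined[len(block):]
        return -out if self.invert else out

# Usage: delay one mic channel by 3 samples across successive blocks.
tool = PhaseTool(delay_samples=3)
print(list(tool.process([1, 2, 3, 4, 5])))   # → [0.0, 0.0, 0.0, 1.0, 2.0]
print(list(tool.process([6, 7, 8, 9, 10])))  # → [3.0, 4.0, 5.0, 6.0, 7.0]
```

The only state carried between blocks is the short delay buffer, which is why this is considered a beginner exercise.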
@Turntablist
Sample-based delay is possible in Audulus 3, isn't it? With the upcoming Audulus AUv3 version (not sure if there is already an ETA) this would be a breeze...
Sharc DSPs weren't exclusively designed for audio, but audio processing was a main target application. It's explicitly mentioned in this spec sheet and usually at the top of the list of possible uses on AD's site.
https://www.analog.com/media/en/technical-documentation/data-sheets/ADSP-21261_21262_21266.pdf
It has quite something to do with it, if audio streams lose alignment through processing 'flaws'. That's only a matter of probability, because you can't measure it easily on desktop, but well... I've known Windoze long enough not to trust it blindly.
Whatever it may be: an average desktop system HAS to spend a lot more CPU cycles NOT related to audio than the CPU on an iPad or iPhone.
Which in turn makes iOS a more reliable environment.
I could see Klevgrand making an AU for this use case without a lot of effort. Then we can get back to complaining about their home-made GUI. I wonder if I'll be able to hear when I need to use it. The nerd in me will make it a must buy assuming it's not priced for Pros only.
OT: My first Klevgrand purchase was "Tines" and I hated it because it sounded too fake compared to most Rhodes piano samples. Lately, I have realized that with added effects it's nice and an efficient use of CPU resources in complex AUM setups.
Klevgrand's ReAmp product is excellent. Imagine if they just added channel-invert and sample-based delay features to an excellent AU like that. Kleverb would also make a good candidate for this as an update.
A sample delay to fake a mic closer or farther away from an amp cabinet doesn't match reality: real mics have a response (in frequency and phase) that varies with distance.
It's not just a simple sooner or later arrival of the same sound event.
If you apply such delays to recordings of a single instrument made with 2 microphones, it's kind of opening Pandora's box: you'll hardly ever find an end to deciding which of the countless variations in sound is the proper (or nicest) one.
Such delays are tools in the first place to adjust single sources that are processed in multiple parallel routes.
You may play around with them in any context, of course, but there's little gain and lots of confusion or mess to be expected.
No doubt! FAC could give us a phase alignment tool. We could probably use it to destroy any semblance of practical audio engineering technique and drive @telefunky to just quit arguing for acoustic fidelity. That's my wish for 2019. Great debates down narrow ratholes. I like the rats. Had a couple as a kid.
NOTE: This is intended to be humorous. I have heard @telefunky's recordings and they are beautifully transparent. He knows a lot about sound engineering hardware and best practices.
You misunderstand. It's not an argument for sonic fidelity, but pointing out a major fact that is largely unknown to iOS users.
Digital signals are not only delayed by interface buffers (which is generally compensated), but also by processing itself.
The latter is much smaller, say 2 to 20 samples, equal to up to roughly 0.5 milliseconds.
This is not noticeable as a delay and can generally be ignored IF each signal is independent of the other.
But it becomes a concern once you split things to deliberately layer sounds or parallel compress (which seems to be a fashion thing today).
It also alters the stereo image if the channels drift from each other for a few samples.
A similar thing happens when the digital system clock isn't steady, whether from the hardware itself or obscure interaction from the OS. Small drifts result in a less defined output.
Though subtle, it's a rather common experience; it's the reason one interface sounds 'better' than another even though the specs are identical.
Of course you can ignore all the rubbish, most users don't care anyway.
When Positive Grid accidentally flipped one channel of their Bias amp sim's stereo output, no one complained during the 3 months it took to fix it.