That's how I see it as well. +1
However, one shouldn't start spending their food money on expensive top-shelf mics, preamps, and software if they can't hear or enjoy the benefits anyway. I guess the true path is somewhere in the middle, unless you're actually making proper money with this stuff.
Yep! There are definitely some activities that benefit from higher sample rates/bit depths, but I wouldn't worry too much when making music on iOS.
@Blue_Mangoo (or anyone else who can answer), I have a few questions regarding this video, as I still don't understand the benefits of 96 kHz:
You demonstrated the creation of frequencies above the ~20 kHz spectrum by inserting a silence at a random place in the waveform. Although this is one possible way to produce such high frequencies, it's not a very realistic use case. You mention that such things are very common in the sound-processing chain, but I wonder: when and how exactly? I can imagine some distortion algorithms making such "brutal" changes to the waveform, but is that also the case for, e.g., compressors, filters, or EQs? I can't really imagine how such high frequencies would be produced there. Of course, synths are different beasts; I completely understand that they can produce much higher frequencies.
Even if there is processing that creates >20 kHz frequencies, shouldn't the developer of the plugin use oversampling for that? Isn't this kind of a standard approach? I know that in many (maybe mostly desktop?) plugins there is usually some "hi-quality" or "eco" mode that turns oversampling on/off, and for offline rendering most plugins use oversampling. Isn't that a way to address this issue, so that you as a user don't need to run at a sample rate higher than 44.1 kHz?
Even if such frequencies are produced and "lost in processing", does it mean the resulting sound is inherently "bad" or "inferior"? From what I know from playing around with exporting at different sample rates (or with such eco/high-quality modes), the output is not really better or richer, just different. This is quite obvious with synths and high-frequency sounds. In some other discussion on this topic dendy provided good sound examples where the difference was really obvious, but the point is that you couldn't really tell which one was better; they were just very different.
Is it possible that this affects lower frequencies in any way? I know many people involved in electronic music production accuse digital sound of being "weak on lows" or "thin"; hardly anyone complains about imprecise high frequencies. Also, many people say that using higher sample rates and bit depths helps a lot, but personally I've found no real scientific basis for such statements, and in general it sounds to me like there should be no relation.
Thanks for sharing your wisdom!
It is unusual for a plugin to make a hard cut like I did at that part of the video, but any plugin that automatically adjusts the gain usually creates harmonics above the frequency of the input. Examples include saturators, compressors, limiters, distortion pedals, amp sims, and transient shapers. Harmonics come in integer multiples of the input frequency (frequency ×2, ×3, ×4, ×5...). So when a plugin creates those harmonics, the lowest of them is at twice the input frequency. Imagine a gentle saturation that creates just one harmonic above the input. If the input frequency is 15 kHz, then that first harmonic is at 30 kHz, which is beyond the limit of a 48 kHz sample rate. A saturator that creates only the first harmonic is a very gentle saturation; most go much farther than that. I demonstrate how this happens in this video:
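The 15 kHz example above is easy to reproduce numerically. Below is a minimal sketch, not any specific plugin: I model the "gentle one-harmonic saturator" as a hypothetical `x + 0.5*x**2` waveshaper (my assumption). The 2nd harmonic lands at 30 kHz, above the 24 kHz Nyquist limit of a 48 kHz session, and folds back to 48 − 30 = 18 kHz:

```python
import numpy as np

fs, f0, n = 48_000, 15_000, 4_800  # exactly 1500 cycles -> no spectral leakage
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

# Hypothetical gentle saturation: adds a 2nd harmonic at 2*f0 = 30 kHz,
# which is above Nyquist (24 kHz) and aliases back down to 18 kHz.
y = x + 0.5 * x**2

spec = np.abs(np.fft.rfft(y)) / (n / 2)  # normalized magnitude spectrum
freqs = np.fft.rfftfreq(n, 1 / fs)

for f in (15_000, 18_000):
    bin_ = int(round(f * n / fs))
    print(f, round(spec[bin_], 3))
```

The aliased component at 18 kHz sits inside the audible band, mixed in with legitimate content, so no post-filter can remove it afterwards.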
In the video I linked above you will see that it can take up to 20,000 times oversampling to completely eliminate aliasing. No plugin does that. They typically do 2x or 4x only. Some go as high as 16 or 32 but that’s unusual. Oversampling helps a little but in many cases it’s not a solution.
Also, oversampling can be done in two ways. If it's done using FIR filters, there is a trade-off between adding delay to your signal chain and losing some of your high-frequency content. If you expect your plugins to oversample from 44.1 to 88.2 kHz and still keep all the sound above 20 kHz, you will need to accept 3 or 4 ms of delay for each plugin in the chain. If you use three plugins in series, you lose the ability to process in real time.
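As a rough sanity check on that 3 to 4 ms figure, here is a back-of-the-envelope sketch using the standard Kaiser-window estimate of FIR filter length. The specific numbers are my assumptions, not from the post: a 100 dB stopband, a passband out to 20 kHz with the stopband starting at the original Nyquist (22.05 kHz), and one anti-image filter on the way up plus one anti-alias filter on the way down:

```python
import math

def kaiser_taps(atten_db, f_pass, f_stop, fs):
    """Kaiser-window estimate of FIR lowpass length: N ~ (A - 8) / (2.285 * dw)."""
    delta_w = 2 * math.pi * (f_stop - f_pass) / fs  # transition width, rad/sample
    return math.ceil((atten_db - 8) / (2.285 * delta_w))

fs_over = 88_200  # both filters run at the oversampled rate
taps = kaiser_taps(100, 20_000, 22_050, fs_over)

# A linear-phase FIR delays the signal by (N - 1) / 2 samples.
one_filter_ms = (taps - 1) / 2 / fs_over * 1000
total_ms = 2 * one_filter_ms  # anti-image (upsample) + anti-alias (downsample)
print(taps, round(one_filter_ms, 2), round(total_ms, 2))
```

With these assumptions you get a few hundred taps per filter and roughly 3 ms of combined latency per plugin, consistent with the estimate above; relaxing the passband edge or the stopband attenuation shrinks the filter and the delay.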
Alternatively, if the plugins use IIR filters for oversampling, delay won't be an issue, but the filters will cause ringing near the Nyquist frequency. The amount of ringing they produce depends on the same trade-offs mentioned above for FIR filters.
To put it simply: upsampling and downsampling are not simple operations. They use some of the most complex filtering schemes found anywhere in audio signal processing, and they distort the signal significantly. If you can avoid them, you should. Regardless of the type of filtering, it would be horrible to have to chain up three or four plugins, all running at 44.1 kHz and all oversampling.
If you run at 96 kHz, however, oversampling filters will still mangle the signal, but all of their mangling happens from 30 kHz to 48 kHz, and we aren't going to hear the distortion if it stays in that frequency range. If you run your DAW at 96 kHz and use IIR filters for oversampling, you would find it almost impossible to even detect the effects of the oversampling filters, because the distortion happens at very high frequencies and the delay is minuscule.
If they truly got lost, then it wouldn't matter. But, as demonstrated in the video I linked above, many of the aliasing artefacts do not get lost in processing; instead they push your noise floor up significantly, and once that happens there is no filter that can clean it up again.
This is a question I am personally very interested in. I also have the feeling that analog synths are warmer and digital ones sound a bit thin. I agree with you that there is no scientific basis for digital synths lacking bass because if that were true then you could just use an EQ to boost the bass. I think what is actually happening here is the digital ones have too much treble.
My guess is that there are several reasons for it:
First, people are hearing some aliasing and it's perceived as harshness in the high frequencies that makes the sound feel "thin". It's described as thin because if it weren't so harsh in the high frequencies we could turn the volume up louder, making it sound bigger; but because it bothers our ears we have to keep the volume low.
Second, digital filters sound different from their analog counterparts as they approach half the sample rate. If you sample at 44.1 kHz, the digital filters are very unlike the analog ones from about 11 kHz on up. In general, the digital filters cut much deeper than the analog ones do in this range. So in theory they shouldn't sound thinner, but I suspect that because the cut is too deep, people feel the sound lacks clarity, so they run digital synths with a 25% higher filter cutoff than they would use on the same synth if it were analog. This issue is easily solved by running at 96 kHz; at 96 kHz the digital filters are similar to the analog ones up to about 24 kHz.
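The "digital filters cut deeper near Nyquist" effect can be seen with a simple one-pole lowpass. A common way to digitize an analog prototype is the bilinear transform (one design choice among several; my assumption here), which squeezes the entire analog frequency axis into the band below Nyquist, so the digital filter attenuates far more than the analog one as the frequency approaches fs/2. A sketch, assuming a 1 kHz cutoff at a 44.1 kHz sample rate:

```python
import cmath
import math

def analog_lp_mag(f, fc):
    """|H| of an analog one-pole lowpass H(s) = wc / (s + wc) at frequency f."""
    return 1 / math.sqrt(1 + (f / fc) ** 2)

def bilinear_lp_mag(f, fc, fs):
    """|H| of the bilinear-transform digital one-pole, prewarped so the cutoff lands at fc."""
    wc = 2 * fs * math.tan(math.pi * fc / fs)   # prewarped analog cutoff
    z = cmath.exp(2j * math.pi * f / fs)        # evaluate on the unit circle
    s = 2 * fs * (1 - 1 / z) / (1 + 1 / z)      # bilinear s-plane mapping
    return abs(wc / (s + wc))

fc, fs = 1_000, 44_100
for f in (5_000, 11_000, 20_000):
    a_db = 20 * math.log10(analog_lp_mag(f, fc))
    d_db = 20 * math.log10(bilinear_lp_mag(f, fc, fs))
    print(f, round(a_db, 1), round(d_db, 1))
```

At 5 kHz the two responses track each other closely, but by 20 kHz the digital filter is cutting more than 10 dB deeper than the analog prototype; at a 96 kHz sample rate that divergence moves far above the audible band.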
The last issue is that analog synths play through amps and speakers. If you have ever tried running a digital synth through a guitar amp simulator, you'll hear that it sounds significantly fatter. This should be obvious in retrospect, but still people use digital synths directly with no amp and cabinet simulator, compare them against analog ones played through an amp and speaker cab, and wonder why the digital synths don't live up to expectations. I agree that many digital synths have problems with aliasing and filter design, but at least give them a level playing field before comparing them.
Personally, I find most software amp sims are also not very good quality, so once again I recommend running them at the highest sample rate available. 192 kHz is advisable for guitar amp sims because they saturate heavily and therefore alias heavily. If you run them at 192 kHz and don't push the gain hard (use a clean amp setting and no distortion pedals), they should be fine.
Finally, guitar amps have very little output above 4 kHz due to the weak high-frequency response of the speakers. If you want to fatten up a synth without losing all the high frequencies, I recommend putting the following effects after the synth:
We are very lucky to have you sharing your knowledge here @Blue_Mangoo

😲 Wow, this is really an amazing amount (and quality) of information! Thanks a lot, you answered everything I was curious about. I think everyone should experiment on their own to see (and hear) the possible difference and find out which plugins do harm and which can deal with it. Now I'll be more paranoid when making music...
A certain amount of paranoia is justified. In the end, your ears must be the guide, but it's challenging because most plugins simultaneously affect the sound in positive and negative ways. I guess most decisions we make in life are like that.
@skrat
I was working on adjusting anti-aliasing filters in an oversampler this morning, so while I had that code handy I decided to make a video demonstrating what kind of distortion you get from upsampling and downsampling audio:
Would love to see the original video at the beginning of this thread. Can it be re-uploaded? Thanks!
If you up-sample and down-sample and do nothing, you are correct. However, some DSP processes have artifacts related to the sampling rate. When these are applied with a very high sampling rate, the artifacts are well above the range of human hearing. When they are applied at lower sample rates they may impact audible frequencies.
To what extent these artifacts are noticeable will depend on the processes, the playback equipment and the listener's ears.
Is Dan Worrall's take in "Samplerates: the higher the better, right?" wrong?
His conclusion:
@Max23
It is worth mentioning that where the difference in audio quality is subtle in the original raw audio, those differences may disappear once lossy audio codecs are applied (as is the case with YouTube, SoundCloud, and most streaming services).
Worrall doesn’t claim that oversampling is never worthwhile. His point is that in some cases it is counter-productive and often provides no significant improvement; he even mentions cases where it does have a benefit. And in some cases where there is a benefit, the benefit isn’t worth the CPU hit... which is different from the implication, made by someone else, that it is never beneficial.
Well as long as it’s 128bit audio everything should be just fine 🤷♂️
Does anyone remember the essential aspects of the video? I recently watched a couple of digital audio introductory videos by Monty Montgomery published by Xiph.org that eventually led me to Dan Worrall’s, and now I’m curious about the argument for higher sample rates.
I record a lot of songs with live guitar, bass, vocals, etc and mainly at 24/96k. At the end of the day when mixing down for uploading mp3/m4a, etc...yeah .... 24/96k might not make sense, but at this point it’s mainly for my listening pleasure and being able to listen to a 96k mixdown thru an interface that supports it.
@Max23 I was joking. 16-bit is more than enough for me. 24-bit might give you slightly more headroom when you’re mixing that new orchestral main theme for Nolan’s next movie. 🤷♂️
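For what it's worth, the headroom half of the joke is easy to quantify: each extra bit of linear PCM adds about 6 dB of dynamic range, so the quick arithmetic is:

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of linear PCM: 20 * log10(2**bits).
    (The commonly quoted SNR figure for a full-scale sine adds about 1.8 dB.)"""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # -> 96.3
print(round(dynamic_range_db(24), 1))  # -> 144.5
```

So 24-bit buys roughly 48 dB of extra headroom, which matters while tracking and mixing far more than on a finished master.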
And soon we’ll see that not even 192 kHz is enough when composing music for bats and cats...
The 16- vs 24-bit jump was huge, but I’ve not seen the need to go above 44.1 with oversampling.
My super ESI U2A did 64x oversampling at the AD stage, and there never was much energy above 17 kHz. Then again, not many microphones capture super-high frequencies with enough energy above the noise floor. Self-resonating filters could go quite high, but the levels also went up, and to avoid distortion the levels had to be turned down, and whoops, the super-high-frequency content also got turned down below the hearing threshold.
Theoretical limits and practical use cases are sometimes quite far apart from each other...
I’m more allergic to noise; even when it's around −90 dB it annoys me. Maybe my ears are too sensitive or something...
Cheers!
Why you don't need 24-bit/192 kHz listening formats:
https://youtube.com/watch?v=cIQ9IXSUzuM&feature=emb_logo