Right - it's the most beneficial approach - a fair balance between proprietary interests, user access and the like - but once in a vast while someone should at least grump about interoperability and protocols: the time and creativity wasted, and every crackle in my ears. There are counter-arguments, but here we are!
then your school finished too early I suppose...
The 'twice the highest frequency' rule is only one side of the coin - the other is labeled 'aliasing'.
With 44.1 and 48 kHz sampling rates, processing creates artifacts in the ultrasound range that 'flip back' (alias) into the audible domain.
With any sampling frequency above 70 kHz, such aliasing is (due to the math) always beyond 20 kHz, and thus the sound is generally perceived as better: more clear, transparent, etc.
Its extremely inharmonic content makes aliasing stand out more than its level might suggest.
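A quick sketch of that fold-back arithmetic (the frequencies and rates are illustrative, and an ideal sampler is assumed):

```python
fs = 48_000.0  # sample rate in Hz

def alias_frequency(f, fs):
    """Frequency where a component at f lands after sampling at fs."""
    f_mod = f % fs
    return min(f_mod, fs - f_mod)

for f in (25_000.0, 30_000.0, 40_000.0):
    print(f"{f:.0f} Hz folds to {alias_frequency(f, fs):.0f} Hz at fs = {fs:.0f}")
# At fs = 96 kHz these same components stay above 20 kHz,
# i.e. the aliases land outside the audible range.
```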
I potentially agree, but neither of those are limiting factors, so I'm okay with that. A normal MBP now has 40 Gb/s of throughput... I remember around '05 putting in a 40 Gb-capable packet analyzer or something; it was like 350k, and now we have that at home. So there should be no problem recording as many channels as you like at whatever kHz. On the other hand, I potentially disagree, because there is much we don't know about sound, including how our own brain processes it - once you get past the organ of Corti, the electro-chemical mystery begins, and most naturally occurring 'things' like sound tend to eventually reveal themselves as affecting us in some way. So I think it's still worthwhile to call into question not the math, which clearly demonstrates such frequencies are unheard, but our as-yet incomplete understanding of our own processing of sound. There's a popular saying about language and context - that every statement always already exists; so too the entirety of frequency, and we may still discover nature's sweetest melodies are those unheard.
At the point where your source enters the system at 16 bits, up-rezzing to 24 bits is lossless.
Once inside the system, especially when mixing, the extra headroom from mixing at 24 bits rather than 16 bits gives a greater freedom from distortions due to arithmetic rounding errors.
e.g. adding together 48 channels whose LSBs (at 24 bits) are greater than 10 will yield a result differing by 1 from the same operation done in 16 bits. Assuming random distributions, the errors - which show up as distortion - could average 15 apart, which is actually quite audible and nasty. I have had to deal with this with up to 96 16-bit sources in my DSP work.
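For anyone who wants to reproduce the effect at home, here's a minimal sketch (assumptions: random test signals and a simple rounding quantizer, not any particular DSP engine):

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 48, 48_000
# random full-scale channels, pre-scaled so the mix stays within [-1, 1]
channels = rng.uniform(-1.0, 1.0, (n_channels, n_samples)) / n_channels

def quantize(x, bits):
    """Round samples to the given bit depth (values stay in [-1, 1])."""
    scale = 2.0 ** (bits - 1)
    return np.round(x * scale) / scale

reference = channels.sum(axis=0)  # unquantized mix as the baseline
for bits in (16, 24):
    mix = quantize(channels, bits).sum(axis=0)
    err = np.abs(mix - reference).max() * 2**15  # error in 16-bit LSBs
    print(f"{bits}-bit per-channel rounding: max mix error ~ {err:.2f} 16-bit LSBs")
```

With 16-bit channels the per-channel rounding errors sum to several 16-bit LSBs in the mix; with 24-bit channels the same errors are roughly 256 times smaller.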
Down-rezzing back to 16 bits at the point where the final result is served back to the outside world does not introduce errors, in the sense that it has already been proven that 16 bits faithfully reproduces waveforms high-cut at 20 kHz.
There are of course plenty of conversations about whether or not this brick-wall filtering can be heard. My opinion is that what is heard in those tests is flaws in the filters and jitter in the bit clock (where it was shown, long ago, that sub-nanosecond jitter can audibly degrade the noise floor).
here's the analog counterpart of that 'effect' (quoting self)
@dwarman nice to read a real-world proof; I never use that many channels
I can't speak for all developers, but internally all my sound engines run at (at least) 32 bits. So the 16/24-bit distinction only comes into play when getting sounds into and out of the app; it's not part of the equation when mixing voices/channels, etc. I am almost certain most iOS DAWs also mix at much higher resolutions than 24 bits internally.
Only in theory. If the data in your 24-bit stream doesn't make full use of the resolution, there will be very little difference between the 24-bit and 16-bit versions. In other words: those extra 8 bits of resolution are not used meaningfully. And - as stated a couple of times in this thread already - if the data was generated digitally in an iPad synth, this will most likely be the case.
Consider an image analogy: a picture whose resolution is much higher than required by the data it contains could have been encoded at a much lower resolution without losing fidelity (both in terms of spatial resolution and in the fact that three color channels are used to encode a single grayscale range). A similar analogy can be made when it comes to headroom and dynamic range in audio signals.
Focusing too much on bits and Hz leads nowhere. There are amazing-sounding 16-bit devices and 'shty'-sounding 24-bit devices. I mean, does a 12-bit Akai S950 sound like 'sht'? I doubt that very, very much...
Personally I'm semi-addicted to 'chip-music' (SID 6581/8580, NES, 2-op FM etc.).
The Yamaha FM-Essentials app has a next to perfect emulation of the 12-bit DAC found on the TX81z...
(I can't hear any difference between my real TX81z and the FM Essentials app).
So instead of focusing on bit-depths and sample-rates like some photographers focus on 'ISO noise', it might be better to look at the bigger picture and ask ourselves...
...What do we really want to accomplish?
^ the voice of reason
Every bit of modern music is based on distortion. The rock'n'roll revolution would never have occurred if amps hadn't failed to reproduce the input signal properly. The character of virtually every sought-after vintage compressor, mixing desk, microphone etc. is based on these failures.
Hell, every instrument, and even nature itself, is perceived through some kind of interference (albeit non-digital). There is no such thing as pristine sound.
Interesting thread for sure, and I'm not disputing anyone's points or downplaying your own preferences, but it does seem pointless to me to obsess about such perfection in a medium that is virtually driven by imperfection.
That's not how it works... it is full quality, but the detail of the dynamic range in a fully digitally generated signal is so 'perfect' that it already exceeds human perception without needing 24 bits. E.g. there is no audible noise floor 'eating up bits' that needs to be compensated for, etc.
But either way, as @Samu already said: in the end it's all about the ears, not the theory.
brambos probably did get it elsewhere, but dwarman in a post above wrote about exactly that aspect of the topic.
That 'tiny difference' starts to kick in around -90 dB levels and below.
To lift that part of the signal above the perception threshold you'd have to apply at least 110 dB of gain.
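The -90 dB figure falls straight out of the bit depth; a one-liner to check it:

```python
import math

for bits in (16, 24):
    # level of the least significant bit relative to full scale
    lsb_dbfs = 20 * math.log10(2.0 ** -(bits - 1))
    print(f"{bits}-bit LSB sits at {lsb_dbfs:.1f} dBFS")
# 16-bit: -90.3 dBFS, 24-bit: -138.5 dBFS -- so the 16/24 difference
# lives entirely below roughly -90 dBFS.
```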
Audio algorithms are a tricky thing in certain domains: just picture a fast compressor like the 1176 model, with attack times in the sub-millisecond range.
How is it supposed to deal with a bass sound at 50 Hz, where a single cycle of the waveform takes 20 ms? Not that easy - calculation errors are almost guaranteed.
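To make that concrete, here's a toy envelope follower with a sub-millisecond attack (the times and coefficients are illustrative, not the actual 1176 circuit): over a single 50 Hz cycle the detector tracks the waveform itself rather than its envelope, so the gain pumps at audio rate.

```python
import math

fs = 48_000.0
attack_ms, release_ms = 0.1, 50.0  # illustrative 1176-style time constants

a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))

env, envelope = 0.0, []
for n in range(int(fs)):  # one second of a 50 Hz test tone
    x = abs(math.sin(2 * math.pi * 50.0 * n / fs))  # rectified input
    coeff = a_att if x > env else a_rel
    env = coeff * env + (1.0 - coeff) * x
    envelope.append(env)

last_cycle = envelope[-960:]  # final 20 ms, i.e. one 50 Hz cycle
print("envelope ripple within one cycle:", max(last_cycle) - min(last_cycle))
```

The printed ripple is far from zero: the 'gain reduction' wobbles within each bass cycle, which is exactly the kind of behavior that invites calculation errors.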
As mentioned above, with heavily distorted (or compressed) sounds it's plain nonsense to use 24-bit input, because the digital processing will introduce larger calculation errors than the gain in recording precision is worth.
And the more channels you have in your mix, the more such small errors sum up.
That's why it sometimes helps to deliberately cut off bits (which are internally replaced by zeroes).
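Deliberate truncation is trivial to express; a sketch (assuming integer samples, and masking rather than dithering, purely for illustration):

```python
def truncate_to_bits(sample: int, total_bits: int, keep_bits: int) -> int:
    """Zero out the lowest (total_bits - keep_bits) bits of an integer sample."""
    drop = total_bits - keep_bits
    return (sample >> drop) << drop

print(hex(truncate_to_bits(0x7FFFFF, 24, 16)))  # 0x7fff00: low 8 bits zeroed
```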
I am not talking theory here, I am talking practice.
The issue is in the internal processing arithmetic, which is where 24 bits (or better, 32-bit float) are needed. Take a random set of 16-bit samples, scale each by some fraction (and immediately you have lost some resolution and accuracy), then add the results together. Do the same with a 24-bit sample set, and additionally down-rez the final result to 16 bits. You will get different results. Not all differences will be inaudible (thus generating bitch reports), and any such differences will screw up automated testing. It is however true that for a single audio file, there is no advantage to 24 bits over 16 bits if the processing hardware is properly designed. But it might save a bit or two of rounding errors if you are going to present such a file as one of many similar files for mixing and mastering together.
Did you notice I code these things for a living? The WiiU audio engine, for example, is 16 bits at the periphery but up-rezzed to 24 bits internally. My first hack was 16 bits internally, in my naive days. I do not speak from theory, I speak from practice and experience. I don't understand the math of the theory anyway, but I do know my way around an FFT and an oscilloscope, and my ears are near golden. The signal path will in general include several processing steps, each one of which is a potential source of distortion inversely proportional to the bit rez and proportional to the number of channels involved - up to 96 in that case. Rounding errors are a bitch, and the closer to the noise floor you have to work, the worse they are. I had one case where it made the difference between a scale value of 1 or 0: the filter converged to a high DC offset instead of 0 V.
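That last failure mode is easy to reproduce; a minimal sketch (assuming C-style integer division that truncates toward zero, as fixed-point DSP code typically does):

```python
def lowpass_step(y: int, x: int, k_den: int = 256) -> int:
    """One-pole lowpass y += (x - y) / k_den with truncating integer division."""
    delta = int((x - y) / k_den)  # truncates toward zero, like C integer math
    return y + delta

y = 200                      # filter state left over from a previous signal
for _ in range(10_000):
    y = lowpass_step(y, 0)   # feed silence; y should decay to 0
print(y)                     # stalls at 200: the update term rounds to zero, so
                             # the filter 'converges' to a DC offset instead of 0 V
```

With a 24-bit (or float) state the same residual is pushed roughly 256 times further below full scale, which is exactly the headroom argument made above.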