24-bit/96 kHz - worth it?
Hello. I know I can try it out myself, but I'm also interested in your opinions. Since I got my iConnectAudio4+, I could make use of higher bit depths and sample rates, but I'm wondering whether the benefits outweigh the extra processing load involved. What bit depths and sample rates are you using? I'm using my interface first and foremost for connecting a guitar, but also for a microphone.
Comments
Oldskoolish 16-bit/44.1 kHz here ;-)
There is something to be said for digitizing into the DAW at the higher rate. Your DAW will probably be working at a higher resolution, possibly a higher rate, internally anyway. But once you're in the digital domain, theory says keeping the higher bit depth while dropping the rate should be fine. Final output to standard 44.1/16 as well.
The higher rates are useful if you want to edit your audio and maintain high quality. If you're not editing much, then 24-bit/44.1 should be fine.
I think it's ALWAYS worth working at 24-bit while you're writing the music if you can; get that noise floor as low as possible. But I also think it's pretty much never worth working at anything higher than 44.1 kHz for most musicians. If you're recording real instruments and have high-end mics and pres, then possibly, but otherwise you're just wasting CPU cycles, IMVHO. There's a myth that the higher the sample rate, the more accurately you're recording what you hear, but that's only true up to a point. More and more research is showing that anything above, say, 60 kHz is actually going to make things sound worse (Google Dan Lavry if you want to read more on this). 192 kHz can in fact be much less accurate than 44.1.
FWIW, it's my job to make people's music sound as good as possible, and I have the level of gear to support any bit depth or sample rate. For my own music I'm happily using 24/44.1 these days.
Oversampling is a well-known trick to push aliasing out of the hearing range. Try something like Ableton's Operator at 44.1 kHz and at 96 kHz. Sounds completely different.
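To make the aliasing point concrete, here's a rough numpy sketch (nothing to do with Operator itself; the helper names are made up, and a naive sawtooth stands in for an oscillator without oversampling). It estimates how much spectral energy lands off the true harmonics at each rate:

```python
import numpy as np

def naive_saw(freq, sr, dur=1.0):
    # Non-band-limited sawtooth: every harmonic above sr/2
    # folds back into the audible range as aliasing.
    t = np.arange(int(sr * dur)) / sr
    return 2.0 * ((t * freq) % 1.0) - 1.0

def alias_ratio(freq, sr):
    # Rough measure: fraction of spectral energy NOT sitting
    # near a true harmonic of the fundamental.
    x = naive_saw(freq, sr) * np.hanning(int(sr))
    mag = np.abs(np.fft.rfft(x))
    bins = np.fft.rfftfreq(len(x), 1.0 / sr)
    on_harmonic = np.abs(bins / freq - np.round(bins / freq)) < 0.02
    return mag[~on_harmonic].sum() / mag.sum()

for sr in (44100, 96000):
    print(f"{sr} Hz: {alias_ratio(1480.0, sr):.3f} of energy is off-harmonic")
```

The off-harmonic fraction should come out noticeably lower at 96 kHz, which is essentially what internal oversampling buys you.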
And if you are processing audio heavily, the more numbers the CPU can chew on, the more accurate the results will be.
That said, whatever sounds right is right.
Most stuff I use does internal oversampling anyway now...
So my sample rate depends on the project, what effect I am after, and what the CPU can handle...
Rule of thumb: if you can't hear it, it's not worth it.
If you are using a cheap Shure mic you don't need to care about the sample rate, but your reverb may sound better at a higher sampling rate.
OK, thanks! So I think I will turn up the bit depth and leave the sample rate as it is.
Oversampling internally is a good point to make. A lot of the plugins that benefit from this do it automatically for you, so the end user doesn't need to worry about it. I found that needing higher sample rates because of aliasing was more of an issue 10 years ago than it is now, though obviously there are still instances where it can sound different. I think most people tend to hear a difference and think "that's better" when in reality it might even sound worse. But still, the sound is different, so it must be better! (Not directed at you in particular, LaLa, just something I've seen in general.)
The higher the sampling rate, the lower the latency at the same buffer settings, which could be beneficial in some cases such as live use.
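That one is just arithmetic: a buffer always holds a fixed number of samples, so the time it represents shrinks as the rate goes up. A quick sketch (256 samples is just an example setting; real round-trip latency adds converter and driver overhead on top):

```python
# latency per buffer = buffer_size / sample_rate
buffer_size = 256  # samples, a common live setting
for sr in (44100, 48000, 88200, 96000):
    print(f"{sr:>5} Hz: {1000.0 * buffer_size / sr:.2f} ms per buffer")
# 44100 Hz: 5.80 ms ... 96000 Hz: 2.67 ms
```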
http://forum.audiob.us/discussion/8411/software-synths-at-higher-sample-rates#latest
Essentially, if the output is 44.1 kHz at 16/24/32-bit and you record that, well... that's the ceiling of that recorded sound. Now, if you want to add FX or audio-processing plugins, you'll want some more room for that. For example, if you take a drum loop at 44.1 kHz/24-bit and load it into your DAW running at an 88.2 kHz or 96 kHz sample rate before using your audio-processing plugins, it will sound better in the final product than if you had done everything at 44.1 kHz.
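A minimal sketch of that workflow, assuming scipy is available (the tanh saturation is just a stand-in for whatever nonlinear processing you'd actually run):

```python
import numpy as np
from scipy.signal import resample_poly

# Hypothetical 44.1 kHz drum loop (one second of noise as a stand-in).
loop_44k1 = np.random.randn(44100)

# 96000 / 44100 reduces to 320 / 147, so a polyphase resampler
# does the conversion in one band-limited step.
loop_96k = resample_poly(loop_44k1, up=320, down=147)

# Nonlinear processing at 96 kHz: the distortion products it
# generates have more room before they fold back as aliasing.
processed = np.tanh(2.0 * loop_96k)

# Back down to the delivery rate for the final bounce.
bounce = resample_poly(processed, up=147, down=320)
```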
My rules of thumb.
1) If you are going to mix lots of tracks together, you want more bit depth. Mixing 8 tracks in integer costs you three bits, since the sum needs log2(8) = 3 extra bits of headroom; see the sketch after these rules. (Floating point makes the discussion more complicated.)
2) If you are going to process a track, you want a higher sample rate. But note that whether filters are "better" at higher sample rates is a tricky issue with digital effects. Digital effects work in discrete time, not continuous time, so subtle or not-so-subtle changes in the sound could smack you, depending on the effect. If you've tweaked things to sound awesome at 44.1, they could very well sound worse at a higher rate, so if you want to work at 96k, commit to 96k at the beginning of your project and do your tweaking there.
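Here's a toy numpy sketch of point 1, the integer case (eight random full-scale tracks stand in for real stems):

```python
import numpy as np

# Eight 16-bit tracks summed: the sum spans 8x the range of one
# track, i.e. it needs 16 + log2(8) = 19 bits to avoid clipping.
n_tracks = 8
rng = np.random.default_rng(0)
tracks = rng.integers(-32768, 32768, (n_tracks, 48000), dtype=np.int64)

# A fixed 16-bit bus has to divide the sum by 8 to fit, which
# shifts away the bottom 3 bits: each source track is effectively
# mixed at 16 - 3 = 13-bit resolution.
bus_16bit = tracks.sum(axis=0) // n_tracks

# A float bus sidesteps this: a 32-bit float's 24-bit mantissa
# holds the full 19-bit sum with room to spare.
bus_float = tracks.astype(np.float64).sum(axis=0) / n_tracks
```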
Worth it. Just do an AB test with an analog synth doing a LPF sweep at high resonance and you will hear the difference.
24-bit/96 kHz? Get at it... yes.
Everything is set up at 24/44.1 on my laptop; if I were going to go with a higher sample rate it would be 88.2, as 8s are more friendly-looking than 9s.
Doesn't matter whether it's 88.2 or 96, FWIW, at least not in the way most people think. A lot of people think that since 88.2 is just twice 44.1, it's easier or better-sounding when downsampling to 44.1. Almost all sample-rate conversions these days are oversampled many times over, though, so in terms of the math involved it's the same "difficulty" no matter what you choose.
That said, I do like the way 8's look compared to 9's as well.
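For what it's worth, both conversion ratios reduce to small rational factors, which is all a modern polyphase converter cares about; a two-line illustration:

```python
from fractions import Fraction

# The "nice" 2:1 ratio from 88.2 kHz buys nothing special
# over the 320:147 ratio from 96 kHz.
for src in (88200, 96000):
    ratio = Fraction(src, 44100)
    print(f"{src} -> 44100: downsample by {ratio.numerator}/{ratio.denominator}")
# 88200 -> 44100: downsample by 2/1
# 96000 -> 44100: downsample by 320/147
```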
I've read so much conflicting info about bit depth and sample rates that it's nice to get your opinion on this, Tarekith, as you've gone into these matters at a much deeper level than I have. One thing I was wondering about sample rate: will higher rates give you more detail on the low end, especially kick drums and instruments like the double bass? I'm still in a bit of a jazz trance.
Disk space is a concern for me, until I sort out a NAS, so I find 24/44.1 to be a good compromise, although some of the things you said earlier in the thread make me doubt the compromise part a little.
I think I'll leave the setting at 24-bit/44.1 kHz. From what I've read in this thread so far, that gives the most audible difference, but I have to admit that I can't hear much of an improvement. I've read some threads in hi-fi forums where people were not able to differentiate 192 kbps MP3s from uncompressed formats in blind tests. But the 24 bits should provide some more headroom for editing files after recording.
iOS native, just for "playing": 16-bit/44.1 kHz.
But if I record (live) I ALWAYS use 24-bit/44.1 kHz.
You can leave the gain settings of your mics at the middle/normal setting and normalize after recording...
That leaves so much more dynamic range than adjusting your gain perfectly at 16-bit, with the risk of clipping...
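The back-of-envelope arithmetic behind that advice (18 dB of headroom is just an example figure):

```python
# Each bit buys roughly 6.02 dB of theoretical dynamic range.
range_16 = 6.02 * 16   # ~96 dB, but only if you record up to full scale
range_24 = 6.02 * 24   # ~144 dB

headroom = 18.0  # conservative gain: peaks 18 dB below full scale
print(f"16-bit with {headroom} dB headroom: {range_16 - headroom:.0f} dB left")  # ~78 dB
print(f"24-bit with {headroom} dB headroom: {range_24 - headroom:.0f} dB left")  # ~126 dB
```

So even recorded conservatively, 24-bit leaves far more usable range than 16-bit recorded right up to the edge of clipping.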
@qbiwahnkentobi Agreed, and there's not as much fiddling around with gain settings after changing effect chains, because there's always enough headroom, even at lower input gain, as far as I understand it.
@qbiwahnkentobi I would have thought that the average interface/mic is going to add way more noise by itself than the difference in bit depth, though. I can't imagine many consumer-level interfaces and mics have a noise floor low enough for 24-bit to be worth it.
For people with higher-grade equipment, sure, but for anyone using budget gear I would imagine the noise floor is going to be dictated by the gear itself. I'm no expert though, so please correct me if I'm wrong.
But within the digital chain it should matter, regardless of which device you are using: when there are multiple effects in a chain, each with independent input gain, for example. And the same should apply to output gain, of course. Please correct me if I'm wrong on this.
Right, that makes sense. I wonder if the difference is actually audible in a digital chain, though?
It depends, perhaps, on whether the internal processing is independent of the input processing of the connected device. In Cubasis, bit resolution is an option on the project settings page, so I suppose it applies to the whole processing chain.
Here's a pretty good discussion of "high res" audio processing, intermodulation distortion, and frequency filtering:
https://www.gearslutz.com/board/mastering-forum/968641-some-thoughts-high-resolution-audio-processing.html
@Uncledig Thanks for posting the link - looks interesting!
Often, the noise floor of the recording is indeed the limiting factor. 16 bits is not bad at all (to listen to). But you should make sure your mixing is happening at 24 or 32 bits. As I said before, if you mix 8 tracks, you are losing 3 bits from each instrument. If you are mixing 16 tracks, you are losing 4 bits from each instrument. You may like 16-bit sound, but do you like 12-bit sound?
(Ignoring circumstances that make the situation worse, like differences in levels, and things that can make the situation better, like dithering, which trades bit-depth accuracy for temporal smearing.)
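A minimal sketch of the usual dithering approach, TPDF noise added before truncation (the function name is made up, and real mastering dithers often add noise shaping on top):

```python
import numpy as np

def to_16bit(x, dither=True):
    # x: float samples in [-1.0, 1.0).
    scaled = x * 32767.0
    if dither:
        # TPDF dither: two uniform noises sum to +/-1 LSB of
        # triangular noise, decorrelating the truncation error
        # from the signal (at the cost of a slightly raised floor).
        rng = np.random.default_rng()
        scaled += rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.clip(np.round(scaled), -32768, 32767).astype(np.int16)
```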
Is that really true? If 8 instruments are mixed at the same level, could a human actually notice this loss of dynamic range? It seems doubtful to me; the quiet sounds would surely be lost among the louder ones, not because of bit depth but because of our own limitations.
I'm happy to be proved wrong though; is there a resource you can point to with more information?
This is a great thread; I'm learning some new info here. I'm also concerned/interested in how iOS apps process internally at whatever rates they may choose, and how they interact through apps like Audiobus and IAA.
Over at the Gearslutz forum we'd be getting a lot of 20-year-old info and a lot of arguing over this.
@richardyot, Good point. It's always more complicated than it seems. What I said was pretty much worst case.
Imagine you are mixing a bunch of instruments by summing them. With a naive adding mixer, you keep the significant bits and throw out the least significant (or else you clip; trusting the instruments to be uncorrelated, or keeping them at rather low levels, means you've already sacrificed bits). If all the instruments are playing at once, yeah, you may not hear the missing bits. But if all the instruments drop out except one, you should be able to make out its loss of quality.
Of course, that's where a compressor or limiter will start helping you.
This is all theoretical. For any particular setup, you'd have to follow the math to see what is lost.
One thing to keep in mind is that we are blessed in modern recording. You can make some pretty amazing music with your voice, your hands, and a tape recorder, if you have talent.
Here's a fun thread to read. This conversation happens over and over and over. You'll find similar arguments on kvr, blogs, all over the web.
http://dsp.stackexchange.com/questions/3581/algorithms-to-mix-audio-signals-without-clipping
All this has happened before and it will happen again.
@Diode108 thanks, I'll check that out. And I agree, we're lucky, and ultimately the music is the most important thing.