Why does the mastering engineer do the ceiling and not the codec?
Very technical topic, beware!
This is something I've always wondered about, ever since the whole "Ceiling" thing came up as music increasingly got distributed via lossy formats / streaming etc.
Why the heck is the mastering engineer supposed to set the "Ceiling" to prevent any codecs (well, encoders) further down the line from doing stupid stuff, thereby needlessly reducing the output format's dynamic range (by a tiny amount, I know, but still!)?
Shouldn't each individual codec know better how it works internally, and thus do the necessary level reduction itself, always assuming that the input is normalized to 0 dBFS?
Or am I again totally incompetent?
Comments
Because usually the final ceiling goes hand in hand with the overall volume/loudness of the track, which is where the mastering engineer is usually doing the heavy lifting.
Also, the only way the codec can prevent overs if the file is at 0dBFS is to lower the dynamic range somehow, which a lot of people aren't happy leaving up to the codec to decide.
Interesting, thanks... so as I suspected, I'm looking at it more from a technical point of view than an artistic one.
I think setting the ceiling one or two dB lower than full scale started in the '80s, when compact discs appeared. The format itself was fine, but many consumer CD players had awful converters or other components, so they sounded pretty bad when the true peak was over 0dBFS.
There are still many places where you can run into clipping issues at 0dBFS, even with streaming services. There's a reason that all of them recommend masters no louder than -1TP (True Peak) instead of -1dBFS.
Whether or not that clipping is audible is a whole other story, which is one reason we don't hear issues even though most masters sent for streaming are probably -0.3 to -0.5dBFS. 🤷🏼‍♂️
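To make the true-peak point concrete: a signal whose samples never exceed full scale can still reconstruct to a higher analog peak *between* the samples, which is exactly what a DAC or a lossy encoder's filterbank will expose. Here's a minimal sketch with numpy/scipy, using simple 4x oversampling as a rough stand-in for a real BS.1770-style true-peak meter (which works on the same principle, with a specified interpolation filter):

```python
import numpy as np
from scipy.signal import resample_poly

n = np.arange(4096)

# A sine at fs/4 with a 45-degree phase offset: every sample lands at
# about 70.7% of the waveform's real peak, so the stored sample values
# hide the true maximum of the underlying analog waveform.
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)
x /= np.max(np.abs(x))  # normalize so the SAMPLE peak is exactly 0 dBFS

# Rough true-peak estimate via 4x oversampling.
x_os = resample_poly(x, up=4, down=1)

print(f"sample peak:        {20 * np.log10(np.max(np.abs(x))):+.2f} dBFS")
print(f"true peak (approx): {20 * np.log10(np.max(np.abs(x_os))):+.2f} dBTP")
# -> sample peak +0.00 dBFS, true peak around +3 dBTP: the reconstructed
#    waveform overshoots even though no individual sample ever clips.
```

That +3 dB case is a deliberately pathological signal, but ordinary loud masters routinely have inter-sample overshoots of a dB or so, which is where the -1dBTP recommendation comes from.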
I still do all my client masters to -0.3dBFS unless they specifically tell me they want something different. Most online aggregators don't let you send multiple versions of the same track for different outlets, so one "traditional" master for everything usually works fine.
It's also about the fact that reducing the volume of a digital signal (i.e. inside the codec) can't be done without adding more quantization errors.
It might not be as relevant in a loudly mastered track with 24-bit resolution or more, but if the volume can already be reduced in the mastering process, why leave it up to the codec, which doesn't even know in advance by how much to reduce the volume if it's a live stream?
Nonetheless, audio compression then usually adds even more quantization errors (now in the frequency domain) to reduce the data rate.
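A quick numeric sketch of that point (Python/numpy, undithered rounding assumed): applying -1 dB of gain to a 16-bit signal and storing the result back at 16 bits forces every sample to be re-rounded, adding a fresh error of up to half an LSB per sample.

```python
import numpy as np

rng = np.random.default_rng(0)

def to_int16(x):
    """Quantize floats in [-1, 1) to 16-bit integers by rounding (no dither)."""
    return np.clip(np.round(x * 32768), -32768, 32767).astype(np.int16)

# A 16-bit "master": every sample sits exactly on a 16-bit grid step.
master = to_int16(rng.uniform(-0.9, 0.9, 48000)).astype(np.float64) / 32768

# Apply -1 dB of gain, then store the result back at 16 bits.
gain = 10 ** (-1 / 20)
ideal = master * gain                                 # exact result
stored = to_int16(ideal).astype(np.float64) / 32768   # what a 16-bit file keeps

err = stored - ideal
print(f"peak requantization error: {np.max(np.abs(err)) * 32768:.2f} LSB")
print(f"added noise floor:         {10 * np.log10(np.mean(err ** 2)):.1f} dBFS")
# -> roughly 0.5 LSB peak and about -101 dBFS of new noise: inaudible in
#    most cases, but real, and it lands on top of whatever the lossy
#    encoder adds afterwards.
```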
This... even pulling the DAW fader down and back up again introduces quantization errors.
Cheers
Can any human actually hear these "errors"?
Or is it like driving down the street in your car and time is going infinitesimally slower for you in the car than for the people walking on the footpath?
Since DAWs work internally with 32- or 64-bit floating point precision, it's likely more like a slow car, not a lightspeed racing machine 😄
But can you actually hear the errors?
I can't, that's what I meant to say 😉
OK, thanks. Good to know.
The "the Codec scaling down the amplitude and thus introducing additional quantization errors" argument also only holds when the master is delivered in 16-Bit WAV. Much less already in 24-Bit, and pretty much irrelevant in 32-Bit Float I'd say.
I still personally think that the job of the mastering engineer should focus on the artistic part (dynamics, perceived loudness as measured by their trusty ole' LUFS meter, EQ, stereo image, etc.), while mere technicalities like the ceiling to avoid conversion artefacts should be handled automatically downstream, which isn't really a problem if the master is delivered in a 32- or 64-bit floating point format.
But I also understand the reasoning that ONE engineer might want precise control over EVERYTHING, including the FINAL loudness down to a tenth of a dB.
Good discussion to have! 👍
Some of it is practical too: how many aggregators are actually accepting 32-bit float masters? You’re lucky if you can actually find one that lets you upload 24-bit, sadly.
FWIW I can’t believe anyone can hear quantization errors in a file in normal use cases. The noise floor of even the best playback systems is probably way higher. Sometimes the theory of digital audio gets in the way of the practical side of audio engineering. 🙃
I’m making my songs in Dolby Atmos currently and I have to get them down to -18 LUFS.
My approach is to not use a limiter; instead I (painstakingly) bring each bed track or object track down in level independently to achieve the balanced mix.
I prefer this to slapping a limiter on the final groups or channels etc.; a limiter would only kick in above a certain level and let most quiet passages pass as is (or at least get only very little-ly smallened).
Fortunately I have enough pain to stake
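For anyone wanting to sanity-check a target like that, the gain-only approach is easy to verify in code. A sketch assuming the third-party pyloudnorm and soundfile packages and a hypothetical mix.wav; it measures integrated loudness per ITU-R BS.1770 and applies one static gain, which is exactly the "no limiter" idea above. (Caveat: this measures a channel-based file; Atmos deliverables are measured on the rendered output, so treat it as a rough check.)

```python
import soundfile as sf        # pip install soundfile
import pyloudnorm as pyln     # pip install pyloudnorm

# Hit a loudness target with pure gain, no limiter: measure the integrated
# loudness (BS.1770 / LUFS) and offset the whole file by the difference.
# Quiet passages move by exactly the same amount as loud ones.
data, rate = sf.read("mix.wav")             # hypothetical input file
meter = pyln.Meter(rate)                    # BS.1770 loudness meter
loudness = meter.integrated_loudness(data)  # e.g. -14.2 LUFS
print(f"measured: {loudness:.1f} LUFS")

# Static gain down to the -18 LUFS delivery target mentioned above.
normalized = pyln.normalize.loudness(data, loudness, -18.0)
sf.write("mix_-18LUFS.wav", normalized, rate)
# Unlike a limiter, this does nothing about peaks, so check true peak
# separately after the gain change.
```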
Is there a quick intro to this topic somewhere? I think I understand, but the dynamic range bit was new to me. Do streaming services measure LUFS now, or?