
Combining tracks in Logic to make an album - advice needed.

I’ve got 10 tracks I want to make into an album. Never done this before.

I’m using Logic Pro. Here’s what I’ve been doing, but should I do it differently?
I’ve been exporting each track to audio stems at Logic’s default settings (24-bit, 44100 Hz AIFF), saving them in AudioShare, then saving to the Files app to drag back into a new Logic project to merge the tracks together and do some final mastering.

When I drag the audio back into Logic, it says it’s converting the audio. All seems fine, but I’m worried that this whole process, and the conversions, may degrade the quality.

So, a few questions:
Should I be exporting the audio at different settings?
Is there a simpler way to do this whole process?
Does the conversion of the audio degrade it in any meaningful way?
Any other album-making advice?

Comments

  • edited September 26

    The conversion is happening because Logic's projects are set to 48 kHz by default, and you are exporting at 44.1 kHz, so when you import the stems back into a new Logic project they are being converted back to 48 kHz.

    The easy way to avoid that is simply to export the stems at 48k, to avoid the conversion.

    Personally I wouldn't be exporting stems to master; IMO it makes more sense to export a single stereo file for mastering, with about 6 dB of headroom. All you should really be doing at the mastering stage is setting the final levels and some corrective EQ if it's needed. If you are still balancing levels or bus-compressing, that's best done at the mix stage.

    My personal preference for albums is to roughly level-match all the songs at a similar loudness, just louder than the -14 LUFS used by streaming platforms. For example, if all your tracks are around -13.5 LUFS, you are maximising the dynamic range available to you without risking your track being turned down to any noticeable degree on Spotify (or being noticeably quieter than other tracks).

    Of course some people like to add more aggressive compression and limiting than that, but just be aware that you can't really make a track "louder" this way, you are just compressing more aggressively. It really depends on how much dynamic range you want in the finished track and how sharp you want the transients to be. More compression means less dynamic range and less prominent transients.
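For anyone who wants the arithmetic: the turn-down a streaming service applies under loudness normalisation is just the difference between the target and the track's integrated loudness. A minimal Python sketch (a simplification — real services measure loudness per ITU-R BS.1770, and each has its own policy):

```python
def normalization_gain_db(track_lufs, target_lufs=-14.0):
    """Gain in dB a streaming service would apply to reach its target loudness.

    Negative means the track is turned down. Positive gain is clamped
    to zero here, since most services won't turn quiet tracks up by
    default (a simplification of real-world behaviour).
    """
    return min(0.0, target_lufs - track_lufs)

print(normalization_gain_db(-13.5))  # -0.5: barely touched
print(normalization_gain_db(-7.0))   # -7.0: a heavily limited master gets turned down hard
```

So a track mastered around -13.5 LUFS loses half a decibel, while one pushed to -7 LUFS is pulled down a full 7 dB — which is the point being made above.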

  • @richardyot said:
    The conversion is happening because Logic's projects are set to 48 kHz by default, and you are exporting at 44.1 kHz, so when you import the stems back into a new Logic project they are being converted back to 48 kHz. [...] More compression means less dynamic range and less prominent transients.

    Thanks so much. That’s a really comprehensive answer.

    The levels of my tracks are fairly close for someone still learning. I did use stereo tracks to start with, but while laying out the tracks I decided I wanted to mix in some sections between album tracks, as the album is more of a continual concept running through than just individual tracks. Some album tracks I’m trying to blend together. For a first album attempt, I’m not keeping it simple lol.

    Thanks for clearing up the conversion point though, that clears that up for me.

  • edited September 27

    It depends on how many stems you have for each track. For example, did you export your drums as a single stereo stem or as multiple tracks?

    If I were in your case, I would stick to a stereo mixdown of each album track and export separately only the stems I need for the transitions. What you're doing is not wrong (there's no wrong in creation) and could work, but from an outsider's point of view it seems cumbersome and a lot of work, especially if each track comes with lots of stems. So I would say: plan ahead so you know exactly which stems you need for your transitions, then export a global mixdown and export separately only the transition stems. It will save you time, space, and a lot of headaches too.

    As @richardyot mentioned, I would stick to 48 kHz; by doing so you won't be resampling your original source. I wonder why you exported at 44.1 kHz in the first place. Was it accidental, because it was set as standard in your Logic export settings, or was it to save some hard drive space? Or was it to match the audio CD standard? If it's the latter, I don't think you can consider yourself in the mastering process yet, as you're still mixing things together. The 44.1 kHz conversion should come at the very end of your mastering process, and is really only needed to make an audio CD of your creation. Other than that, 48 kHz seems to have become the general standard, so it's a safe bet to stick with it even for your final master.
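A side note on why Logic has to resample at all: 48000/44100 only reduces to 160/147, so almost every converted sample has to be interpolated rather than copied — which is why the conversion step exists, and why exporting at 48 kHz to skip it is the cleanest path. A quick check in Python:

```python
from fractions import Fraction

# The 44.1 kHz -> 48 kHz conversion ratio in lowest terms: for every
# 147 input samples the converter must produce 160 output samples,
# so nearly every output sample is interpolated rather than copied.
ratio = Fraction(48000, 44100)
print(ratio)  # 160/147
```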

  • @JanKun said:
    If I was in your case, I would stick to a stereo mixdown of each track of the album and export separately only the stems I need for the transitions. [...] I wonder why you exported at 44.1k in the first place. Was it accidental because it was set as standard in your Logic export settings?

    Logic has 44100 set as standard, so that's the only reason I used that to start with. Have switched to 48k now.
    Funny you should mention just using the single stems needed for the transition, as that's what I'm doing now.

    The thing is that now I've made an initial structure for the tracks, listening to them together I'm noticing things I need to change in the mixes of the individual tracks, so I'm going back to mix them again.

    It's a learning process for me at this time, so thanks for your advice. I've spent most of my life never finishing tracks, so getting this close to actually finishing an album is quite exciting.

    I'm lucky that this is just a hobby, so no stress of a deadline or need to make any money.

  • @richardyot said:
    [...] Of course some people like to add more aggressive compression and limiting than that, but just be aware that you can't really make a track "louder" this way, you are just compressing more aggressively. It really depends on how much dynamic range you want in the finished track and how sharp you want the transients to be. More compression means less dynamic range and less prominent transients.

    If two tracks are mastered at the same loudness level, the aggressively compressed track with little dynamic range will indeed "feel" louder than the less compressed and more dynamic one. LUFS is an average loudness indicator, so compressing a track does actually make it louder, at least in terms of how our ears perceive it.

  • @JanKun said:
    If two tracks are mastered at the same loudness level, the aggressively compressed track with little dynamic range will indeed "feel" louder than the less compressed and more dynamic one. LUFS is an average loudness indicator, so compressing a track does actually make it louder, at least in terms of how our ears perceive it.

    Any decent and up to date books on this subject you could recommend?

  • edited September 27

    @Fruitbat1919 said:
    Any decent and up to date books on this subject you could recommend?

    Everything I learned on the topic was from information I gathered online and from my dear friend @richardyot, whom I was gently teasing with this remark, as his comment about aggressive compression and loudness was slightly misleading.
    If you have questions on the topic, please feel free to ask. Both Richard and I, and I'm sure plenty of other members well versed in the topic, will be happy to shed some light.


  • @JanKun said:
    If two tracks are mastered at the same loudness level, the aggressively compressed track with little dynamic range will indeed "feel" louder than the less compressed and more dynamic one. LUFS is an average loudness indicator, so compressing a track does actually make it louder, at least in terms of how our ears perceive it.

    I disagree :)

    The whole point of loudness normalisation is to avoid jarring jumps in volume between one track and another when a listener is shuffling between lots of different songs. So the streaming services will turn a track down if its perceived volume is higher than -14 LUFS, at which point there is very little to be gained by aggressive limiting.

    A track that has been smashed into a brick-wall limiter will not be perceived as louder after volume normalisation; it will just sound more compressed, with slightly softer transients and less separation between parts.

    To demonstrate this I've done a quick test in Logic with a short loop. The first section was exported at -13 LUFS, the second section at -7 LUFS and then turned down 6 dB to level-match the first (exactly as a streaming service would do). The second clip isn't perceived as louder IMO, it just sounds more compressed, with softer transients and less separation between the guitar and drums. The fill at the end of the loop is audibly more dynamic in the first section, due to the lower compression.

    https://www.dropbox.com/scl/fi/4tp66zxmvths76ruuvru3/loudness-test.mp4?rlkey=5cdejx2ziig6ki4k3d4sod0ia&dl=0

  • A different way to think about it is that when you apply compression with a regular compressor plugin, the compression will actually make the track quieter unless you also apply make-up gain.

    In a brick-wall limiter like Pro-L, the main control is the make-up gain: you increase the gain and the limiter applies the compression automatically with an infinite ratio. But if you apply 7 dB of gain with a limiter, and the streaming services then turn the volume of the track down by 7 dB, you haven't made anything louder; it's just more compressed, with the tops of the transients chopped off.
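That net-zero result is just decibel arithmetic: gains in dB add, while linear amplitude multipliers multiply. A small sketch of the standard conversion (nothing Logic-specific):

```python
def db_to_linear(db):
    """Convert a gain in dB to a linear amplitude multiplier."""
    return 10 ** (db / 20)

# 7 dB of limiter make-up gain followed by a 7 dB turn-down from
# loudness normalisation is a net multiplier of 1.0 -- no change:
net = db_to_linear(7) * db_to_linear(-7)
print(round(net, 10))  # 1.0

# It also shows why ~6 dB of headroom roughly halves the amplitude:
print(round(db_to_linear(-6), 3))  # 0.501
```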

  • Your example is a small loop from a section of a track that doesn't have a very wide dynamic range, so it makes sense that in your example the compression doesn't much affect the listener's loudness perception. To be valid and show the effect of compression on dynamic range and perceived loudness, your example should also include a very quiet section. In that case, applying heavy compression will bring the soft part up, erasing the dynamic range and giving the impression of a more "dense" track.

  • @JanKun said:
    To be valid and show the effect of compression on dynamic range and perceived loudness, your example should also include a very quiet section. In that case, applying heavy compression will bring the soft part up, erasing the dynamic range and giving the impression of a more "dense" track.

    That's a good point, but to be fair it works both ways: in the more compressed example the quieter part will sound louder, but the louder part will sound quieter, so at that point it's about dynamic range rather than perceived loudness. It can be argued either way, since the quieter part is louder with more compression, but the louder part is louder with less.

    I'll do another test :)

  • @richardyot said:
    That's a good point, but to be fair it works both ways: in the more compressed example the quieter part will sound louder, but the louder part will sound quieter, so at that point it's about dynamic range rather than perceived loudness. [...] I'll do another test :)

    No need to waste your time Richard, we understand each other, and we're both on the "preserve the dynamic range" side 😉. The effect of compression on the way we perceive the overall loudness of a track depends on how wide the dynamic range between the soft and loud parts is before the compression and on how much dynamic range is left after it. A track where the soft parts have been brought up to the same level as the loud parts will be perceived as louder because of the consistency of the volume over the whole length of the track. I agree that this implies bringing the loudest parts down, but I don't think that drop in volume is as perceptible as the effect of killing the dynamic range, especially to untrained ears.

  • I've done a new test with a loop that has a quiet section and a loud section:

    https://www.dropbox.com/scl/fi/75286kpsuuda4wjpzlq1y/loudness-test-2.mp4?rlkey=uh8qolay58azam4pzgopfvsdw&dl=0

    It was an interesting test: in order to get the second version up to -7 LUFS I had to really push the limiter, so it's more extreme than the first test was.

    IMO there are good-faith arguments for calling either loop "louder": in the more compressed example the quiet part is more attention-grabbing, but the impact of the loud part is much weaker. In the more dynamic loop the louder part is measurably louder, the transients are much snappier, and the drum fill is louder as well.

    Ultimately it's a creative decision that should be based on the dynamics rather than loudness IMO. The more dynamic loop is more exciting, and the more compressed loop sounds quite dull to me.
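The peak-versus-average trade-off in these tests can be shown numerically with a toy brick-wall "limiter" that just clips and re-normalises; real limiters use attack/release envelopes, so this is only an illustration of the principle:

```python
import math

def peak_and_rms(samples):
    """Peak level and RMS (a crude stand-in for average loudness)."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return peak, rms

def hard_limit(samples, ceiling):
    """Toy brick-wall limiter: clip at the ceiling, then apply
    make-up gain so the peak is back at full scale."""
    clipped = [max(-ceiling, min(ceiling, s)) for s in samples]
    return [s / ceiling for s in clipped]

# A loud transient over a quiet bed:
signal = [0.1, -0.1, 0.1, -0.1, 1.0, -1.0, 0.1, -0.1]
limited = hard_limit(signal, 0.5)

peak0, rms0 = peak_and_rms(signal)
peak1, rms1 = peak_and_rms(limited)

# Same peak, higher average level, lower crest factor (peak/RMS):
# the limited version measures "louder" on average, but only because
# its dynamic range has been crushed.
print(peak0, round(rms0, 3), round(peak0 / rms0, 2))
print(peak1, round(rms1, 3), round(peak1 / rms1, 2))
```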

  • How are you going to release the album?

    It will be different if you're making a CD for personal use than if you're making an album for streaming. If it's for streaming, there's not much point blending the tracks together.

    In any case, each song will need its own 'mastering'. The way I do it (I'm not suggesting this is the right way, or indeed any kind of recommendation) is to A/B each song while mastering against those I've already finished, to make sure the levels and overall sound are what I'm after, rather than putting them all in the same project. Different songs can behave very differently and need different EQ, compression, and limiting.

    There aren't any decent A/B plugins on iOS as far as I'm aware, so I'd output all the tracks of the current song to a bus and put the mastering plugins there, leaving the stereo bus unprocessed, which allows you to mute/unmute the new song and the ones you're comparing it to.

    As for blending the songs together, this would be OK for CD, but I wouldn't bother if you plan to release on streaming. And if you're making it for CD then the iPad ain't the way to go, as you can't make a CD track with all the correct metadata on iOS. Releasing an album as one long track is a good way to ensure nobody will listen to it :-)

  • edited September 27

    @JanKun said:
    No need to waste your time Richard, we understand each other, and we're both on the "preserve the dynamic range" side 😉 [...]

    Here’s my initial go at putting some of the tracks together. Only used five as I don’t think the style of music goes well with really long albums. I don’t have the knowledge base you two obviously have. I have a lot to learn, but I just wanted to feel like I’ve sort of completed something rather than my usual never finish anything.

    I keep listening and hearing things that need adjusting, but I probably still will in five years' time, when I've learnt more and come back to listen to the tracks I'm making now.

    My music hasn’t got much in the way of commercial value - it needs to be listened to attentively and is poor as background music. The quiet parts alone would make it virtually impossible to hear well in a car.

    I think I’m going to soak in some more learning vids on music production and come back and listen again in time when I have fresh ears and hopefully a bit more knowledge.

    I have fun making my music though, so in that way it is successful. Thanks for the help both of you.

  • @klownshed said:
    How are you going to release the album? [...] Releasing an album as one long track is a good way to ensure nobody will listen to it :-)

    Yep, all good points. I’m only really making it for my own enjoyment of making music. The reason I’m putting some tracks together to make an album as such, is it helps me in some psychological way to feel like I’m finishing something.

    I’ve been making my own music for many, many years, but never really finish tracks. I get so far, then get either bored with the process or frustrated, and move on. I haven’t even collected most of my unfinished tracks over the years. I’m still learning to play and create tracks; I just wanted to feel like I finished something. Now I know that tracks are really all just creations in progress, at least while I’m learning the process to any real skill level.

    It’s been a process even learning how to balance this as a hobby. I find it frustrating at times not knowing enough to get to that ‘I’m happy with my tracks’ level, while knowing I am easily distracted by my many other hobbies, where I’m already at a more comfortable skill level. I am encouraged by the fact that, while my progress is very slow, I am making some progress and am still enjoying making sound, even if it’s just noodling lol

  • @Fruitbat1919 said:
    Here’s my initial go at putting some of the tracks together. Only used five as I don’t think the style of music goes well with really long albums. [...]

    I have fun making my music though, so in that way it is successful. Thanks for the help both of you.

    I had a listen - it's sounding good so far 👍
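    The compression discussion above can be made concrete with a rough sketch. This is a hard-knee downward compressor acting on levels in dB, with hypothetical threshold, ratio, and makeup values chosen purely for illustration — not anyone's actual settings:

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=9.0):
    """Hard-knee downward compression of a level given in dB.

    Levels above the threshold are reduced by the given ratio,
    then makeup gain is applied to everything.
    """
    if level_db > threshold_db:
        out = threshold_db + (level_db - threshold_db) / ratio
    else:
        out = level_db
    return out + makeup_db

# A quiet passage at -30 dB comes out at -21 dB (louder),
# while a loud passage at -4 dB comes out at -7 dB (quieter).
quiet = compress_db(-30.0)
loud = compress_db(-4.0)

# The dynamic range between the two shrinks from 26 dB to 14 dB:
# this is both sides of the argument above in one picture.
dynamic_range_after = loud - quiet
```

With these numbers the soft part is indeed perceived louder and the loud part quieter, exactly the trade-off described in the exchange above.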

  • @Fruitbat1919 said:

    @JanKun said:

    @richardyot said:

    @JanKun said:
    Your example is a small loop of a section of a track which doesn't include a very wide dynamic range, so it makes sense that in your example the compression doesn't affect the listener's loudness perception much. To be valid and show the effect of compression on dynamic range and the way we perceive loudness, your example should also include a very quiet section. In that case, applying heavy compression will bring the soft part up, erasing all the dynamic range, hence giving the impression of a more "dense" track.

    That's a good point, but to be fair it works both ways: in the more compressed example the quieter part will sound louder, but of course the louder part will sound quieter, so at that point it's about dynamic range rather than perceived loudness. It can be argued both ways, since the quieter part is louder with more compression, but the louder part will be louder with less compression.

    I'll do another test :)

    No need to waste your time Richard, we understand each other and we're both on the "preserve the dynamic range" side😉. The effect of the compression on the way we perceive the overall loudness of the track depends on how wide the dynamic range between the soft and loud parts is before applying the compression and how much dynamic range is left after the compression. A track where the soft parts have been brought to the same level as the loud parts will be perceived louder because of the consistency of the volume on the overall length of the track. I agree that this will imply bringing the loudest part down, but I don't think this drop in volume of the loudest part is as perceptible as the effect of killing the dynamic range, especially to untrained ears.

    Here’s my initial go at putting some of the tracks together. Only used five as I don’t think the style of music goes well with really long albums. I don’t have the knowledge base you two obviously have. I have a lot to learn, but I just wanted to feel like I’ve sort of completed something rather than my usual never finish anything.

    I keep listening and hearing things that need adjusting, but I still probably will in five years time when I’ve learnt more and come back to listen to tracks made now.

    My music hasn’t got much in the way of commercial value - it needs to be listened to attentively and is poor as background music. The quiet parts alone would make it virtually impossible to hear well in a car.

    I think I’m going to soak in some more learning vids on music production and come back and listen again in time when I have fresh ears and hopefully a bit more knowledge.

    I have fun making my music though, so in that way it is successful. Thanks for the help both of you.

    Finishing a track is already a great achievement in itself, so you should be proud that you finished a full album. I had a listen, and I could hear that you put a lot of time and dedication into this project. Lots of interesting things, especially some of the timbre combination choices for sound layering. Transitions are nicely done, especially the one towards the end where you first slow down the tempo before changing the mood seamlessly. Great job! Keep creating, keep sharing here!

  • edited September 27

    @richardyot @JanKun

    Thank you both for your kind comments and encouragement :)

  • Hey @richardyot @JanKun and @klownshed when you’re “mastering” your tracks for streaming are you looking to get each track to be at streaming services maximum level?
    Just curious because obviously different tracks do have different dynamics and even overall volume levels. There are times on albums where one song is going to be a bit quieter than others on the same album and it might sound a little boring after a while if every track is being played at the same loudness level.

    On the subject of LUFS and peak levels I have experimented with this and I’m of the opinion that it is a good idea to mix with a little bit of headroom in mind (ie getting the peaks to be sitting at somewhere around -3 dBTP and maybe the LUFS are sitting at -18 to -16) so that there is a little room to adjust in mastering without losing the dynamics. Actually aiming for these levels during the track mix. This way you’re balancing with the end in mind.
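    As a rough illustration of what those headroom figures mean in linear terms, here is a minimal dB-to-amplitude sketch. (A true-peak dBTP reading proper involves oversampling to catch inter-sample peaks; this just converts plain dB relative to full scale.)

```python
import math

def amplitude(db):
    # dB relative to full scale -> linear amplitude (full scale = 1.0)
    return 10 ** (db / 20)

def dbfs(amp):
    # linear amplitude -> dB relative to full scale
    return 20 * math.log10(amp)

# Peaks sitting at -3 dB leave roughly 30% of the linear range free
# above them, which is the "room to adjust in mastering":
print(round(amplitude(-3.0), 3))   # ~0.708
print(round(amplitude(-6.0), 2))   # ~0.5, i.e. half of full scale
```

So -3 dB of peak headroom is a comfortable cushion: the mastering stage can add a few dB of gain or processing before anything approaches full scale.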

  • edited September 28

    @Mountain_Hamlet
    For mixing, I personally don't check the loudness but definitely keep some headroom and make sure to keep the peaks somewhere between -3 and -6 dBFS. I am also monitoring the dynamic range and, depending on the song, try to keep it somewhere between 6 and 10 dB and sometimes even more.

    In terms of loudness, each platform has slightly different standards. They all apply loudness normalisation, but at different reference levels (mostly -14 to -16 LUFS integrated). Another point is that if your master is lower than their standard, not all of them will bring it up... And those that do will apply limiting, which will mess up your dynamic range... For all those reasons, I prefer to master somewhere between -13 and -11 LUFS (depending on the song). The streaming services will then only turn down the overall loudness by applying a simple gain reduction, which does not affect the dynamic range. I think it is better to keep some headroom for the peaks at -1 dBTP too.

    I am working on my first solo album, and it is going to be the first time I handle the mastering. Previous albums I have been involved in were mastered by professionals. I haven't thought yet about the overall listening experience in terms of loudness. My first thought would be to keep it as natural and dynamic as possible, which means quieter parts. But on the other hand, I don't think people listen to whole albums much nowadays. With all the information available everywhere, anytime, our attention spans are generally getting shorter. So each track should also stand on its own. Compromises will have to be made...
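    The reasoning about mastering slightly above the platform targets can be sketched like this. It is a deliberate simplification: `turns_up` is a hypothetical flag standing in for per-service behaviour, and real services differ in their exact targets and methods:

```python
def platform_gain_db(master_lufs, target_lufs=-14.0, turns_up=False):
    """Static gain a loudness-normalising service applies to a track.

    A master louder than the target is simply turned down: a plain
    gain offset, which leaves the dynamic range untouched. A master
    quieter than the target is only turned up on services that do so,
    typically via limiting that does affect the dynamics.
    """
    gain = target_lufs - master_lufs
    if gain > 0 and not turns_up:
        return 0.0  # quieter master is left as-is
    return gain

# A -11 LUFS master is turned down 3 dB to hit a -14 LUFS target:
print(platform_gain_db(-11.0))   # -3.0
# A -16 LUFS master stays put on services that never turn tracks up:
print(platform_gain_db(-16.0))   # 0.0
```

This is why mastering a little hot relative to the targets is the safer bet: the worst that happens is a harmless static turn-down, never a dynamics-squashing turn-up.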

  • Just do it.

    I'm sorry...this debate on getting the best quality audio means nothing if your music just isn’t good.

    Create, record, and release.

    And do it again. And again, and again… That’s the best way to improve your quality.

    Latency means nothing against the powerful swipe button. We’re in a judgemental era. Either your music is good or not.

  • edited September 28

    @seonnthaproducer said:
    Just do it.

    I'm sorry...this debate on getting the best quality audio means nothing if your music just isn’t good.

    Create, record, and release.

    And do it again. And again, and again… That’s the best way to improve your quality.

    Latency means nothing against the powerful swipe button. We’re in a judgemental era. Either your music is good or not.

    Even if your music is shit you still want it to sound as good as you can make it.

    Music is my hobby. I like making music. And I like it to be as close to a commercial release as I can possibly get it. I have enough self-respect to want to make something as good as possible.

    And no, music is not either good or bad. It's not binary. Some people will hate it, some might even love it. And everything in between. Most of us will be lucky to achieve averageness. And that's still very cool.

    But nobody has the right to be Judge Judy and executioner. None of us have the right to be grand-arbiter of taste either.

    Anybody releasing any music, no matter how few people will ever listen gets my respect, regardless of how shit I think their music is.

    Anyway, it's my turd. It's up to me how much polishing it should get. :-)

    Love and hugs.

    Money where mouth is:

    https://meestersmeeeth.uk

    At least I tried. :-)

  • @Mountain_Hamlet said:
    Hey @richardyot @JanKun and @klownshed when you’re “mastering” your tracks for streaming are you looking to get each track to be at streaming services maximum level?
    Just curious because obviously different tracks do have different dynamics and even overall volume levels. There are times on albums where one song is going to be a bit quieter than others on the same album and it might sound a little boring after a while if every track is being played at the same loudness level.

    On the subject of LUFS and peak levels I have experimented with this and I’m of the opinion that it is a good idea to mix with a little bit of headroom in mind (ie getting the peaks to be sitting at somewhere around -3 dBTP and maybe the LUFS are sitting at -18 to -16) so that there is a little room to adjust in mastering without losing the dynamics. Actually aiming for these levels during the track mix. This way you’re balancing with the end in mind.

    it's easy to overthink this. The only way I think works (for me) is lots of A|Bing. It is frankly impossible to sound as good as a professional commercial release. It's impossible to get the same clarity, loudness, dynamic range etc. as a professional mastering engineer on a laptop on my sofa. But we can get close enough not to be embarrassed, and certainly better than some commercial releases for sure.

    My personal experience leads me to aim for roughly -12LUFS with peaks of around 10.

    When I upload to Apple Music, for example, this gets me in the ballpark of other music I listen to, where my songs sound roughly the same 'loudness' without sounding too shitty.

    I tried the standard -14 LUFS thing, but that ends up with my music sounding quieter than everything else. Your mileage will almost certainly vary, as it really does depend on your kind of music.

    A singer songwriter with voice and acoustic guitar will need different treatment than my electronic nonsense for example. It's all relative. I can crush the hell out of my music with less detrimental effects than said acoustic song. My song never had much subtlety to ruin lol.

    If you use Apple Music, try adding your songs to your library. You can then stream them alongside your favourite commercial songs and see how they sound relatively. Adjust to taste and try again. As it's in your personal library, you can do this without actually releasing a song with Distrokid or CDBaby, etc.

  • @klownshed and @JanKun I think the overall thing here is keep one eye on levels, but don’t let that dominate the work you are doing. It is very easy to overthink things to a point where the vibe just gets killed and you end up putting the track in the bin and starting again.

  • @Fruitbat1919 nice work with what you have done so far. You have a good flow going here.

  • @Mountain_Hamlet said:
    @klownshed and @JanKun I think the overall thing here is keep one eye on levels, but don’t let that dominate the work you are doing. It is very easy to overthink things to a point where the vibe just gets killed and you end up putting the track in the bin and starting again.

    I only check LUFS levels when the song is basically finished and I’m bouncing the final version. That stage is all about getting the levels right. So no, not overthinking things at all. Quite the opposite. The song is done at that stage and far from being close to being put in the bin.

    The mix is usually fairly close before it gets to that stage. I make sure I’m not clipping or overloading as I go, adjusting the basic mix as I go to make sure everything is balanced.

    Once the arrangement is finalised then it’s time to mix, setting automation of levels, fx etc.

    The final stage where I’m effectively mastering for want of a better word is all about the levels. The song is 99.99% done at that stage and no single processor does much — that’s where I may make small EQ tweaks to clean up certain frequencies for example. And I’m rarely compressing more than 1dB here and there. And in my limiter plugin it’s small tweaks too, A|B’ing with reference tracks to make sure my songs sound reasonably consistent.

    But all those small tweaks add up and can make the difference between a muddy mix and a clearer, tighter ‘nicer’ mix.

    I don’t care what a LUFS is until I’m at that final stage.

  • @Mountain_Hamlet said:
    @Fruitbat1919 nice work with what you have done so far. You have a good flow going here.

    Thank you :)
