Best plug-in or way to emulate an analog mixing board?

Comments

  • edited April 10

    I really like that STUDER 169 on/off demo:

  • interesting.. thx

  • A problem with that demo is that I’m fairly sure ON has a higher volume than OFF, and as we all have human hearing, louder = better.

  • @stormbeats said:

    @Fingolfinzz said:
    I’ll do a shameless plug here, if you use the GuitarML in BYOD, you can use snapshots of hardware. It’s not algorithm, it’s an actual snapshot of the hardware acquired through machine learning, so it’s the actual nonlinear behavior. I have sampled tubes and tape machines and have some freebies you can check out. algorithms for saturation are dead to me because the GuitarML stuff hits so much better.
    https://chamber93grit.gumroad.com/

    +1 I can confirm @Fingolfinzz

    I was looking in my CHOW (BYOD) and I can’t seem to find this GuitarML? Pedals?

    BYOD / Drive preset / GuitarML

    This is my view. I don’t see it.

  • edited April 10

    @Antos3345 oh that’s actually the preset system; the GuitarML can be found by tapping the “+” icon in the top right of the “Drive” menu. Here’s a video as well with a step-by-step visual of how to set it all up properly.

  • Super thanks again :)

  • @MadGav said:
    A problem with that demo is that I’m fairly sure ON has a higher volume than OFF, and as we all have human hearing, louder = better.

    There is a volume dial on your IR player. These Pasttofuturereverb IRs are quality and cover a lot of unique devices. I have bought a couple of them already and I can highly recommend them.

  • Yeah, the PTF IRs are interesting and do run the audio through some filtering or something, I like them. Their demos are poorly gain matched but they do make a difference.

    If I can ever swing the studio time costs, I’d like to get profiles of a mixing console for GuitarML and IRs of it. I’m curious how using both would end up shaping the sound.

  • I really don’t understand how the IR files work or how they can simulate an actual mixer? Is it the noise or circuit sound or what? Are they eq? I will try to also search on YouTube. Do they load in CHOW BYOD? Sorry, just a bit foreign to me.

  • edited April 11

    I got the GuitarML to work, thanks, and it’s some heavy saturation :) I see you can load more of your own with Custom.. Where are more? This is cool.

  • I find myself using Saturn 2 and the Nembrini PSA preamp model most often in my mixes. Although I do mostly acoustic instruments, I’ve never tried the Nembrini on a synth.

  • I used to use an older Mackie mixer in the ‘90s that had a punchy sound, not clean. I may have to try the Saturn. This GuitarML CHOW effect is cool too!

  • edited April 11

    @Antos3345 said:
    I really don’t understand how the IR files work or how they can simulate an actual mixer? Is it the noise or circuit sound or what? Are they eq? I will try to also search on YouTube. Do they load in CHOW BYOD? Sorry, just a bit foreign to me.

    Well, I hope I explain this well, but the exact process you’ll have to Google. How it kind of works is that, for instance, they take an old vintage mixing board, shoot an impulse sound through it (like a loud click) and record the result. The recorded waveform from that sound is the Impulse Response. And here comes the magic: when you load that waveform into an IR player, every sound that passes through it (a guitar, synths, whatever) gets run through that Impulse Response waveform and it will sound exactly like it would if you played that same sound through the original vintage mixing board. It is as close to the real deal as you can get.

    And as far as an IR player, I already explained: you can use the free one in BYOD (choose Amp IRs and hit the ‘Fender’ name, you will get a dropdown that says ‘Load from File’, and then just choose your own WAV impulse response; you will have to buy one of course from a third-party source like the ones I gave you, or d/l some free ones). Or use Thafknar, AltiSpace 2 or any desktop IR loader (like the free Convology XT). That’s it. Load the Impulse Response waveform into it, add it to your fx chain, simple.

    Ask if you want to know more, me or someone else can always help.
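
    If it helps to see it spelled out, here is a rough Python sketch of what an IR player does under the hood, nothing more than a convolution. The file names are placeholders and it’s mono-only for brevity:

    ```python
    # Minimal sketch of what an IR loader does: convolve the dry signal with
    # the impulse response. File names below are placeholders.
    import numpy as np
    import soundfile as sf                      # pip install soundfile
    from scipy.signal import fftconvolve

    dry, sr = sf.read("dry_synth.wav")                 # your instrument recording
    ir, ir_sr = sf.read("vintage_mixer_channel.wav")   # the impulse response
    assert sr == ir_sr, "sample rates must match (resample the IR otherwise)"

    # Mono for simplicity; real IR players also handle stereo IRs.
    if dry.ndim > 1:
        dry = dry.mean(axis=1)
    if ir.ndim > 1:
        ir = ir.mean(axis=1)

    wet = fftconvolve(dry, ir)                  # this is the whole trick
    wet /= np.max(np.abs(wet)) + 1e-12          # crude normalization to avoid clipping
    sf.write("dry_synth_through_mixer.wav", wet, sr)
    ```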

  • edited April 11

    super... this is very interesting and I will get into this! THX :) I saw you have an IR of a Studer tape machine? What about a mixing desk? Sorry I sound so slow about this, but it’s a completely new process for me.

  • edited April 11

    @Antos3345 said:
    I got the GuitarML to work, thanks, and it’s some heavy saturation :) I see you can load more of your own with Custom.. Where are more? This is cool.

    So you just go to the Custom button and it’ll open up your Files app. If you have downloaded any GuitarML profiles, just navigate to where they’re downloaded and load the JSON file; that’s the profile itself, and it’ll load into the slot.

    Here’s that Mutant Studer profile from my page; it’s just the JSON file, so you can load it to get an idea of the process. Just unzip it first, because I had to zip it to upload it here.
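
    And if you’re curious what’s inside, it really is just JSON. Here’s a quick sketch to peek at one; the exact keys depend on the model type, and the file name is just a placeholder:

    ```python
    # Peek inside a GuitarML profile. It's plain JSON, but the schema varies by
    # model type, so we only print whatever is actually in the file.
    import json

    with open("Mutant_Studer.json") as f:        # placeholder file name
        profile = json.load(f)

    print("top-level keys:", list(profile.keys()))
    for key, value in profile.items():
        if isinstance(value, (list, dict)):
            print(f"{key}: {type(value).__name__} of length {len(value)}")
        else:
            print(f"{key}: {value}")
    ```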

  • I'd like to explain that there are two different ways of profiling audio equipment. Both schemes use a bunch of numbers, but they apply them differently. The older way is by Impulse Response (IR). An IR can model the time response of a linear system. So it can include frequency response, time delays, reverberation, etc. It cannot model any nonlinearity, drive, distortion, etc. The numbers in an IR are a time series, literally the transient response to an impulse, usually stored as a wav file. BYOD includes an IR loader; Thafknar is a slightly more refined implementation.

    The new-fangled way is Machine Learning (ML), which uses a neural network to represent a system. It can handle frequency response and nonlinearity. But it cannot represent the time dynamics that an IR handles so well. The numbers in an ML model are the coefficients connecting nodes of the neural network. In Guitar ML, they're stored as a matrix in json format; easy to read, but makes no sense without the neural net model. As already discussed, BYOD can load Guitar ML models.

    So, there's room for both approaches, depending on the phenomena that require modeling.
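
    If a toy example helps, here is the difference in a few lines of Python. This is not any real product’s algorithm, just the two ideas side by side with made-up numbers:

    ```python
    # Toy contrast, not any product's code. An IR is applied by linear
    # convolution; an ML capture is a learned nonlinear function of the input.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(48_000)              # one second of "audio" at 48 kHz

    # IR: linear and time-based (frequency response, delays, tails).
    ir = np.array([1.0, 0.5, 0.25, 0.1])         # a 4-sample toy impulse response
    y_ir = np.convolve(x, ir)

    # ML: nonlinear. A one-hidden-layer net stands in for a trained model;
    # the json file would hold coefficients like W1, b1, W2, b2.
    W1, b1 = rng.standard_normal((8, 1)) * 0.5, np.zeros((8, 1))
    W2, b2 = rng.standard_normal((1, 8)) * 0.5, np.zeros((1, 1))

    def ml_model(sample):
        h = np.tanh(W1 * sample + b1)            # the tanh is what gives saturation
        return (W2 @ h + b2).item()

    y_ml = np.array([ml_model(s) for s in x])    # drive/distortion, but no tails
    ```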

  • Ok, thanks again for your help and everything regarding this. It seems so easy, but then kinda deep.

  • edited April 12

    I loaded some of these IR wav files in the CHOW amp player. This was the one I went for, and if I like it I will get more.
    So when I load these, are they like how it would sound through these mixer channels at some EQ setting, or? I don’t really see what the differences are between the files.
    Thanks

  • edited April 12

    @Antos3345 said:
    I loaded some of these IR wav files in the CHOW amp player. This was the one I went for, and if I like it I will get more.
    So when I load these, are they like how it would sound through these mixer channels at some EQ setting, or? I don’t really see what the differences are between the files.
    Thanks

    Hey cool you bought them, very curious how they work out for you!

    Well, the description says “Place the channel IR as the first FX/Plug-in into your tracks and the Buss IR into your busses and stereo bus.”

    You have 6 files, 3 WAV and 3 AIF.

    I think you are right about the EQ channel; it’s probably an extra IR you can try, so either the normal channel or the high-EQ one. (I would make a preset of both in your IR player so you can switch between them and hear how each affects your sound.)

    Ok, so you have your instrument (guitar/synth/drummachine/whatever) and then place the CHANNEL IR right after that, so that it will be your first fx, and the BUS as the last fx on your master channel.

    I quote “The ‘master’ channel is actually a buss, because it takes the output of all the channels on the mixer and outputs them to your speakers or headphones, etc.” from: https://www.homebrewaudio.com/9497/what-is-a-buss-in-audio-recording/

    Anyway, it doesn’t say so in the description but I think you should use these IRs at 100% wet, so if you use the BYOD Amp IR player, set its mix dial at 100% wet for both the CHANNEL IR as well as the BUS IR. You might need to fiddle with the volume, try and let us know if it works.
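
    If you ever want to sanity-check the routing offline, here is a rough Python sketch of the same idea, channel IR first on each track and the bus IR last on the summed mix. File names are placeholders and mono WAVs are assumed:

    ```python
    # Channel IR as the first insert on every track, bus IR as the last insert
    # on the master sum, everything 100% wet. Mono placeholder files.
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    channel_ir, sr = sf.read("console_channel.wav")
    bus_ir, _ = sf.read("console_buss.wav")

    tracks = [sf.read(name)[0] for name in ("drums.wav", "bass.wav", "synth.wav")]

    # Channel IR on each track.
    processed = [fftconvolve(t, channel_ir) for t in tracks]

    # Sum to the master bus, then the bus IR on the sum.
    length = max(len(p) for p in processed)
    mix = np.zeros(length)
    for p in processed:
        mix[: len(p)] += p

    master = fftconvolve(mix, bus_ir)
    master /= np.max(np.abs(master)) + 1e-12     # keep some headroom
    sf.write("mix_through_console.wav", master, sr)
    ```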

  • edited April 12

    I’m hearing a definite difference in sound and mood. Weird, but it definitely sounds warmer, like a tape or tube compression. Thanks again :)

  • @Antos3345 said:
    I’m hearing a definite difference in sound and mood. Weird, but it definitely sounds warmer, like a tape or tube compression. Thanks again :)

    Great!

  • @Jökulgil said:

    @uncledave said:
    I'd like to explain that there are two different ways of profiling audio equipment. Both schemes use a bunch of numbers, but they apply them differently. The older way is by Impulse Response (IR). An IR can model the time response of a linear system. So it can include frequency response, time delays, reverberation, etc. It cannot model any nonlinearity, drive, distortion, etc. The numbers in an IR are a time series, literally the transient response to an impulse, usually stored as a wav file. BYOD includes an IR loader; Thafknar is a slightly more refined implementation.

    The new-fangled way is Machine Learning (ML), which uses a neural network to represent a system. It can handle frequency response and nonlinearity. But it cannot represent the time dynamics that an IR handles so well. The numbers in an ML model are the coefficients connecting nodes of the neural network. In Guitar ML, they're stored as a matrix in json format; easy to read, but makes no sense without the neural net model. As already discussed, BYOD can load Guitar ML models.

    So, there's room for both approaches, depending on the phenomena that require modeling.

    Are wav and json the accepted two main ways of replicating an IR? Are there debates about which is more accurate/better sounding etc., like with any sim? To share an IR preset, is it literally just a wav or json file that you should expect, and do BYOD and Thafknar load both (json & wav), or json and wav respectively?

    Err.. Not exactly. An IR is basically always in wav form, because it is just a time response. Issues relate to the sample frequency (needs to match your audio data), and length. Also, some IRs can apply to a stereo receiver, and stereo source. Thafknar is more sophisticated in this way. Both Thafknar and the BYOD IR loader use wav files.

    ML data is different, because it is just a bunch of coefficients. Guitar ML seems to have adopted the json standard, and the BYOD Guitar ML module supports this. I cannot speak for any other ML profilers. Thafknar is not an ML program.
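
    For the sample-rate point, here is a small sketch of what “needs to match your audio data” means in practice. The file name is a placeholder and a 48 kHz session is assumed:

    ```python
    # Resample an IR to the session rate before loading it.
    from math import gcd

    import soundfile as sf
    from scipy.signal import resample_poly

    SESSION_SR = 48000                            # assumed session rate

    ir, ir_sr = sf.read("console_channel.wav")    # placeholder file name
    if ir_sr != SESSION_SR:
        # e.g. 44100 -> 48000 reduces to a 160/147 ratio
        g = gcd(SESSION_SR, ir_sr)
        ir = resample_poly(ir, SESSION_SR // g, ir_sr // g, axis=0)
        print(f"resampled IR from {ir_sr} Hz to {SESSION_SR} Hz")
    ```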

  • With the machine learning stuff, I have to use an audio file that’s about 3:30 long and run that through for it to analyze. The output of the analyzing and whatnot will be the JSON file; NAM uses it too. No clue what’s going on inside when it’s analyzing, I think it just compares the before and after audio files and finds the differences in order to have a reference of the nonlinear behavior of my target, and writes the nodes for it.

    I think the ML stuff is based on concepts from IRs but pretty different from IRs in application, since it can’t capture space and IRs can’t capture nonlinearities. I only use line (direct) signals when recording material for ML, because any spatial aspects in the recording will really mess up my results.
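
    If you want a feel for the “compare the before and after” part, here is a deliberately tiny toy in Python. It is nothing like the actual GuitarML/NAM training code, which fits a real neural network; this only fits a single drive parameter of a tanh waveshaper, just so the predict/compare/adjust loop is visible:

    ```python
    # Toy capture: learn one parameter so that tanh(drive * dry) matches the
    # "hardware" recording. Real trainers fit full neural nets this way.
    import numpy as np

    rng = np.random.default_rng(0)
    dry = rng.uniform(-1, 1, 100_000)            # stand-in for the ~3:30 capture file
    true_drive = 4.0
    wet = np.tanh(true_drive * dry)              # "recorded through the hardware"

    drive = 1.0                                  # the single trainable parameter
    lr = 1.0
    for _ in range(500):
        pred = np.tanh(drive * dry)              # predict
        err = pred - wet                         # compare
        grad = np.mean(2 * err * (1 - pred**2) * dry)   # d(MSE)/d(drive)
        drive -= lr * grad                       # adjust

    print(f"learned drive ~ {drive:.2f} (true value {true_drive})")
    # A real profiler would then export the learned coefficients (e.g. as JSON).
    ```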

  • I just downloaded NAM player for my desktop and now it sounds more like an analog board.. Thanks :)

  • @Jökulgil said:

    @uncledave said:
    Err.. Not exactly. An IR is basically always in wav form, because it is just a time response. Issues relate to the sample frequency (needs to match your audio data), and length. Also, some IRs can apply to a stereo receiver, and stereo source. Thafknar is more sophisticated in this way. Both Thafknar and the BYOD IR loader use wav files.

    ML data is different, because it is just a bunch of coefficients. Guitar ML seems to have adopted the json standard, and the BYOD Guitar ML module supports this. I cannot speak for any other ML profilers. Thafknar is not an ML program.

    Oh thank you @uncledave. So if I’ve understood you correctly, if people were to share some of their best free (not paid/licensed) IRs that illustrate what they think is impressive/interesting regarding IRs, they would share:

    Wav = Thafknar, BYOD IR loader or any other IR loader program
    OR
    json file = BYOD Guitar ML or any other Guitar ML program

    Have I got that right and is the ML IR on par with the original IRs or seen as inferior?

    The ML is not an Impulse Response (IR), it is data for a Neural Network (Machine Learning). It is used to profile the response of a nonlinear system. It is completely different from an IR, and does not compete with it.

  • In terms of guitar, the IR captures the cab and the GuitarML captures what’s going on in the amp itself.
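
    As a toy version of that chain: a nonlinear “amp” stage (the kind of thing a GuitarML capture models) followed by a cab IR (linear convolution). File names are placeholders and the tanh is just a stand-in for a real amp capture:

    ```python
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    di, sr = sf.read("guitar_di.wav")            # dry DI guitar, mono assumed
    cab_ir, _ = sf.read("4x12_cab.wav")          # cab impulse response

    amp_out = np.tanh(6.0 * di)                  # stand-in for the amp capture
    cab_out = fftconvolve(amp_out, cab_ir)       # the cab/speaker coloration
    cab_out /= np.max(np.abs(cab_out)) + 1e-12
    sf.write("guitar_amp_plus_cab.wav", cab_out, sr)
    ```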

  • edited April 13

    I bought the Daddy Kev Audio Dynamics book a while ago, and within it are a lot of pages of his recommended settings for console EQs, compressors etc. for individual instruments and groups. IRs with these settings baked in would be killer..

    There isn't a lot in it for synths, but applying the horns or string instrument settings to synths with a similar envelope works too.

  • I think a lot of that would work better with AI profiles, honestly

  • They're mostly pre-compressor EQ and compressor settings for different compressor types (he recommends different compressor types depending on the instrument).

  • edited April 14

    @kirmesteggno said:
    They're mostly pre-compressor EQ and compressor settings for different compressor types (he recommends different compressor types depending on the instrument).

    I will have to look through that book. Right now, I’ve been mostly focused on capturing the saturation etc. of hardware units, but have been planning on doing some more “universal” settings to capture as well, with some of the more sought-after stuff like the Pultec and the 1073. I know it mostly depends on your source, but a high lift on something like a 1073 sounds good on quite a lot of stuff. The AI captures can do snapshots of compression as well, but I’ve run into compressors being even more source-dependent than EQs; still, I have liked the results I’ve gotten from the 1176 and it does work on a lot of stuff. The Studer and Telefunkens I’ve played with got the tape-compression captures as well, so I’d definitely like to try that out.
