Best plug-in or way to emulate an analog mixing board?

Comments

  • edited April 18

    @Fingolfinzz another one using AI profiles. I'm pushing the input on the J37 hard, testing the saturation.

  • So, I completely overlooked the GuitarML thing in this discussion (and the freebies by @Fingolfinzz, sorry about that). How does it differ from AmpIR? Both seem to load IRs.

  • @Slush said:
    So, I completely overlooked the GuitarML thing in this discussion (and the freebies by @Fingolfinzz, sorry about that). How does it differ from AmpIR? Both seem to load IRs.

    IRs and GuitarML profiles are entirely different things. I believe someone described this earlier in the thread.

  • @Slush said:
    So, I completely overlooked the GuitarML thing in this discussion (and the freebies by @Fingolfinzz, sorry about that). How does it differ from AmpIR? Both seem to load IRs.

    See my previous post. GuitarML and IR are two different ways of modeling equipment response. Both the data and how they work are different.

  • @Slush In the most simplified way I can put it in terms of guitar: the IR captures the cab, and GuitarML captures what's going on inside the circuitry of the amp. GuitarML loads the AI profiles, which are the .json files, and the AmpIR module is for the impulses.

    One setup would be, say, I wanted a Fender Twin. I'd run my audio file through the amp and use a load box to bypass the speaker, because you don't want any spatial aspects in the audio for the GuitarML capture. That file is about 4 minutes long. Then for the IR, I'd run an impulse through the amp and speaker and capture the spatial aspect of the amp with a mic. I could also capture the spring reverb on it with the IR. Then I can get the amp setting I just captured by loading the GuitarML module first and then running that into the AmpIR module (there's a rough sketch of that chain below).

    With outboard gear, though, there isn't much spatial stuff I want to get, so I just focus mainly on getting the saturation of the particular hardware I'm sampling. I'll do a few more like with the Pultec, because the EQ boosts on those are kind of special, but for the most part capturing the color of the hardware is enough, because I can use FabFilter or something for EQ curves.
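
A minimal sketch of that capture chain, assuming Python with numpy, scipy, and soundfile available. The `amp_model` function is a hypothetical stand-in for the trained GuitarML network, and the file names are placeholders:

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Load the dry DI recording and the cab impulse response.
# File names are placeholders; fold stereo to mono for simplicity.
dry, sr = sf.read("di_guitar.wav")
ir, ir_sr = sf.read("fender_twin_cab_ir.wav")
assert sr == ir_sr, "resample the IR to match the audio rate first"
if dry.ndim > 1:
    dry = dry.mean(axis=1)
if ir.ndim > 1:
    ir = ir.mean(axis=1)

def amp_model(x):
    # Hypothetical stand-in for the trained network: a soft clipper
    # that mimics amp saturation. The real GuitarML module runs the
    # RNN weights from the .json profile instead.
    return np.tanh(3.0 * x)

# Amp circuit first (nonlinear), then cab/room (linear convolution),
# mirroring the GuitarML -> AmpIR module order described above.
wet = fftconvolve(amp_model(dry), ir)[: len(dry)]
sf.write("fender_twin_full.wav", wet / np.max(np.abs(wet)), sr)
```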

  • Also, if anyone has any before and after one-shots of gear they'd like to see sampled, I can put together audio files for it, run it through the machine learning, and see what results I get. Drums, guitar, keys, and bass samples seem to get the best results.

  • @uncledave @Fingolfinzz Thanks for explaining, good info!

  • edited April 19

    @stormbeats said:
    @Fingolfinzz @Antos3345 and everyone. Here's a quick beat I made in BeatMaker 3 using BYOD / GuitarML and Pasttofuturereverbs' Akai MPC60 12-bit sampler AI profile and Studer J37 AI profile. No plugins used, just these .json profiles plus Pasttofuturereverbs' API EQ .json profile, all on a single bus/aux. The sound I'm getting from these profiles, as well as from @Fingolfinzz's excellent AI profiles which I have used on other beats, is incredible. Today I literally deleted a lot of my AUv3 plugins and will be using AI profiles from here onwards…

    Hey, what's up. I'd just like clarification: you're using what you bought from the first link you posted earlier, 100% all on iOS with Chow/GuitarML?

    @Antos3345

    I need to be sure, because a few days ago I had bought this:

    https://pasttofuturereverbs.gumroad.com/l/wuogw

    But it won't work in BYOD's GuitarML or AmpIR modules. I've already admitted I fucked up, being a newb to IRs and profiles. I see your linked product says Proteus and NAM; my product page also said NAM and Genome. So would Proteus be the keyword here?

  • @Blipsford_Baubie Proteus is a GuitarML plugin for desktop, so data for Proteus should be compatible with the GuitarML module in BYOD. NAM is a different machine learning model, not yet supported on iOS. So yes, Proteus is the keyword you need to look for.

  • Thanks, @uncledave. You have the patience of a saint, taking us back to school and having to repeat yourself sometimes.

  • It might be confusing to some users that .json is not an app-specific file type, just a generic text format: a dump of object data as "name: value" pairs. A given JSON file is only useful to an app that implements the same object, so only GuitarML JSON files can be used by a GuitarML module.
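
To make that concrete, here is what peeking inside a profile might look like. The key names below ("model_data", "state_dict") follow the Proteus-style layout, but treat the exact schema as an assumption; the point is just that the structure is model-specific:

```python
import json

# Peek inside a GuitarML profile. Key names here are assumptions
# based on the Proteus-style layout; other tools (NAM, Genome) use
# their own schemas, which is exactly why their .json files won't
# load in the GuitarML module.
with open("studer_j37.json") as f:
    profile = json.load(f)

meta = profile.get("model_data", {})
print(meta.get("model"))        # network architecture name
print(meta.get("hidden_size"))  # size of the RNN the loader rebuilds
weights = profile.get("state_dict", {})
print(len(weights), "weight tensors")  # the trained model itself
```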

  • Thanks for the clarification, @uncledave. I’m kind of flying by the seat of my pants with all of it cos I’m learning as I go so I mostly just know it in the context of what I’ve been working on.

  • Can any users of this stuff clarify regarding frequency loss? Obviously I don't mean loss that's part of what's being modelled (maybe some high end if it's an old tape machine). I know @Fingolfinzz has mentioned some low-end loss with .json files in BYOD. Is that specific to BYOD, or does using a Proteus file instead of a .json eliminate it? Can there be differences in quality of these files depending on who's capturing them, or does the machine learning aspect kind of eliminate that? Would it be better to wait and see if/when NAM comes to iOS, or is it possibly an issue there also? Sorry for all the questions, and thanks in advance!

  • @Zerozerozero the frequency loss seems to be specific to how BYOD handles the file; Genome seems to handle it better. It'll definitely make a difference depending on who is capturing and how they capture: the better the audio equipment and interface, the better the results. Even at the risk of promoting the competition, so to speak, I think Stuart at analogxai is doing the best job at it. He's written custom code and has better equipment, whereas I am using the default GuitarML code and can't afford a better interface. I can't speak for NAM, but apparently it's getting better results lately from what I've been reading. I haven't bothered with it much yet cos I'm just focusing on what works with iOS, so maybe someone else can speak for NAM. When it gets released for iOS, though, I'll be looking into it more.
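
One way to check that low-end loss for yourself is to render the same source through the original hardware (or another host) and through BYOD, then compare average spectra. A rough sketch, with placeholder file names:

```python
import numpy as np
import soundfile as sf

# Compare low-frequency content of two renders of the same source,
# e.g. the original hardware capture vs. BYOD's GuitarML output.
a, sr = sf.read("hardware_capture.wav")
b, _ = sf.read("byod_render.wav")
if a.ndim > 1:
    a = a.mean(axis=1)
if b.ndim > 1:
    b = b.mean(axis=1)
n = min(len(a), len(b))

freqs = np.fft.rfftfreq(n, d=1.0 / sr)
mag_a = np.abs(np.fft.rfft(a[:n]))
mag_b = np.abs(np.fft.rfft(b[:n]))

low = freqs < 100  # look at the sub/low band only
diff_db = 20 * np.log10(mag_b[low].mean() / mag_a[low].mean())
print(f"level difference below 100 Hz: {diff_db:+.1f} dB")
```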

  • Do the Guitar ML models require a certain sample rate? Or can the json data be used with any sample rate?

  • @uncledave so the audio files I use for the before and after sources that I feed to the machine learning code have to be at 44.1k for the GuitarML stuff. NAM uses 48k, so there's an advantage there. But when loading the JSON data in the GuitarML module, the sample rate doesn't seem to make any difference as far as I can hear.

    If you're ever interested in how to capture your own gear, I got started at https://guitarml.com/. The coder, Keith, is very informative and has a lot of good info to get going. I have a little bit of a background in coding, but not a ton, and it was easy enough to follow along. Once I have my audio files, I just use a Google Colab project he provides that does all the machine learning and then gives me the JSON data output.
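
If a capture was recorded at 48 kHz, resampling it to 44.1 kHz before feeding it to the trainer is straightforward. A sketch with scipy; file names are placeholders:

```python
import soundfile as sf
from scipy.signal import resample_poly

# GuitarML's trainer expects 44.1 kHz before/after pairs (per the
# post above), so convert a 48 kHz capture down first.
# 44100 / 48000 reduces to 147 / 160, so polyphase resampling is exact.
audio, sr = sf.read("gear_output_48k.wav")
if sr == 48000:
    audio = resample_poly(audio, up=147, down=160, axis=0)
    sr = 44100
sf.write("gear_output_44k1.wav", audio, sr)
```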

  • Thanks @Fingolfinzz and all other contributors on this thread. Interesting stuff 👍🏻

  • @Fingolfinzz said:
    @Zerozerozero the frequency loss seems to be specific to how BYOD handles the file; Genome seems to handle it better. It'll definitely make a difference depending on who is capturing and how they capture: the better the audio equipment and interface, the better the results. Even at the risk of promoting the competition, so to speak, I think Stuart at analogxai is doing the best job at it. He's written custom code and has better equipment, whereas I am using the default GuitarML code and can't afford a better interface. I can't speak for NAM, but apparently it's getting better results lately from what I've been reading. I haven't bothered with it much yet cos I'm just focusing on what works with iOS, so maybe someone else can speak for NAM. When it gets released for iOS, though, I'll be looking into it more.

    Have you communicated this information about BYOD and low-frequency loss to @chowdsp? He would probably be interested to know if it handles it less well than other GuitarML implementations.

  • @espiegel123 I was about to tell him, but noticed he has it on his to-do list on his GitHub page. I think he's just been having trouble pinpointing what's causing it, so it's been an issue for a bit. It hasn't been much of an issue for me so far, cos I can boost the subs a bit post-BYOD if I need to.

  • @Fingolfinzz said:
    @espiegel123 I was about to tell him, but noticed he has it on his to-do list on his GitHub page. I think he's just been having trouble pinpointing what's causing it, so it's been an issue for a bit. It hasn't been much of an issue for me so far, cos I can boost the subs a bit post-BYOD if I need to.

    Cool
