
Full creation workflow description with creative AI usage

edited April 20 in Other

Long time no post. I deeply fell into the AI rabbit hole and my inner computer nerd is back at full throttle. Anyway, alongside the computer and data science exploration I'm on, I also follow the progress of musical uses of AI. There is definitely a great community exploring AI as a tool for musicians instead of simply generating whole songs with Suno, Udio and the like: crafting sounds with AI instead of browsing sample libraries. Just today there was a great thread on X by an artist called Vnderworld that explains a workflow and tool usage in a decent level of detail. I find it really cool.

https://x.com/_vnderworld/status/1702354872186949933

Here is also my list of AI music related accounts that I follow: https://x.com/i/lists/1740524000457998530

Comments

  • @krassmann said:
    Long time no post. I deeply fell into the AI rabbit hole. [...] Here is also my list of AI music related accounts that I follow: https://x.com/i/lists/1740524000457998530

    I followed, but I have no idea what they’re talking about most of the time. Looking forward to the for dummies version!

  • @Wrlds2ndBstGeoshredr said:

    @krassmann said:
    Long time no post. I deeply fell into the AI rabbit hole. [...] Crafting sounds with AI instead of browsing sample libraries.

    I followed, but I have no idea what they’re talking about most of the time. Looking forward to the for dummies version!

    Also, I understand the desire for more control over the output of these generative systems, but personally I like to wind them up, let them go, and see what kind of results I get. It requires a bit of an adjustment in expectations. I think of myself more as a "director" giving instructions to another very capable creator, which is the system.

  • edited April 21

    Fair enough, but I'm in the other camp. I prefer AI as a source of inspiration, a new form of synthesis. AI should empower musicians instead of replacing them, relieving us of the boring stuff, like endlessly browsing samples. Probably as a composition tool, too. Think Scaler on steroids. ATM my favourite is https://stableaudio.com/ because it generates perfect loops and follows tempo and key very well.

  • For Udio I've found this quite useful for prompt design: https://rateyourmusic.com/music_descriptor/

    Most tags in Udio are similar, and you can also browse music on RYM that goes in a direction you like and get inspiration from the descriptors it's using.

    Quite a few descriptors sit on opposite ends of a spectrum, so it's best to create your own spreadsheet and define the spectrums as you see fit. E.g.: angry, aggressive, energetic, manic | calm, meditative, mellow, soothing

    I'm a bit into hypnotic Techno and found some useful prompts for complex drones and background ambiences this way. The results are so good but also so random that it's almost like using a well dialed-in randomizer setting on a synth plugin.

    It's less ideal for drum breaks, drum machine sounds or the simpler stuff most producers spend a lot of time on to get the details right. AI seems to shine with increasing complexity, where the sum is greater than the parts.
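The descriptor-spectrum spreadsheet idea above can be sketched as a small data structure. Everything here is an illustrative assumption — the axis names, the word lists, and the `pick` helper are made up for the example, not an official Udio or RYM tag list:

```python
# Sketch: opposing prompt descriptors organized as named spectrums.
# Axis names and descriptor words are illustrative, not an official tag list.
SPECTRUMS = {
    "energy":  (["calm", "meditative", "mellow", "soothing"],
                ["angry", "aggressive", "energetic", "manic"]),
    "density": (["sparse", "minimal", "airy"],
                ["dense", "layered", "wall-of-sound"]),
}

def pick(axis, t):
    """Pick one descriptor from an axis; t in [0, 1] moves from the
    left pole (0.0) to the right pole (1.0) of the spectrum."""
    low, high = SPECTRUMS[axis]
    words = low + high
    return words[round(t * (len(words) - 1))]

# Build a prompt fragment by choosing a point on each axis.
prompt = ", ".join(pick(a, t) for a, t in [("energy", 0.1), ("density", 0.9)])
print(prompt)
```

The point of the spreadsheet approach is exactly this: once the poles are defined, you can dial each axis independently instead of guessing word combinations.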

  • edited April 21

    @kirmesteggno Yeah, the results are still difficult to control. You definitely need to embrace the surprise. For drones it's not so problematic, that's true. When I create loops with stableaudio I very often discard the drums after stem splitting and do them myself, but I keep the melodic lines and the atmos.

    BTW, I really get good results with the stem separator at https://mage.triniti.plus. But demucs also still works well for me. For demucs I have a custom AI model trained on percussive instruments that separates the drums, which is quite unique.

    Udio sounds very good but I hate that it can't follow the tempo. I really love stableaudio. The new audio-to-audio feature is really wild.
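For anyone curious what a stem splitter is doing under the hood: demucs is a neural model, but the classic signal-processing baseline is median-filtering harmonic/percussive separation, which exploits the fact that drum hits show up as vertical lines in a spectrogram while sustained tones are horizontal ones. A minimal sketch on a synthetic spectrogram — this illustrates the textbook HPSS idea, not how demucs itself works:

```python
import numpy as np
from scipy.ndimage import median_filter

def hpss_mask(mag, kernel=17):
    """Median-filter harmonic/percussive split on a magnitude
    spectrogram (shape: freq x time). Smoothing along time keeps
    horizontal (harmonic) lines; smoothing along frequency keeps
    vertical (percussive) lines."""
    harm = median_filter(mag, size=(1, kernel))   # smooth along time
    perc = median_filter(mag, size=(kernel, 1))   # smooth along frequency
    total = harm + perc + 1e-10                   # epsilon avoids div by zero
    return harm / total, perc / total             # soft masks

# Tiny synthetic spectrogram: one sustained tone (horizontal line)
# plus one broadband drum hit (vertical line).
mag = np.zeros((64, 64))
mag[20, :] = 1.0   # sustained tone at one frequency bin
mag[:, 30] += 1.0  # broadband transient at one time frame
h_mask, p_mask = hpss_mask(mag)
# The tone bin is dominated by the harmonic mask,
# the transient frame by the percussive mask.
print(h_mask[20, 5], p_mask[40, 30])
```

Multiplying the masks against the complex STFT and inverting gives the two stems; neural separators like demucs learn far richer masks, but the vertical/horizontal intuition is the same.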

  • @krassmann said:
    @kirmesteggno Yeah, the results are still difficult to control. [...]

    Udio sounds very good but I hate that it can't follow the tempo. I really love stableaudio. The new audio-to-audio feature is really wild.

    On projects I bring into Logic Pro I just end up counting out the BPM myself. Would be cool if Udio provided tempo information for the user. Maybe in a year?
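Counting out the BPM by hand can be roughed out in code: detect onsets in the amplitude envelope, take the median gap between them, and convert seconds-per-beat to beats per minute. A minimal sketch on a synthetic click track — real generated audio would need a proper onset detector or a beat-tracking library, so treat this as the idea only:

```python
import numpy as np

def estimate_bpm(signal, sr):
    """Rough tempo estimate: threshold the amplitude envelope, find
    rising edges (onsets), take the median inter-onset gap in seconds,
    and convert to BPM."""
    env = np.abs(signal)
    above = env > 0.5 * env.max()
    # Rising edges of the thresholded envelope = onset positions.
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    gaps = np.diff(onsets) / sr          # seconds between onsets
    return 60.0 / np.median(gaps)        # beats per minute

# Synthetic 120 BPM click track: one click every 0.5 s.
sr = 8000
clicks = np.zeros(sr * 4)
clicks[::sr // 2] = 1.0
print(estimate_bpm(clicks, sr))   # -> 120.0
```

The median gap makes the estimate robust to a missed or doubled onset, which is exactly the failure mode you get with busy generated material.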

  • @krassmann said:
    BTW, I really get good results with the stem separator at https://mage.triniti.plus. But demucs also still works well for me. For demucs I have a custom AI model trained on percussive instruments that separates the drums, which is quite unique.

    Interesting. Is there a review, comparison or walkthrough of that triniti app? Can you train models with that app?

    I'm using the demucs in UVR on my Mac which supports the M1 GPU and is quite fast.

    Udio sounds very good but I hate that it can't follow the tempo. I really love stableaudio. The new audio-to-audio feature is really wild.

    Great example in the video! I tried it as well a couple of days ago; I've posted an example in the other AI thread where I copied a sample from an oldschool sample CD. I also fed it a chop from a 70s CTI record and got caught by the algo.

    When I generate drones in Udio (they're actually space horror soundtracks) I also have a basic 909 beat playing in Ableton in the background which makes it easier to decide what to keep.

  • edited April 21

    @kirmesteggno No, you can't train models with triniti. Do you mean fine-tuning demucs or generative models? All the open source guys I know on X are training models based on Meta's MusicGen. They are desperately waiting for Stability AI to at least release the weights of stableaudio.

    There's some really cool and creative stuff, but the audio quality of MusicGen is not really good. If you're doing LoFi that's okay, but otherwise it's mostly good for sampling or as audio input for stableaudio. Check this out:

    https://huggingface.co/spaces/nateraw/singing-songstarter

    https://huggingface.co/spaces/thepatch/zero-gpu-slot-machine

  • @krassmann said:
    @kirmesteggno No, you can't train models with triniti. Do you mean fine-tuning demucs or generative models? All the open source guys I know on X are training models based on Meta's MusicGen. They are desperately waiting for Stability AI to at least release the weights of stableaudio.

    Don't know; I'd feed it stuff that I think sounds great and somehow make it better that way. I have no idea how the training or fine-tuning works. It would be cool if there was a way to just give it a folder and let it do its thing.

    There's some really cool and creative stuff, but the audio quality of MusicGen is not really good. If you're doing LoFi that's okay, but otherwise it's mostly good for sampling or as audio input for stableaudio. Check this out:

    Thanks for the links, those look interesting. I've used the default MusicGen space since it came out on Huggingface. I've posted some beats I've generated with it on the first pages of the other thread. For LoFi beats it's quite nice indeed.
