Comments
Definitely sounds cool, I’ll look forward to seeing that!
AI-generated spectrograms which in turn become music: https://www.riffusion.com/about
I think that example starts to point toward where audio production is headed. Since all audio can be represented as parts of a spectrum, every element should ultimately be identifiable, replicable, and replaceable with the assistance of machine learning. Even after a final mix it should be possible to extract or replace literally any part of the mix.
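For anyone curious what that looks like in practice, here's a minimal Python sketch of the round trip that idea relies on: audio in, magnitude spectrogram out, edit whatever you like, then resynthesize audio. It uses librosa's STFT and Griffin-Lim phase estimation; the file names and the band-silencing edit are just placeholder assumptions, not anything from Riffusion itself.

```python
# Minimal sketch: audio -> spectrogram -> edit -> audio again.
# Assumes librosa and soundfile are installed; "input.wav" is a
# hypothetical file name.
import numpy as np
import librosa
import soundfile as sf

y, sr = librosa.load("input.wav", sr=None)   # load audio at its native rate
S = np.abs(librosa.stft(y, n_fft=2048))      # magnitude spectrogram

# Any edit to S is an edit to the sound itself; here we just silence
# an arbitrary band of frequency bins as a stand-in for smarter,
# ML-assisted manipulation.
S[100:200, :] = 0.0

y_out = librosa.griffinlim(S, n_fft=2048)    # estimate phase, invert to audio
sf.write("output.wav", y_out, sr)
```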
Neato!
I have to be honest; this shit concerns me (a lot!).
I have just broken away from my mundane IT support job (due to both the job itself and political reasons in our department), and I want to concentrate on working on my 3D art.
I'm 41, and I've had enough of what I was doing; I figured it's time to start doing something I will enjoy and can be proud of.
I was well aware that art isn't exactly lucrative, unfortunately… but at this point in my life, I felt it was what I wanted to do.
Over on the 3D modelling forums I belong to, people are already starting to worry about the threat of NVIDIA producing 3D models from text input. And it worries me too.
I produce buildings in 3D. Not exactly incredible, but I'm hoping to improve…
Www.Sketchfab.com/SkillipEvolver
It’s already happening and it’s going to get better by becoming faster and easier. You should pursue the career you want, but always be aware of the current state of the art in the marketplace.
Interesting! I wonder if you'll produce any Ambient to go with those and create a multimedia experience/installation of some sort.
I've played with Dream and Wonder. Got a lifetime license for Wonder and pay a monthly subscription for Dream. I like Wonder more to be honest, but both are fun to mess around with.
This piques my interest. There are apps that show spectrograms….hmmm…thinking…
Of course
Normally I avoid forwarding these vids but these folks are smart, creative, legit and blah blah
Just checked out that Riffusion thing referenced above. The creators' article opens with a couple of 'meh' examples of what it does, but further down the page, when they demonstrate smooth interpolations from one sound to another… it gets seriously impressive.
One example, where the spectrogram of someone typing gradually morphs into a jazz piece, is particularly striking.
It seems to me as an untechnical punter that this is a very different deployment of AI than the previous AI music tools which create rules-based generic (bland) library music. This is more akin to an audio version of those AI facial morphs you’ve probably already seen where Joe Schmo smoothly and imperceptibly turns into Tom Cruise.
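For the technically curious, those smooth morphs typically come from interpolating between points in the model's latent space rather than crossfading audio. Here's a rough numpy sketch of spherical interpolation (slerp), the usual trick for this; the latent shapes are made-up stand-ins, and this isn't Riffusion's actual code.

```python
# Spherical interpolation (slerp) between two latent vectors: walking t
# from 0 to 1 gives the smooth "typing turns into jazz" style of morph,
# assuming a model that decodes each latent into a spectrogram/image.
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Interpolate along the arc between latents a and b (0 <= t <= 1)."""
    a_n = a / np.linalg.norm(a)
    b_n = b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n.ravel(), b_n.ravel()), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1.0 - t) * a + t * b  # vectors nearly parallel: lerp is fine
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Hypothetical endpoint latents (random stand-ins for real encodings):
latent_typing = np.random.randn(4, 64, 64)
latent_jazz = np.random.randn(4, 64, 64)
frames = [slerp(t, latent_typing, latent_jazz) for t in np.linspace(0, 1, 30)]
```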
I already want a web interface where I can upload one of my own audio samples and a text prompt to have it turn into, I dunno, ‘Lovecraftian monster slowly rising from the ocean’. Being able to spec key, tempo, and length…
It is all getting very… cool?
Now I'm jonesing for one of our superstar devs here to build a Dream-by-Wombo-style iPad gateway to make it so. The Riffusion guys seem to have put links to their code out there on GitHub, so it could happen. I'd pay good cash for that…
Looks great. What is the IAP cost, please?
Had another good session after 100+ generations of renderings:
Have a great Sunday everyone:
I don't know why, but this whole AI-generated art thing reminds me a lot of my neighbour's astroturf.
Sadly, we sleepwalk into this with very little thought of the consequences, almost as if the Corporatocracy says that if we don't do it, our opposition will. But worry not, we could always drop out into the Zuckerverse.
Imho it's no different than sampling in music. It all depends on how you use the output it gives you. If you just rely on the initial image then yes, it is like astroturf… but if you slice, dice, resample and build upon parts and pieces of the output, you create an entirely new landscape to enjoy. I'm taking this opportunity to get into my Robert Rauschenberg mode with all of these tools.
That's my view of it after thinking about it over the last few months. The long view is that for visual artists who want to use the system to iterate on their preferred styles and motifs, it is an amazing tool, and this is the lens I am seeing it through. I see it in the same light as I see Riffer, Fugue Machine, PlayBeat3, Scaler 2: a tool to push me into new places based on my intention.
They need to work out the legality of it all, though… but in the meantime, it's a vast sample crate I am enjoying creating visuals with when inspiration strikes. I'm using it to generate a ton of textures and elements I can use for years. 😉
Anybody here selling their creations? And on what platforms? Giclée prints? Or downloads?
I'd be interested to know where providing actual prints or downloads is best.
I’m posting my work here: https://fineartamerica.com/profiles/echo-opera/collections/portraits
Just like with audio sampling, one day they will be able to trace the original content the AI scanned and copyright-claim the F%^k out of everyone. Be prepared.
I think this video is a useful primer for anyone on how AI images are made and what they represent. It frames AI images as a kind of data visualisation: an infographic of the dataset, a map which reveals the connections in the data, in this case the datasets of images behind these text-to-image models (he compares it to John Snow's famous cholera map, which started this whole journey). When we get to explore this technology in realtime, which is only possible now on very high-end computers, you will see this map visualisation more clearly. But it's a good intro for anyone who wants to learn to use AI as an artist, because knowing how it works (and what it is designed to do) is important for creating work with it, rather than just treating it as a magic "black box" you have no control over.
Diffusion rendering is nothing like audio sampling. Not to say that it has totally ethical roots, of course, and while there are a few rare cases of overfitting even in the best ML models (where you can tell the source material), for the most part, in 99.9999999% of diffusion renderings, things are completely, effectively laundered into oblivion.
You don't even have to render in realtime to get a clear sense of that; just rendering sequences that blend between datapoints illustrates it well too.
That's not strictly true; the data is in the model and can be replicated and reconstructed quite accurately, but it's usually never asked to, or it happens accidentally, since you have very little control over the data with text-to-image/audio apps. So while it's not exactly like traditional sampling, it's a different type of sampling, e.g. data sampling, but it's closer to sampling than not.
The images cannot be replicated and reconstructed unless they are overfitted, for example when many extraneous duplicates were present in the original dataset when the model was trained. In the case of Stable Diffusion, it was trained on billions of images and the model is only about 4 GB. There is no way you can replicate and reconstruct those billions of images from a 4 GB file. There are certainly some examples of overfitting there, but the vast overwhelming majority are vapor.
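A quick back-of-envelope check makes that concrete (assuming roughly 2 billion training images, LAION-scale, and a ~4 GB checkpoint; both are ballpark figures, not exact numbers from the Stable Diffusion paper):

```python
# Rough arithmetic behind the "4 GB vs billions of images" argument.
images = 2_000_000_000             # ballpark LAION-scale training set
model_bytes = 4 * 1024**3          # ~4.3e9 bytes in a ~4 GB checkpoint

print(f"{model_bytes / images:.2f} bytes per training image")  # ~2.15
# Even a heavily compressed thumbnail needs thousands of bytes, so the
# weights cannot be storing the images themselves; only aggregate
# statistics of the dataset survive training.
```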
True. As to the legality, well, that's being debated.
Wow your animation is brilliant!
Sorry, I can't seem to find the link; any chance you can post it here? Thanks.
That link in there is the comparison one - if it works for you, the left panel is the unprocessed version, the right is interpreted by stable diff…
Thank you for checking it out, and for your kind words. I've been unable to do much on it this last month, but I've made a week plan over the last few days, so I should be getting into it again (alongside the 7 other projects I've somehow assigned myself 😅😂).