Logic Pro takes music-making to the next level with new AI features
Session Players, Stem Splitter, and ChromaGlow make Logic Pro for iPad and Mac smarter than ever
Comments
I think it won't be long before they offer "Session Guitarists" and "Session Singers" to go along with the others. And if they don't do it, possibly Suno or Udio will consider integrating their AIs into a more production-centric workflow.
Another couple releases and Logic will be able to generate and release full tracks for you while you sleep!
I wouldn't call those AI features the next level of music-making. Sure, they are interesting tools for people without composition/arrangement/performance skills, and they can also serve as springboards for new ideas for "proficient" producers.
But I don't think those features will become the center of the production workflow for people who still enjoy creating by themselves, because it would take away all the fun and, basically, the whole purpose of music making for them.
Those features will be perfect tools for people who are only interested in getting "the final product" as fast as they can with minimum hassle, though I believe, as somebody mentioned here, that those kinds of people will be more likely to use services like Suno or Udio or whatever easy solution comes along next.
I honestly don't think composers and musicians who still enjoy the whole process of music making for what it is will drastically change their workflow because of AI.
I personally still feel good playing an instrument, composing a song, writing the lyrics and then singing them, arranging and mixing the whole thing. Having AI do this for me? No, thanks. Having fun jamming with an AI? Why not.
Stem Splitter just (in most cases) helps people steal ideas from other tracks... no revolution if you ask me, more like a sad thing...
I can see how it may be useful for doing remixes, but it's definitely not a groundbreaking feature... a lot of people will use it just for stealing ideas and recycling loops made by other people. I'm being a skeptic here, but maybe it's just that I'm an old grumpy man yelling at clouds.
I think for generating copyright-free sample fodder to make a track like Justice/Daft Punk, AI is good. And perhaps for generating vocals from sheet music (hopefully there'll be a tool out there like that), AI is good. But for the fun factor, producing music yourself can't be beat. And it's even more fun when collaborating with others.
I don't care about the AI aspect of the new instruments, but I'm looking forward to trying the new basses. They have articulations and slides and stuff according to the press release; I'd be surprised if you can't load them up and play them manually.
I will happily ditch IK Modo Bass if they're good. Buggy crap that it is on desktop, lol.
I agree with you. I'm sure I'll have fun with all this, and I'm really excited to try it all out, but I won't let it take away the joy of writing lyrics and melodies, working out chords, and playing real instruments. If it were all automated, I think it would decimate my enjoyment and, more than anything, the satisfaction I get from creating the way I do now.
There's a new chord track. Write your own chords and the AI stuff will follow... it might be useful for the odd sketch, which you will hopefully be able to convert to MIDI and edit like you can with the Drummer.
AI help is cool. I use the Drummers all the time to get an idea, then make it my own. I hope the keyboard and bass are better than the GarageBand ones. It just stinks that they have made it an M4 requirement. It makes my mini sad that it won't run the new tools, and now I'm maybe thinking of the 11-inch Pro. Apple did its job, I guess: get the money any way you can, plus a subscription on top of that.
An M1 or better chip is only a recommendation for the Session Players; it is a requirement for the Stem Splitter.
The M4 requirement is still a misunderstanding: no functionality in Logic Pro demands anything better than an M1…
Everything runs fine on my iPad mini now. What changes so that I need an M1 chip or higher? The Session Players shouldn't pull more than the Drummers, unless the piano pulls that much CPU; that's the only thing I can think of. Running the AI mastering takes a lot, but I can do that on the mini.
Of course some people will use it on commercial releases, but given that AI-generated stuff is "original" (or at least creatively laundered), I think people would be more inclined to use it for splitting AI-generated tracks from Udio etc.
Logic Pro 2 runs on A12 Bionic chip or later and iPadOS 17.4 or later.
Double post… 😬
Well... as I said, I'm probably an old grumpy man; I see using AI in this manner more like a de-evolution of music creation... definitely not going to touch this tech for music making. I like it when everything in my music is from my head 🤣
Now you have me really confused; maybe I'm misunderstanding, because we're usually on the same page. The whole genre of modern music started with hip-hop/rappers stealing samples and making them their own music; this evolved into techno and today's mishmash of genres that no sane person can keep track of.
It's a very simplified description of events but you get the gist.🙃
Great summary of the situation!
In Techno it's relatively easy for a single producer to create the ingredients you need from scratch; in other genres you have to rely much more on sample libraries, studio connections, or collaborations, and AI is basically an alternative for that.
For Techno it can still be useful though. I've generated a bunch of cool space soundtracks with Udio, which I'm sometimes slicing up and processing with other tools to create nice dark atmospheres. I've attached an unprocessed one to the post. Many still use Omnisphere for this kind of stuff, or maybe even old sample CDs from the 90s.
It will be interesting to see how it influences other genres now that they're more accessible for one person producing.
For me that sounds like an ordinary day in the Manhattan subway, NYC!
Cool sounds will come…
Haha, you made me laugh so much. I agree with everything you said and much more. Hitting the randomize button will not make you more of a musician than you already are.
Good catch. It's very possible that the ambient track it was trained on was a processed field recording of a train entering a station. The brakes at the end, before the shimmer reverb kicks in, sound a bit like train brakes.
Techno is much older than sample-based Hip Hop music though. The thing is that Techno wasn't called Techno when the principles it is based on were developed; it was experimental Krautrock and later minimal Electro/EBM, both genres based on abstract recording techniques and synth patches, 100% original. This was the stuff Juan Atkins in Detroit and also the House people in Chicago were putting on before they started to experiment with gear and made their own records.
Sampling in Hip Hop was started by Marley Marl and his SP-1200, with the intent of having unique drum sounds, because everything sounded almost the same back then due to most tracks being based on the few drum machines and synths that were available and affordable at the time.
Pitching the sample up to gain more sample time was discovered by Pete Rock, I think, and this, plus new machines with more sample time, sparked the whole crate-digging, sample-flipping scene in the 90s.
Yeah, same here. I use AI/ML for work. When I make music these days I mainly like to press pads and hear notes/hits of my choosing in real time. At one time I was more into chopping up sounds on a timeline and making collages, so if I ever get that impulse again the AI stuff may be useful, but yeah, for now the AI music stuff I spat out felt more like a custom playlist than 'music making'. I imagine there will be more advanced tools like ControlNet soon enough, though, where you can take something you have played and use AI/ML to alter it, which could be fun. You can kind of do that already with Stable Audio from the looks of it, but it is still super crude.
They really call everything AI now, don't they?
I’m drinking AI coffee as we speak in fact.
Could be a joke by AI Bundy…
This is a nice balanced take. I don't understand the rabid anti-AI sentiment I've seen elsewhere. Don't like it? Don't use it.
I have little interest in it in its current state, but it's clearly just another tool. If it could jam with me in an interactive and rewarding way, that could be very enticing.
I see no reason why a completely interactive "AI"-driven system shouldn't be able to do precisely this in a year or two (at most). The pace of innovation in this space right now is mind-blowing even to me.