Comments
Yepper. Copyright-free music. 😉 No need to pay royalties.
I think the problem is that it doesn’t
This AI hasn’t yet learned that the most important notes are the ones you don’t play.
Or to quote James Jamerson: if you don’t feel the note, don’t play it.
(I’m not biased against AI-generated stuff in any way, but this is a very humble example.)
Well AI is definitely more interesting than any music I posted here considering the number of interactions with this post.
“AI” as we’re discussing it isn’t suitable for audio (MIDI is another story) because it can’t encode phase (phase by itself looks like random noise; it has no macro features to learn).
No matter how impressive it gets, it always sounds like bland, poorly recorded MP3s from the early ’90s. Lots of phasing issues.
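To make the phase point concrete, here’s a toy stdlib-Python illustration (my own sketch, not tied to any particular model): two signals that differ only in phase have identical magnitude spectra, so anything trained on magnitude spectrograms literally cannot see the difference between them.

```python
import cmath
import math

def dft(x):
    # Naive discrete Fourier transform, O(N^2) -- fine for a toy example
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

N = 64
tone = [math.sin(2 * math.pi * 4 * n / N) for n in range(N)]           # 4-cycle sine
shifted = [math.sin(2 * math.pi * 4 * n / N + 1.0) for n in range(N)]  # same tone, phase offset

# Magnitudes match bin for bin; only the phases differ
mags_equal = all(abs(abs(a) - abs(b)) < 1e-6
                 for a, b in zip(dft(tone), dft(shifted)))
```

Here `mags_equal` comes out True: the phase offset is invisible in the magnitude spectrum, which is the representation many audio models are trained on.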
Now if someone trained a model on MIDI files of all Bach fugues, or something, then I’m sure it could produce some amazing fugues by itself.
Right now it produces bad elevator music, so unless that’s your line of business, I don’t understand what there is to be worried about or threatened by.
It’s just another tool we can use; it won’t force anyone out of making music or replace them. That’s preposterous IMHO.
I’m not sure there’s any correlation between quantity and quality, Joseph.
In the UK, The Sun is the most widely read newspaper and it’s only fit for wrapping fish and chips in!
I like that example. Surely AI couldn’t learn the notes you don’t play because it’s trained on the notes you play.
But you could train another AI on the “not” notes, and yet another AI on top of both the previous two together.
But again, this works great for MIDI; audio, not so much (yet).
Very true for me too 😂 Maybe we need to be more controversial.
Imho it’s a really bad example, because Band-in-a-Box could spit out such tracks all day long.
In both MIDI and rearranged audio fragments (aka RealTracks), and it even “knows” about groove. At least for a decade now; in MIDI, even longer.
I don’t see any fundamental difference between an AI and a music student.
Both learn in a very similar way... but the machine has the greater endurance, or call it unlimited resources (if provided).
Ok, I get your point, but “unlimited resources” might be key here. BIAB, great as it was, was only procedural. With AI you can let it infer patterns a human wouldn’t detect, as we’ve seen, for instance, in chess. That will only expand our knowledge, not replace it. We can learn something from AI; it won’t make artists obsolete in the foreseeable future. Maybe beyond that it will, but for now let’s exploit it as much as we can, before it gets sentient and kills us all. That’s my main point.
lol, Joseph, it’s like most jazz in the mainstream, alright. Banal jazz. Is that a genre? I wouldn’t worry, yet. First the negatives and then the positives.
True jazz has an element of surprise, even if it’s soft and subtle. There is absolutely none of that here, just as with the aforementioned predilection for banality in human jazz (hell, classical, too). When I worked in clubs I was well aware I was merely background for seduction and anti-salubrious behavior. People, in general and frankly, don’t take jazz seriously. If it sounds like jazz, it is jazz… which is what you posted.
It took a long time to learn to listen to every note in a Parker line, but once I learned I couldn’t stop listening to every note. I believe this is not the case with 97.5% of jazz “fans”. If you listen to every note in your above posts you can, I think, understand how the AI brain is working. It seems modular. Packets of notes strung together.
The other real shortcoming, for now, IMHO, is the phrasing, or the lack thereof. Once again, it mimics real players playing banal jazz. And, let us not forget, the lack of dynamic modulation. Finally, I only heard one or two coincidental spots where the “band” came together. Unless you have instrumentalists playing off each other (and I don’t think that is happening here), there is, again, a lack of surprise. For now.
As to the musical content, I believe, at this point, the AI cannot step outside the assembled resource material. That’s why there is no reason to give it all up. Until AI can disappear into its bedroom for a week and come out with A Love Supreme, we’re safe (if that matters).
However, I haven’t the slightest doubt the above criticisms will be overcome, eventually. Here’s the plus side. There will be more Coltrane to listen to, and, probably more Mozart, and maybe even Robot Yardbird Diddy that has a jazz breakthrough we can’t imagine.
Internet technology allows millions of creatives to be heard who would never have had the opportunity before. No one can listen to it all, so what’s the difference if a million more robots join in the fun? Maybe they’ll start to listen to each other and start smoking weed. Is that bad?
“There is nothing good or bad that the mind does not make it soy sauce.”
AI Shakespeare
“There is a field beyond right and wrong. I will meet you at Costco.”
AI Rumi
I hope AI will develop a sense of humor. At that point I’d be happy to have it over for dinner.
Brb, training an AI on all of Taylor Swift’s hits to generate the next multi-billion-dollar empire.
I wouldn’t eat fish and chips if they came wrapped in The Sun!!
I fvckin love this post.
Yeah, those were also my first thoughts. Nice attempt, but it would never make it into my playlist...
Well... to reveal a small secret... I’ve always watched BIAB with a lot of interest, because it’s not procedural. It’s rule-based, which is what most AI was called in the 90s.
I was fortunate enough to use such a system in my own projects and (long story short) it was a huge relief compared to C in data-related applications.
The funny aspect: the goals were in fact often “procedural”, but the action was derived from descriptions of facts.
Including the ability to alter methods based on results, a process that may be called learning.
This system was of course able to detect patterns that a human wouldn’t detect, just as BIAB is able to detect key and timing in an audio track, a task at which I fail (and any trained musician would laugh at me).
Today AI is mostly about artificial neural networks, which use a different algorithmic approach, but essentially it is still about finding patterns and rules.
Which works quite well on huge amounts of data, but if there isn’t enough... there’s a problem.
And of course results depend on the net’s quality and training capabilities.
In a musical context the most interesting question is the moment of creativity.
Imho it’s at least “thinkable” that an AI detects characteristics of certain performances, which means it then could (at least potentially) “act” accordingly.
But I have no idea about the requirements for such a net’s design.
Ditto
But it does
Nice, here are some short sound pieces I generated a while ago with the free MusicGen.
I made some loops in Koala out of them but kinda lost the project files; probably deleted them, because the sound is only really usable as background music for video, not songs.
Never made a damn dime on music no one’s ever heard, so clearly I’m not motivated by whatever AI might replace, and cannot be dissuaded.
Anyone who has tinkered under the hood with prompt engineering knows that by increasing the temperature you can get more and more wild machine interpretations of the patterns from the training corpus. In fact, some researchers studying entropy are experimenting with GenAI. It’s not a stretch to say that these models are “hallucination machines” and we are trying to tame them to exhibit the most human-like output. I believe we will see a whole new generation of artists who will lean into the entropic nature of these models and come up with truly unique sounds and compositions. The human’s role will be to curate the machine’s inputs and outputs.
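For anyone curious what “increasing the temperature” means mechanically, here’s a minimal sketch (plain Python with made-up logits, not any particular model’s API): the logits are divided by the temperature before the softmax, so higher temperatures flatten the distribution and make wilder picks more likely.

```python
import math
import random

def sample_with_temperature(logits, temperature, seed=None):
    # Divide logits by temperature, softmax, then sample an index.
    # T -> 0 approaches greedy argmax; large T approaches uniform noise.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = random.Random(seed).random()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r <= cum:
            return i
    return len(probs) - 1

# At a very low temperature the highest logit wins essentially every time
choice = sample_with_temperature([1.0, 5.0, 2.0], temperature=0.01, seed=42)
```

At `temperature=10.0` the same call starts picking all three indices with nearly equal probability, which is the “wild interpretations” regime.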
As I recall, the process of gradually lowering the temperature in a neural network is even called (simulated) annealing.
Great name for an A.I band
This is my mindset too: find ways to integrate new tools. There are also models that generate MIDI; I wanna try using those in creative ways with Python MIDI libraries to generate and modify MIDI files, then import that into Gadget/Drambo and see what comes out.
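As a taste of how little is needed for the “generate MIDI with Python, then import it” idea: a standard MIDI file is just a couple of byte chunks, so you can even write one with nothing but the standard library. The sketch below builds a hypothetical one-track file by hand (in practice a library like mido is more comfortable):

```python
import struct

def varlen(n):
    # MIDI delta times use a variable-length encoding, 7 bits per byte
    out = [n & 0x7F]
    n >>= 7
    while n:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    return bytes(reversed(out))

def write_midi(path, notes, ticks_per_note=480):
    track = b''
    for pitch in notes:
        track += varlen(0) + bytes([0x90, pitch, 100])             # note on, channel 1
        track += varlen(ticks_per_note) + bytes([0x80, pitch, 0])  # note off one beat later
    track += b'\x00\xff\x2f\x00'                                   # end-of-track meta event
    header = b'MThd' + struct.pack('>IHHH', 6, 0, 1, 480)          # format 0, 1 track, 480 ppq
    with open(path, 'wb') as f:
        f.write(header + b'MTrk' + struct.pack('>I', len(track)) + track)

# Toy "generate and modify" pipeline: C major arpeggio, then the same shape up a fifth
arpeggio = [60, 64, 67, 72]
write_midi('arpeggio.mid', arpeggio + [n + 7 for n in arpeggio])
```

The resulting file should import into anything that reads standard MIDI files, Gadget and Drambo included.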
For anyone in the US it’s pretty much the equivalent of the NY Post.
Yah, it is funny to see people hear a couple of examples and then make assumptions about “what the AI knows about notes”, when it really doesn’t even work this way. The people who criticize diffusion rendering like Midjourney and Stable Diffusion call them “21st-century collage machines”, and that to me is a pretty good metaphor.

People are quick to jump on a specific implementation of diffusion rendering as being “a new AI”, but really they are all just messing with the same mass remixing of human-made data. With images there are a host of tools now for improving input training and steering output results, so people who don’t know any better see more coherent results from this area and say things like “the AI is getting better”, when the truth is people are getting better at essentially playing DJ on the soup of human data, given more powerful ways to guide its inputs and outputs. Yah, it is a big hallucinatory laundering scheme really.
If I were to venture a guess (I haven’t used BIAB in ages, so I may be totally off), I’d say it relies on Markov chains and a lot of fine-tuned rules. OK, it’s kind of a neural network, but certainly not LLM-based, unless I’m totally out of the loop.
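For the record, a Markov-chain melody generator really is only a few lines. Here’s a toy first-order sketch (my own illustration of the technique, not how BIAB actually works): it learns which note tends to follow which, then recombines only what it has seen.

```python
import random
from collections import defaultdict

def train(melody):
    # Record which note follows which (first-order transitions)
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, seed=None):
    # Walk the transition table, picking each next note at random
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and table.get(out[-1]):
        out.append(rng.choice(table[out[-1]]))
    return out

# MIDI note numbers for a scrap of melody; the output never leaves this vocabulary
corpus = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
tune = generate(train(corpus), start=60, length=8, seed=1)
```

This is exactly why such output feels derivative: every transition in `tune` already occurred somewhere in `corpus`.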
Can AI focus my mind on the eternal now as I observe my fingers moving over my instrument almost of their own free will, the sounds they make reflecting what I heard in my mind the instant before, enveloping my soul in tranquility?
Definitely not, as LLMs didn’t exist when it was designed.
I have no idea how it was prototyped, but back then Lisp or Prolog were common tools.
Such systems were frequently rewritten in C/assembler once the functionality of the engine worked as intended.
(Apple chose a similar path when they transferred huge amounts of Lisp source into Objective-C, because “Lisp is a difficult language”.)
With rule-based systems you get along with as little data as is necessary to describe a fact.
There is no need to deduce the fact from examining big data, which may not even exist for a specific case. So-called “AI” is not restricted to the currently hyped LLMs or ANNs.
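To make “the action was derived from descriptions of facts” concrete, here’s a minimal forward-chaining sketch in Python (the musical facts and rules are invented for illustration; real rule-based systems of that era were far richer):

```python
def forward_chain(facts, rules):
    # Naive forward chaining: fire any rule whose premises all hold,
    # and repeat until no new fact can be derived.
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical musical knowledge base: two rules chain to reach a conclusion
rules = [
    ({"key=C", "mode=major"}, "scale=C-major"),
    ({"scale=C-major", "feel=swing"}, "style=jazz-comping"),
]
derived = forward_chain({"key=C", "mode=major", "feel=swing"}, rules)
```

Note how little data is involved: three stated facts and two rules are enough to derive the comping style, with no training corpus at all.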
I fully agree with this statement. In the final outcome, it’s actually not much different from any of the generator, simulation, or emulation tools we’ve already been using for years, which we all take for granted without even realizing it’s the machine’s work.
How legal it is to use real artists’ work without permission to train those ML models, and to freely state their names in the model “prompt”, is the real question. What consequences will those original authors suffer?