Heard it’s very good for coding
I’m gonna leave it, as you are now cherry-picking responses that aren’t even part of our thread to respond to. Not sure why you’re choosing to ignore my valid constructive criticism, but I’m aware self-reflection can be difficult.
Best of luck in your endeavors, cheers.
No need for this topic to get too salty. I know we’re all old moaning gits here, but we have to do better and show AI that we humans are worth keeping alive.
Ongoing forum wars do us no good in the end.
What bugged me about Humans is that they centered the plot around the key question: At what level of sentience does a being deserve full civil rights? Obviously there is no clear answer to that. But instead of exploring the issue, the writers decided that of course robots deserve rights. The good guys supported robot rights; the bad guys opposed them. It was that simplistic. I prefer smart writers who assume their audience is equally smart.
I agree with this. Let's keep it peaceful.
Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.
Is this a fact, or your assumption about assumptions? 😏 (I'm kidding of course. Lol)
Yanno, you’re right. I agree. I guess what I meant was that making random assumptions about this person felt gross to me. I would still push back and say that random assumptions shouldn’t be made online, considering the topic. It had no relevance, no factual merit, and made no sense. It was pulled out of thin air.
Obviously the room feels different, but I figured a space for music could do without assumptions about a person.
I didn’t mean to be overly salty or negative, just wanted to be fair to this person. I would hate to see my tweet clipped out and have a random forum make random assumptions about me, without one person mentioning that it was a bit unfair.
Anyways, good point.
Lol very funny.
I have made a bit of an ass of myself it seems. Such is life. Hilarious.
It's a bit of a minefield. Not making assumptions is impossible. But there are some that should be resisted, and some that definitely should not be verbalized.
And you are right as far as I can tell. On the internet, it's hard not to let those assumptions sneak into the mind, but typing the words out is something that can (and should) be considered beforehand.
Well said. Appreciate the responses. Back to making music.
Cheers
Trust me mate. I often make an ass of myself too. Part of the human condition, or at least that's the ass-umption I'm making. 😂 That, and bad puns.
Ass-toundingly awful, but brilliant. 😅
For me, the big problem with these machine-learning programs is that they lie when they don't know the answer, instead of admitting that they don't know...
Totally agree, it was by far the most annoying part of the program.
The developers should definitely remove lying from AI when it doesn't know the answer or doesn't have the data to help.
The human trait of lying cannot be in AI, even if you could have it as an option for more human-like conversations. Still a bad idea in the long run, IMO.
It’s not really lying… it doesn’t know truth from fiction, which is why it can present both as fact. It needs to provide an answer, but it doesn’t really know if the answer is right or not. Contextually, if it seems right, that’s good enough. That’s why people can’t be lazy and treat it like a search engine. That’s not what it’s for on its own.

It’s an assistant, and like any assistant, it can help you get a job done quicker, but you need to be able to know whether it’s actually doing a good job or if it's full of sh*t. Most people don’t use ChatGPT or the other AIs to their best capabilities because they assume it's supposed to be some smart version of Google search. Learn to make it automate tasks for you. Teach it all about your iPad music studio setup and let it help you organize, or make new things. The tool is powerful; the users, unfortunately, are not most of the time…
Let me add some context: I use the word "lying" because it best describes the AI's actions in certain situations. It might not technically be lying per se, but the outcome is pretty much the same for the end user.
In my example, I asked the AI what key bar 5 of a piece of music modulated to. The AI responded with B major, which is incorrect. I told the AI it was incorrect; it apologised and told me it was A minor, which was also incorrect, and so forth.
My point is that at this point the AI needs to respond that it doesn't have specific data on that question. I'm not saying that this is technically easy to do.
Just giving another incorrect answer is worse to me than the AI sticking to its original answer and insisting it's correct. I understand both outcomes will lead to me making a mistake.
A non-response, or an admission that the AI is incapable of giving a definite answer, is perfectly acceptable and in fact should be the highest priority.
Humans are not smart enough to entertain AI. The countdown clock is already ticking for us. They'll breed the few human specimens left and keep them in a zoo for pure enjoyment: watch the males kill each other with rudimentary weapons to win the females' favour. Seems like a lot of fun for a Sunday afternoon stroll.
Like a battle to the death in a Colosseum? I've got an image in mind of Captain Kirk fighting Spock with sharp shovels.
Yes, the recognition is also very good. I first thought that voice recognition and output were somehow tied / integrated directly into the "main" LLM in some novel way, because they were both so good and natural, but apparently they're still using "traditional", separate (though very good) speech-to-text and text-to-speech ANNs.
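For the curious, here's a rough sketch of that "separate models" pipeline using the public openai Python package (v1 style). The model names are the public API ones; I'm just assuming the app does something conceptually similar internally:

```python
# Sketch of the pipeline: speech-to-text -> LLM -> text-to-speech,
# each a separate model, as discussed above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech-to-text (a separate Whisper model)
with open("question.mp3", "rb") as f:
    text_in = client.audio.transcriptions.create(model="whisper-1", file=f).text

# 2. The "main" LLM answers in text
answer = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": text_in}],
).choices[0].message.content

# 3. Text-to-speech (yet another separate model)
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.write_to_file("answer.mp3")
```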
Everything is only remembered per-session. In the most recent models, I think the context window is 8192 tokens (a token is roughly three-quarters of an English word, on average). So anything either of you said more than 8192 tokens "ago" is "forgotten" by the model.
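If you want to see how those tokens are counted, here's a minimal sketch using OpenAI's tiktoken library (assuming Python; the 8192 figure is just the context length mentioned above):

```python
# Minimal sketch: counting tokens with OpenAI's tiktoken library.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")

message = "Everything is only remembered per-session."
tokens = enc.encode(message)

print(len(tokens))         # how many tokens the message uses
print(enc.decode(tokens))  # round-trips back to the original text
# Once a conversation exceeds the context window (e.g. 8192 tokens),
# the oldest tokens fall out and the model "forgets" them.
```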
I think this has already been settled a few posts later so let's keep it at that 😊
The concept of "calling out" someone is completely alien to me. Meanwhile, saying everything in jest is my standard mode of operation, and it gets worse the worse the world around us becomes 😁 (so I'm now at roughly 700% on the Jest Richter Scale)
Thank you, much appreciated! 😌
Yep. Actually, it's a basic requirement for evolutionary survival. It's part of the wider concept of "pattern recognition", one of the most fundamental parts of cognition. Just goes to show how crazy the times we live in are in the West, where any such endeavour is branded as "discrimination" and essentially means the societal death penalty 😃 (but I digress)
It's not as straightforward as many might think. Most people still think that stuff like GPT or Claude is some form of "program" that has been "programmed" by humans, with "data" that gets "searched" and then "output", and thus could be "filtered" in some way. That is mostly wrong. There is some traditional code involved, but that's maybe a few hundred or thousand lines that just "run" the neural network. The intelligence, AND all the knowledge, comes from billions or even trillions of numbers, and basically no-one, not even the folks at OpenAI, has any idea what they are or why they produce intelligence. (I'm dumbing this down a little, but not a lot 😉)
When a neural network like GPT-4 or a brain generates an action upon stimuli (inputs), it is due to a (complicated and nested) neural "pathway" being followed from input to output. A neural network does "know" how "certain" it is about the outputs it's generating, as this is a function of the "strength" of the connections it is following (every connection in a neural network has a certain "strength").
So, for stuff that GPT isn't "sure" about, the strength of the connections it followed will be lower, i.e. it will be less "certain" about the answer. It's a matter of adjusting the thresholds of which connections to follow and which to discard, etc. -- this can already be tuned to a degree in the API (the developer version of GPT), via the model's "temperature" setting.
OK, enough boring talk (again it's all not totally technically accurate but this is a music forum 😜)
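To make the "temperature" bit a little more concrete, here's a tiny Python sketch (my own illustration under simplifying assumptions, not OpenAI's actual code) of how temperature rescales a model's raw output scores into probabilities:

```python
# Rough illustration of "temperature": it rescales the model's raw
# output scores (logits) before they become a probability distribution.
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Lower temperature -> sharper, more "certain" choices;
    higher temperature -> flatter, more random choices."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next words:
logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, temperature=0.2))  # confident pick
print(softmax_with_temperature(logits, temperature=2.0))  # much flatter
```

Low temperature concentrates probability on the strongest connection; high temperature flattens the distribution, which is why higher temperatures feel more "creative" but also more error-prone.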
Yeah, all the data it uses comes from up until Jan 2022.
It can't make any further adjustments to its learning until OpenAI gives it more data in the next version.
That's also why it has no real-time data to pull from, like today's schedules for basketball games... unless you wanna know the schedule from 2021.
New discoveries after Jan 2022 will not be known by ChatGPT 4o.
I was referring to it remembering your personal conversations with it and what you taught it -- that is the 8192 token limit.
The "static" "knowledge" it has is up to a certain point in time too yeah -- you can actually just ask it something like "What date is your knowledge cutoff" and it'll happily respond
It's definitely not straightforward. Still, having the AI "lie" is very annoying, and a weak aspect of the system currently.
yep, I understood what you said
Now, ChatGPT remembers all info across your chat history - this was added in a recent update!
Oh! Thanks... is this different to Custom Instructions? (apparently I should read those more often 😂)
In the latest Harper’s magazine an article pointed out this idea:
“Over the past year, several AI companies have advertised positions for writers and poets. As it becomes more difficult to discreetly swallow immense quantities of copyrighted material, the dataset needs new high-quality inputs. Why would a tech company pay for content, given the ocean of data still liberally accessible on the internet? Industry leaders realize that, more and more, the texts available online will be co-written, or simply re-written, by their own tools, inevitably degrading the quality of future iterations of the model.”
My interpretation of that is there will be a kind of “in-breeding” without new input of real human imagination.