It is different, yes. It simply remembers your entire conversation history. You can ask it anything about what you previously discussed and it should be able to recall it, analyse it, etc. It also now has a feature where it will kind of create its own custom instructions based on things you say. For example, if you casually mention that you're a software dev, you might see a message pop up afterwards which says something like 'memorising'. It is marking that info as significant, to improve the relevance of its future answers. But it still tends to waffle on, and answers are often still prone to hallucination. Far from perfect, still useful in skilled hands tho!
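For the curious, here's a rough sketch of how a 'memorising' feature like that could work under the hood: spot a salient fact, store it, and quietly prepend it to later prompts. This is purely illustrative Python; the class name and the crude salience check are invented stand-ins, not how OpenAI actually implements memory.

```python
# Hypothetical sketch of a conversation "memory" layer. NOT OpenAI's
# actual implementation; MemoryStore and maybe_memorise are invented
# names for illustration only.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    facts: list = field(default_factory=list)

    def maybe_memorise(self, user_message: str) -> bool:
        """Crude stand-in for a salience check: remember anything the
        user phrases as a self-description."""
        if user_message.lower().startswith(("i am ", "i'm ", "i work ")):
            self.facts.append(user_message)
            return True  # the UI would flash "memorising" here
        return False

    def build_prompt(self, user_message: str) -> str:
        """Prepend remembered facts to the new message, much as custom
        instructions are prepended to a conversation."""
        context = "\n".join(f"- {fact}" for fact in self.facts)
        return f"Known about the user:\n{context}\n\nUser: {user_message}"

memory = MemoryStore()
memory.maybe_memorise("I'm a software dev, mostly iOS audio stuff.")
print(memory.build_prompt("What's a good way to profile my app?"))
```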
Don’t give them too many ideas for hybridisation.
I’d say we have to sort out ‘human rights’ before anything else, certainly before the rights of a mechanical mannequin.
Right on.
The word discrimination was never used and there was no death penalty persecution happening. I still say that person being so assumptive was wrong, but as a person who's done plenty of work on myself, I could read the room and recognize no one was feeling that, so I let it go.
There is no societal death penalty for anyone, they just rebrand, the world shines shit and calls it gold.
You're conflating pattern recognition with my point, which was about unnecessary assumptions being voiced. I ask anyone to put yourself in that original tweet's shoes and tell yourself you would love seeing those kinds of assumptions made about you on a forum. It's a bad look. Full stop.
My intentions were to be kinder to people who, for all we know, could happen upon this forum, and I still say it was a bad look to let that person just pull shit out of thin air.
Honestly I’m completely over it. If you need the last response, it’s yours. Cheers.
I think you covered it. Assumptions are unavoidable. But, as you say, we should all strive to be kinder by considering what we say and exercising some self-control when typing away. I try to be better, but it's small steps.
BTW, I think it was established in the 1983 documentary "Christine", where Will Darnell explained it so well:
Great scene. Forgot about it.
Thanks for your input. Appreciate it. Cheers.
I understand what you said, but it did create a story to fill the void when it couldn't find the answer.
I asked it the following question: "Provide a detailed analysis of the third movement of Bartók's String Quartet No. 4." I did the analysis myself (but never published it). I searched the web for a good analysis of this movement, but couldn't find one that's good enough—basically, no real "detailed" analysis exists, hence the question. FYI, the movement is tempo-marked "Non troppo lento" (slow, but not too much), and begins with a slow descending E pentatonic passage, played alternately by the two violins and viola. All notes are held to form a sustained, static chord accompanying a melody played by the cello.
And here's part of the answer:
"...The first section, marked "Allegro" (fast and lively), begins with a lively and energetic melody in the first violin, which is accompanied by rapid arpeggios in the other three instruments..."
No human error, no matter how amateurish the writer, would rival this. Again, I agree with you completely that it's up to us to check whether ChatGPT gets its answer from reliable sources, etc., but in this case, I don't believe any human source for the analysis could be this wrong. It looks more like it couldn't find an analysis of this specific movement of the quartet, and so pulled from analyses of other quartets to create the answer instead.
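As an aside, the E pentatonic collection described in the comment above can be spelled out from semitone offsets. A minimal sketch; the major-pentatonic spelling (E, F#, G#, B, C#) is an assumption here, since the post doesn't specify which pentatonic form is meant.

```python
# Spell out the E pentatonic scale from semitone offsets.
# Assumes the major-pentatonic form; the post doesn't specify.
CHROMATIC_FROM_E = ["E", "F", "F#", "G", "G#", "A",
                    "A#", "B", "C", "C#", "D", "D#"]
MAJOR_PENTATONIC_STEPS = [0, 2, 4, 7, 9]  # semitones above the root

scale = [CHROMATIC_FROM_E[step] for step in MAJOR_PENTATONIC_STEPS]
print("ascending: ", scale)                 # ['E', 'F#', 'G#', 'B', 'C#']
print("descending:", ["E"] + scale[:0:-1])  # ['E', 'C#', 'B', 'G#', 'F#']
```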
It's almost like ChatGPT makes some assumptions.
…but you can roll it in glitter
No need for forum wars, guys. I know we are all old geezers, but let's give a good showing for the AI.
Would you put on a good show for the reflection in the mirror, or rather the origination of that image?
I'd rather impress the AI than the humans; don't bite the future hand that will feed you.
I'm just jesting a little, no need for forum wars regardless.
There is no need for a forum war. I will extend an invitation to any particular individual(s) to discuss my assumptions in a PM.
My dad died last year and I had a screenshot of his medical prescription. I wanted to find out which meds were associated with a heart condition, so I posted the screenshot and asked ChatGPT which meds were heart meds.
It gave me a detailed synopsis of each one and a summary of only the ones that were associated with a heart condition. It did it in seconds. It would have taken me a fair bit of time to pull that information together myself. It's really quite amazing technology.
Sorry to hear about your dad. RIP.
I haven't got into the visual side yet, but it sounds like the same mind-blowing feedback I'm getting from voice-to-text... with the caveat of the (perceived) lying if it doesn't have the specific correct data.
Implicit in a lot of this thread is a notion that LLMs (of which ChatGPT is one) are designed for fact/truth discrimination. They are not; it is not what they are designed to do. They aren't "intelligent" in the sense of being designed to analyze information for truth. They are designed to generate language consistent with the corpus they were trained on.
They are essentially predictive text engines trained on an ENORMOUSLY (really really really enormous) LARGE amount of source material. If the corpus they are trained on has any bad information in it, that information will make its way into what it returns.
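To make that concrete, here's a deliberately tiny toy in Python: a bigram model that can only ever emit word sequences consistent with the corpus it was "trained" on, bad information included. The corpus below is made up for illustration; a real LLM is incomparably larger and uses neural networks rather than lookup tables, but the objective (predict the next token) is the same in spirit.

```python
# Toy "predictive text engine": a bigram model. It reproduces whatever
# patterns its corpus contains, with no notion of true or false.
import random
from collections import defaultdict

corpus = ("the movement opens with a slow melody in the cello "
          "the movement opens with rapid arpeggios in the violin").split()

# Count which words follow which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8):
    """Sample a continuation one word at a time, always picking a word
    that followed the current word somewhere in the corpus."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
# The output is always fluent; whether it claims "slow melody" or
# "rapid arpeggios" depends purely on what happened to be in the corpus.
```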
LLMs are very good at generating text that SOUNDS accurate—and for the average person, the quality of the sentences will be better than what they might write themselves in terms of style. But they often spit out convincing sentences that are factually wrong.
I have a few friends who find it useful for programming because the corpus seems to have sufficient material that it supplies reasonably relevant code—because these friends are expert coders, they quickly recognize when it gives them bad code. A couple of friends, also expert coders, work in areas for which the corpus must not have much relevant code, because they have found it not very useful except for code they don't need help with.
Little discussed is how much benefit these systems provide compared to the energy they consume (lots), or the ethics of companies making profit that is 100% reliant on other people's work (the corpus on which these systems train).
It seems that there's also, implicit in your making this point, a suggestion that humans have some advantage over AI because humans have some innate superiority at identifying "truth". I would suggest that this is not an advantage humans have over LLM-AI. Or, if humans do have some advantage, far more is required to establish that than simply to say LLM-AIs "are not designed for fact/truth discrimination." Humans have evolved to adopt beliefs that maximize fitness, not truth. See, e.g., https://www.scientificamerican.com/article/did-humans-evolve-to-see-things-as-they-really-are/
It's designed to give data to the user; if it gives false data, then that is a fault in the system.
It's just annoying how the algo responds to questions when the data is incorrect or absent. It gives a perceived image of lying. Obviously, people understand it's just a machine and not lying.
It's something the devs have to work on. In fact, perceived lying is actually useful, but not when you use a tool for precise work or if you wanna modulate to the correct key in music. Clearly this is an issue.
LLMs simply are not designed to do this -- any attempt to add 'truth discrimination' is essentially a hack. If you read the writing about LLMs from people who are both hugely knowledgeable AND have no vested interest (i.e., no profit motive), they have a lot of enlightening things to say about what this technology can and can't do -- even with refinement. Jaron Lanier has written some really good pieces going through this -- there are a lot of technologists who have a vested interest in selling LLMs as delivering more than they do/can, because they have a huge profit motive.
Humans with expertise in a field are by no means infallible -- but they are able to identify errors in a way that LLMs cannot. There are certainly areas where various types of AIs are less fallible than individual humans.
When one switches freely between discussing AI and LLMs (a very particular, if impressive, application of machine learning/AI), it can be confusing. LLMs are a particular category of tool with particular limitations. Other AI tools have other applications and limitations. I think it is important not to treat LLMs as AI writ large. LLMs are amazing at what they were designed to do -- but they are not designed for the kind of analysis that experts in a field do. They just aren't. I don't mean "they aren't there yet"; I mean that isn't what that tool does.
The next big push in A.I is reasoning and eliminating hallucinations, which I believe OpenAI are currently working on. The goal is to establish A.I as reliable and trustworthy.
As for the "AI" convo, ChatGPT is an LLM. Being impressed with it is like meeting a fully trained dog and asking it to sit. It'll do it. Ask it to play dead, it'll do that too. Mildly impressive.
But now imagine that dog needs all the energy a country can produce in a year to do those few tricks. Then imagine that dog is currently driving the American stock market in the billions of market cap while not being profitable, and also producing a shit product that venture capitalists are currently calling out.
The dog is the LLM, and it's inflating a bubble, the "AI" bubble. Bubbles gonna burst. Relatively soon.
Those AI models have stolen creatives' work from the internet, so image generation is just plagiarism with extra steps, as is its prose and writing, and of course the "facts" it spits out. These people don't believe creatives should exist. I'll take one on the chin and admit to currently assuming that most of the people here are creatives. Creatives who want to create, not have a shitty LLM produce work while stealing jobs from other creatives.
Those AI models blatantly lie, which the industry has labeled "hallucinations", and there is no known solution to this. Conflating humans with a machine is a fool's errand. Yes, humans lie, mislead, manipulate, but people are able to call this out. With AI it just gets labeled a "hallucination". Go eat a rock, it's fine.
These companies have no way of protecting your data either, as E2EE is currently impossible. So if you’re completely comfortable giving any information to a product that will use it to train future models, go right ahead.
Entertaining this trash product so it can blatantly lie to you, while the new leaders of Silicon Valley lie to us and use up valuable energy during our climate crisis to enrich Silicon Valley, all because they've run out of ideas, is, in a word, fucked.
There are no more worlds for them to conquer. So these MBAs are selling AI to us in order to keep growing, and they'll get rich while we, the poors, suffer the consequences of their egregious actions.
I implore the lot of you to do some research into why it’s a horrible product.
Or don’t, it’s your life.
Either way, we all are, quite literally, being scammed.
AI has just started. Not sure what you are referring to relative to a burst.
What an exemplary post with all the reference articles; I love that part of it and wish more people here (and elsewhere) would link to sources for their claims. Not sure I agree with your take on it, but that's another story.
Goldman Sachs has already questioned its viability, and therefore its worth.
I think about how crypto and the Metaverse were each supposed to be the next big thing. Now they run away from talking about them.
Moving the goalposts for unlimited growth in the tech sector. They've run out of ideas, and this AI (which, to be clear, I know is generative LLMs) is the next wool they're pulling over the customers' eyes. It's all bullshit.
Side note: AI has not just started. Machine learning and LMs have been around for decades.
Updated Goldman Sachs article:
https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf?ref=404media.co
Thank you, that's kind. I wish I had linked more. Lord knows I have a bunch of them. But either way, even while not agreeing, I appreciate you taking the time to respond in kind.
The Goldman Sachs article is over a year old and well out of date.
LLMs have come a long way since then and advances show no sign of slowing up. Quite the contrary.
I've been following advances very closely. Huge amounts of money and effort are getting plowed into this. A.I is going to be deeply embedded into every aspect of our lives whether we want it or not.
Here's the new one, apologies. This is from within the last couple of weeks.
The money being plowed into it makes no difference to what I'm saying. Believe what you will; I'll do the same.
Also, AI and machine learning have long been part of our lives. If you actually believe people will use this new wave of generative AI technology in a way that is viable, sustainable, and profitable, I invite you to post proof from non-biased insiders who have no vested interest in this being the next big thing.
Thanks for the article. In its conclusion it does say that A.I will pay off, but at the moment it's constrained by GPU availability. There's a bit in there that states (conservatively) that in 10 years 25% of human jobs will be replaced. That's quite a decent return on investment. I think it will be quicker than that.
Don't get me wrong. A.I is going to be a bigger disrupter, with more impact on humans, than any other technology. I'm not hugely optimistic that we will handle the transition well.
But no. The bubble isn't about to burst. There is no bubble. It's only unrelenting progress. Quite frightening really.