Comments
Yes, they will be embedded whether we want them or not. One of the things that Jaron Lanier talks about in some of the articles he has written is the misunderstanding of how LLMs work (even on the part of people who are computer scientists but not experts in machine learning): you can do all kinds of refining to improve what something like ChatGPT does, but you can't categorically change what it does -- only how effectively it does it. There are a lot of smoke and mirrors on the part of people with billions and billions at stake, who want to convince you and everyone else (particularly investors) that particular AI systems are something other than what they are. You can improve how well an LLM does what it was set up to do (again: it is a massively effective predictive text generator that produces text responsive to your prompts). It has a massive amount of text to draw from and generates the expected response extrapolated from its corpus. It can refine how it weights and filters things, but there is nothing in it that determines truth. And one of the things that people with no money to gain point out: the more that LLM output (inadvertently) becomes part of the corpus it trains on, the more prone to error it will be as erroneous data enter the corpus.
So much money is at stake that you really need to take anything any interested party says about future capabilities with a healthy dose of skepticism. Lanier pointed out early on that there would be massive apparent improvement that is just the natural fruition of a better corpus, better filtering of that corpus, and tweaking of the algorithms used -- and that LLMs will hit a wall and a new generation of systems will need to be developed. They are amazingly powerful tools; they just aren't quite the tool that some people want us to think they are.
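A toy sketch of the "predictive text generator" point above. This little bigram model is nothing like a real LLM in scale or architecture, but it makes the core mechanism concrete: the model reproduces whatever patterns are in its corpus, including the errors, with no notion of truth.

```python
import random
from collections import Counter, defaultdict

# Tiny illustrative training corpus that contains one falsehood
# ("the sky is green") alongside true statements.
corpus = "the sky is blue . the sky is green . the grass is green .".split()

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, n=4, seed=0):
    """Extend `start` by n words, sampling each next word by frequency."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        options = follows[out[-1]]
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

# "the sky is ..." can come out "green", because the corpus contains
# that error; the model weights falsehoods like any other token.
print(generate("the"))
```

Nothing in the sampling loop checks whether an emitted sentence is true; "green" actually outweighs "blue" after "is" here, purely because it occurs more often in the training text.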
Unfortunately these companies don’t deal in decades. They’re short sighted, as they constantly prove time and time again.
They’re run by CEOs who have a vested interest in shareholder value being as high as possible, as quickly as possible.
The market will not wait until then, imo. Therefore the huge hype and spend has turned it into a bubble. It’s gonna burst.
The fact that these ghouls hate creatives as much as they do should make every creative using these tools see themselves as an opponent, rather than an ally, of these companies and their products, which were, again, trained on people’s creations without explicit permission, and therefore stolen.
They’ll never be able to afford the copyrights. They’ll never be able to get the usage under the necessary conditions to justify cost in time for the market to come for their profits.
I’m cynical, I realize. But since you seem somewhat aware of the situation: in what world has the market ever waited 10-plus years for profits?
Well said. I literally have nothing to add; this is the exact thing I’m trying to say. We are being sold a product that’s marketed incorrectly by MBAs at the C-suite level of their companies.
They have little to no knowledge of the product, all they want is a number to go up. They’ll lie egregiously to get there.
I don't disagree. But on the bright side, it's saved me a lot of time and effort at work.
The NY Times clearly doesn't understand how this works. Chips improve on a less than annual cycle. Their processing power improves and their power requirements drop every single time advances are made. This is how technology cycles work. The imaginary "all the energy a country can produce" claim is completely bogus. They need to look up Moore's Law.
Okay, the NYT linked a peer-reviewed article with a plethora of sourcing from many people in academia. Your vague mention of Moore's Law as a gotcha is wild.
Please cite any proof that Moore's Law trumps this peer-reviewed study as it currently stands.
Apathy is a feeling I yearn for, lucky you.
Erroneously presumptive. My default affect tends toward acceptance and non-attachment.
"Alex de Vries is a PhD candidate at the VU Amsterdam School of Business and Economics and the founder of Digiconomist, a research company dedicated to exposing the unintended consequences of digital trends. His research focuses on the environmental impact of emerging technologies and has played a major role in the global discussion regarding the sustainability of blockchain technology."
Yeah, that sounds really "unbiased." Also, whatever the results of this "peer-reviewed study" are, they're hidden behind a paywall, so the alleged results are not subject to discussion or debate.
Struck a chord, it seems?
So you’re stating that you’re accepting yet non-attached... if that’s not indifference, which is the definition of apathy, then I suppose you’re well within your rights to call it what you want. Doesn’t change the definition of the word.
I’m not here to judge, and to be honest I couldn’t care less what your position is, or anyone else’s. I’m just sharing what I’ve seen, with sources where they seem relevant, because there are a lot of creatives in this community who should be in the know. Embracing this technology is ridiculous for any creative to be doing. Full stop.
You made a point to say that you’ve gotten work done after I posted many sources about the harms of AI. You quoted me, and your response was “saved me time and effort at work”
That’s apathetic, but as I’ve stated previously, I’m well aware self-reflection is a hard thing to do. It’s all good either way.
I’ll continue to be “erroneously presumptive”, with the very little information you gave in a quoted response to me that was, yet again, apathetic.
Cheers.
Buddy. You’re whinging because a PhD candidate’s work is aimed at showing the unintended consequences of digital trends. We all live in a capitalist society. This person has a business and markets themselves to get research funding to publish papers showing that. This is how it works. The papers are peer reviewed. Unlike marketing experts, who can literally say whatever they want.
By your logic anyone who studies anything at length is biased. That’s delusional. If I didn’t know better I would assume you work for AI or big tech.
But considering you find research biased, that’s the end of this conversation for me. You’ve lost the plot.
Best.
Completely true.
LOL. You cited a NYT link to a source which has no evidence... unless you pay for it. That's not evidence.
In the interest of getting back to less controversial topics, I think this exchange is over.
Have a nice day.
This post should be stickied as it was written like a grad school term paper (and no one asked him to).
“You can’t fire me I quit!”
Lol. Hilarious.
What happened to "...that’s the end of this conversation for me"?
Why don't you contact me via DM if you have additional issues you need to work out? This is not the place for this kind of derailing conversation.
Not true that ‘there is no bubble’. The rate of development is impressive, but many highly informed observers state that it is by no means certain that the problem of “hallucinations” is solvable. See people like Gary Marcus on Twitter, for example, for more informed critique of the optimistic outlooks. If the problem of hallucinations is not solved, AI will never end up disrupting society anywhere near as much as was earlier expected, and this will lead to a MASSIVE drop in the value of shares in AI tech companies, exactly as happened with crypto metaverse projects.

None of this is to say that AI is not useful, but we may have to limit our expectations drastically. Lots of other points Offbrands made also seem accurate, including the energy issues. I still find ChatGPT and Claude valuable and inspiring in some ways, but their usefulness is drastically limited by the problem of hallucination (as well as by the poor ability of most humans to use them to their full potential).
My immediate reaction is there is some conflation going on between "success," as defined as profitability and viability as a business investment vs. disruptive impact. I don't believe there's a determinative correlation.
Agreed, it would be a crapshoot to expect a return on investment. Profits could happen. It could look like they're happening and then crash. It could never take off. Some people are going to make fortunes. Some are going to lose their shirts. (I'm not touching related investments at this point, myself. It's too uncertain.)
But that isn't going to stop AI gaining huge, scary, dangerous, wildly beneficial, practical, and absolutely disruptive capabilities. That much is guaranteed regardless of return on investment. To shrug that off based on whether it's a good investment or not is a mistake.
I'm greatly fond of my Shark robot vacuum, "Buddy". He's reasonably smart, never complains, and feels like a little friend puttering around the house doing the floors while I tidy up the house. He's respectful toward the cats, but also doesn't let them intimidate him, much to their annoyance.
I look forward to a future with a couple other helpers like him. I prefer them not speech enabled, but I'm weird that way. Don't like talking (to humans either). 😂
The next big leap is AI companies working on reasoning technologies. Couple that with improvements in high-quality training data and you get the next step. It’s going to be an iterative, never-ending process, but in the short term it will deliver AI that can be relied on to perform a good subset of human tasks autonomously, reliably, and with trustworthy results. Things will rapidly improve after that.
AI scientists who are deeply embedded in working out the problems of AI say there’s no slowing down of the technology and that superintelligence is just a matter of time. Observers of AI tech companies can say what they wish about AI advances, but if they talk about bubbles, then they don’t understand how much effort is being poured into this endeavour and the huge advances happening at a global level on a near-daily basis.
I don’t think people understand how fast things will change when AI is embedded in the millions of laptops, desktops, tablets and smart devices around the world... up until now AI has been opt-in or service-based. While it has scrubbed the internet for what it can find, the greater advance is having the world actively feeding it data on how people talk, think, feel, and connect. The things they search for and share. Schedules, mails, messages, location data, and all these other things that, even at the lowest level of invasiveness, will help it grow constantly.
You raise a good point that I hadn't considered in that when all these AI systems are embedded into our laptops/phones etc... there is going to be a vast amount of additional high quality training data available. That's really going to catapult progress.
The main reason I was impressed with this current version of ChatGPT with voice-to-text was that it just works extremely well at helping me extract data in a very quick and human way... just by talking.
All the creative stuff so far has been amusing at best and, personally, completely useless.
I consider myself a creative; however, I'm not worried about AI in that field. Up until now AI has been really boring to me, but using it as an information extraction tool via voice is incredible.
Microsoft CTO Kevin Scott says we are not at diminishing returns, that scaling laws will continue to extend, and debunks the Twitter trolls who state it's all a scam.
So no. There is no bubble.
Be very cautious about treating what technologists with billions of dollars at stake have to say about the state of AI and its benefits. The CTO of Microsoft, for example, has a very vested interest in having people (investors) think that AI should be in every product and has only upside and untapped potential... oh, and by the way, we are making enormous profit on something that relied 100% on data for which we didn’t compensate anyone... and we’ll worry about the enormous energy burden later.
Here's a citation from an engineering-centric source regarding energy consumption and A.I.
https://www.prnewswire.com/news-releases/epri-study-data-centers-could-consume-up-to-9-of-us-electricity-generation-by-2030-302157970.html
But even this source is basing its 9% of US electricity generation by 2030 on current trends for something which is relatively new. I sincerely doubt the likelihood of these projections, especially since populations across the world are currently in decline. Declining populations mean there will be reduced power consumption needs. And reduced power requirements will come from more efficient programming and more efficient, lower power processors.
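For what it's worth, the projection in dispute can be sanity-checked with a rough back-of-envelope sketch. Every number below is an assumption chosen for illustration, not a figure taken from the EPRI study: an assumed current data-center share of US generation, a compounded demand growth rate, and an annual efficiency gain from better chips and software pulling in the other direction.

```python
# Back-of-envelope sketch of the data-center energy projection.
# All constants are illustrative assumptions, NOT figures from the
# EPRI study linked above.
US_GENERATION_TWH = 4200          # assumed annual US generation, TWh
SHARE_2024 = 0.04                 # assumed current data-center share

def projected_share(growth_per_year, efficiency_gain_per_year, years=6):
    """Compound demand growth, discounted by yearly efficiency gains."""
    net = (1 + growth_per_year) * (1 - efficiency_gain_per_year)
    return SHARE_2024 * net ** years

# ~15% annual demand growth with zero efficiency gains lands near the
# 9%-by-2030 figure; a matching 15% yearly efficiency gain roughly
# flattens the curve, which is the crux of the disagreement here.
for growth, eff in [(0.15, 0.00), (0.15, 0.15), (0.10, 0.10)]:
    share = projected_share(growth, eff)
    print(f"growth={growth:.0%} eff={eff:.0%} -> "
          f"{share:.1%} of generation ({share * US_GENERATION_TWH:.0f} TWh)")
```

Which line you believe depends entirely on the growth and efficiency numbers you plug in, which is why both sides of this exchange can look at the same trend and reach opposite conclusions.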
The CTO of a company that’s funneled billions of dollars into this exact technology said we haven’t hit diminishing returns and then proceeds to explain scaling laws while debunking Twitter trolls.
Must be true! That settles that! No bubble.
This is exactly my point, jokes aside.
ANY C-Suite level executive whose company has a blatant bias with the technology working is going to sing, dance, and bullshit their way to get people to believe them.
Silicon Valley is full of people who all want to be a version of Steve Jobs. The problem is only Steve Jobs was Steve Jobs. Most of these people I argue shouldn’t even be on camera. But their egos need to be fed.
Break down what you said again and explain to yourself, and then to this thread, how that wouldn’t arguably be the most biased, opinionated source posted here. I’m not trying to be rude, just wanting you and anyone reading this thread to think more critically.
EDIT: this is a perfect opportunity for my 2nd main rule in life, if a man has his facial hair cut like fucking Colonel Sanders unironically, turn the video off immediately. They aren’t to be trusted.
Pin this, sticky it, hang it in the Smithsonian.
That did genuinely make me chuckle. Thanks.
I understand. My views and sharing were more of a: I really love this forum, and I want to make sure anyone who comes across it knows that it isn’t an echo chamber for AI talk.
I personally know people who have lost jobs in creative fields. Voice actors mostly, but a couple of copywriters as well. Soon it’ll be much more. I just don’t even want to engage with that technology at all.
The CTO of OpenAI, Mira Murati, has some pretty weird views on creativity. I imagine because she’s never been a creative. She sees creativity as a barrier that only a few talented people can access. I find that the most baffling thing I’ve ever heard about creativity.
She goes on to say the jobs overtaken will be ones that never should have existed in the first place.
That for me, was the end of ever using these chat bots.
(For some reason the time stamp won’t work - it’s at 27:53 for anyone interested)