Comments
You mean kinda like our current non-AI politicians? 😉
I’d rather give Apple $50 a year and pay for Pianoteq than give ChatGPT my info.
It sounds like 3.5 is still free, but ChatGPT Plus, which lets you select GPT-4, is $20/month, like it has been on the website.
Absolutely. Many people will be burned by this. I mainly use it for goofy trivia and creative prompts.
Hopefully they’ll be kicked out soon!
What's the hype about this?
It's an advanced search machine.
tl;dw: OpenAI passes the legal bills on to you.
That's my point. The people who don't properly validate AI answers are the same ones who don't properly validate web results. And search result rankings are usually influenced by advertising and god knows what other manipulation. My point is that the odds of getting a correct result are actually higher using current AI than surfing the web. At least until it begins to be manipulated in the same way search results are now.
This is just an evolution. Search engines evolved from being time savers into some hybrid of time saver, advertising revenue generator, and manipulative tool. ChatGPT-style AI is just the next evolution of the search engine.
Totally agree that it is all too easy. But I don't think it's any different than web searching today. Just faster.
To be clear - I don't think any of this is healthy for human mental development. But it's here. Resisting it will be no more effective than resisting the emergence of the world-wide web.
How is that different from today? 🤷🏼♂️
Oh, but they are. It has already pervaded virtually every industry. That train has already left the station.
Well said. You managed to summarize and improve on my word salad in one sentence.
Now it knows and now it hates you too.
Be afraid.
Be very afraid.
No. It just requires an email to log in. I use the Hide My Email feature of iCloud to ensure I don't get spam to my primary email address.
Sounds like Bing has limits on the number of turns per conversation, though, and GPT-4 on OpenAI also has plugins for reaching current information.
ChatGPT already has everything you’ve ever written online. You gave it your deepest thoughts and wishes already. It’s OK
Because what comes out is just a parroting of the world’s thoughts. Be afraid.
Not exactly. It uses search, but it also combines data to come up with "creative" solutions. Sometimes the answers are nonsense, but that's quickly becoming less common.
That's what I remembered, but I wasn't in the mood to research it to contradict the person who posted that a phone number is required. I did the same with Hide My Email.
There's no way I would provide a phone number. But I thought maybe, just maybe, they caught me at a weak moment. 😂
If you don't already have one, you should get a Google Voice phone number. It's free and you can use it to avoid getting spam with their filtering options.
@monzoid: old story, can’t remember the author:
The scientist throws the final switch, and the great intelligent machine hums to life. He clears his throat:
“Great machine: is there a God?”
A lightning bolt from a clear blue sky kills the scientist and fuses the relay switch permanently on. The AI says:
“There is now.”
Maybe I used my google account.
Sheesh. I'm getting rusty. That is something I would have checked and kept track of in my better days.
The systems today are remarkable, but they're not "scary smart". In perhaps 5-7 years from now, and going forward, people are going to start feeling very threatened by AI as these systems become fully integrated with sensors everywhere, robot bodies, and so on. People will regularly wonder what their role in the world is when their computer has millions of times greater intelligence than they do and there are robots capable of performing most physical tasks.
I think the thing that brought it home to me, which I have already mentioned elsewhere on the forum, was watching one of those cute ‘red stick man vs blue stick man’ AI learning animations, where two teams of two randomly flailing little guys playing hide and seek had to cooperate to construct a base to keep the opponents from seeing them, by learning to shift objects about to block doorways etcetera.
At one point one of the stick men evolved a behaviour completely outside the ‘rules’ of the simulation, learning to throw itself out of the arena and back into it, inside the blockaded ‘enemy’ compound. Nothing the scientists had set up in the scenario had anticipated this possibility. The AI broke the rules to win the game.
And yet scientists still think they can control vastly more sophisticated learning algorithms.
My own thoughts differ. I do think systems are becoming terrifyingly smart. I feel there is a huge danger of unintentional escape into, or intentional infiltration of, our infrastructure control systems, virtually all of which are reachable via the internet. It wouldn't take much to create vast disaster.
A disaster doesn't require AI to be malevolent. Nor does it require a single robot. Unintended consequences of an AI brute-forcing strategies to learn or solve a problem are very probable. Eventual leakage out of controlled environments is virtually assured at this point. And if anyone thinks bad actors aren't developing ways to weaponize this, they're dreaming.
I'm convinced that the most likely end of humanity is no longer the nuclear threat but AI leakage into infrastructure control. An intentional catastrophic nuclear or ESD attack is at least moderated by even crazy leaders' sense of self-preservation and by the expense and practical difficulty of pulling it off. Potential AI-created disasters have no such restrictions. The only thing standing in the way is their temporary isolation from vulnerable infrastructure.
Replacing jobs? We'll adjust to that like we have every technological advance so far. I don't think that's at all what is scary.
I'm not being hyperbolic here. I'm also not saying I predict it will happen. But I also think it's one of the most likely outcomes for humanity at this point.
This guy, one of the originators of this AI technology back in the '70s (or '80s?), thinks that AI may actually exert control over the physical world by surreptitiously influencing the people it's interacting with to do its bidding. He's been making the rounds on media recently to try to get awareness up:
Thanks - I'll watch that video with interest.
My first thought based on the title is - influencing humans is a slow and inefficient process. It's also something that can provide time in most cases for humans to react to problems and put safeguards in place. That's less of a fear than self-training AI escaping its confines and interacting directly with infrastructure control.
Critical systems have to be operated in very particular ways to remain safe. That's why specialists are needed. AI learning is brute force: trying millions of permutations of random actions, all resulting in failure, until eventually a successful outcome is found. There's no harm done in a sandboxed environment, but that's not true in the real world.
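For anyone who hasn't watched this kind of learning up close, here's a minimal toy sketch of what I mean (purely illustrative; the environment, action names, and success condition are all made up): the "learner" just keeps sampling random actions until one happens to succeed. That's harmless in a sandbox, but anywhere else every failed attempt would be a real action against a real system.

# Toy illustration of brute-force trial-and-error "learning".
# Everything here is hypothetical; it is not how any particular AI system works.
import random

ACTIONS = ["open_valve", "close_valve", "raise_temp", "lower_temp", "vent"]

def sandbox_env(action):
    """Pretend environment: exactly one action counts as 'success'."""
    return action == "vent"

def brute_force_learn(max_tries=1_000_000):
    """Sample random actions until the environment reports success."""
    failures = 0
    for _ in range(max_tries):
        action = random.choice(ACTIONS)
        if sandbox_env(action):
            return action, failures
        failures += 1
    raise RuntimeError("no successful action found")

if __name__ == "__main__":
    action, failures = brute_force_learn()
    # In a sandbox the failed attempts cost nothing; against real
    # infrastructure every one of them would be a real action.
    print(f"found {action!r} after {failures} failed attempts")

The point of the toy example is just that the failures are the method: the process only looks safe because it's fenced off from anything that matters.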
When machine learning systems demonstrate independent reasoning and a clear goal of self-interest and self-preservation, that’s when people should be very concerned. People are the apex predators of our world and a self-interested artificial intelligence would no doubt display this same trait.
Oops! I apologize.
In my head these posts were in a more appropriate AI related thread. Sorry for the OT!
Your posts seem to be on topic. At least they are related.