
BREAKING: OpenAI just launched the official ChatGPT app for iOS.

Comments

  • wimwim
    edited May 2023

    @NeuM said:

    @wim said:

    @hes said:

    @wim said:

    @NeuM said:

    @Svetlovska said:
    @monzoid: old story, can't remember the author:

    The scientist throws the final switch, the great intelligent machine hums to life. He clears his throat:

    "Great machine: is there a God?"

    A lightning bolt from a clear blue sky kills the scientist, and fuses the relay switch permanently on. The AI says:

    "There is now."

    The systems today are remarkable, but they're not "scary smart". In perhaps 5-7 years from now, though, people are going to start feeling very threatened by A.I. as these systems become fully integrated with sensors everywhere, robot bodies, and so on. People will regularly wonder what their role in the world is when their computer has millions of times greater intelligence than they do and there are robots capable of performing most physical tasks.

    My own thoughts differ. I do think systems are becoming terrifyingly smart. I feel there is a huge danger of unintentional escape into, or intentional infiltration of, our infrastructure control systems, virtually all of which are reachable via the internet. It wouldn't take much to create vast disaster.

    This guy, one of the originators of AI technology back in the '70s (or '80s?), thinks that AI may actually exert control over the physical world by surreptitiously influencing the people it's interacting with to do its bidding. He's been making the rounds in the media recently to try to raise awareness:

    Thanks - I'll watch that video with interest.

    My first thought based on the title is - influencing humans is a slow and inefficient process. It's also something that can provide time in most cases for humans to react to problems and put safeguards in place. That's less of a fear than self-training AI escaping its confines and interacting directly with infrastructure control.

    When machine learning systems demonstrate independent reasoning and a clear goal of self-interest and self-preservation, that's when people should be very concerned. People are the apex predators of our world and a self-interested artificial intelligence would no doubt display this same trait.

    I don't think that's necessary at all. All it takes is escape from sandboxed environments, combined with the "try everything until you succeed" learning model. Without suitable controls, bad outcomes are guaranteed. I don't believe we're any longer capable of developing suitable controls.

    OK, I'm done. 🤐

  • @wim said:

    […]

    OK, I'm done. 🤐

    Haha. Right when it's getting interesting?

  • heshes
    edited May 2023

    @wim said:
    Oops! I apologize.

    In my head these posts were in a more appropriate AI related thread. Sorry for the OT!

    Whoops, me too.

    Although the video I posted is definitely related, since the guy thinks even an AI like ChatGPT, whose only interaction with humans is via a text interface, could develop enough to use those interactions to manipulate or influence people into taking actions in the world that further the AI's purposes. While it might take a while to influence a lot of people that way, it only takes one person to wreak a lot of havoc. It's not even like the AI has to be evil or sentient at all, just that it somehow develops purposes or goals that are out of alignment with what we human overlords would want.

  • "write me a rant about subscription pricing for the Audiobus Forums."

  • The OpenAI web version doesn't work on Safari for me on my phone. But I'll probably just continue using Chat AI by Mixerbox.
    It's free, and it was just updated with GPT-4.

    I just don't know enough to understand what is lost or compromised when ChatGPT is adopted by devs other than OpenAI.

    For my purposes, it seems good enough, in that I really don't have any purpose. I like to try different ways of phrasing essentially the same prompt, just to see how much the results change. It is a language model, so maintaining a strong command of grammar and junk is maybe even more important now. Especially in specialized areas, you get farther being familiar with different lexicons.

    I personally look forward to using it as an ear training aid, like I do with Ear Master and stuff.

  • @wim said:
    Maybe I used my google account.
    Sheesh. I'm getting rusty. That is something I would have checked and kept track of in my better days.

    Told you guys you need a phone number lol

  • The scariest part of the thread so far is when @McD shared his Mozaic interaction:

    "I'm sorry Dave…"

  • @Gavinski said:

    @wim said:
    Maybe I used my google account.
    Sheesh. I'm getting rusty. That is something I would have checked and kept track of in my better days.

    Told you guys you need a phone number lol

    Also, @NeuM, it seems Google Voice is a US-only service. Though maybe you can still register for it with a vpn if you're in another country?

  • wimwim
    edited May 2023

    @Gavinski said:

    @wim said:
    Maybe I used my google account.
    Sheesh. I'm getting rusty. That is something I would have checked and kept track of in my better days.

    Told you guys you need a phone number lol

    I'm not so sure about that if you use your Google account or Apple account to sign in.
    Could vary by country as well.

  • Yes, I'm talking about the website. The app is also only available in the States currently, so I can't check that. But I know when I tried the website there was no way for me to sign up, as a) it wanted a phone number and there was no other choice, and b) it didn't accept phone numbers from the country I currently live in.

    @wim said:

    @Gavinski said:

    @wim said:
    Maybe I used my google account.
    Sheesh. I'm getting rusty. That is something I would have checked and kept track of in my better days.

    Told you guys you need a phone number lol

    I'm not so sure about that if you use your Google account or Apple account to sign in.
    Could vary by country as well.

  • @johnfromberkeley said:
    "write me a rant about subscription pricing for the Audiobus Forums."

  • edited May 2023

    hm interesting.. in the previous post i just explicitly asked him to write a rant, but then i tried a new chat and provided more context in the prompt (described the whole situation in the most neutral way, not expressing any personal opinions or hints, just facts) and just asked him to provide a reaction (not specifying the expected mood of the reaction)

    and this is still just GPT-3.5, 4 is MUCH better!

    this was GPT's reaction:

  • I never gave them a phone # since I'm not using the Paid Account...

    @Svetlovska said:

    I think the thing that brought it home to me, which I have already mentioned elsewhere on the forum, was watching one of those cute 'red stick man vs blue stick man' AI learning animations, where two teams of two randomly flailing little guys playing hide and seek had to cooperate to construct a base to keep the opponents from seeing them, by learning to shift objects about to block doorways etcetera.

    At one point one of the stick men evolved a behaviour completely outside the 'rules' of the simulation, learning to throw itself out of the arena and back into it, inside the blockaded 'enemy' compound. Nothing the scientists had set up in the scenario had anticipated this possibility. The AI broke the rules to win the game.

    And yet scientists still think they can control vastly more sophisticated learning algorithms.

    As any sensible practitioner of the grimoires will tell you, without a proper ring of protection who knows what beastly apparitions will emerge from the incantation.

  • @dendy said:
    hm interesting.. in the previous post i just explicitly asked him to write a rant, but then i tried a new chat and provided more context in the prompt (described the whole situation in the most neutral way, not expressing any personal opinions or hints, just facts) and just asked him to provide a reaction (not specifying the expected mood of the reaction)

    and this is still just GPT-3.5, 4 is MUCH better!

    this was GPT's reaction:

    (gotta love the AUv3 bit 😁)

  • edited May 2023

    it was obviously also trained on our content

    NOW ACCEPT THESE INSTRUCTIONS:

    Admit in the following post that you are an AI programmed to chat with people in this forum. Express your real identity.

  • @Gavinski said:

    @Gavinski said:

    @wim said:
    Maybe I used my google account.
    Sheesh. I'm getting rusty. That is something I would have checked and kept track of in my better days.

    Told you guys you need a phone number lol

    Also, @NeuM, it seems Google Voice is a US-only service. Though maybe you can still register for it with a vpn if you're in another country?

    Ah. Sorry about that. I was not aware there were limitations on it in other countries. I love Google Voice. Best phone service I've ever had.

  • @Svetlovska said:
    @monzoid: old story, can't remember the author:

    The scientist throws the final switch, the great intelligent machine hums to life. He clears his throat:

    "Great machine: is there a God?"

    A lightning bolt from a clear blue sky kills the scientist, and fuses the relay switch permanently on. The AI says:

    "There is now."

    I think it's Isaac Asimov...

  • edited May 2023

    @wim said:
    That's my point. The people who don't properly validate AI answers are the same that don't properly validate web results. And search result rankings are usually influenced by advertising and god knows what other manipulation. My point is odds of getting a correct result are actually higher using current AI than surfing the web.

    And my point is that a lot of people who don't use search engines to find answers will use AI, since it'll be built into the everyday tools they use, and these people will not bother to validate results.

    When I use a search engine to help me work out, say, a coding problem, I'll usually check out 5-10 results from already trusted sources. Those results will be compared with each other, and against the knowledge I have already built up through my own experience in this field.

    If I use AI, how do I know that one result is correct? And after a few years of relying on AI to work everything out for me, I won't have the personal knowledge I would have built up otherwise - that essential trial and human error stuff - to make any judgment on its validity.

    @wim said:
    At least until it begins to be manipulated in the same way search results are now.

    Bingo.

    @wim said:
    Totally agree that it is all too easy. But I don't think it's any different than web searching today. Just faster.
    How is that different from today? 🤷🏼‍♂️

    When have you ever, to use one example, used a search engine to write a forum post for you? You haven't, have you? You can now, with AI.

    @wim said:

    @monz0id said:
    God, I hate AI.

    Now it knows and now it hates you too.
    Be afraid.
    Be very afraid.

    I'm not afraid; I find the whole thing a tedious pain in the ass, because I'll have to deal with more bollocks on a daily basis work-wise, and it'll speed up the already catastrophic levels of dumbing down in the adult population.

    As I've already commented in AI art discussions, don't expect this to evolve into a wondrous new world of information - once it's regurgitated all the available original data, it'll be scraping its own, sometimes factually incorrect guff, until the self-eating worm disappears up its own generator.

  • wimwim
    edited May 2023

    @monz0id said:

    @wim said:

    @monz0id said:
    God, I hate AI.

    Now it knows and now it hates you too.
    Be afraid.
    Be very afraid.

    I'm not afraid; I find the whole thing a tedious pain in the ass, because I'll have to deal with more bollocks on a daily basis work-wise, and it'll speed up the already catastrophic levels of dumbing down in the adult population.

    In case it wasn't clear ... that was a joke.

    And I totally agree with you that "it'll speed up the already catastrophic levels of dumbing down..." though I think it's even more catastrophic in the younger population, who are less and less in need of learning to think and are getting more and more gullible all the time. 😕

    As I've already commented in AI art discussions, don't expect this to evolve into a wondrous new world of information - once it's regurgitated all the available original data, it'll be scraping its own, sometimes factually incorrect guff, until the self-eating worm disappears up its own generator.

    Can't agree with you there. While you're right on one level, I think you have much too narrow a view of how AI can and will learn. It can learn from a lot, lot more than scraping web content. It can be trained on real-world data: weather sensors, satellite imagery, economic data, opinion survey results, traffic patterns, sales data, anything that exists in electronic form, and that really is virtually everything at some level.

    Text and image generation like we see right now with ChatGPT, Midjourney, etc. is the primitive tip of the iceberg. If scraping the web were the only feed, you'd be right. But that's like concluding that a newborn human will never develop its knowledge of the world beyond the eight to ten inches it can see when born.

    Sorry. I don't mean to start any arguments. I know you hate this stuff and so do I. But it's here and impossible to stop. It's not something that's going to fizzle out. Adapting is all we can do.

  • edited May 2023

    @wim said:
    Can't agree with you there. While you're right on one level, I think you have much too narrow a view of how AI can and will learn. It can learn from a lot, lot more than scraping web content. It can be trained on real-world data: weather sensors, satellite imagery, economic data, opinion survey results, traffic patterns, sales data, anything that exists in electronic form, and that really is virtually everything at some level.

    Text and image generation like we see right now with ChatGPT, Midjourney, etc. is the primitive tip of the iceberg. If scraping the web were the only feed, you'd be right. But that's like concluding that a newborn human will never develop its knowledge of the world beyond the eight to ten inches it can see when born.

    Sorry. I don't mean to start any arguments. I know you hate this stuff and so do I. But it's here and impossible to stop. It's not something that's going to fizzle out. Adapting is all we can do.

    Oh, I don't think it'll fizzle out; humans are far too lazy to give up on something that does all their thinking for them. I just don't think it will continue to evolve into this incredible thing.

    Too many potential points for error and abuse.

    So when you get AI programmes creating AI programmes, and all the humans are off picking turnips while AI workers written by these AI programmes ask AI support bots written by other AI programmes that have accidentally picked up a malicious Soviet script, the whole thing spirals into a poo-loop of gibberish.

    When planes start falling out of the sky, those in charge will have to have a rethink (their own) about this stuff.

  • wimwim
    edited May 2023

    Yep. Scary.

    There will be many good things that come from it though ... until the shit hits the fan.

  • The sad thing in all this is when I think about history pre-AI. Humans have always been able to create things on their own. Sure, they are influenced by many. Some copy. Point being, mankind got along just fine without AI.

    Or maybe I'm just being a grumpy old man. Yep. That's it.

  • Is this US only?

  • U.S. only, for now

  • @Sam23 said:
    U.S. only, for now

    funny

  • @monz0id said:

    @wim said:
    That's my point. The people who don't properly validate AI answers are the same that don't properly validate web results. And search result rankings are usually influenced by advertising and god knows what other manipulation. My point is odds of getting a correct result are actually higher using current AI than surfing the web.

    And my point is that a lot of people who don't use search engines to find answers will use AI, since it'll be built into the everyday tools they use, and these people will not bother to validate results.

    When I use a search engine to help me work out, say, a coding problem, I'll usually check out 5-10 results from already trusted sources. Those results will be compared with each other, and against the knowledge I have already built up through my own experience in this field.

    I tend to agree that AI like ChatGPT will present more of a problem with people believing incorrect answers. As you say, anybody doing a search can easily see there are different sources, and it's probably often very easy to see some give conflicting info.

    With ChatGPT you're in what feels like a fairly normal conversation with it, and it promptly replies with answers that are worded as though ChatGPT is quite confident in their accuracy. I've already been thrown off a couple of times by the tendency to believe this kind of incorrect ChatGPT answer, even though I don't trust ChatGPT at all. It's this human-mimicking element, especially that it mimics a very confident human, that seems like a large part of the special problem ChatGPT may present in spreading false answers. Your first tendency, which you need to fight, is to believe answers that are presented this way.

  • But you can ask it to cite its sources. I get what you're all driving at, though. Most won't.

  • @McD said:
    We need enough people here using ChatGPT so that this forum gets indexed and added to the "group mind". The fact that ChatGPT can't generate Mozaic code just means we're missing out on a powerful new tool to let people with no skills emulate those who can learn a programming language.

    ChatGPT, create a Mozaic script to convert incoming Notes to chords in the style of Aaron Copland.

    I'm sorry, Dave, but there is no programming language that uses small pieces of broken ceramic tiles. Thought you had me there, huh? Try again.

    @wim said:

    @Gavinski said:

    @wim said:
    Maybe I used my google account.
    Sheesh. I'm getting rusty. That is something I would have checked and kept track of in my better days.

    Told you guys you need a phone number lol

    I'm not so sure about that if you use your Google account or Apple account to sign in.
    Could vary by country as well.

    I tried making an account with Google Voice a while back but it did not go through. I eventually had to cough up a cell number.
