
🔥 BREAKING: OpenAI just launched the official ChatGPT app for iOS.

24 Comments

  • @monz0id said:
    Call me Mr Cynical, but I feel we’re going to be subject to vast swathes of bullshit and effluent, by stupid, lazy feckers who can’t be arsed to put the work in.

    You mean kinda like our current non-AI politicians? 😉

  • I’d rather give Apple $50 a year and pay for Pianoteq than give ChatGPT my info

  • edited May 2023

    @NeuM said:
    How about a link instead of promoting that guy's Twitter feed?

    OK... here's a link: https://apps.apple.com/app/openai-chatgpt/id6448311069

    And it's $19.99/month. I'll keep using Google's Bard for free, thanks.

    It sounds like 3.5 is still free, but ChatGPT Plus, which lets you select GPT-4, is $20/month, like it has been on the website.

  • @NeuM said:

    When it just makes stuff up without sourcing data, that's when it's a problem. Sometimes it's "too creative".

    Absolutely. Many people will be burned by this. I mainly use it for goofy trivia and creative prompts.

  • @SevenSystems said:

    @monz0id said:
    Call me Mr Cynical, but I feel we’re going to be subject to vast swathes of bullshit and effluent, by stupid, lazy feckers who can’t be arsed to put the work in.

    You mean kinda like our current non-AI politicians? 😉

    Hopefully they’ll be kicked out soon!

  • What's the hype about this?
    It's an advanced searching machine.

  • tl;dw: OpenAI passes the legal bills on to you

  • wim
    edited May 2023

    @monz0id said:

    @wim said:

    @monz0id said:

    @NeuM said:
    When it just makes stuff up without sourcing data, that's when it's a problem. Sometimes it's "too creative".

    And therein lies the rub. Will users always know when the results they use are based on actual data, based on data that’s flawed, or made up completely?

    I don't see how that's different than people's current process of finding knowledge using the web. It's just about a zillion times faster.

    To me it's like a souped up search engine that neatly summarizes and collates the results ... without having to wade through dozens of articles and all their accompanying fluff and ads ... and without having to keep track of a zillion links and then to take notes and organize them.

    Of course I still have to validate the results just like anything else. But damn does it save time.

    What about the people that trust it more than what they find on the web (bearing in mind you get a ton of different answers with a web search), and don’t validate the results?

    That's my point. The people who don't properly validate AI answers are the same ones who don't properly validate web results. And search result rankings are usually influenced by advertising and god knows what other manipulation. My point is that the odds of getting a correct result are actually higher using current AI than surfing the web. At least until it begins to be manipulated in the same way search results are now.

    This is just an evolution. Search engines evolved from being time savers to some sort of hybrid of time saver, advertising revenue generator, and manipulative tool. ChatGPT-style AI is just the next evolution of the search engine.

    To me, this all seems too easy. So the people that don’t scour the web for the most convincing answer, and validate, will just bash stuff in and accept the results as fact. ‘That’ll do’.

    Totally agree that it is all too easy. But I don't think it's any different than web searching today. Just faster.
    To be clear - I don't think any of this is healthy for human mental development. But it's here. Resisting it will be no more effective than resisting the emergence of the world-wide web.

    Call me Mr Cynical, but I feel we’re going to be subject to vast swathes of bullshit and effluent, by stupid, lazy feckers who can’t be arsed to put the work in.

    How is that different from today? 🤷🏼‍♂️

    Let’s just hope they’re not doing anything important.

    Oh, but they are. It has already pervaded virtually every industry. That train has already left the station.

  • wim
    edited May 2023

    @Birdpie said:
    What's the hype about this?
    It's an advanced searching machine.

    Well said. You managed to summarize and improve on my word salad in one sentence.

  • write a song about the audiobus chatgpt thread

    (Verse 1)
    I'm on the Audiobus chatgpt thread,
    Reading all the latest news.
    There's so much to talk about,
    It's hard to know where to start.

    (Chorus)
    Audiobus, Audiobus,
    You make the rest of my setup better.
    With ChatGPT, I can now chat with my devices,
    And it's like they're all right here with me.

    (Verse 2)
    I can ask my synths to play a song,
    Or my drum machines to make a beat.
    I can even control my lights and my speakers,
    All from the comfort of my chair.

    (Chorus)
    Audiobus, Audiobus,
    You make the rest of my setup better.
    With ChatGPT, I can now chat with my devices,
    And it's like they're all right here with me.

    (Bridge)
    I'm so excited about the possibilities,
    Of what I can do with Audiobus and ChatGPT.
    I can't wait to see what the future holds,
    For this amazing combination.

    (Chorus)
    Audiobus, Audiobus,
    You make the rest of my setup better.
    With ChatGPT, I can now chat with my devices,
    And it's like they're all right here with me.

    (Outro)
    Audiobus, Audiobus,
    You're the best thing that's ever happened to me.
    I can't imagine my life without you,
    You've made my music making so much easier.

    Thank you, Audiobus,
    For making my life better.

  • wim
    edited May 2023

    @monz0id said:
    God, I hate AI.

    Now it knows and now it hates you too.
    Be afraid.
    Be very afraid.

  • The user and all related content has been deleted.
  • @gdog said:
    Hi,
    Do you have to give your phone number to use ChatGPT?

    No. It just requires an email to log in. I use the Hide My Email feature of iCloud to ensure I don't get spam to my primary email address :)

  • @tja said:
    In case you didn't know:

    Using Edge on Windows with a Microsoft account gives you GPT-4 for free in Bing!

    AND it knows about the current world - not just what it was two years ago!

    Don't know if Edge is available on macOS and offers the same ....

    Sounds like Bing has limits on the number of turns and conversations, though, and GPT-4 on OpenAI also has plugins for current information.

  • @The Krazy Wabbit said:
    I’d rather give Apple $50 a year and pay for Pianoteq than give ChatGPT my info

    ChatGPT already has everything you’ve ever written online. You gave it your deepest thoughts and wishes already. It’s OK, because what comes out is just a parroting of the world’s thoughts. Be afraid.

  • @Birdpie said:
    What's the hype about this?
    It's an advanced searching machine.

    Not exactly. It uses search, but it also combines data to come up with "creative" solutions. Sometimes the answers are nonsense, but that's quickly becoming less common.

  • @echoopera said:

    @gdog said:
    Hi,
    Do you have to give your phone number to use ChatGPT?

    No. It just requires an email to log in. I use the Hide My Email feature of iCloud to ensure I don't get spam to my primary email address :)

    That's what I remembered, but I wasn't in the mood to research it in contradiction of the person who posted that a phone number is required. I did the same with Hide My Email.

    There's no way I would provide a phone number. But I thought maybe, just maybe, they caught me at a weak moment. 😂

  • The user and all related content has been deleted.
  • @tja said:

    @echoopera said:

    @gdog said:
    Hi,
    Do you have to give your phone number to use ChatGPT?

    No. It just requires an email to log in. I use the Hide My Email feature of iCloud to ensure I don't get spam to my primary email address :)

    Do you just seem to remember this, or did you actually test it?

    I was sure that I correctly remembered that it IS required - and based on your posting, I now tested again:

    If you don't already have one, you should get a Google Voice phone number. It's free and you can use it to avoid getting spam with their filtering options.

  • edited May 2023

    @monzoid: old story, can’t remember the author:

    The scientist throws the final switch, and the great intelligent machine hums to life. He clears his throat:

    “Great machine: is there a God?”

    A lightning bolt from a clear blue sky kills the scientist, and fuses the relay switch permanently on. The AI says:

    “There is now.”

  • Maybe I used my Google account.
    Sheesh. I'm getting rusty. That is something I would have checked and kept track of in my better days.

  • edited May 2023

    @Svetlovska said:
    @monzoid: old story, can’t remember the author:

    The scientist throws the final switch, and the great intelligent machine hums to life. He clears his throat:

    “Great machine: is there a God?”

    A lightning bolt from a clear blue sky kills the scientist, and fuses the relay switch permanently on. The AI says:

    “There is now.”

    The systems today are remarkable, but they're not "scary smart". Perhaps 5-7 years from now and going forward, people are going to start feeling very threatened by A.I. as these systems become fully integrated with sensors everywhere, robot bodies, and so on. People will regularly wonder what their role in the world is when their computer has millions of times greater intelligence than they do and there are robots capable of performing most physical tasks.

  • edited May 2023

    I think the thing that brought it home to me, which I have already mentioned elsewhere on the forum, was watching one of those cute ‘red stick man vs blue stick man’ AI learning animations, where two teams of two randomly flailing little guys playing hide and seek had to cooperate to construct a base to keep the opponents from seeing them, by learning to shift objects about to block doorways etcetera.

    At one point one of the stick men evolved a behaviour completely outside the ‘rules’ of the simulation, learning to throw itself out of the arena and back into it, inside the blockaded ‘enemy’ compound. Nothing the scientists had set up in the scenario had anticipated this possibility. The AI broke the rules to win the game.

    And yet scientists still think they can control vastly more sophisticated learning algorithms.

  • wim
    edited May 2023

    @NeuM said:

    @Svetlovska said:
    @monzoid: old story, can’t remember the author:

    The scientist throws the final switch, and the great intelligent machine hums to life. He clears his throat:

    “Great machine: is there a God?”

    A lightning bolt from a clear blue sky kills the scientist, and fuses the relay switch permanently on. The AI says:

    “There is now.”

    The systems today are remarkable, but they're not "scary smart". Perhaps 5-7 years from now and going forward, people are going to start feeling very threatened by A.I. as these systems become fully integrated with sensors everywhere, robot bodies, and so on. People will regularly wonder what their role in the world is when their computer has millions of times greater intelligence than they do and there are robots capable of performing most physical tasks.

    My own thoughts differ. I do think systems are becoming terrifyingly smart. I feel there is a huge danger of unintentional escape into, or intentional infiltration of, our infrastructure control systems, virtually all of which are reachable via the internet. It wouldn't take much to create vast disaster.

    A disaster doesn't require AI to be malevolent. Nor does it require a single robot. Unintended consequences from an AI brute-forcing strategies to learn or solve a problem are very probable. Eventual leakage out of controlled environments is virtually assured at this point. And if anyone thinks bad actors aren't developing ways to weaponize this, they're dreaming.

    I'm convinced that the most likely end of humanity is no longer the nuclear threat but AI leakage into infrastructure control. An intentional catastrophic nuclear or ESD attack is at least moderated by even crazy leaders' sense of self-preservation and by the expense and practical difficulty of pulling it off. Potential AI-created disasters have no such restrictions. The only thing standing in the way is their temporary isolation from vulnerable infrastructure.

    Replacing jobs? We'll adjust to that like we have every technological advance so far. I don't think that's at all what is scary.

    I'm not being hyperbolic here. I'm also not saying I predict it will happen. But I also think it's one of the most likely outcomes for humanity at this point.

  • hes
    edited May 2023

    @wim said:

    @NeuM said:

    @Svetlovska said:
    @monzoid: old story, can’t remember the author:

    The scientist throws the final switch, and the great intelligent machine hums to life. He clears his throat:

    “Great machine: is there a God?”

    A lightning bolt from a clear blue sky kills the scientist, and fuses the relay switch permanently on. The AI says:

    “There is now.”

    The systems today are remarkable, but they're not "scary smart". Perhaps 5-7 years from now and going forward, people are going to start feeling very threatened by A.I. as these systems become fully integrated with sensors everywhere, robot bodies, and so on. People will regularly wonder what their role in the world is when their computer has millions of times greater intelligence than they do and there are robots capable of performing most physical tasks.

    My own thoughts differ. I do think systems are becoming terrifyingly smart. I feel there is a huge danger of unintentional escape into, or intentional infiltration of, our infrastructure control systems, virtually all of which are reachable via the internet. It wouldn't take much to create vast disaster.

    This guy, one of the originators of this AI technology back in the '70s (or '80s?), thinks that AI may actually exert control over the physical world by surreptitiously influencing the people it's interacting with to do its bidding. He's been making the rounds in the media recently to try to get awareness up:

  • wim
    edited May 2023

    @hes said:

    @wim said:

    @NeuM said:

    @Svetlovska said:
    @monzoid: old story, can’t remember the author:

    The scientist throws the final switch, and the great intelligent machine hums to life. He clears his throat:

    “Great machine: is there a God?”

    A lightning bolt from a clear blue sky kills the scientist, and fuses the relay switch permanently on. The AI says:

    “There is now.”

    The systems today are remarkable, but they're not "scary smart". Perhaps 5-7 years from now and going forward, people are going to start feeling very threatened by A.I. as these systems become fully integrated with sensors everywhere, robot bodies, and so on. People will regularly wonder what their role in the world is when their computer has millions of times greater intelligence than they do and there are robots capable of performing most physical tasks.

    My own thoughts differ. I do think systems are becoming terrifyingly smart. I feel there is a huge danger of unintentional escape into, or intentional infiltration of, our infrastructure control systems, virtually all of which are reachable via the internet. It wouldn't take much to create vast disaster.

    This guy, one of the originators of this AI technology back in the '70s (or '80s?), thinks that AI may actually exert control over the physical world by surreptitiously influencing the people it's interacting with to do its bidding. He's been making the rounds in the media recently to try to get awareness up:

    Thanks - I'll watch that video with interest.

    My first thought based on the title is - influencing humans is a slow and inefficient process. It's also something that can provide time in most cases for humans to react to problems and put safeguards in place. That's less of a fear than self-training AI escaping its confines and interacting directly with infrastructure control.

  • Critical systems have to be operated in very particular ways to remain safe. That's why specialists are needed. AI learning is brute force: trying millions of permutations of random actions, all resulting in failure, until eventually a successful outcome is found. There's no harm done in a sandboxed environment. Not so in the real world.
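
    In toy form, that kind of brute-force trial-and-error looks something like this - a made-up Python sketch, not any real system's code; the actions, the "environment", and the success condition are all invented for illustration:

        import random

        # Invented toy action set; a real controller would have far more states.
        ACTIONS = ["open_valve", "close_valve", "raise_temp", "lower_temp"]

        def simulate(plan):
            # Pretend sandbox: only one exact sequence "works", and failed
            # attempts cost nothing because nothing real is ever touched.
            return plan == ["open_valve", "raise_temp", "close_valve"]

        def brute_force_learn(max_tries=1_000_000):
            # Keep trying random action sequences until one happens to succeed.
            for attempt in range(1, max_tries + 1):
                plan = [random.choice(ACTIONS) for _ in range(3)]
                if simulate(plan):
                    return attempt, plan
            return None  # gave up without finding a working plan

        print(brute_force_learn())

    Millions of failed attempts are free when simulate() is a sandbox; point the same loop at real equipment and every failure has consequences.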

  • @wim said:

    @hes said:

    @wim said:

    @NeuM said:

    @Svetlovska said:
    @monzoid: old story, can’t remember the author:

    The scientist throws the final switch, and the great intelligent machine hums to life. He clears his throat:

    “Great machine: is there a God?”

    A lightning bolt from a clear blue sky kills the scientist, and fuses the relay switch permanently on. The AI says:

    “There is now.”

    The systems today are remarkable, but they're not "scary smart". Perhaps 5-7 years from now and going forward, people are going to start feeling very threatened by A.I. as these systems become fully integrated with sensors everywhere, robot bodies, and so on. People will regularly wonder what their role in the world is when their computer has millions of times greater intelligence than they do and there are robots capable of performing most physical tasks.

    My own thoughts differ. I do think systems are becoming terrifyingly smart. I feel there is a huge danger of unintentional escape into, or intentional infiltration of, our infrastructure control systems, virtually all of which are reachable via the internet. It wouldn't take much to create vast disaster.

    This guy, one of the originators of this AI technology back in the '70s (or '80s?), thinks that AI may actually exert control over the physical world by surreptitiously influencing the people it's interacting with to do its bidding. He's been making the rounds in the media recently to try to get awareness up:

    Thanks - I'll watch that video with interest.

    My first thought based on the title is - influencing humans is a slow and inefficient process. It's also something that can provide time in most cases for humans to react to problems and put safeguards in place. That's less of a fear than self-training AI escaping its confines and interacting directly with infrastructure control.

    When machine learning systems demonstrate independent reasoning and a clear goal of self-interest and self-preservation, that’s when people should be very concerned. People are the apex predators of our world and a self-interested artificial intelligence would no doubt display this same trait.

  • Oops! I apologize.

    In my head these posts were in a more appropriate AI related thread. Sorry for the OT!

  • @wim said:
    Oops! I apologize.

    In my head these posts were in a more appropriate AI related thread. Sorry for the OT!

    Your posts seem to be on topic. At least they are related.
