
AI-generated posters and album covers

124 Comments

  • @ExAsperis99 said:

    @Blipsford_Baubie said:

    @RanDoM_rRay said:
    What if it was Marty McFly's iPad? I know there are ways to get around the rules.

    Thanks for the tip. Cocaine was the offending word, so I replaced it with baking powder. I kept your idea of Marty 'cause 😂.
    I tried some different prepositional phrases and/or modifiers, but I couldn’t get the perspective of the POV coke to sit on top of the actual iPad. But this accidentally makes it trippier that we’re sharing, in a bad kind of delusional-bender way. Plus, with an unexpected cameo of Doc yelling in the background, this felt complete to me.

    Insane. A piece of art that no human would do. The “lines” of baking powder — troughs? — are the exact opposite of how you would expect them. Truly dream logic.

    Lol, it’s just frustratingly stubborn with certain things. I told it a couple times to use less coke, or to have only two lines. I guess it watched Scarface one too many times.

    I also want to mention that Doc seriously crashed the party. I never mentioned him or Back to the Future. I made about a dozen attempts at this; he only showed up in this one.

    What really strikes me is Marty McFly’s eye. I gave it no thought initially, but later realized that his eye was possibly borrowed from Michael J. Fox’s role in the movie Teen Wolf.
    But I looked at images from the ’80s movie and the eyes back then didn’t do that at all. I do notice that in the modern Teen Wolf, multiple actors’ eyes have that very similar effect. Even though I didn’t mention Marty by the actor’s real name, I wonder if the neural network segued its way through those relations to correlate the eye.
    Or does AI just randomly alter eyes, I wonder?
    I’ve spent entirely too long thinking about this. What a f*ckin attention siphon.
    I’m just looking forward to AI integrating with something like EarMaster for ear training.

  • @zah said:

    @reezygle said:
    Yesterday I started playing around with DALL-E to generate posters for shows and some paintings. It can’t be denied that it is incredibly impressive and useful. Just like with ChatGPT, your prompt matters. But after a bit of trial and error I was able to make these. I can’t imagine how much a graphic designer would charge for graphics like this, or how long it would take them to do it.

    My take on all this AI stuff across the board is simple. The cat is out of the bag. If you don’t use it, your competitor definitely will and will put you out of business.

    What do you all think?

    How does he reach the Resonance knob?

    That small keyboard in front of him is a MIDI controller for the big one. All the controls are mapped to it.

    It’s all really simple when you organize the cabling better.

  • @Blipsford_Baubie said:

    I’m just looking forward to AI integrating with something like EarMaster for ear training.

    It really blows my mind how bad GPT-4 can be at obeying certain simple instructions. E.g., I tell it not to use markdown (I even have it in my custom instructions), and although it manages that fairly well most of the time, I still have to ask it to rewrite something much more often than I should.

  • edited December 2023

    @Gavinski said:
    It really blows my mind how bad GPT-4 can be at obeying certain simple instructions. E.g., I tell it not to use markdown (I even have it in my custom instructions), and although it manages that fairly well most of the time, I still have to ask it to rewrite something much more often than I should.

    As far as generative art goes, I don't know why one couldn't generate a stunning piece of art in Midjourney or one of the other available programs, then bring it over to Photoshop for cleanup. Photoshop also has its own generative art built in, but it creates more grounded imagery. So any little blips or mutations created in the 'very creative' software could be dialed back a bit with Photoshop.

  • I guess I'm joining in.

  • edited December 2023

    Using a new free site, https://imagine.meta.com ... yes, from Meta (aka Facebook).

    Introducing the real Man of Steel.

  • @Crano said:

    That's a cool one.

  • edited December 2023

    A good account to follow on X: https://x.com/orctonai/status/1738393031768707206?s=61&t=EblTN1YExzME8eJ7t_dizQ

    That account has examples of just how good the new version of Midjourney is now.

  • @NeuM said:
    A good account to follow on X: https://x.com/orctonai/status/1738393031768707206?s=61&t=EblTN1YExzME8eJ7t_dizQ

    That account has examples of just how good the new version of Midjourney is now.

    That is truly impressive!

  • Just look at that. That is Midjourney 6. It's indistinguishable from a photo.

  • @NeuM said:
    Just look at that. That is Midjourney 6. It's indistinguishable from a photo.

    Not sure if that is a good thing or a bad thing. :smiley:

  • @Simon said:

    @NeuM said:
    Just look at that. That is Midjourney 6. It's indistinguishable from a photo.

    Not sure if that is a good thing or a bad thing. :smiley:

    What’s coming next is scene and character consistency, so you’ll be able to generate incredibly lifelike people, vehicles, locations, etc. and then put all of it into motion… in real time. No rendering delay.

  • Are there any free ways to generate these still? All the options I can find want you to pay or subscribe (or are Meta, which I won’t use)

  • @Tarekith said:
    Are there any free ways to generate these still? All the options I can find want you to pay or subscribe (or are Meta, which I won’t use)

    Right now all of the best tools are paid.

  • I figured, oh well.

  • edited December 2023

    @Tarekith said:
    I figured, oh well.

    If you have a half-decent video card, you can run things locally.

    A1111, Fooocus, and ComfyUI all run Stable Diffusion locally for free.

    IMHO, A1111 with ControlNet beats any paid option, particularly if you have existing graphics skills to leverage.

    If you just want simple text-to-image for free, then Fooocus is a great start.
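    If it helps anyone, here's a minimal sketch of the Fooocus route. This assumes Git and Python are installed and you have an NVIDIA card with enough VRAM; the repository URL and entry script name are from the Fooocus project, so check its README for current instructions:

    ```shell
    # Clone Fooocus and launch it. On first run it downloads the
    # Stable Diffusion XL weights (several GB), then serves a Gradio
    # web UI in your browser for text-to-image generation.
    git clone https://github.com/lllyasviel/Fooocus.git
    cd Fooocus
    python -m venv fooocus_env
    source fooocus_env/bin/activate
    pip install -r requirements_versions.txt
    python entry_with_update.py
    ```

    A1111 and ComfyUI follow the same clone-and-run pattern but expose far more knobs (ControlNet, custom checkpoints, node graphs) once you want to go beyond simple prompting.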

  • @NeuM said:
    What’s coming next is scene and character consistency

    When will that be available?

    so you’ll be able to generate incredibly lifelike people, vehicles, locations, etc. and then put all of it into motion

    Like I said before - I'm not sure if that is a good thing or a bad thing. I have a few concerns about AI.

  • @Simon said:

    @NeuM said:
    What’s coming next is scene and character consistency

    When will that be available?

    so you’ll be able to generate incredibly lifelike people, vehicles, locations, etc. and then put all of it into motion

    Like I said before - I'm not sure if that is a good thing or a bad thing. I have a few concerns about AI.

    Well, the last leap (what exists today) took less than 6 months to improve, so I think in another year or less you’ll see huge improvement with character consistency and fidelity to realism that would shock us if we saw it now, today. Over a year we’ll continue to be surprised at the rate of improvement, but we’ll get used to it quickly.

    Full motion, movie (4K) quality, real time imagery MIGHT be possible in 2-3 years (if I was to go out on a limb and guess).

  • @NeuM said:
    Well, the last leap (what exists today) took less than 6 months to improve, so I think in another year or less you’ll see huge improvement with character consistency and fidelity to realism that would shock us if we saw it now, today. Over a year we’ll continue to be surprised at the rate of improvement, but we’ll get used to it quickly.

    Full motion, movie (4K) quality, real time imagery MIGHT be possible in 2-3 years (if I was to go out on a limb and guess).

    Thanks for the info.

    I find it all a bit depressing.

  • @Simon said:

    Thanks for the info.

    I find it all a bit depressing.

    Cheer up. We won't be able to tell what's real or fake soon enough. :)

  • @NeuM said:
    Cheer up. We won't be able to tell what's real or fake soon enough. :)

    Ha! Yeah - that's one of my problems with AI. :smiley:

  • @NeuM said:

    Well, the last leap (what exists today) took less than 6 months to improve, so I think in another year or less you’ll see huge improvement with character consistency and fidelity to realism that would shock us if we saw it now, today. Over a year we’ll continue to be surprised at the rate of improvement, but we’ll get used to it quickly.

    Full motion, movie (4K) quality, real time imagery MIGHT be possible in 2-3 years (if I was to go out on a limb and guess).

    This is so true. The rate of improvement is unbelievable. A friend recently showed me Synthesia. I don’t know if you’ve seen it, but it is mind-blowing. It has been around for at least 8 months already. You can create presentation videos with AI avatars that are really hard to distinguish from real people, in over 100 languages. You can even create an avatar of yourself with your voice. Something that usually costs thousands of dollars and days or weeks to produce can be done in a few minutes without any equipment. Incredible!

  • I enjoy how sketchy the AI generation still is in Photoshop. Always funny to insert some directions about bands into otherwise unrelated prompts.

  • @RolandGarros said:

    I enjoy how sketchy the AI generation still is in Photoshop. Always funny to insert some directions about bands into otherwise unrelated prompts.

    Yeah, that is a bit rough compared to other systems at this point, especially Midjourney 6. I guess if you're generating something based on an existing photo it can pull more from the Adobe photo library and build from there.

  • @reezygle said:

    This is so true. The rate of improvement is unbelievable. A friend recently showed me Synthesia. I don’t know if you’ve seen it, but it is mind-blowing. It has been around for at least 8 months already. You can create presentation videos with AI avatars that are really hard to distinguish from real people, in over 100 languages. You can even create an avatar of yourself with your voice. Something that usually costs thousands of dollars and days or weeks to produce can be done in a few minutes without any equipment. Incredible!

    I think the output of this kind of product will become far more naturalistic. Might even be possible for it to become interactive and allow the viewer to engage in a question and answer session at any point during the presentation.

  • @NeuM said:

    I think the output of this kind of product will become far more naturalistic. Might even be possible for it to become interactive and allow the viewer to engage in a question and answer session at any point during the presentation.

    Agreed. Interaction is definitely coming.

  • Fast forward almost a year to November 2024. AI video has seriously entered art territory. Scary but great. The lighting, the set details with their dirt and chaos.

    https://x.com/DrClownPhD/status/1855940765165273578

  • edited November 2024

    @reezygle said:
    My take on all this AI stuff across the board is simple. The cat is out of the bag. If you don’t use it, your competitor definitely will and will put you out of business.

    What do you all think?

    Based on that take, I think you’re no name calling. Simples! Read Emily M. Bender. Read Dan McQuillan. And grow up!

  • @looperboy said:

    @reezygle said:
    My take on all this AI stuff across the board is simple. The cat is out of the bag. If you don’t use it, your competitor definitely will and will put you out of business.

    What do you all think?

    Based on that take, I think you’re an ignorant fool. Simples! Read Emily M. Bender. Read Dan McQuillan. And grow up!

    Um… why are they an ignorant fool? I don’t get it.
