
Your A.I. (Generative) Music+Sound Experiments

Comments

  • edited March 2024

    @NeuM : ‘Like’ … well, the genre is not especially to my taste, but I certainly appreciate the execution of it. What worries me more is that ‘my’ own effort I actually, unironically, did like. Sure, there were slight infelicities in the lyrics which could have borne some tweaking, but essentially it would have done just fine as background mood music and, yes, I even liked the melancholia it simulated. It is way more accomplished than I am at conventional song construction.

    This already is ‘good enough’ for an awful lot of music use cases. Give it another three months, and… I think the field of endeavour for working library music makers, film and TV composers, studio musicians, heck, studios themselves, and the manufacturers of all the hardware they contain, has pretty much gone in a heartbeat. They are all still making buggy whips in the age of the automobile. Slowly, and, more importantly, expensively.

    Sure, some prestige projects, like AAA movies will still have the elite RealHuman™️ Hans Zimmers of this world attached, but 90-95% of everything else? Nope. Creatives are bad for the bottom line.

    Yes, some people will still buy buggy whips as a craft thing, and some people will still enjoy making their own buggy whips the old-fashioned way, but no one is going to get rich or famous, or even make rent, as a maker of buggy whips any more. And when the AI avatars get up to speed on the stadium circuit, as they are already beginning to, when the AI pop star influencers storm TikTok…

    Well, see my previous post re the Dead Internet Theory.

    We might still be making our authentic art in our little survivalist bunkers because naked apes gotta ape, but out there it will be an astroturfed AI wasteland of AI influencers boosting AI artists to an audience of AI punters. Over all that noise, no one will hear the human creatives screaming in their silos.

    Hope you are inner-directed. You’ll need to be if you want to keep the motivation to make art in a world that will be both unable and unwilling to acknowledge your existence.

  • edited March 2024

    Let's put it this way: The world is already permanently changed. The toothpaste is out of the tube and there's no putting it back. So... artists and musicians will adapt. This has been the case when it comes to the arts for millennia.

  • edited March 2024

    …(this space reserved for a future post)…

  • @Svetlovska said:

    We might still be making our authentic art in our little survivalist bunkers because naked apes gotta ape, but out there it will be an astroturfed AI wasteland of AI influencers boosting AI artists to an audience of AI punters.

    Yah, I don't think people will be looking to others (AI influencers or AI artists) so much for their music and will just be steering music to their individual whims and tastes. Music will just morph in real time to the whim of the listener. "Artists" will be seen as ugly, outdated gatekeepers who made boring, static, non-interactive artifacts.

  • @AudioGus said:

    Yah, I don't think people will be looking to others (AI influencers or AI artists) so much for their music and will just be steering music to their individual whims and tastes. Music will just morph in realtime to the whim of the listener. "Artists" will be seen as ugly outdated gatekeepers who made boring static non interactive artifacts.

    As we approach something closer to AGI, creating customized music soundtracks for every person on the planet will probably be a trivial problem for these systems to solve.

  • What happened to the "The Jackpot's Gone" track and my reply to it?

  • @kirmesteggno said:
    What happened to the "The Jackpot's Gone" track and my reply to it?

    Was it a dirty song about Sesame Street characters, by chance?

  • "Books That Kill" - a hair metal spoof of Motley Crue's "Looks That Kill"

  • @AudioGus said:

    @kirmesteggno said:
    What happened to the "The Jackpot's Gone" track and my reply to it?

    Was it a dirty song about sesame street characters by chance?

    lol no, just some guy mourning a missed jackpot.

    In my reply I posted the AI stems from UVR and wrote about maybe remixing it, because it sounded a bit bright and thin. But the original post and my reply got nuked from this thread somehow.

    I still have it and the stems, and I'll delete them from my drive if that's what you want, @NeuM, no problem.
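For what it's worth, the "a bit bright and thin" problem with separated stems can be tamed before remixing with a gentle high-frequency roll-off. A minimal sketch in plain NumPy, assuming a stem loaded as a mono float array in [-1, 1]; `tame_brightness`, its parameters, and the one-pole filter choice are illustrative, not anything UVR itself provides:

```python
import numpy as np

def tame_brightness(audio, sr=44100, cutoff_hz=6000.0, amount=0.5):
    """Blend a one-pole low-passed copy back into the signal to soften
    harsh highs. amount=0 returns the input; amount=1 is fully low-passed."""
    # One-pole low-pass coefficient for the given cutoff frequency.
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / sr)
    lowpassed = np.empty(len(audio), dtype=float)
    state = 0.0
    for i, x in enumerate(audio):
        state += alpha * (x - state)  # smooth toward the input sample
        lowpassed[i] = state
    return (1.0 - amount) * np.asarray(audio, dtype=float) + amount * lowpassed
```

With `amount` around 0.3 to 0.5 this shaves the top end without the stem sounding muffled; anything stronger starts to behave like a tone control rather than a fix.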

  • edited March 2024

    @NeuM said:

    @Blipsford_Baubie said:
    Regarding commercial use, I read the FAQ page, but I cannot access the Terms of Service without creating an account.
    The FAQ implies you would need to keep a subscription indefinitely, as long as your material is up for commercial use.
    But I’d like clarification.

    That's my impression also. It looks like as long as you are a paying customer, anything you create during that time is yours to sell downloads or stream anywhere online or through a music service. Whatever revenue you generate during that term is yours, no revenue splitting. Anything you create BEFORE you become a paying customer (with the exception of your own lyrics) is theirs. If you start paying for their service AFTER you've created the greatest generated song of all time, that's still theirs and they won't negotiate the rights. The assumption is you get ALL RIGHTS to your own paid music, but if you stop paying their "licensing fee" in the form of a subscription, then the rights (minus your own lyrics) revert back to Suno AI.

    In terms of exchanged value (your subscription fee for a world-class generative music service), that's not a bad deal. Perhaps at some point they'll come up with some kind of more permanent buyout for their customers, but that's what it appears to be today.

    Also, they have an account on X if you care to question them there, or you could simply join at no cost to you, except the 'cost' of your email address.

    UPDATE: To save you a little time, I turned their Terms of Service into a PDF so you can review it here.

    This is an (AI-generated) analysis of those terms and conditions. Note the clause preventing you from engaging in class action lawsuits:

    Here are the most important things for a musician to understand in these Terms of Service, along with some commentary on positives and potential concerns.

    Key Points:

    - By using Suno's service, you agree to be bound by these terms. The terms can change at any time.
    - The service allows you to provide input content which Suno processes to generate audio/visual output. Some content may be shared publicly with other users.
    - You grant Suno a very broad license to use your input content and the output generated from it, even for commercial purposes, without compensating you, if on the free tier. This is a big deal to carefully consider.
    - If you pay for a subscription, Suno assigns you ownership of outputs generated from your inputs. Free tier users are only granted a limited license to use outputs for internal, non-commercial purposes.
    - Suno disclaims all warranties and liability related to the service. Your use is at your own risk.
    - Most disputes must be resolved through binding individual arbitration rather than lawsuits. You waive your right to participate in class actions against Suno.

    Positives:

    - The paid tier providing ownership of work product to the musician is good, though it requires an ongoing subscription.
    - The terms seem fairly standard for AI generative services of this nature.

    Potential Concerns:

    - Suno gets a broad license to commercially exploit your input material and generated output without paying you, even if you cancel your subscription later. Some musicians may not be comfortable with this.
    - You bear all liability risk for your use of the service and outputs. Warranties are disclaimed.
    - The mandatory arbitration clause and class action waiver make it very difficult to sue Suno if any serious disputes arise.

    Overall, the musician should carefully weigh the benefits of using Suno's AI tools against the broad rights they must grant to Suno and the liability risks they assume. The terms are not unusual for this space but do favor Suno heavily.

  • edited March 2024

    Suno.ai is a shitty and greedy startup. I regret giving them input/feedback early on, before they launched their service.

    Unfortunately they're not the only ones, now that most crypto bros are on the generative AI hype train after their bank man got fried…

  • edited March 2024

    @kirmesteggno said:
    Suno.ai is a shitty and greedy startup, I regret giving them input/feedback early on before they launched their service.

    Unfortunately they're not the only ones now that most crypto bros are on the generative AI hypetrain after their bank man got fried..

    Can you share more about why you regret it, if the info is not too personal?

  • @Gavinski said:

    @kirmesteggno said:
    Suno.ai is a shitty and greedy startup, I regret giving them input/feedback early on before they launched their service.

    Unfortunately they're not the only ones now that most crypto bros are on the generative AI hypetrain after their bank man got fried..

    Can you share more about why you regret it, if the info is not too personal?

    I wouldn't have provided feedback if I knew about how restrictive they'd make the service. They did a survey within their newsletter funnel asking people what they'd like to see.

  • @Gavinski said:

    @NeuM said:

    The assumption is you get ALL RIGHTS to your own paid music, but if you stop paying their "licensing fee" in the form of a subscription, then the rights (minus your own lyrics) revert back to Suno AI.

    One other thing to beware of here: you just never know whether they could make their subscription prohibitively expensive at a future date. And if you unsub, you've lost all rights to your catalogue made with their stuff.

  • @Gavinski said:
    One other thing to beware here, you just never know whether they could make their subscription prohibitively expensive at a future date. And if you unsub you've lost all rights to your catalogue made with their stuff.

    They've really got high on their own farts. The generated stuff is very far from being actually usable as is. It's impressive as a toy, in the sense of "look what that puter can do", but that's about it.

  • @kirmesteggno said:

    They're really got high on their own farts, the generated stuff is very far from being actually usable as is. It's impressive as a toy in the sense of "look what that puter can do", that's about it.

    I think this is one reason I'm evolving towards making Prog Rock (even if it'll be all sample-based, like how Justice produces their stuff). AI can't hold a candle to the sheer creativity required for Prog Rock and tracks in a Prog Rock style. :mrgreen:

  • edited March 2024

    @jwmmakerofmusic said:

    I think this is one reason I'm evolving towards making Prog Rock (even if it'll be all sample based like how Justice produces their stuff). AI can't hold a candle to the sheer creativity required for Prog Rock and tracks in a Prog Rock style. :mrgreen:

    For me music is just a lifelong hobby, and AI or trends will never dictate what I produce and what I don't. It's like video games or watching movies for others. AI is just a toy and sample source for me.

    People who do it professionally and actually make money with music either aren't true artists (they have little to no artistic sovereignty), or they don't earn their income from the music directly but from the fame (touring, merch) and/or insights into their process (courses, samples, masters).

    The AI-generated stuff only really replaces the fake artists and those who are already behaving like robots in this industry, not actual artists whose fans want to peek behind the curtains and are music fans themselves.

    That's my opinion about AI.

    What happened in the last decades is that the barrier to creating music got low enough, and music collecting got so devalued by streaming, that music experts (consumers/collectors) got into DJing and production. They care about the production process, workflow and tools used by artists they admire; they basically became producers themselves. This trend will only increase with AI because it makes it even easier for many to get started.

    But this also implies that AI songs won't matter to those people if there isn't a process and human artistry attached to them, at least until it becomes easy to create your own AIs and the process shifts to the making of the AIs themselves.

  • @kirmesteggno said:

    What happened in the last decades is that the barrier to create music got low enough and music collecting kinda devalued due to streaming that music experts (consumers/collectors) got into DJing and production and care about the production process, workflow and tools used by artists they admire, they basically became producers themselves. This trend will only increase with AI because it makes it even easier for many to get started.

    But this also implies that AI songs won't matter to those people if there isn't a process and human artistry attached to them until it becomes easy to create your own AIs and the process attached to that (the artists themselves).

    There are a ton of AI tools now for image making that give a range of flexibility (dare I say expression) for artists. No doubt that sort of thing will come to music. But yah, it is not like the barrier to entry for music was high pre-AI, with sequencers etc. Here Bruce ruminates about kids with thumb drives replacing actual musicians...

    https://blabbermouth.net/news/bruce-dickinson-says-concert-ticket-prices-have-gone-through-the-roof-ive-got-no-interest-in-paying-1200-to-see-u2

  • edited March 2024

    @AudioGus said:

    There are a ton of AI tools now for image making that give a range of flexibility (dare I say expression) for artists. No doubt that sort of thing will come to music. But yah, it is not like the barrier to entry for music was high pre AI with sequencers etc. Here Bruce ruminates about kids with thumb drives replacing actual musicians...

    https://blabbermouth.net/news/bruce-dickinson-says-concert-ticket-prices-have-gone-through-the-roof-ive-got-no-interest-in-paying-1200-to-see-u2

    From my last post it may sound like I'm against AI, but I'm not. It makes the process behind the creation and the creative decisions more important and valuable.

    My hypothesis is that AI will actually create more producers, and those producers may discover you through your process, maybe watching a video of you making the track before they hear the actual finished track.

    Producing on iPads also makes it easy to screen record and document the process, which is a huge plus. Being able to offer insight into projects will become more and more important as the end results get devalued by AI, the better it gets.

    You can already be creative with AI: take a Suno track and run it through a stem separator, apply different processing and layers to those stems in a DAW, and do some sort of stem mixing and mastering. Or you treat it like a sampled record and flip it into something new, cut it into a sample pack, etc.

    What's lame is the marketing of Suno, and the tech bros who really think it can compete with actual artists or library music right now. No artist in their right mind will rely on it and include it in their process under those terms. You're better off using Tracklib or other library music labels.
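The stem-mixing step described above boils down to summing gain-weighted stems and guarding against clipping. A minimal NumPy sketch, assuming a separator (UVR, Demucs, or similar) has already produced stems loaded as mono float arrays in [-1, 1]; `mix_stems` and the gain values are illustrative, not part of any of those tools:

```python
import numpy as np

def mix_stems(stems, gains):
    """Mix named stems (mono float arrays in [-1, 1]) with per-stem gain,
    then peak-normalize only if the summed signal would clip."""
    length = max(len(s) for s in stems.values())
    mix = np.zeros(length)
    for name, audio in stems.items():
        gain = gains.get(name, 1.0)  # stems without an explicit gain pass through
        mix[: len(audio)] += gain * np.asarray(audio, dtype=float)
    peak = np.max(np.abs(mix))
    if peak > 1.0:  # avoid clipping on export
        mix /= peak
    return mix

# e.g. duck the AI vocals and push the drums before re-rendering:
# remix = mix_stems(stems, {"vocals": 0.4, "drums": 1.2})
```

From there the remix can be written back to a WAV and treated like any other sample source, as described in the post.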

  • @kirmesteggno said:

    You can already be creative with AI and take a Suno track and run it through a stem separator, apply different processings and layers to those stems in a daw and do some sort of stem mixing and mastering. Or you treat it like a sampled record and flip it into something new, cut it into a sample pack etc.

    What's lame is the marketing of Suno and tech bros who really think that it can compete with actual artists or library music right now. No artists in their right mind will rely on it and include it into their process with those terms. You're better off using Tracklib or other library music labels.

    I have enjoyed putting Suno outputs through stem separation and messing with them. I am kind of holding off now, though, because I dove deep and hard into messing with AI images over the past two-plus years, and I found that every six months there was a leap that rendered a ton of the previous six months' work moot. I mean, in some ways being an early adopter with tools can give you an instinctive edge, but at the same time a lot of hours can be spent on things that will be leapfrogged soon enough.

    I actually do find that maybe one in 20 Suno generations can go into a playlist and doesn't need anything for me to just enjoy listening to it like any other track out there. Usually it is cheeky and has an ironic 'so bad it is good' quality, but hey, I like it when humans make that stuff too.

    As for listeners, I figure there will definitely be a wide range of tastes and habits. I do imagine that soon enough there will be a very large percentage of people who just end up listening to machine-based streams almost exclusively, without a second thought as to how they are made.

  • @AudioGus said:
    I have enjoyed putting Suno outputs through stem separation and messing with them. I am kind of holding off now, though, because I dove deep and hard into messing with AI images over the past two-plus years, and I found that every six months there was a leap that rendered a ton of the previous six months' work moot. I mean, in some ways being an early adopter with tools can give you an instinctive edge, but at the same time a lot of hours can be spent on things that will be leapfrogged soon enough.

    That applies for sure to epic and cinematic styles, which won't age very well due to the overall complexity: the typical AI content that looks like it was made by AI at first glance.

    I've explored AI for simpler bread-and-butter graphic design stuff like logos and icons and wasn't that impressed. For that I'm much better off with a good font library and a library of shapes. I feel the same is true for simpler forms of music, like loop-based hip hop beats or techno. Instead of generating 30 loops and keeping 2 or 3 that are somewhat OK, I could pick a soul classic at random from a playlist and flip it into a better loop in the same amount of time.

    Current AI shines for stuff where the details drown in the overall complexity, imo; remove the complexity, like in modern graphics, and the results are often wack.

    @AudioGus said:
    I actually do find that maybe one in 20 Suno generations can go into a playlist and doesn't need anything for me to just enjoy listening to it like any other track out there. Usually it is cheeky and has an ironic 'so bad it is good' quality, but hey, I like it when humans make that stuff too.

    I like browsing through wacky AI posts and other funny posts on Reddit. But for me, what I've heard so far isn't something that would compete with the stuff I put on when I want to listen to music. It's more a "wow, AI can do that, that's cool".

    @AudioGus said:
    As for listeners, I figure there will definitely be a wide range of tastes and habits. I do imagine that soon enough there will be a very large percentage of people who just end up listening to machine-based streams almost exclusively, without a second thought as to how they are made.

    Like the radio, basically, where I know the songs/hooks but not the track or artist names.

  • edited March 2024

    @kirmesteggno said:

    That applies for sure to epic and cinematic styles, which won't age very well due to their overall complexity: the typical AI content that looks like it was made by AI at first glance.

    I've explored AI for simpler bread-and-butter graphic design stuff like logos and icons and wasn't that impressed. For that I'm much better off with a good font library and a library of shapes. I feel the same is true for simpler forms of music like loop-based hip hop beats or techno. Instead of generating 30 loops and keeping the 2 or 3 that are somewhat OK, I could pick a soul classic at random from a playlist and flip it into a better loop in the same amount of time.

    Current AI shines for stuff where the details drown in the overall complexity, imo; remove the complexity, as in modern graphics, and the results are often wack.

    Control Net and Loras in a Stable Diffusion UI that properly supports them (with the bonus of running locally, for free, on your own GPU) let you massively accelerate the production of any kind of static image, at any level of minimal or maximal complexity. It really is limitless now. Not saying it works without a certain degree of human know-how and taste, though. There is currently nothing remotely close to this for music, but it will happen.
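    For anyone wondering what the "control" in Control Net actually is: the model is conditioned on a simple control image, often just an edge map pulled from a reference picture. A numpy-only sketch of that preprocessing step (a crude gradient-magnitude stand-in for the real Canny detector; the threshold value is just illustrative):

```python
import numpy as np

def edge_control_image(img: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Build a binary edge map from a grayscale image (values in 0..1).

    A crude gradient-magnitude stand-in for the Canny preprocessor that
    the canny Control Net model is conditioned on.
    """
    # Finite-difference gradients along each axis.
    gy, gx = np.gradient(img.astype(np.float64))
    magnitude = np.hypot(gx, gy)
    # Threshold the gradient magnitude into a 0/1 edge map.
    return (magnitude > threshold).astype(np.uint8)

# Tiny test image: dark left half, bright right half -> one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = edge_control_image(img)
```

    Feed an edge map like this (at the model's working resolution) alongside your prompt, and the canny Control Net keeps the generated image's outlines pinned to it while the prompt fills in everything else.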

  • @AudioGus said:

    Control Net and Loras in a Stable Diffusion UI that properly supports them (with the bonus of running locally, for free, on your own GPU) let you massively accelerate the production of any kind of static image, at any level of minimal or maximal complexity. It really is limitless now. Not saying it works without a certain degree of human know-how and taste, though. There is currently nothing remotely close to this for music, but it will happen.

    I'm better at graphics than music production. Are Stability Matrix and Draw Things good app options on the Mac? Is there something else that runs locally?

  • @kirmesteggno said:

    I'm better at graphics than music production. Are Stability Matrix and Draw Things good app options on the Mac? Is there something else that runs locally?

    Hmmm, I am not sure what Mac performance is like in general, but Draw Things on my M1 Pro takes several minutes to render an image that takes about 15 seconds on my PC with a 3090. It does support Control Net and Lora, so it checks those boxes. Being able to render and mix between variations super fast is pretty critical for me.

  • @AudioGus said:

    Hmmm, I am not sure what Mac performance is like in general, but Draw Things on my M1 Pro takes several minutes to render an image that takes about 15 seconds on my PC with a 3090. It does support Control Net and Lora, so it checks those boxes. Being able to render and mix between variations super fast is pretty critical for me.

    Thanks! I've found some Loras on Civit like this one: https://civitai.com/models/276981/20-osaka-metro-20-series-sd15

    My goal for this would be to generate flat images of train models from the sidewalk perspective. I don't mind waiting because I'd do something else in the meantime anyway.

  • edited March 2024

    @kirmesteggno said:

    Thanks! I've found some Loras on Civit like this one: https://civitai.com/models/276981/20-osaka-metro-20-series-sd15

    My goal for this would be to generate flat images of train models from the sidewalk perspective. I don't mind waiting because I'd do something else in the meantime anyway.

    For that you could pretty much make or find a side view train image and use img2img and control net to make a ton of variations quite easily.

    For me the need for quick rendering is about how quickly I get within the ball park to start working. Sometimes it takes several dozen images of flailing around in failure-ville before I even begin.
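    To give a feel for what the img2img "strength" setting is doing under the hood: the pipeline noises your source image by roughly that amount and denoises from there, so lower strength keeps more of the original picture. A toy numpy sketch of the idea (a linear blend, not the real diffusion noise schedule):

```python
import numpy as np

def img2img_start(image: np.ndarray, strength: float) -> np.ndarray:
    """Blend an image with Gaussian noise according to img2img strength.

    strength=0.0 returns the image untouched; strength=1.0 is pure noise.
    Real pipelines use a proper noise schedule, but the intuition is the
    same: higher strength leaves less of the source image intact.
    """
    noise = np.random.default_rng(0).standard_normal(image.shape)
    return (1.0 - strength) * image + strength * noise

src = np.ones((4, 4))              # stand-in for the source train photo
subtle = img2img_start(src, 0.2)   # mild variation, close to the source
wild = img2img_start(src, 0.9)     # mostly noise, loose variation
```

    In practice something in the middle of the range tends to give recognisable variations of the source; closer to 1.0 and only the rough composition survives, which is where adding a control net helps pin things down again.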

  • edited March 2024

    @AudioGus said:

    For that you could pretty much make or find a side view train image and use img2img and control net to make a ton of variations quite easily.

    Are Control Net and Loras separate things, or do they work together, e.g. img2img using the Lora and the image as reference?

    For me the need for quick rendering is about how quickly I get within the ball park to start working. Sometimes it takes several dozen images of flailing around in failure-ville before I even begin.

    Dialing in the prompt I guess. Would love to watch such a process from end to end. Any YT creators you can recommend?

  • edited March 2024

    @kirmesteggno said:

    Are Control Net and Loras separate things, or do they work together, e.g. img2img using the Lora and the image as reference?

    Control Net and Lora are two different things, but yes, they can and do work together.

    I wouldn't think a Lora would really be necessary for the specific train example; a good model like Crystal Clear XL Prime would have it covered. Best to stick with img2img initially, get to know how that works, then play around with control net once img2img makes sense and you want more specific control.

    For SDXL control net models I think there are mainly still Canny, Depth and IP-Adapter (which kind of reduces my need for Loras), but really, if you can art (sketch/paint/Photoshop) you don't need more than those models. SD1.5 has a lot more control net options, but a lot of them were just experiments, most of which have very niche uses now, if any. Maybe they are useful for some developers making super-specific apps/tools.

    Loras are a whole other layer of the process. Most of the time I don't use them. Back in the SD1.5 days I used (and trained) them a lot more, but SDXL models are very robust now.

    For me the need for quick rendering is about how quickly I get within the ball park to start working. Sometimes it takes several dozen images of flailing around in failure-ville before I even begin.

    Dialing in the prompt I guess. Would love to watch such a process from end to end. Any YT creators you can recommend?

    Hmmm, I'm not sure about YT folks who show their process. For the most part I just get my news from Reddit for new tools/features. Whenever I see YT folks stumble through, I get nerd rage and start wanting to yell in the comments, heh. I started with SD back in the closed-beta days and got the slow drip of new features and tools over time. I imagine there is a fair amount of noise to sort through now.
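    Since Loras keep coming up: under the hood a Lora is just a pair of small trained matrices whose product forms a low-rank update added onto a frozen base weight, which is why the files are tiny compared to a full model. A numpy sketch with made-up illustrative sizes (real SD layers are much larger):

```python
import numpy as np

rng = np.random.default_rng(42)

# Frozen base weight, standing in for one attention projection matrix.
d_out, d_in, rank = 16, 16, 4
W = rng.standard_normal((d_out, d_in))

# A Lora trains only these two small matrices; their product is a
# rank-limited update to W. alpha acts like the strength slider in SD UIs.
B = rng.standard_normal((d_out, rank)) * 0.01
A = rng.standard_normal((rank, d_in)) * 0.01
alpha = 1.0

W_adapted = W + alpha * (B @ A)

# Far fewer trainable parameters than fine-tuning all of W.
full_params = W.size          # d_out * d_in
lora_params = A.size + B.size  # rank * (d_in + d_out); gap grows with size
```

    That alpha scaling is also why UIs can blend a Lora in and out, or stack several at once, without touching the base model.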

  • @AudioGus said:

    Control Net and Lora are two different things, but yes they can and do work together.

    I wouldn't think a Lora would really be necessary for the specific train example, a good model like Crystal Clear XL Prime would have it covered. Best to just stick with img2img initially, get to know how that works then play around with control net once img2img makes sense and you want more specific control.

    Alright. Gonna play a bit with Stability Matrix today. Here's what the train pics are for: YT Link

    It's not my channel but I was on a similar level when I was fully dedicated to it.

    I hope to get more environments out of it in the long run, like bridge underpasses etc., and maybe even train some models to mimic real-world trains and their details, if that's possible. In past years I often toyed with the idea of getting into Blender for those train scenes, but AI seems the way to go now.

    For SDXL control net models I think there are mainly still Canny, Depth and IP-Adapter (which kind of reduces my need for Loras), but really, if you can art (sketch/paint/Photoshop) you don't need more than those models. SD1.5 has a lot more control net options, but a lot of them were just experiments, most of which have very niche uses now, if any. Maybe they are useful for some developers making super-specific apps/tools.

    Loras are a whole other layer of the process. Most of the time I don't use them. Back in the SD1.5 days I used (and trained) them a lot more, but SDXL models are very robust now.

    The only image AIs I've used were Midjourney, Leonardo.ai and the one included in Canva; Leonardo was the best of them. My old Mac couldn't run stuff locally, and I only recently switched to an M1 base model.

    Dialing in the prompt I guess. Would love to watch such a process from end to end. Any YT creators you can recommend?

    Hmmm, I'm not sure about YT folks who show their process. For the most part I just get my news from Reddit for new tools/features. Whenever I see YT folks stumble through, I get nerd rage and start wanting to yell in the comments, heh. I started with SD back in the closed-beta days and got the slow drip of new features and tools over time. I imagine there is a fair amount of noise to sort through now.

    I know that nerd rage feeling, probably very common for autodidacts.
