"A.I." (Machine Learning Algorithms) To Generate Art

Comments

  • @echoopera said:
    Time for some inner space diving :)

    Those are all very cool.

  • @echoopera said:
    Time for some inner space diving :)

    These are quite different, remind me of fungal or coral growth.

  • Is a machine making images going to be able to create anything evoking joy? Let’s hope not… ;)

  • @Svetlovska said:
    Is a machine making images going to be able to create anything evoking joy? Let’s hope not… ;)

    I don’t see why that would not be possible. But a human will have to recognize/select the works that succeed at it.

    In many ways, the art is not in the mechanics of creation but in the choice of selection.

    I took an interesting art history course on the social history of art. The professor (Frans Neckenig) was terrific. He introduced me to the notion that the artist’s (composer’s, poet’s…) skill lies in understanding/recognizing what is meaningful, but that the receivers of art think of the work as the mechanics of its production.

    He talked about how the advent of photography demonstrated this, as did modern “tape music” (by which he meant the early musique concrète pieces), Warhol, and the collages of Kurt Schwitters: how these were felt to be threats to art because they did not require the mechanical virtuosity we associate with art, but how they really showed where the art was.

    Even artists (or maybe particularly artists) are unaware that it is their choices rather than their virtuosity that create the magic. Paradoxically, the process of developing the mechanical skills is important to refining the sensibilities that select what communicates effectively.

    Years ago, I worked on generative art software. The initial idea was that it would empower just about anyone to make interesting abstract images and designs. What we ended up finding was that, with a few exceptions, the best work was always done by people who were already skilled artists, because they recognized which generated pieces were (for lack of a better word) “good” and which refinements (a tweak of contrast or proportion or hue/saturation) would make them “speak”.

  • I recently installed Stable Diffusion locally per the instructions here:
    https://www.howtogeek.com/830211/stable-diffusion-brings-local-ai-art-generation-to-your-pc/
    I wound up having to use a more VRAM-efficient fork, found here:
    https://github.com/basujindal/stable-diffusion
    Images take much longer to generate locally (I only have an Nvidia GTX 1060 with 6 GB of VRAM), but using prompts that name artists produces some very good images:
    New York street level scene after a heavy rain, Jeremy Mann
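    For reference, here is a minimal sketch of the same low-VRAM idea using the Hugging Face diffusers library instead of the fork linked above (the model ID and the exact savings are assumptions; check your diffusers version's docs):

        # Hedged sketch: half precision plus attention slicing to fit
        # Stable Diffusion v1.4 onto a ~6 GB card. Not the fork's code.
        import torch
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained(
            "CompVis/stable-diffusion-v1-4",
            torch_dtype=torch.float16,      # halves weight/activation memory
        ).to("cuda")
        pipe.enable_attention_slicing()     # lower peak VRAM at some speed cost

        prompt = "New York street level scene after a heavy rain, Jeremy Mann"
        pipe(prompt).images[0].save("rainy_street.png")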

  • edited September 2022

    @MadeofWax said:
    I recently installed Stable Diffusion locally per the instructions here:
    https://www.howtogeek.com/830211/stable-diffusion-brings-local-ai-art-generation-to-your-pc/
    I wound up having to use a more VRAM-efficient fork, found here:
    https://github.com/basujindal/stable-diffusion
    Images take much longer to generate locally (I only have an Nvidia GTX 1060 with 6 GB of VRAM), but using prompts that name artists produces some very good images:
    New York street level scene after a heavy rain, Jeremy Mann

    How long are your render times? Yesterday in a Stable Diffusion Discord chat, the founder mentioned that, surprisingly, someone now has SD running on M1 processors, taking only 15 seconds per image. Pretty good for not even hitting a GPU. It seems they are really focusing on getting it working as universally as possible, and then they will work on the high-end Stabler and Stablest Diffusion.

  • @AudioGus said:

    @MadeofWax said:
    I recently installed Stable Diffusion locally per the instructions here:
    https://www.howtogeek.com/830211/stable-diffusion-brings-local-ai-art-generation-to-your-pc/
    I wound up having to use a more VRAM-efficient fork, found here:
    https://github.com/basujindal/stable-diffusion
    Images take much longer to generate locally (I only have an Nvidia GTX 1060 with 6 GB of VRAM), but using prompts that name artists produces some very good images:
    New York street level scene after a heavy rain, Jeremy Mann

    How long are your render times? Yesterday in a Stable Diffusion Discord chat, the founder mentioned that, surprisingly, someone now has SD running on M1 processors, taking only 15 seconds per image. Pretty good for not even hitting a GPU. It seems they are really focusing on getting it working as universally as possible, and then they will work on the high-end Stabler and Stablest Diffusion.

    I use a script I found for the low-VRAM fork. It defaults to 2 iterations of 10 image samples, so 20 images take roughly 15 minutes: about 45 seconds per image, depending on the size. I don't know much about Python scripts, so I'm going through the different options slowly. I'm sure someone with more knowledge could do better.
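    For anyone curious, an invocation along these lines reproduces the batch math above (the script path and flag names follow the fork's README conventions but may differ between versions, so treat them as assumptions):

        # Hedged sketch of driving the low-VRAM fork from Python.
        import subprocess

        subprocess.run([
            "python", "optimizedSD/optimized_txt2img.py",
            "--prompt", "New York street level scene after a heavy rain, Jeremy Mann",
            "--H", "512", "--W", "512",
            "--n_iter", "2",        # 2 batches...
            "--n_samples", "10",    # ...of 10 images each = 20 images total
            "--ddim_steps", "50",
        ], check=True)
        # At ~45 s per image on a GTX 1060, the full run is ~15 minutes.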

  • I hope people are starting to understand that they should seriously question any image (still or video) they see these days. You basically cannot take anything at face value.

  • edited September 2022

    A few more from last night. No post work done on these yet:


  • edited September 2022

    The advances we see today in machine learning will eventually give us expert systems capable of designing more efficient batteries, completely safe small-scale nuclear reactors to power homes (such as pebble-bed reactors), and new, more efficient ways to produce things. We are on the cusp of a new golden age. Art, design, and language-learning systems are just the leading edge.

  • @echoopera: would you care to share the prompts for the images that look like microscopic/fractal images?

  • @espiegel123 said:
    @echoopera: would you care to share the prompts for the images that look like microscopic/fractal images?

    You can find them on my Midjourney page. I go by the same name there as I do here.

  • @echoopera said:

    @espiegel123 said:
    @echoopera: would you care to share the prompts for the images that look like microscopic/fractal images?

    You can find them on my Midjourney page. I go by the same name there as I do here.

    Tx, interesting.

  • @NeuM said:
    The advances we see today in machine learning will eventually give us expert systems capable of designing more efficient batteries, completely safe small-scale nuclear reactors to power homes (such as pebble-bed reactors), and new, more efficient ways to produce things. We are on the cusp of a new golden age. Art, design, and language-learning systems are just the leading edge.

    This could come to fruition as long as the tail doesn’t wag the dog.

  • @MadeofWax said:

    @AudioGus said:

    @MadeofWax said:
    I recently installed Stable Diffusion locally per the instructions here:
    https://www.howtogeek.com/830211/stable-diffusion-brings-local-ai-art-generation-to-your-pc/
    I wound up having to use a more VRAM-efficient fork, found here:
    https://github.com/basujindal/stable-diffusion
    Images take much longer to generate locally (I only have an Nvidia GTX 1060 with 6 GB of VRAM), but using prompts that name artists produces some very good images:
    New York street level scene after a heavy rain, Jeremy Mann

    How long are your render times? Yesterday in a Stable Diffusion Discord chat, the founder mentioned that, surprisingly, someone now has SD running on M1 processors, taking only 15 seconds per image. Pretty good for not even hitting a GPU. It seems they are really focusing on getting it working as universally as possible, and then they will work on the high-end Stabler and Stablest Diffusion.

    I use a script I found for the low-VRAM fork. It defaults to 2 iterations of 10 image samples, so 20 images take roughly 15 minutes: about 45 seconds per image, depending on the size. I don't know much about Python scripts, so I'm going through the different options slowly. I'm sure someone with more knowledge could do better.

    Wow, 45 seconds per image is still pretty awesome for that hardware.

  • edited September 2022

    I need to try a Stable Diffusion build on my PC, but my GPU is about 8 years old.

    Got a build for M1 working on my MacBook Air, but it takes a lot of resources: around 11 GB of RAM and massive CPU for a 3-minute 512x512 render. Still not liking the results compared to DALL·E or Midjourney. I have lots of free credits on NightCafe, so I'm trying SD there too.
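    For reference, M1 builds like these often run PyTorch's "mps" backend under the hood; a minimal diffusers sketch of the same idea (the memory behavior noted in the comments is an assumption about early MPS support):

        # Hedged sketch: Stable Diffusion on Apple silicon via the MPS backend.
        from diffusers import StableDiffusionPipeline

        pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
        pipe = pipe.to("mps")               # use the M1 GPU instead of the CPU
        pipe.enable_attention_slicing()     # helps on 8-16 GB machines

        image = pipe("a coral-like fractal structure, macro photography",
                     height=512, width=512).images[0]
        image.save("m1_render.png")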

  • @echoopera said:

    How about Nikola Tesla zapping a hamburger with electricity?

  • Deforum is a pretty sweet Stable Diffusion animation notebook...
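    For context, the notebook's appeal is keyframed settings; an illustrative sketch of the idea (the variable names here are made up for illustration, not Deforum's exact schema):

        # Hedged sketch of Deforum-style keyframing: prompts and camera
        # motion are scheduled per frame and interpolated between keys.
        animation_prompts = {
            0:   "a dense rainforest, volumetric light",
            60:  "the forest dissolving into coral-like fractal growth",
            120: "deep space nebula, psychedelic colors",
        }
        zoom_schedule = "0:(1.00), 60:(1.04), 120:(1.00)"  # push in, then settle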

  • @AudioGus said:
    Deforum is a pretty sweet Stable Diffusion animation notebook...

    This stuff is really the new psychedelia.

  • @auxmux said:
    I need to try a Stable Diffusion build on my PC, but my GPU is about 8 years old.

    Got a build for M1 working on my MacBook Air, but it takes a lot of resources: around 11 GB of RAM and massive CPU for a 3-minute 512x512 render. Still not liking the results compared to DALL·E or Midjourney. I have lots of free credits on NightCafe, so I'm trying SD there too.

    Each one has different strengths. For me, Stable Diffusion is best for video game / entertainment concept art and illustration, while MJ and Disco Diffusion are best for artier, abstract stuff.

  • @AudioGus said:

    @auxmux said:
    I need to try a Stable Diffusion build on my PC, but my GPU is about 8 years old.

    Got a build for M1 working on my MacBook Air, but it takes a lot of resources: around 11 GB of RAM and massive CPU for a 3-minute 512x512 render. Still not liking the results compared to DALL·E or Midjourney. I have lots of free credits on NightCafe, so I'm trying SD there too.

    Each one has different strengths. For me, Stable Diffusion is best for video game / entertainment concept art and illustration, while MJ and Disco Diffusion are best for artier, abstract stuff.

    I'd be curious which prompts/options are working for you with SD?

  • @auxmux said:

    @AudioGus said:

    @auxmux said:
    I need to try a Stable Diffusion build on my PC, but my GPU is about 8 years old.

    Got a build for M1 working on my MacBook Air, but it takes a lot of resources: around 11 GB of RAM and massive CPU for a 3-minute 512x512 render. Still not liking the results compared to DALL·E or Midjourney. I have lots of free credits on NightCafe, so I'm trying SD there too.

    Each one has different strengths. For me, Stable Diffusion is best for video game / entertainment concept art and illustration, while MJ and Disco Diffusion are best for artier, abstract stuff.

    I'd be curious which prompts/options are working for you with SD?

    If you join the Discord you can search by my name and dig back through the stuff I made in the beta. https://discord.gg/stablediffusion

  • The really interesting work is going to come when we can easily take full control of training our own models and training is much quicker. Right now it's painfully slow and needs loads of computing power, so we have to rely on open-source models like Stable Diffusion, which is fun but trained on pretty random stuff, so the quality is limited and biased towards certain things.

  • @Carnbot said:
    The really interesting work is going to come when we can easily take full control of training our own models and training is much quicker. Right now it's painfully slow and needs loads of computing power, so we have to rely on open-source models like Stable Diffusion, which is fun but trained on pretty random stuff, so the quality is limited and biased towards certain things.

    It looks like some people get a lot of great results by training extremely small "special interest" addendum datasets of just a few dozen or a few hundred images, which they can essentially append to existing models to help steer things.
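    That small-dataset steering is the idea behind techniques like textual inversion; a toy sketch of the core trick (not any specific repo's code, and the loss here is a stand-in for the real diffusion loss over the example images):

        # Hedged toy sketch: freeze the existing embedding table and train
        # only one new row that stands in for the special-interest concept.
        import torch
        import torch.nn as nn

        vocab_size, dim = 1000, 64
        embeddings = nn.Embedding(vocab_size + 1, dim)  # +1 new concept token
        new_token_id = vocab_size

        optimizer = torch.optim.Adam([embeddings.weight], lr=1e-3)
        mask = torch.zeros_like(embeddings.weight)
        mask[new_token_id] = 1.0                        # only this row may change

        target = torch.randn(dim)                       # stand-in training signal
        for step in range(100):
            optimizer.zero_grad()
            emb = embeddings(torch.tensor([new_token_id]))[0]
            loss = ((emb - target) ** 2).mean()
            loss.backward()
            embeddings.weight.grad *= mask              # zero grads on frozen rows
            optimizer.step()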

  • edited September 2022

    @auxmux
    @Carnbot

    What is it you are interested in getting it to do?

  • @AudioGus A bit diverse. I'm trying to mix styles from different artists into illustrative art, but also some conceptual photography. Some examples of both that I've generated: https://www.instagram.com/auxmux.ai/

    DALL·E is working best for creating photography; MJ and SD seem best suited to illustrative work. I pulled some examples from the SD Discord, and it seems the Euler sampler works best for me, plus lots of modifiers; LMS and PLMS aren't working for what I want. MJ works as well, and doesn't seem to require as many modifiers.
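    For reference, in code-level setups the sampler is swappable on the same model; a minimal sketch using the diffusers library (the scheduler class names are from recent diffusers releases and may not match every UI's labels):

        # Hedged sketch: same pipeline, different sampler.
        from diffusers import (StableDiffusionPipeline,
                               EulerDiscreteScheduler, LMSDiscreteScheduler)

        pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

        # Rebuild the scheduler from the pipeline's existing config.
        pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
        # ...or: pipe.scheduler = LMSDiscreteScheduler.from_config(pipe.scheduler.config)

        image = pipe("portrait in mixed illustrative styles, rim light").images[0]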

  • @auxmux said:
    @AudioGus A bit diverse. I'm trying to mix styles from different artists into illustrative art, but also some conceptual photography. Some examples of both that I've generated: https://www.instagram.com/auxmux.ai/

    DALL·E is working best for creating photography; MJ and SD seem best suited to illustrative work. I pulled some examples from the SD Discord, and it seems the Euler sampler works best for me, plus lots of modifiers; LMS and PLMS aren't working for what I want. MJ works as well, and doesn't seem to require as many modifiers.

    Yeah, MJ is definitely tweaked to fill the prompt gaps, whereas I find SD needs a full-on word salad.

  • @AudioGus said:
    @auxmux
    @Carnbot

    What is it you are interested in getting it to do?

    All sorts of things :) But training models is the gateway to the most original material, where you can filter out and create purer models; that's where the next-level stuff will be in the future, I think.

    Also moving image, because that's my preferred discipline; these tools are better for still images right now. It doesn't animate that well yet, IMO; it needs more stability in the 3rd and 4th dimensions. Nvidia's 3D model generation algorithms are looking very good too and will be pretty hot by next year...

  • @Carnbot said:
    Nvidia's 3D model generation algorithms are looking very good too and will be pretty hot by next year...

    Oh? Any links?
