I suppose if I became a minimalist drone artist, all I’d need is a monochrome still frame of a power pylon, and that’s it, that’s the gig.
In time to the music, or independent of it?
This may be of interest
https://brightsidestudios.co.uk/projects/a-human-connection
I love visuals projected head-on onto a performing band. Very popular with industrial bands in the 80s, and it still looks cool today. The Human League were doing lots of stuff with “slides” in the late 70s; no doubt bands were doing similar stuff even earlier.
I’ve been toying with the idea of getting a cheap projector to beam stuff on my studio wall ... for a bit of atmosphere whilst jamming
In the 70s I too was using slides with a band; I even made a primitive red and green spinning filter. If you hand-create slides using only red and green, shown through a projector with a spinning red/green filter, you get a crude form of animation: designs can spin or run in stripes. At the time it was pretty cool.
Visuals are a big part of my practice
I recently did a performance in Krakow in the 360 gallery as artist in residence and part of Patchlab festival, and performed across 4 surfaces of 10 projectors which totalled 16K pixels width. That's a lot of visuals and the picture of me here is just a fraction of it:
Video over NDI with gigabit ethernet. The work in this case was a live audiovisual film performance. It was all performed from one laptop, in this case with Ableton and Resolume. (So it is possible from one, although ideally I'd prefer using two laptops. I made sure I kept the CPU load as low as reasonably possible, since I'm also controlling and sequencing video simultaneously with the audio, with animation, layers and effects. Travelling with lots of gear is a pain too!)
My laptop has 32GB of RAM and an NVMe SSD.
I don't use the iPad live yet; it's not ready for live work for my needs, maybe one day. For this project I did start planning to include it as a MIDI sequencer and controller for Resolume, but it was going to take too long to do what is much easier and more reliable on a laptop + MIDI controller. It just depends on the project and its scale, though. However, my music was all sketched out on the iPad in AUM and then recorded into Ableton and arranged for the live set.
I create the visual material by any means suitable for the work, and it's a constant process of evaluating the goals for a performance or sequence against the potential tools and processes. A mixture of video and animation for this project. A lot of it was created in After Effects first and then manipulated live in Resolume, with layers synced to the music via MIDI.
My process on this project is similar to film-making, except I can change the sequence live. It even has an abstract narrative thread, so there's always a strong connection between the visuals and the music/sound. In this way the visuals are equally as important as the audio, so it's a kind of performance video art with audio-reactive elements.
Nice.
@Carnbot completely cool.
That’s more or less exactly the direction I’m pursuing too, but preferably while avoiding complexity and expense. The strategic narrative with instant tactical choices is a good way; I believe that’s more or less how Kraftwerk do all their stuff.
I’m questioning whether I need to have a “backdrop” behind, with artist in front. I know that’s the traditional and logical way, because the wall behind is usually bigger than the artist, no matter how big the artist. I’m questioning that relationship though.
Thanks was fun but complicated to make. My AV film project was about climate change from a post-human perspective, with added irony that it was projected with loads of electricity of course....
Yeah it's definitely a thing to think about. Depends on the space completely for me and the work/tools. The difficulty I had with this space was that it was 4 walls and impossible to see it all from one position, so I had to rely on looking at the laptop display quite often to see if it was all working ok.
cough, yeah I'm still working, and should have something in the app store in about a week
Mucking around with Ableton, Resolume and Groboto about 10 years ago.....before I divested myself of my fixed chattels in favour of a more mobile life.....
Live AV is totally addictive.
Today I stripped out the vocals and guitar and a lot of other tracks from the international all-time mega-hit New Year No Mistake, and put back in a less finalised version of the whole iMS-20 set of tracks. I also did the same to the video, stripped out the head shots of me, the Rutt-Etra-ised frames of me, and all the titles. This, today’s edit, is the minimalised video-synthesis that underlies the video that was released in 2014.
Except it doesn’t show up. Wait, I’ll find another way.
Try this:
https://open.lbry.com/@IanKTindale:1/New-Year-No-Mistake-video-synthesis-minima:6
Ummm, Hello!
Great topic!!
Craftwife! Here is their twitter account.
https://twitter.com/Craftwife
Tagtool!
Does anyone play the app "Quantum VJ HD"?
I've started to do more experiments with this on my Instagram channel:
Still learning but it's fun to add video layer to music. I'm following a few people that are doing similar. Glitch Clip is a great new app for this. I thought about Resolume before but I don't perform live and don't use computer at all these days for music. Also, for Instagram, limiting things to 1 minute helps for exploring concepts. Long form maybe someday.
@Carnbot That is absolutely amazing! Are there videos of your performance?
No unfortunately there aren't individual performance videos....You can see a few clips of me on the Patchlab aftermovie here but it's more of an ad for the festival.
This animation using STAELLA was my latest attempt. In theory it could be live. I want to try a more Glitch Clip style next. I'm not focusing on live at the moment, but I am interested in automating parts of the animation process.
(rightbrainrecordings on Instagram)
I don’t know if Mr. @MonzoPro is referring to this guy but he’s great and using Ebo Suite with a good result, I think.
Here he explains the process of the previous video I’ve posted.
Thanks, no the guy I saw was at a gig in 2007 - but actually that’s the effect he was getting, so looks like EboSuite would do the job.
I was thinking along these lines, of how to do mixed visual stuff with music, and found my preferred solution, QLab, on mac. Originally I was using it to trigger canned sounds for a theater production, but the software has all this visual stuff as well- video cues, lighting cues (via a USB to DMX dongle), midi cues, esoteric stuff like network cues for computer wizards.
The main idea is that you plan your show out as a list of cues, and can advance down them by hitting the space bar. But it's really flexible in what that can look like: you put in “fade” cues to fade a sound, video, or light in or out, or partially; they amount to little automation ramps. For example, I had some music playing in a scene and used a fade cue to change the EQ to make it muffled when the scene switched to a different room. Same with video effects, lighting color changes, etc.
The main vibe is going down your list, but it has lots of utility for making loops, or even loops of cues, and getting out of loops gracefully (“de-vamp”). Another way to trigger the cues is with hotkeys: either a keyboard press or... MIDI! I did a thing for some friends’ music video where I hooked drum triggers up to trigger different lights from each piece of the kit. Stuff like that is pretty straightforward. Instead of drum triggers it could be, say, a MIDI foot controller doing video cues. You could have it so that when you gratuitously tweak the cutoff knob, the CC message changes the color of a stage light; interactive things like that.
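To make the knob-to-light idea concrete, here's a minimal sketch of the kind of mapping involved: a MIDI CC value (0–127, e.g. from a filter cutoff knob) scaled to an RGB color that a cue system or a USB-DMX dongle could apply to a fixture. The function name and the blue-to-red mapping are illustrative assumptions, not a real QLab API.

```python
# Hypothetical sketch: scale a MIDI CC value (0-127) to an RGB color.
# In practice you'd feed incoming control_change messages to this and
# send the result as three DMX channel levels (R, G, B) for the light.

def cc_to_rgb(cc_value: int) -> tuple[int, int, int]:
    """Map a 0-127 MIDI CC value to an RGB color, fading blue -> red."""
    if not 0 <= cc_value <= 127:
        raise ValueError("MIDI CC values are 0-127")
    red = round(cc_value * 255 / 127)   # more cutoff -> more red
    blue = 255 - red                    # less cutoff -> more blue
    return (red, 0, blue)

print(cc_to_rgb(0))    # (0, 0, 255) -- fully blue
print(cc_to_rgb(127))  # (255, 0, 0) -- fully red
```

The same shape of mapping works for drum triggers: a note-on from each pad selects a fixture instead of a color.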
The projector can be used as a stage light, like in NIN's performance on the new Twin Peaks. It looks like they used a couple of small projectors off to the sides or something.
There’s this free software called MusicBeam that does gobo-type video patterns, the idea being that it’s the light escaping from the projector that looks cool, as opposed to the image. You can blur the lens to make things fuzzier/less stark. I used that with two projectors, one of them mirrored, for a backdrop on both sides of the stage, for a show.
It sounds fun to just do visuals for an act... to be honest, it’s a lot of shit to set up, and extra head-scratching for a musician to run projectors and lights along with their own gear while playing music. Unless your music gear is a really minimal setup and your musical parts are so easy to play that you get bored (like if your music role is more akin to that of a technician), it might end up being too much for one person to do and have a good time.
That’s the thing, it’s a whole task really, not something that can happen ‘on the side’, if done properly.
I think the best way to embody the aim of the visual interpretation of meaning that the music is also doing is to imagine the visuals as a form of dance, to do the same things emotionally as the music is also trying to do, to tell the same story and relay the same feelings at the same time. Clip launching has to be considered a fair compromise. If it is entirely pre-done and set in stone from start to finish, then why not also just turn up with music gear and press one play button.
But, I must admit, I’m conflicted. Is it ‘allowable’ to compose the music and finish it, then come back another day and ‘do the visuals’ to the music? In the pure sense, I’ve always maintained the answer is no. In practice, I don’t think there are ways to actually compose the visual side of a work at the same time as the music is composed. The best that happens is that one comes back later and ‘dances’ the technology to the already finished music.
That’s what I did to the New Year No Mistake video I showed in the link above (or here, again, why not)
https://lbry.tv/@IanKTindale:1/New-Year-No-Mistake-video-synthesis-minima:6
I used iPad apps (that probably don’t exist, I forget their names) on the old iPad 2, which I had no way of recording, so I played them on the telly with a DSLR recording video from the telly, as I played along to the music. I did this over and over then composited loads of layers of it in FCPX.
@u0421793 I like the clip. That would be really fun to see live.
I think you've created a kind of fragmented story in the images, along with colorful graphics that accentuate the music. While I have the sense that the music is in the lead, driving the graphics, my eye is searching for meaning, trying to piece together a story from the images -- which are mostly abstract and not referential. The repetition in the music gives me the sense that I'm seeing reenactments of the same scene, each tantalizingly just beyond comprehension.
In Laurie Anderson's Blue Lagoon (which I saw performed live 35 years ago), I like how the changing color palette and clips cause me to reinterpret the lyrics. In this example, I don't think the music is conveying any meaning. I think it provides forward momentum for the story, through rhythm and repetition.
Generative or Video Sample mangling aren't an option?
I like the idea of the visuals "matching" something in the music. That said, I find it **much** harder to actually do.
My last live outing back in October involved quite a complex (for me) set of visuals, and I had the "bright" idea of having a visual to match each major movement in my 30-minute set. And because it was run from an .mp4 file playing from a laptop to a projector, it meant I had to hit certain cue points at certain times, especially for the last section, which was very much designed to go with the music (120bpm equals 2 seconds per bar, so let's have 1-second video segments to make it really fast). It just added to the overall stress, and I doubt anyone really noticed what was happening.
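The cue-timing arithmetic above can be sketched in a couple of lines: one bar lasts beats-per-bar × 60 ÷ BPM seconds, which at 120bpm in 4/4 gives the 2 seconds per bar mentioned. The function names are illustrative, just a way to pre-compute where the video cuts should land.

```python
# Sketch of the cue-timing arithmetic: bar length and cut times from BPM.

def bar_seconds(bpm: float, beats_per_bar: int = 4) -> float:
    """Duration of one bar in seconds."""
    return beats_per_bar * 60.0 / bpm

def cue_times(bpm: float, bars: int, beats_per_bar: int = 4) -> list[float]:
    """Start times (seconds) of each bar, e.g. for lining up video cuts."""
    length = bar_seconds(bpm, beats_per_bar)
    return [i * length for i in range(bars)]

print(bar_seconds(120))    # 2.0 -- so half-bar video segments last 1 second
print(cue_times(120, 4))   # [0.0, 2.0, 4.0, 6.0]
```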
On this one I used the app Hyperspektiv quite extensively to mangle the video. I just ended up filming sections of the film "Metropolis" off YouTube, corridors at work, and some of the gadgets I was using for the live performance, with Hyperspektiv, then chopping up the video and assembling the segments in Microsoft Movie Maker (don't laugh: it's all I have, and I haven't found anything better and free :-) )
My results are up on YouTube here if anyone wants to experience it (minus the pain of using Movie Maker for anything like this):
Previously I've used timelapse and slow motion in my projects; it seems to fit the general tone of what I do.