Comments
It is a fascinating little world that many don't yet know about.
Great post.
It was also called a pill or bomb, not "Molly"
It was also called House Music, not "EDM"
We are called old my friend.............LOL
You are a persuasive salesman! Definitely interested in adding this app and using it in conjunction with LumaFusion. Takete never appealed to me.
I'd be keener if I could add photos and videos, and the UI weren't so terrifying, but I'm loving some of the dev's videos. I'd be interested to hear how you get on if you take the plunge.
Oh dear, I see I've misunderstood already. I was under the impression that user material could already be incorporated. So, this stays on the wish list for now.
You can create your own 'shaders', whatever they are, though it looks a bit laborious at the moment; I'm guessing you can edit the ones already included with it. I don't think you can import vids or photos yet, but you can record your shader action.
It's a niche thing, but I like what it does so it's on the list. Reminds me of Hacker Farm stuff.
@MonzoPro yes, that's what I understood and that's great for me. I'll still use Takete for video, but this app is great for fast syncing vector patterns to beats. @RustiK agreed, I'd never heard of shaders and I'm indebted to the dev for changing that. Mod the online shaders you like to 'expose' k parameters to the app, then play. If you want, you can then make your own. Years of fun for the price of a couple of cigs
@richTowns Very well explained.
I know you have already seen it, but as it's a good complement to your post, I'll take the opportunity to slip in my little tutorial video showing how to import a shader from vertexshaderart.com to use in the K Machine ;)
There are already tons of great shaders there, and as @richTowns says, by making some small changes it is very easy to create your own. You can also save your work online.
And the plan is that very soon I can either connect the K Machine in some way with these sites, or create a specific one, accessible from inside the K Machine, so users can create, store, and share these visuals online.
(A small tutorial on 'How to derive your own shader' is on its way too.)
Tutorial: import GLSL vertex shader files into K Machine v2 from an online shader site:
https://vimeo.com/217401343
Note that Takete and Lumafusion also made this crucial choice of adding an 'old school' pause button
OK, got you. Thank you for clarifying.
100%
@kolarg98 wow Thanks for such a thoughtful explanation and your responsiveness in general. The ‘How to derive your own shader' tutorial you mention will - I believe - be the gateway into this magic for many of us.
So is this mostly a beat machine of some sort?
Plus visuals.
It's very unique (in a good way). Kinda hard to call it "some sort" of anything.
Yeah, you're right; I just find it interesting.
Indeed it is.
Yup
Sequencers with massive parameter control and mucho LFOs
3 different pattern areas, in fact
1 visual sequencer
Sounds very interesting and awesome
Initially the app may look more complicated than it is. The in-app documentation tells you how to do everything.
To make the app your own isn't very straightforward at this point, as you need to create a GLSL shader (or use some existing code), modify the code to expose parameters to use in the app, import the GLSL file, import your audio (AudioShare works well), create an audio sequence, and create a visual sequence. For someone who's comfortable with doing a little programming, all of this shouldn't be too bad. There are instructions and/or links on how to do all of this in the app.
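To give a feel for the "expose parameters" step, here's a minimal sketch of a vertex shader in the vertexshaderart.com style mentioned elsewhere in this thread (the `vertexId`/`vertexCount`/`time`/`resolution` uniforms follow that site's conventions). The `k1` uniform is purely a hypothetical stand-in for an app-exposed parameter; K Machine's actual exposure mechanism may differ.

```glsl
// Minimal vertex shader sketch, vertexshaderart.com-style conventions.
// "k1" is a hypothetical exposed parameter, not K Machine's documented API.
attribute float vertexId;    // index of the current vertex, 0..vertexCount-1
uniform float vertexCount;   // total number of vertices drawn
uniform float time;          // seconds since start
uniform vec2 resolution;     // viewport size in pixels
uniform float k1;            // hypothetical app-exposed knob (used as a radius)
varying vec4 v_color;

void main() {
  // place each vertex on a slowly rotating circle
  float a = vertexId / vertexCount * 6.28318;
  float aspect = resolution.y / resolution.x;  // keep the circle round
  vec2 p = vec2(cos(a + time), sin(a + time)) * k1;
  gl_Position = vec4(p.x * aspect, p.y, 0.0, 1.0);
  gl_PointSize = 4.0;
  v_color = vec4(1.0);
}
```

The idea, as I understand the thread, is that any uniform you add like `k1` becomes a controller in the app once the file is imported.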
Each pie slice has its own visual loop, with a grid you can use to sequence the various parameters, plus 3 audio loops, each with their own grid for sequencing various audio parameters. The whole pie is one big loop as well, or it can be played by touching the various slices.
Importing GLSL files is currently an iTunes only process.
You can always use the GLSL files that come with the app if you're not comfortable making your own. The developer's road map for updates should make doing all of the above more straightforward and allow you to do it all from within the app.
The visuals are very nice: they're vector graphics, so the potential is there to create very engaging animations/video driven by the characteristics of your audio.
Creating and customizing the GLSL files seems the most difficult aspect of the app. It's easy to select and use the GLSL files that come with the app.
I don't see any way to set the screen size, so you may have to edit the video with another app to have it play in a standard format (e.g. 16:9). You could always do a little math and create the adjustments in the app with your target format size in mind. With my iPad Air 2, the video and screenshots it creates are optimized for its 2048x1536 screen; that's 4:3, so a full-width 16:9 crop of it would be 2048x1152.
You can use it as an audio source in Audiobus 3. If the app receives audio from Audiobus 3, you can set the mix level and record video in the app with the Audiobus input too.
There's a small bug with that. K Machine doesn't show up as an output selection unless it's already running.
I wish there was some parameter to alter the strength of the audio's effect on the shaders. Some of them just seem to be independent automation, unaffected by the beat or frequencies.
I'm still looking for LFOs too.
Aren't we all

@wim
Hum, yes, I understand, but I think you're missing something here (no problem, it may also be my bad description/explanation).
In fact, this depends on how the shader is coded. What the K Machine does is make sure the sound information is available to the shader (it is done through a texture, with a few seconds of sound frequency history) and the touch information (also done through a texture with a few seconds of history), and build the controllers from the shader code. You (or your coder friend) can easily create a shader with controllers that would do what you are talking about, and your coder friend could send you the shader file to use in your own creation.

I have only done a few shaders myself (I love it but lack the time), and put them in the K Machine just as demos. Some use the sound or touch info, some don't. But the real purpose of the K Machine is to offer a completely open environment. Many sound visualisers exist on iOS, some with super sexy visuals. But, in my point of view, the limit is that, in the end, all the users will have the same type of visuals. It is different with the K Machine: it has to be considered a creation tool, and I think once you understand it, you will see it is, in fact, really quite powerful. The shaders should really be approached like the audio samples: you can load your own audio samples, and you can also load your own shaders. No limit.

Otherwise, even with a super sexy UI and default shaders, this app would only be a nice demo (which is cool too, but not the purpose here) and then, to my point of view (or at least for what I personally need from a creation tool), very limited in terms of creative potential (think of a sampler with only 8 samples and no possibility to change them, even if the samples are super cool... or a Photoshop where you could not import your image, only use the demo image... or a Final Cut where you could only use the default video.. :D). This said, I totally agree that the workflow has to evolve, but one step at a time ;)
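To make the "sound information through a texture" idea concrete, here's a tiny GLSL sketch of how a shader might read it, following vertexshaderart.com's convention (frequency bins along x, history along y, amplitude in the alpha channel). Whether K Machine's texture layout matches this exactly is an assumption; the uniform name `sound` comes from that site, not from K Machine's documentation.

```glsl
// Sketch: reading a sound-history texture from a shader.
// Layout assumed (vertexshaderart.com-style): x = frequency bin,
// y = time history, amplitude stored in the alpha channel.
uniform sampler2D sound;

// Sample a low frequency bin; row 0.0 is "now", larger rows reach
// further back into the stored history.
float bass(float historyRow) {
  return texture2D(sound, vec2(0.05, historyRow)).a;
}

// Example use inside main(): pulse a position with the current bass level.
//   float s = 1.0 + 2.0 * bass(0.0);
//   pos *= s;
```

This is why the audio-reactivity varies shader to shader, as the dev says: a shader that never samples this texture will just run as independent automation.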
@wim Hum.. I think maybe what @Rustick was mentioning are the frequency filters on each audio track.
Ok I will have a look at this, thanks for feedback !
Happy to hear that;)
@kolarg98: since you are around here, let me thank you for this great work! The customizability especially is just what I haven't found anywhere on iOS before. I am also new to shaders, though, and really, really hope you find a way to make the shaders editable inside K Machine too... it's a lot of fuss having to go back to the Mac each time.
Cheers, t
I played around with K Machine a tiny bit and liked what I heard for step-based automation etc., but just barely scratched the surface of its potential. Would love to hear more examples of what people are achieving as far as range of sounds!
@animal Thanks for your feedback, animal; yes, I know it's a bit of a pain to go back to the Mac each time; the editor should be there very soon. What I plan, in release order:
1) a first release (already in test phase, so it should be available in a few days) which:
2) a second release for Ableton Link
3) a third release for the shader editor
4) a fourth release: a new performance panel for playing the samples live
I should have a bit of time this month, so I hope all these will come rapidly.
and only then will I go for MIDI/OSC controllers, fragment shaders, dedicated online editing, etc...