Comments
Me too! Mr. One Way did a really cool one on SoundCloud, but I haven't found many productions to hear and see... I would really love to (and if you don't want to publish online, you can just send it to me by mail or whatever). Very curious!
@kolarg98 I went through the 5 tutorials on vertexshaderart.com, which were great, but unfortunately, although I was able to create some nice GLSL code, I'm still not figuring out the relationship between the definition code needed for adjusting parameters with the video sequencer in K Machine and how to work that in with the code I've created. Any help closing the gap would be appreciated.
Way over my head.
I look forward to you sorting it out.
Safari and Chrome are pretty useless for creating GLSL on these websites on iOS, but they work on macOS. You can get very tight integration between your music and the animation created with the GLSL code.
@InfoCheck Hi InfoCheck, glad you are playing with the shader part! I am working on a tutorial video describing the process; I hope I can finish it tomorrow or Saturday (I am also a bit busy with the upcoming release, which is why it's taking me some time, and since it has bug fixes it's a bit of a priority), but I can give you a quick tip that I hope helps. The main point here is to understand that, in a way:
the K Machine hacks comments
So, here it goes:
Once your shader is running fine on vertexshaderart.com, you can upload it to the machine through iTunes using the process described here:
Normally, at this point, you should have your shader running happily in the K Machine... cool... but hey, no controllers!
And this is where the magic begins (well, if there are no bugs... otherwise notify me).
1) Identify in your shader a parameter that you would like to control.
Let's take the color as a simple example. In vertex shaders on vertexshaderart.com (and in the K Machine), the color is always set at the end of the code; you should have something like:
vec3 color = vec3(1., 0., 0.);
--> this creates a 3-component vector
v_color = vec4(color, 1.);
--> this uses the previous 3-component vector as the color (the 1. at the end is for transparency)
2) What you could do, for example, is add a controller for the red component of the color. For this, at the beginning of the file, add:
#define parameter0 0.//KParameter0 0.>>1.
--> here we define a controller as parameter0, with default value 0. and a range from 0. to 1. (a color component must always be between 0. and 1. => 0. = no red, 1. = full red), and then just replace the old
vec3 color = vec3(1., 0., 0.);
with
vec3 color = vec3(parameter0, 0., 0.);
3) Copy this code into a file, upload it to the K Machine, and you should have a controller for the red component applied by the shader to each vertex.
Then, if you also want to control the green and blue components, you would have:
#define parameter0 0.//KParameter0 0.>>1.
#define parameter1 0.//KParameter1 0.>>1.
#define parameter2 0.//KParameter2 0.>>1.
and at the end:
vec3 color = vec3(parameter0, parameter1, parameter2);
There you should have three controllers and be able to fully control the color by manipulating them.
...and in this way, you can add up to 8 parameters to a shader.
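To see the whole recipe in one place, here is a minimal sketch of a complete controllable shader. It's a sketch only: it assumes vertexshaderart.com's usual built-ins (vertexId, vertexCount and the v_color output, all pre-declared by the site and by the K Machine), and the circle layout is just an illustration:

#define parameter0 0.//KParameter0 0.>>1.
#define parameter1 0.//KParameter1 0.>>1.
#define parameter2 0.//KParameter2 0.>>1.

void main() {
  // spread the vertices evenly around a circle
  float angle = vertexId / vertexCount * 6.2831853;
  gl_Position = vec4(0.8 * cos(angle), 0.8 * sin(angle), 0., 1.);
  gl_PointSize = 4.;

  // one controller per color component; on vertexshaderart.com the
  // defines keep their default values, in the K Machine they become knobs
  vec3 color = vec3(parameter0, parameter1, parameter2);
  v_color = vec4(color, 1.);
}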
The only point to understand is that the line
//KParameter0 0.>>1.
which is normally a comment in the GLSL code, is understood by the K Machine as an instruction: 'create a controller with range 0. to 1.'. So, your normal line
#define parameter0 0.//KParameter0 0.>>1.
in conventional GLSL code means just 'create a fixed parameter named parameter0 with value 0.', followed by the comment
//KParameter0 0.>>1.
That's why I say it is a sort of hack of the comment system: the K Machine will interpret a comment differently when special keywords are inserted into it.
This kind of technique could surely be considered bad practice by some programmers (comments should be... well, comments), but the cool thing with this system is that your code can run normally on a site like vertexshaderart.com (which interprets the part
//KParameter0 0.>>1.
only as a comment), so you can test and modify online, but it will be interpreted differently by the K Machine (as controllable parameters). I hope this helps;
let me know!
...I'm really working on this tutorial video; I hope it will make things more understandable.
This part of the doc may also help:
http://kolargon.com/KMachineV2Doc/KMachineV2Documentation.html
B. Creating advanced vertex shaders
@InfoCheck I'll add a second example, as it is related to @win's comment:
To control the sound amplitude when sound is used in a shader, you should have somewhere an access to the sound texture (sounds are transmitted to the graphics card as textures, i.e. a big block of data).
So somewhere you should have a call like:
texture2D(sound, ...)
for example, in https://www.vertexshaderart.com/art/f24WmWznGNdEXEQTu
float sndFactor = texture2D(sound, vec2(relMasterCircleId, 0.)).a ;
Here, note that the texture contains all the frequency data in each row (low frequencies on the left and high frequencies on the right), and each row represents a moment in time, the first row (top) being the present and the last row (bottom) a few seconds earlier. (But I think the videos by Greggman already cover this, so you should be fine.)
Also, note that the data is accessed with a 2-dimensional vector, here vec2(relMasterCircleId, 0.), which has to be understood as the 2 coordinates of a 'picker' that you move over the texture (but here also, I think the videos by Greggman already cover this, so you should be fine at this point).
So now, if you want to add an 'amplitude controller' on the sound value, as mentioned by @win, you would do something like this:
at the beginning of the shader:
#define parameter0 0.//KParameter0 0.>>100.
and then, for example:
float sndFactor = parameter0 * texture2D(sound, vec2(relMasterCircleId, 0.)).a;
and then you can use your
sndFactor
variable to do whatever you want in the code (use it in the color, in the vertex positions, etc.). Now, when loading the shader in the K Machine, you have a controller for this parameter. Let me know if you have any question or problem with the process.
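For a self-contained illustration, here is a minimal sketch along these lines (assuming the same vertexshaderart.com built-ins as above; the spectrum-bar layout is invented for the example, and a per-vertex u coordinate stands in for the relMasterCircleId of the linked shader):

#define parameter0 1.//KParameter0 0.>>100.

void main() {
  // horizontal position from 0. to 1., also used as the frequency 'picker'
  float u = vertexId / vertexCount;

  // amplitude of that frequency bin in the most recent (top) row,
  // scaled by the K Machine controller
  float sndFactor = parameter0 * texture2D(sound, vec2(u, 0.)).a;

  // a simple spectrum: x spans the screen, height follows the sound
  gl_Position = vec4(u * 2. - 1., sndFactor - 0.5, 0., 1.);
  gl_PointSize = 3.;

  v_color = vec4(sndFactor, 0.2, 1. - sndFactor, 1.);
}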
The cool thing with this 'not so user friendly' controller system is that not only can you use the tons of shaders on the vertexshaderart.com website (and soon other websites with fragment shaders), but each one can be interpreted in tons of ways depending on which parameters you choose to control... an exponential source of diversity.
@kolarg98 thanks for the instructions, I was able to add control parameters to a GLSL file I created and use it in K Machine. It wasn't the most efficient code, so the app has been crashing quite a lot while I try to set up the sequencing. It would be nice to have some sort of resource monitor so we can get a sense of how much of our resources our setups are using, so we can reduce crashes.
Thanks, that answers my question completely. At the end of a long work day there isn't enough battery power left in my head to dive into shader code, but sometime it will happen. When it does, I hope to try to modify some existing shaders to utilize the audio information.
Just a suggestion... take it or leave it... it seems to me that including with the app some demo shaders that both use audio and expose related controls would help people connect with the potential that is there.
And another thought: maybe an easier, temporary way to get shaders into the app without iTunes would be to use the open-in functionality of iOS. I don't know how that is done programming-wise, but it works with many file types. That way, no matter where the file was, if one could get access to it (e.g. through Dropbox, Safari, etc.), K Machine could be selected through the open-in functionality and the file would import into the app. There's probably some fairly easy-to-use call for this that doesn't require writing tons of code.
Along with that, if there were a way for you to allow the source of the currently loaded shader to be copied to the clipboard, geeks like me could then paste it into a text editor like Textastic, edit away, then "open-in" the modified file for an instant onboard modification. All with what seems to me like much less work than other routes.
This, coming from someone who has never written a line of iOS code, so take it for what it's worth.
I can't say what others meant, but what I expected when I heard lots of "LFOs" was that there were some free-running low frequency oscillators that could be linked to parameters to automate their movement. So, for instance, you might have an oscillator that puts out a 1 Hz sine wave. This could be linked to one of the parameters to move its value up and down slowly and smoothly. With no sequencing, the movement of the parameter would just follow the sine wave. So maybe you could have a pulsing color, a smoothly shrinking and growing ball, a rotating smiley face, a sweeping filter, etc. Used in conjunction with the sequencer, you could have smooth movement punctuated by jumps. Usually LFOs come with some selectable waveforms, a speed control, and some way to adjust the amplitude of the effect on the parameter.
Wikipedia has a good basic explanation of the type of LFO I was thinking of and how they're used with synth parameters.
That's not to say I'm disappointed they're not there. I just wanted to explain what I thought @RustiK meant, and my puzzlement over where they might be found.
I think having K Machine able to control animation that is both visual and audio, using GLSL and the sequencers, would be great. A lot of the same logic and constraints apply to both, so it'd be ideal for coupling them together. Learning about GLSL while doing the tutorials on the vertexshaderart.com site really brought that message home to me.
Having a certain number of robust shaders will certainly meet the needs of many people without their really needing to dig deep into code, if there's a way to share presets so people are simply adjusting controls as in many other synth apps.
Being able to create text files on iOS and using open-in as @wim has suggested would be great, as the iTunes transfer is much more of a hassle.
In general, quite a few music programming apps have been brought to iOS from PC-based user communities, and very rarely do they provide a way to complete the whole creation cycle on iOS. Hopefully K Machine will be an exception, as even with my very limited knowledge and experience, I've enjoyed it.
@InfoCheck happy it helped. But concerning the crash, not good ;) Inefficient code should only slow down the rendering, and in fact the frame rate in the top menu is a good monitor of the resources used by the shader (there is no real other indicator as far as I know). Would it be possible for you to send me your code by PM so I can have a look? Thanks for the feedback!
Ok, yes, I think I better understand what you were looking for (and the puzzlement at not finding it).
No, there is no such thing presently. Maybe something to think about. In fact, in some early prototypes I had tried something close to that: there were 3 additional controllers sending a sine value to the shader, and in the GLSL code I could then use these values to pulse a color or whatever. At the time, the shaders were not open for editing. But when I made the controller system, I dropped this idea because, for me, it introduced too much complexity for a redundant function, since I could get exactly the same behaviour using the controllers:
Now, with controllers, I would implement this behaviour in the following way:
I would make a controller 0.>>1. that I would use for 0 = 0 Hz and 1 = 1 Hz:
#define parameter6 1.//KParameter6 0.>>1.
then, somewhere in the GLSL code, I would use the 'time' built-in parameter to create the behaviour I want:
float timeOnSec = mod(time, 1.);
...and use it in a sin function or similar in order to get the desired oscillation behaviour, as in the sketch below.
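Here is one way to wire that recipe up, as a sketch only (again assuming the standard time, vertexId and vertexCount built-ins; mapping the controller range 0.>>1. directly to a frequency of 0 to 1 Hz, and pulsing a red circle purely as an illustration):

#define parameter6 1.//KParameter6 0.>>1.

void main() {
  float angle = vertexId / vertexCount * 6.2831853;
  gl_Position = vec4(0.8 * cos(angle), 0.8 * sin(angle), 0., 1.);
  gl_PointSize = 4.;

  // parameter6 acts as the LFO frequency in Hz;
  // lfo sweeps smoothly between 0. and 1.
  float lfo = 0.5 + 0.5 * sin(6.2831853 * parameter6 * time);

  // pulse the red component with the LFO
  v_color = vec4(lfo, 0., 0., 1.);
}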
So, in the end, the system in some sense offers the possibility to construct an LFO, but no ready-to-go LFOs ;)
I think what made me take this path is that, when prototyping with LFOs, I first wanted one, then a second, then a third, and then maybe a non-linear thing or something else... everything was, firstly, getting extremely complex, and secondly, not flexible at all (what if the next day I want something else?). That's why I took the path of just trying to make the system open to user shaders.
I hope I have correctly understood what you expected from LFOs, and if so, I also understand this doesn't replace LFOs as you expected. But it can be a flexible way to do what you want to do, visually at least, and maybe in the future at the sound level (some sites like https://shadertoy.com also implement what they call GPU sound, which is basically sound generated on the GPU with shader code, and I really expect this is something the K Machine could also accept in the future).
Thanks for taking the time for feedback, though; I think making this sort of 'LFO' behaviour could be a good topic for a little tutorial.
Ok, thanks for the code. I have only tried it on vertexshaderart.com, but even there it has difficulty running on my laptop :D I will test it, but it won't surprise me if the task is simply shut down by iOS as non-responsive or something like that.
It's not that the GLSL code itself is so intensive, but since you have a lot of full-screen plain triangles (you use GL_TRIANGLES mode), the GPU has to fill them all, and that must be really, really slow on an iPad (you would need GPU capability far beyond an iPad to render this smoothly). What you could do, unless this is really the design you want for your shader, is just replace the render mode:
replacing
//KDrawmode=GL_TRIANGLES
with one of the following:
//KDrawmode=GL_POINTS
//KDrawmode=GL_LINE_STRIP
//KDrawmode=GL_LINE_LOOP
//KDrawmode=GL_LINES
since with these the GPU won't have to fill the triangles, and it should run very smoothly.
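To make the placement concrete, the top of such a shader file might then look like this (a sketch: like the KParameter comments, these directives are plain comments to vertexshaderart.com but instructions to the K Machine, and the //KVerticesNumber= value is just an example of another comment-directive it understands):

//KDrawmode=GL_LINES
//KVerticesNumber=2000
#define parameter0 0.//KParameter0 0.>>1.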
That said, more than a resource monitor (since, unless I misunderstood what you mean, the fps counter is in fact a resource monitor), I should maybe add a functionality so that if the fps drops below a certain level, the K Machine would just drop the shader and switch to a default one, or better, switch to a basic GL_POINTS render mode. As you say, this option could be configured in the preferences...
Thanks for the feedback! Let me know if it helped you.
@kolarg98 thanks for the code examples, very helpful. Can't wait to spend more time with this app. The roadmap for future work seems well thought out. Although MIDI is my priority, I see the sense in the order you have. Go for it!
On the LFO subject, I'm thinking that some of your users want to use LFOs on sound rather than on the shaders, so I don't think your example will help in that case. LFOs are really amazing, but we can get the same results if the controllers are MIDI-controllable (through third-party apps), so when you program MIDI, please make the controllers MIDI-learnable. LFOs are always nice though.
@win Once MIDI is implemented, we will be able to just point midiLFOs at K Machine to get the results we want.
@Richtowns really interesting indeed. I'd never heard of midiLFOs, but that sounds just perfect here! Thanks for the feedback!
Absolutely. Don't bother building in any LFOs; merely make control aspects available over MIDI, such that MIDI control change messages from other iPad apps can be translated into useful parametric alterations (both sonically and visually, and visually either by response from shaders, or by control of the actual viewport and camera position outside of the shader level). For this to be at its best, it'd need to appear hand in hand with Link synchronisation.
@wim @InfoCheck wow, very interesting! I didn't know about this open-in functionality at all, but if it works it could indeed be a very good move. I have been struggling with different test solutions, including a web browser inside the app which could pull in the GLSL code (and which apparently couldn't work for various reasons, including the DOM seemingly being generated on the fly when selecting the shader), but this 'open in' thing could be perfect, elegant and simple! I will dig into this as soon as possible. Thanks!!
@u0421793 Some of this can already be done with controllers inside the shader (see for example
shader_TriangleSolarXYZPosControlled.glsl
, where controllers 5, 6 and 8 respectively control the x, y and z positions, or rotations, I don't remember). Also, some users may want to work only at the fragment level, for whom the 'outside shaders camera' solution would be useless (though for sure there could be a switch or something)...
The only good point I could see is in terms of computation: if some camera calculations are done on the CPU, outside the shader, and transmitted only once per render, that's computation not repeated for each vertex, so it may have a positive impact on performance. I think I should do some real performance tests to really know the answer.
The code I created was definitely designed for triangle mode; however, I understand that what ran on my MacBook Air won't necessarily fly on my iPad Air 2, which has a lot less memory to work with. The app renders the code fine, but then all of a sudden it crashes. There is no slow, painful winding down of the animation's rendering.
I'll revise my code so it's doing fewer things and report back on my experiences.
Hmm... how many vertices are you using on vertexshaderart.com? That may explain it; maybe you had fewer than me... maybe set them with the
//KVerticesNumber=
parameter. If you say it runs smoothly and then crashes, this may be a bug... I will try it in a few minutes.
@kolarg98 here's a video made using the code I provided you. It ran okay before I started adding other loops to the visual sequencer. Since the output is formatted for the iPad screen rather than standard video formats, I had to run it through iMovie in order to post a higher-quality video. The native video quality on the iPad is much better and very sharp.
Here's a half-size screenshot from the video:

This thread is a true testament to the knowledge on this forum. Very impressed by all. Almost all anyway.
Yeah, I can see it running fine on VSA, at least once I decreased the number of vertices; going to load it on the iPad. Nice one, by the way.
@kolarg98 thank you, I was pretty much just following the tutorial lessons and your instructions while trying not to break anything.
Strange, I may have made an error previously, as now it runs very smoothly on my iPad Air 2, even when mixed with some other shaders on the same loop... I am going to have a better look inside the code... sometimes it happens that there is a div/0 or something like that, but generally you can see the problem on VSA. Really nice one indeed.
I'll give you my email by PM; could you send me your original file just so I can have a look?
I suppose one motivation for camera / viewport control is that if someone has no idea how to put together OpenGL / WebGL content (and I'm more or less in that category, I started on it about 5 years ago, didn't understand any of it, and only recently have remembered it existed), then they can still, for example, find a simple box or pyramid of vertex primitives that does nothing dynamic by itself, and yet make it 'dance' or 'jiggle' about a bit by waving the camera around in time to the tune.
Another motivation is to work on proper songs, where the verse and chorus are different views or scenes altogether, or even at the level of riffs: a particular riff or motif corresponds with a particular camera scene, and at other times you don't see that scene. In other words, the compositing is outside the shader code; the shaders correspond, kind of, with a track or instrument, but the camera views compose different scene combinations over the time axis. When a shader scene is not in view, I don't know whether it should be killed or paused, as it'll still be calculating away frantically when off camera, which may be for some time until its next appearance.
I daren't go into the application itself, it is too noisy and frightening, like a caravan full of uncontrolled chimpanzees and ten drum kits. I shut the door and run away. My preference is for a lot slower more peaceful visual and sonic scenes - relaxing stuff, with meaningful juxtapositions.
"like a caravan full of uncontrolled chimpanzees and ten drum kits"
Well, I guess I would enter and close the door for a jam session!
More seriously, ok, I understand, and yes, it would be very nice to be able to load models and create scenes and lights, and in that case I could perfectly see how an external camera would be useful. Maybe this will come, but it would be a lot of work; I doubt it's the kind of project I could do all by myself, and I must also say that for the moment I am more interested in the abstract/generative thing, and in having a fast, simple, open and performant tool that lets me create the audio-visual experimentations I imagine. It's maybe a bit like the difference between https://sketchfab.com/ and vertexshaderart.com ... I don't rule figuration out of the equation, nor what you call 'proper songs', and I think your idea may be very interesting (like the camera node thing), but there would be a lot and lot of work before I could create that kind of real-time, MIDI-responsive 3D Studio Max on iOS.
...and also well, people have different tastes 
I agree, it is a big thing. On the web, in SVG, it'd be (in 2D, flat) just a matter of fiddling around with the viewport vs. viewBox and transformations (i.e., the bit that nobody understands) to emulate a pan and zoom. Again, this would be flat: pan north, south, east, west, and embiggen or ensmallen. But that's SVG, which is easy because the user agent does the heavy work. If it were HTML5 canvas, the whole thing would need recalculating and redrawing, and that's down at the same level as the content code. I'd estimate it's similar with your thing, unless the Apple iOS viewport offers some handy handles to connect to MIDI CC, entirely outside of what goes in the viewport (GL or otherwise). I've no idea about app programming; it's not my area of understanding.
On an entirely unrelated and distinct note, and especially if you're into functional programming, what do you think of this:
https://en.m.wikipedia.org/wiki/Set_theory_(music)
(Don't ask, I won't know the answer)
So you haven't really tried to make peaceful stuff on it?
Because I have actually.
You can get a whole loop to play on it, and with the BPM multipliers you can start low and really get some melodic stuff going with Piano Scaper and Droneo.
I will try and post one.
ALSO: with a name like "K MACHINE", did you expect anything but BONKERS? I have seen the bottom of enough holes in my past life to know this app lives up to its name, just as something like Glitch Breaks does.
@kolarg98, thanks for your answers on LFOs; it will be interesting to try to embed an LFO in the shader code. For kicks, I think I'll try to make parameters to adjust the speed and amplitude as well. Should be fun if I can ever find the time.
I wouldn't worry about adding LFOs, especially if you plan to add MIDI control. If all the adjustable parameters can be MIDI-learned and the app is Link-enabled, just about anything can be done.