It was the only ant looking in another direction, probably a different trap.
😂 Impressive! Made my morning. 😎
"weeeeeeeeeeeeeeeeeeh"
My Presets #2: LocalVariables (to edit the Envelope and starting/ending Overtones)
I wanted to use a Random function to set these, but the app doesn’t offer one. I’ll request one be added on the developer's Discord and see if it’s accepted. Until then, you'll need to type in new values for the ADSR, Initial Volume, Starting Harmonic Number and Ending Harmonic Number by hand. Input some MIDI, change the values, and hit Parse to immediately hear the results of your changes.
I made the request on the Discord for VividSynth Apps:
https://discord.gg/DxA8cgPVYw
Wait, so in VividShaper you don't have the Lua standard libraries?
E.g. https://www.lua.org/manual/5.4/manual.html#6.7
Which says:
"math.random ([m [, n]])
When called without arguments, returns a pseudo-random float with uniform distribution in the range [0,1). When called with two integers m and n, math.random returns a pseudo-random integer with uniform distribution in the range [m, n]. The call math.random(n), for a positive n, is equivalent to math.random(1,n). The call math.random(0) produces an integer with all bits (pseudo)random.
This function uses the xoshiro256** algorithm to produce pseudo-random 64-bit integers, which are the results of calls with argument 0. Other results (ranges and floats) are unbiased extracted from these integers.
Lua initializes its pseudo-random generator with the equivalent of a call to math.randomseed with no arguments, so that math.random should generate different sequences of results each time the program runs.
math.randomseed ([x [, y]])
When called with at least one argument, the integer parameters x and y are joined into a 128-bit seed that is used to reinitialize the pseudo-random generator; equal seeds produce equal sequences of numbers. The default for y is zero.
When called with no arguments, Lua generates a seed with a weak attempt for randomness.
This function returns the two seed components that were effectively used, so that setting them again repeats the sequence.
To ensure a required level of randomness to the initial state (or contrarily, to have a deterministic sequence, for instance when debugging a program), you should call math.randomseed with explicit arguments. "
Edit: Someone correct me if I'm wrong but if you embed Lua in your app, it surely would be embedded with the standard libraries, thus I would have imagined you could call the Lua math functions and not just the vividshaper API functions, or no?
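For reference, if the standard library is embedded, the calls the manual describes work as-is in a patch script. A minimal sanity-check sketch (plain Lua 5.4, nothing VividShaper-specific):

```lua
-- Plain Lua 5.4 standard library; fixed seed gives a reproducible sequence.
math.randomseed(42)
local f = math.random()        -- pseudo-random float in [0, 1)
local n = math.random(1, 127)  -- pseudo-random integer in [1, 127], e.g. a CC value
assert(f >= 0 and f < 1)
assert(n >= 1 and n <= 127)
```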
He uses math.random in some of the patches, so it works there, but maybe McD has some other aim for random (over my head).
There is also a share-your-patch channel on his Discord, so join up, folks. The 'updatefreq=xxxx' and 'generators=x' settings seem very important for reducing CPU load on pretty much any patch. I still can't get over how CPU-hungry it is.
Thanks for the code @McD, works fine. I like the ADSR in the 'header' area; it gives a better overview and is easier to edit.
Yes. The math library is there. My bad.
Deleted reply because I didn’t test for Math library correctly.
Alright, I'll find them. Your patch is very mean to my CPU load (M1), but reducing generators and setting updatefreq even higher reduced it some. The sound gets a bit thinner, but...
EDIT: Edited/deleted in light of McD's further info. Ignore this
Yes, it’s really easy to consume all the available CPU. I’m using an M1 iPad, so older iPads will need to dial back the number of overtones computed to make sounds they can use. I’d start with the Presets and see what you can get away with.
Great news! So hopefully that also gives you scope to expand your experiments over time and make use of some weird additional stuff in Lua maths
Good to know
You find math.random in the patch 'Random Walk in Latent Space'
I don't have VividShaper yet; it's on my shopping list. I spent a small fortune recently on Pianoteq Standard plus additional instruments, the FF bundle, Logic for the year, Tomofon, some SWAM things, some effects, and so on, so now I'm biting my fingers and not hitting Buy on anything just yet... so this is sort of backseat commenting without actually knowing for a fact!
I’m finding that I prefer editing variables the same way I’d investigate a synth by turning knobs. I put random on a few variables and I have no idea what actual settings are being used, and frankly most of the Parsings are just terrible. Finding which parameters to randomize is key to making a useful randomizing script. There’s a learning curve that I’m sure many will not be willing to climb. In the end, something like Tera Pro produces better sounds than I’d ever be able to approach with code. Still, there are some interesting sounds here for me, since I tend to crave sine waves with various envelopes and overtones added, and this does that style of synthesis really well.
I’m finding interesting sounds here:
3 VividShapers and 2 PianoTeq 8 instances.
Looking at the app documentation, I noticed you can control variables with CC inputs, so I mapped CCs 20-27 to ADSR, Initial Volume, Frequency (CPU impact), Starting Harmonic and Ending Harmonic. I also found a way to output the current CC values as text:
So then you just point an LFO (or whatever) set to one of those specific CCs at it using the AUM modulation system, and even though the app has no parameters exposed, it works anyway. Is that right?
I have a Bluetooth Keyboard with 8 knobs that output CC’s 20-27 by default. I configured the keyboard as a MIDI input to a VividShaper instance running this script. I think I’ll standardize on ADSR CC’s because it’s great to dial in a nice envelope while the script is active.
Dialing in the number of overtones in the wave/oscillator is also great for finding a nice tone. VividShaper also has internal EQ FX that could be implemented and controlled externally.
Rozeta LFO and most LFO apps allow you to set the output CC value. I’ll look for useful variables to assign LFO’s as input.
NOTE: “ncc[20]” is a normalized value from 0 to 1. “cc[20]” forwards the actual 0-127 CC value.
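As a sketch of how that mapping might look inside a patch script — only ncc[] and cc[] are from the note above; the envelope variable names here are placeholders I made up, not the real VividShaper identifiers:

```lua
-- ncc[n] is the normalized 0-1 CC value, cc[n] the raw 0-127 value.
-- attack/release/startharm are HYPOTHETICAL names for illustration.
attack    = 0.01 + ncc[20] * 2.0          -- CC 20: attack time, 0.01-2.01 s
release   = 0.05 + ncc[23] * 4.0          -- CC 23: release time, 0.05-4.05 s
startharm = 1 + math.floor(ncc[26] * 31)  -- CC 26: starting harmonic, 1-32
rawval    = cc[24]                        -- CC 24: raw 0-127 value, unscaled
```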
I see there's a lot of discussion going on here!
There are a few things that I'm working on for v1.2, and one of them is to add parameters. Personally, I do prefer CC because then everything is set up for me already. I also have a BT keyboard with the 8 knobs set to CC 20-27. The one thing I wonder about is how many parameters to define?
Hi Lars! See, a lot of Bluetooth MIDI controllers need to be tweaked on desktop apps - the Nanokontrol, for example - which is why exposed parameters are a much more flexible solution. How many to expose? As many as possible, I guess. You might want to look at Pianoteq 8 for iOS (the free version will show you all you need) - it has custom parameters exposed for each patch. I have no idea about the tech side of implementing that, though.
Thanks for this. I will have a look. I did have two parameters exposed in an early version but removed them temporarily for the first release. But the core code is there, I just need to learn a little bit more about customisation.
Rozeta LFO outputs on CC’s 13, 15 and 17 by default so I assigned these CC’s to various ADSR envelope and Neural Network x/y axis parameters. The results are endlessly morphing a held note… nice.
The updatefreq can only take the values 128, 256, 512, 1024, 2048, 4096. If you give it anything else (in your case, 0, 127, 254, ...), it will default to 512.
I need to explain what this is. If you have a sample playback rate of 48000 Hz and a frame buffer of 512 samples in AUM, the audio rendering code will be called 48000/512 = 93.75 times per second. The updatefreq in turn tells how often the Lua code should be called. If you set updatefreq = 256, the Lua code updates twice during one frame buffer.
That means your Lua code will be called 187.5 times per second, per generator. However, each generator has its own Lua interpreter. If you have all 8 generators running, that means in effect that the Lua interpreters get called 1500 times per second to generate the wave tables.
This puts some demand on the CPU. In some cases, you may actually need higher update frequency than the default 512 value, but in most cases I believe you can even go up to 1024 without hearing a clear difference.
Hence, changing both the number of generators to e.g. four and the update frequency can have very positive effects on your CPU performance, but may of course affect the patch performance instead.
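The arithmetic above, written out as a snippet (plain Lua, just the math from the explanation):

```lua
local samplerate = 48000   -- Hz
local framesize  = 512     -- AUM frame buffer, in samples
local updatefreq = 256     -- Lua update interval, in samples
local generators = 8       -- all generators running

local buffers_per_sec = samplerate / framesize                    -- 93.75
local calls_per_gen   = buffers_per_sec * framesize / updatefreq  -- 187.5
local total_calls     = calls_per_gen * generators                -- 1500
assert(total_calls == 1500)
```

Halving the generators to 4 and raising updatefreq to 1024 drops the total to 187.5 calls per second, which is where the CPU savings come from.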
Cool. I need to make a video on neural networks. Actually, I realise I need to make many videos.
Here is an image of how it works:
This is a neural network that has been trained on over 4000 waves, each 128 samples long. It has been trained to give the input as output, but it has to go through a middle layer with only two neurons. This is the so called "latent space". This means that every wave will get mapped into a two dimensional space. Once the network has been trained, we can run the network (this is called "inference").
The function VSAutoencoder2 returns an array of two values for any wave you give it. For instance, a sawtooth wave will have the 2D coordinate (-0.3347, -0.0877). This is the coordinate for sawtooth in latent space.
The function VSAutodecoder2 takes a coordinate in latent space and returns the corresponding wave. As you can see, it won't be exactly the same, but close enough.
This enables us to generate new waves by randomly selecting coordinates or choosing coordinates for a known wave form as a starting point.
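In script form, the round trip might look like this. VSAutoencoder2 and VSAutodecoder2 are the functions described above, but the table-based argument and return shapes are my assumption from the description, and saw_wave is a stand-in for a 128-sample wave table:

```lua
-- Encode a known wave into latent space, nudge it, and decode it back.
-- Argument/return shapes are ASSUMED from the description above.
local coord = VSAutoencoder2(saw_wave)             -- e.g. {-0.3347, -0.0877}
local x = coord[1] + (math.random() - 0.5) * 0.1   -- small random walk
local y = coord[2] + (math.random() - 0.5) * 0.1   -- around the saw coordinate
local wave = VSAutodecoder2({x, y})                -- a new, nearby 128-sample wave
```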
This kind of network has been used for many different things, like brain imaging where the purpose is to find different factors of brain morphology that can explain brain variability in the human population.
Thanks! I had one more knob, and I surmised updatefreq could help reduce the CPU load, so I hacked the last knob to change the frequency. It didn’t do much, and now I can see why. :^)
This is a truly excellent piece of Musical Audio coding on a par with some of the great C-based Audio API’s but packaged for AUv3. I’m very impressed. I hope a million hackers discover your product and use it for teaching, composing and sound design.
The general reader here is NOT a programmer, and frankly the math involved is challenging. Preset sharing could bring in an additional set of users, but it needs to be pretty simple.
I have looked for local and iCloud folders for my Presets and I can’t find any evidence there are any. I installed the macOS version (which is free after an iOS purchase, even though it will ask you to buy it). The Mac version did NOT see any iCloud Presets, so I suspect you intended to share iCloud presets but haven’t finished that feature yet. Dropping Presets with a standard extension (*.VSP?) into a local or iCloud folder sounds like the way to go.
Asking everyone to cut and paste text may be a limiter to adoption.
Thanks for all the kind words. Indeed, there is room for improvement and I love this kind of user interaction. The idea of a VSP patch format is excellent.
I need to take a look at why the iCloud folder isn't visible. It is visible for VividTracker and it should be for VividShaper as well, but maybe I need to change some settings for that to happen. In any case, if you save a patch in iCloud on your iPhone/iPad, you should see it on your Mac as well (and vice versa). It takes a little bit of time for it to arrive sometimes.
@VividSynths Can I just echo what @McD said here. As a programmer myself who, to be honest, hadn’t used Lua until a day ago, this is a pretty amazing app and I look forward to seeing what others can produce with it. Well done.