By way of clarification - Mozaic is a 100% self-contained environment that executes only its own scripting language, and only within the context of the currently running instance of the plugin. Nothing about the way it handles code placed into it is analogous to how iOS app development and deployment works.
Audulus 4 seems to be heading in the direction of making it possible to write declarative GUI elements using Lua. That might be closer to the mark.
But no ... there is nothing like a plugin-compatible architecture in iOS today. Something like that would be completely new territory and likely extremely difficult to integrate into the App Store / App Store Review ecosystem. It's a great vision, but almost a quantum leap from what's possible today.
Yeah, just like this. It's got its own mini language, right? Give it some keywords to control adding and placing UI. Allow it to reference a stylesheet. Whatever is easiest to adapt to how he already does his themes and layouts.
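Purely as illustration, a script-driven UI layer might look something like this sketch (every name here — `ui.addKnob`, `midi.sendCC`, the stylesheet hook — is invented; no such API exists today):

```js
// Hypothetical sketch only: declare UI elements from a script and
// style them via the host's existing theme system. All names invented.
ui.useStylesheet('mytheme.css');                         // reuse an existing theme
ui.addKnob({ id: 'cutoff', label: 'Cutoff', x: 2, y: 1 });
ui.addButton({ id: 'rec', label: 'Rec', x: 0, y: 0 });

ui.on('cutoff', (value) => {
  midi.sendCC(74, Math.round(value * 127)); // map knob 0..1 to MIDI CC 74
});
```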
But at first I wondered whether it would be possible to integrate popular audio scripting engines like Faust and SuperCollider.
If it were possible to wrap those in the deployment and make them available in the Xequence scripting environment, you could then add a pre-parse step to incorporate the Xequence UI control handling.
Haha, I wonder what gave it away to you as a hybrid app? It can't be because it's sluggish or a resource hog because it is neither 😜
Xequence has both an unfinished, but working AU host and an internal audio engine a la NS1. As far as iOS is concerned, a WKWebView is really like any other UIView and doesn't have any "special" requirements, nor is it a particular resource hog if you implement your web-based stuff carefully (which I try to do, and I think mostly successfully).
It is mostly because right now I cannot justify spending the time on Xequence that it deserves to move it into the future. The economic return is just too low. At the same time, I also don't want it to rot away silently on my SSD. It's a good product and it has loads of potential (including being a full DAW that runs in a browser -- it is already almost there, just no audio tracks. Sounds familiar, eh? 😂)
Also, I find the prospect of giving away a huge amount of code to the community and seeing it (hopefully) grow exciting! That would obviously include me still being involved, especially with helping new contributors find their way around the codebase.
I have my own in-house Cordova-esque framework which Xequence uses. It's not as full-featured, but probably more efficient and smaller, which is preferable to me.
And yes, moving to JS first would probably be a good idea. The output from the CS1 compiler is workable but not great -- I found a tool called "decaffeinate" that attempts to compile CS to JS more cleanly, and while it's better, it's still not amazing. But this should not be a major obstacle.
Yes, definitely. 95% of Xequence is developed with by far the most widespread software stack there is.
Technically, it would be almost trivial to integrate a scripting system that lets you do literally anything anywhere in Xequence. But yes, I'm not sure if this would be allowed by Apple, as I think one of the rules is that any "code" or code-esque "thing" may not be modified anymore after publishing. But I'm not sure if this also applies to scripting systems or "plugins".
The proper thing would be to define a well-designed API for scripts to use to interact with Xequence, and they should probably be run in a Worker thread anyway, for performance and isolation reasons. That might also put Apple's reviewers at ease (if they even understand the technical difference...)
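To make that concrete, here's a minimal sketch of what a Worker-isolated scripting layer could look like — the message shapes and the `insertNote` call are invented for illustration and are not Xequence's actual API:

```js
// --- main thread (host) --- hypothetical sketch, not Xequence's real API
const userScriptSource = `api.insertNote(1, 60, 0); // track, pitch, beat`;
const worker = new Worker('script-runner.js');

worker.onmessage = (e) => {
  const { cmd, args } = e.data;
  // The host only honors commands it explicitly handles, so a script
  // can never touch the DOM or app internals directly.
  if (cmd === 'insertNote') {
    console.log('script requested insertNote', args); // stub: call sequencer here
  }
};

worker.postMessage({ type: 'run', source: userScriptSource });

// --- script-runner.js (Worker side) ---
self.onmessage = (e) => {
  if (e.data.type !== 'run') return;
  // Hand the script a narrow API object instead of raw globals; the
  // Function constructor keeps it out of this scope's local variables.
  const api = {
    insertNote: (track, pitch, beat) =>
      self.postMessage({ cmd: 'insertNote', args: { track, pitch, beat } }),
  };
  new Function('api', e.data.source)(api);
};
```

A side benefit: a misbehaving script can busy-loop without freezing the UI, and the host can simply terminate() the Worker.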
I think I mentioned this idea before, and I'm not sure if it's appropriate to this discussion, but instead of having to remake Xequence completely, why not just build a kind of AUv3 'bridge' that would provide easier, pain-free connections to the ins and outs of Xequence?
So, for example, in AUM, you could insert instances of this 'Xeq Bridge' on a MIDI channel and specify the ins or outs of tracks in Xequence. In Xequence you'd see these and could easily jump back and forth to AUM instruments or whatever.
Is this more doable?
My problem has been not wanting to maintain two disconnected projects, i.e. save/load in Xequence and save/load in AUM.
I think your idea makes things simpler, though.
Here's a quick demo of Xequence's internal audio engine, which is entirely based on WebAudio. All of the following can run entirely in a browser:
Video briefly shows:
I shall make a more thorough demo at some point!
NB: Please mind the occasional audio glitches. My newest iPad is from 2017 and it's having a hard time keeping up with anything but playing cat videos 🥴
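For anyone wondering what "entirely based on WebAudio" means in practice, here's a generic minimal sketch of the kind of node graph such an engine assembles — standard WebAudio calls only, nothing Xequence-specific:

```js
// Minimal WebAudio voice: oscillator -> lowpass filter -> gain -> output.
// Standard API only; a real engine adds scheduling, polyphony, FX, etc.
const ctx = new AudioContext();

function playNote(freq, time, duration) {
  const osc = ctx.createOscillator();
  const filter = ctx.createBiquadFilter();
  const amp = ctx.createGain();

  osc.type = 'sawtooth';
  osc.frequency.value = freq;
  filter.type = 'lowpass';
  filter.frequency.value = 2000;

  // simple attack/release envelope on the gain node
  amp.gain.setValueAtTime(0, time);
  amp.gain.linearRampToValueAtTime(0.8, time + 0.01);
  amp.gain.linearRampToValueAtTime(0, time + duration);

  osc.connect(filter).connect(amp).connect(ctx.destination);
  osc.start(time);
  osc.stop(time + duration);
}

// play a short arpeggio
[220, 261.63, 329.63].forEach((f, i) =>
  playNote(f, ctx.currentTime + i * 0.25, 0.5));
```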
If you use Audiobus to host X2 and AUM, Audiobus is able to save and restore the state of both apps ... until it doesn't. This has worked stably for me about 95% of the time. Unfortunately that 5% means saving projects in both apps just in case. Setup time is fast though when you don't have to resort to the 5% backup.
(Offered just as a point of reference. Not a suggestion to change workflow or project goals.)
Searching for JS projects with Faust/SuperCollider integration turns up these projects:
(among others)
https://github.com/grz0zrg/fsynth
And
https://github.com/grz0zrg/fas
Wow, I had no idea that was a thing. The one mixer app I don't have. I wonder if that's possible in apeMatrix.
No, it's a special Audiobus API that has to be specifically integrated into the app (Xequence).
Incredible!
I had no idea you were so far along!
Should I link this video in the first post?
Yes, my "90% Finished Projects" list now extends to the orbit of Mars and back 😂 maybe I need a manager!
Here's a track produced entirely with Xequence 2's internal modular synth / FX:
Sure, that might be a good idea!
As mentioned, I've started a proper webpage on seven.systems dedicated to this whole project, but maybe this thread is a good public place to dump, discuss and collect material first.
It seems like the last 10% takes a whole new 100% of the work to finish, like some exponential/Sisyphus moment. At the very least you will need a lot of encouragement!
Fantastic track! Really dig it!
Your DAW system certainly surpassed my expectations, haha. So happy you want to open this up to the public.
Yes, it seems this is kinda the norm with most larger projects.
🙏
It's not a DAW! It doesn't have audio tracks! 😉
Lol, right, right. Doesn't have audio tracks yet (insert Homer-to-Bart pic).
MIDI DAW, then.
So what do you think about development before release? In its current state, without AUv3 hosting, etc., it's certainly a capable app. I guess we can wait for wim for the market analysis.
Stacking more features on top of what you already have will certainly bring more attention, but how does that relate to the amount of effort you want to devote before and after the Kickstarter, and where do you want to pass the baton?
That video was an excellent motivator for interest in this project!
Trite. But here's a sincere question: what, other than MPE editing, significantly differentiates the X2 (MIDI) DAW from NS2 if it doesn't have audio tracks? Not to mention the lack of a tempo track and the work needed to come up with anything close to Obsidian and Slate.
Sorry guys. I'm still not seeing the benefit of this line of thinking. Put 1/10th the effort into an AUv3 plugin version and then you really have something attractive.
I know ... I said I'd shut up. I'm trying. 😬
Among others. It could be extended by anyone who knows web development, and runs in a web browser:
It also has a modular synth which may be a bit harder to use than Obsidian, but is pretty flexible. Here's the "Add Module" dropdown:
You remind me of another beta-tester-turned-skeptic 😂 but that's okay.
It's all good, I'm just testing the waters.
Fair enough.
But I'll just leave a final data point - meant to be helpful (really!) and then leave you all alone for real this time. It sure has been an interesting discussion though, and I sincerely wish you success.
I would not participate in a kick starter for this. I just don't see it as a practical direction.
Sick! I bet you get some excellent performance on desktop
My desktop is from 2016 😉 it's about the same as my iPad regarding the maximum number of synths / plug-ins.
Omg hahah, great for testing, probably a pain for development. You need some new hardware for sure!
Speaking of which: what's the story on multi-core? You'd need to use Web Workers or something?
I'm 44, I need new hardware in general!
It's currently using standard WebAudio nodes, so it's up to the WebAudio implementation in whatever browser / web view it's running in. I think currently, all implementations are single-threaded, unfortunately.
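(For what it's worth, AudioWorklet at least moves custom DSP off the main thread onto the dedicated audio rendering thread — still a single thread for the whole graph, but main-thread UI work can no longer glitch the audio. A minimal sketch:)

```js
// --- noise-processor.js: runs on the audio rendering thread ---
class NoiseProcessor extends AudioWorkletProcessor {
  process(inputs, outputs) {
    for (const channel of outputs[0]) {
      for (let i = 0; i < channel.length; i++) {
        channel[i] = (Math.random() * 2 - 1) * 0.1; // quiet white noise
      }
    }
    return true; // keep the node alive
  }
}
registerProcessor('noise-processor', NoiseProcessor);

// --- main thread (inside an async function or module) ---
const ctx = new AudioContext();
await ctx.audioWorklet.addModule('noise-processor.js');
new AudioWorkletNode(ctx, 'noise-processor').connect(ctx.destination);
```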
GPT thinks so too!