
Xequence as open source project (Kickstarter)


Comments

  • @SevenSystems said:

    Among others. It could be extended by anyone who knows web development, and runs in a web browser:

    It also has a modular synth which may be a bit harder to use than Obsidian, but is pretty flexible. Here's the "Add Module" dropdown:

    Sick! I bet you get some excellent performance on desktop

    My desktop is from 2016 😉 it's about the same as on my iPad regarding maximum number of synths / Plug-ins.

    Omg hahah, great for testing, probably a pain for development. You need some new hardware for sure

    I'm 44, I need new hardware in general!

    @5k3105 said:
    Speaking of: what's the story on multi-core? You'd need to use web workers or something?

    It's currently using standard WebAudio nodes, so it's up to the WebAudio implementation in whatever browser / web view it's running in. I think currently, all implementations are single-threaded, unfortunately.

    GPT thinks so too!

    I was afraid of that.

    What package manager do you use? npm, eslint, deno? (I'm a Go dev, just passing familiarity with JS)

    44 is still young

  • edited December 2023

    @5k3105 said:

    What package manager do you use? npm, eslint, deno? (I'm a Go dev, just passing familiarity with JS)

    Oh, I use the highly sophisticated <script> package manager 😁

    Jokes aside: During development, scripts are just sitting there and included individually with <script> tags. For deployment in the App Store, my IDE has a top secret feature to indeed package all compiled and minified and multiply encrypted JS code with the app binary. So yeah, no standard package manager used (in fact, I do not use any standard software whatsoever for anything, apart from the browser and the e-mail client...)

  • @SevenSystems said:

    Oh, I use the highly sophisticated <script> package manager 😁

    Jokes aside: During development, scripts are just sitting there and included individually with <script> tags. For deployment in the App Store, my IDE has a top secret feature to indeed package all compiled and minified and multiply encrypted JS code with the app binary. So yeah, no standard package manager used (in fact, I do not use any standard software whatsoever for anything, apart from the browser and the e-mail client...)

    Nice! Much respect!

  • edited December 2023

    @SevenSystems said:
    Haha, I wonder what gave it away to you as a hybrid app? It can't be because it's sluggish or a resource hog because it is neither 😜

    just overall layout, the style of UI, probably sixth sense more than anything else lol .. I've worked with this tech for 25 years, so I can feel when it is hidden behind :-)

    Wondering if the touch latency when sending messages to the Objective-C layer was solved - last time I tried WKWebView there was about 40 ms latency (when sending messages from WKWebView to the underlying Objective-C code).

    Xequence has both an unfinished, but working AU host and an internal audio engine a la NS1. As far as iOS is concerned, a WKWebView is really like any other UIView and doesn't have any "special" requirements, nor is it a particular resource hog if you implement your web-based stuff carefully (which I try to do, and I think mostly successfully).

    Interesting that you tried this … I always somehow thought using WebAudio for this was not a good idea .. first of all you are locked to just the generic nodes (not sure how performant JS inside WKWebView is for your own low-level DSP code, but I'm afraid it will not be ideal - did you try some performance tests for this?) ...

    I was thinking about using WKWebView just for the UI and then using C libraries for the DSP - but that huge WKWebView <> native code lag back then stopped me from continuing down this path, and I gave up on my "I am going to make a DAW" tryout :-)) If this was fixed, great!

    It is mostly because right now I cannot justify spending the time on Xequence it deserves to move it into the future. The economic return is just too low. At the same time, I also don't want it to rot away silently on my SSD. It's a good product and it has loads of potential (including being a full DAW that runs in a browser -- it is already almost there, just no audio tracks. Sounds familiar eh? 😂)

    I think I understand. But to make this sustainable, you would still need to invest significant time to manage it. I can see only one way this may work - the way Synthstrom Audio did it with the Deluge: you would be the one who reviews and approves all commits on GitHub and at some point releases them as an update .. which would still need significant involvement from you, so I'm afraid it would not solve your main problem.

    Don't get me wrong.. it's a nice idea with huge potential, but the devil is always hidden in the details.. So be sure you get from it exactly what you want, otherwise it may just add to your frustration and consume even more free time.

  • @dendy said:

    @SevenSystems said:
    Haha, I wonder what gave it away to you as a hybrid app? It can't be because it's sluggish or a resource hog because it is neither 😜

    just overall layout, the style of UI, probably sixth sense more than anything else lol .. I've worked with this tech for 25 years, so I can feel when it is hidden behind :-)

    Probably your 6th sense exclusively, because layout and style obviously has nothing to do with tech ;)

    Wondering if the touch latency when sending messages to the Objective-C layer was solved - last time I tried WKWebView there was about 40 ms latency (when sending messages from WKWebView to the underlying Objective-C code).

    Yes, that's been solved long ago -- iOS used to create a new JSContext every time a message was sent. This is now kept alive and thus the latency is negligible (in the low single-digit milliseconds).

    Xequence has both an unfinished, but working AU host and an internal audio engine a la NS1. As far as iOS is concerned, a WKWebView is really like any other UIView and doesn't have any "special" requirements, nor is it a particular resource hog if you implement your web-based stuff carefully (which I try to do, and I think mostly successfully).

    Interesting that you tried this … I always somehow thought using WebAudio for this was not a good idea .. first of all you are locked to just the generic nodes (not sure how performant JS inside WKWebView is for your own low-level DSP code, but I'm afraid it will not be ideal - did you try some performance tests for this?) ...

    It's complicated ™. The generic nodes combined together are good enough to create pretty much anything apart from a very few niche cases. Actually, I can only think of decimation (reducing the sampling rate to produce aliasing), which has no straightforward way to be implemented using nodes (that's why Xequence's audio engine doesn't have a decimator 😄). This could however indeed be solved by using an AudioWorklet. I haven't worked with these a lot, but for a few decimator FX in a project that should definitely be performant enough. Also note that JS engines are not the same as decades ago. It is now for many tasks on par with C regarding performance!
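Since decimation is named as the one effect with no straightforward node-based implementation, here is a minimal sketch of the sample-and-hold core an AudioWorklet version might run. It's plain JS so it can be read outside a browser; the function name and factor are illustrative, not Xequence code.

```javascript
// Sample-and-hold decimation: hold each input value for `factor` samples,
// which lowers the effective sample rate and produces the aliasing
// a decimator effect is after.
function decimate(input, factor) {
  const output = new Float32Array(input.length);
  let held = 0;
  for (let i = 0; i < input.length; i++) {
    if (i % factor === 0) held = input[i]; // pick up a fresh sample
    output[i] = held;                      // hold it for the next samples
  }
  return output;
}

// Inside an AudioWorkletProcessor, something like this would run once per
// 128-sample render quantum:
//   process(inputs, outputs) {
//     outputs[0][0].set(decimate(inputs[0][0], 4));
//     return true;
//   }

const out = decimate(Float32Array.from([1, 2, 3, 4, 5, 6]), 2);
// out is [1, 1, 3, 3, 5, 5]
```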

    I was thinking about using WKWebView just for the UI and then using C libraries for the DSP - but that huge WKWebView <> native code lag back then stopped me from continuing down this path, and I gave up on my "I am going to make a DAW" tryout :-)) If this was fixed, great!

    Yeah. Even back then there were workarounds, like adding an ObjC event handler for the JS prompt function. You could then just send messages to ObjC using prompt('message'), which was much more performant than using postMessage.
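The shape of that old prompt() workaround, sketched with illustrative names: the JS side funnels a serialized message through prompt(), which on the native side a WKUIDelegate's runJavaScriptTextInputPanelWithPrompt handler can intercept and answer synchronously. The fake native handler below stands in for the ObjC side so the flow can be seen end to end.

```javascript
// Sketch of the prompt() bridge. In a real WKWebView you would pass
// window.prompt as promptFn; the WKUIDelegate intercepts the call and
// its completion handler's return value comes back as prompt's result.
function makeBridge(promptFn) {
  return {
    send(type, payload) {
      const reply = promptFn(JSON.stringify({ type, payload }));
      return reply ? JSON.parse(reply) : null;
    },
  };
}

// Illustrative stand-in for the native handler (not real WebKit API):
const fakeNative = (msg) =>
  JSON.stringify({ ok: true, echoed: JSON.parse(msg).type });

const bridge = makeBridge(fakeNative);
const reply = bridge.send("noteOn", { pitch: 60 });
// reply is { ok: true, echoed: "noteOn" }
```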

    It is mostly because right now I cannot justify spending the time on Xequence it deserves to move it into the future. The economic return is just too low. At the same time, I also don't want it to rot away silently on my SSD. It's a good product and it has loads of potential (including being a full DAW that runs in a browser -- it is already almost there, just no audio tracks. Sounds familiar eh? 😂)

    I think I understand. But to make this sustainable, you would still need to invest significant time to manage it. I can see only one way this may work - the way Synthstrom Audio did it with the Deluge: you would be the one who reviews and approves all commits on GitHub and at some point releases them as an update .. which would still need significant involvement from you, so I'm afraid it would not solve your main problem.

    Don't get me wrong.. it's a nice idea with huge potential, but the devil is always hidden in the details.. So be sure you get from it exactly what you want, otherwise it may just add to your frustration and consume even more free time.

    Thanks, those are good points of course. Yes, the only way this would possibly work is if my involvement is limited to answering questions and helping with people getting acquainted to the code base in the first few weeks.

  • @SevenSystems
    Yes, that's been solved long ago -- iOS used to create a new JSContext every time a message was sent. This is now kept alive and thus the latency is negligible (in the low single-digit milliseconds).

    That’s really cool !! Now this reignited my interest :-)

    It's complicated ™. The generic nodes combined together are good enough to create pretty much anything apart from very few niche cases.

    Well, I am thinking for example about custom filters (the default ones are “not great, not terrible” :)), or custom oscillators (is there, for example, an advanced wavetable oscillator? .. ok, I should check the docs, it's 2-3 years since I last played with it) .. stuff like that .. but yeah, for generic stuff it's probably ok ..

    Also note that JS engines are not the same as decades ago. It is now for many tasks on par with C regarding performance!

    wow that is really great news !

  • edited December 2023

    @dendy said:

    @SevenSystems
    It's complicated ™. The generic nodes combined together are good enough to create pretty much anything apart from very few niche cases.

    Well, I am thinking for example about custom filters (the default ones are “not great, not terrible” :)), or custom oscillators (is there, for example, an advanced wavetable oscillator? .. ok, I should check the docs, it's 2-3 years since I last played with it) .. stuff like that .. but yeah, for generic stuff it's probably ok ..

    You can define custom IIR filters:

    https://developer.mozilla.org/en-US/docs/Web/API/IIRFilterNode

    However, last time I tried this it was not possible to change the coefficients once created, so you had to create a new node every time you wanted to change anything about your filter (!). This, together with apparent memory leaks in the implementation (after removing and creating a few hundred nodes, the browser / webview crashes), isn't great 😉 But you can also create better / custom filters by combining several BiquadFilter nodes and blending their frequency / phase responses in a creative way.
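As a reference point for combining BiquadFilter nodes: the lowpass coefficients from the RBJ "Audio EQ Cookbook", which is essentially the math a BiquadFilterNode of type "lowpass" evaluates internally per the Web Audio spec. A plain-JS sketch; the function name is made up for illustration.

```javascript
// RBJ "Audio EQ Cookbook" lowpass biquad coefficients, normalized by a0.
// freq is the cutoff in Hz, q the resonance, sampleRate in Hz.
function lowpassCoefficients(freq, q, sampleRate) {
  const w0 = (2 * Math.PI * freq) / sampleRate;
  const alpha = Math.sin(w0) / (2 * q);
  const cosw0 = Math.cos(w0);
  const a0 = 1 + alpha;
  return {
    b0: ((1 - cosw0) / 2) / a0,
    b1: (1 - cosw0) / a0,
    b2: ((1 - cosw0) / 2) / a0,
    a1: (-2 * cosw0) / a0,
    a2: (1 - alpha) / a0,
  };
}

// One sample of the resulting difference equation:
//   y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
const c = lowpassCoefficients(1000, 0.707, 44100);
```

Notably, BiquadFilterNode exposes frequency and Q as automatable AudioParams, so recombining biquads sidesteps the frozen-coefficient problem of IIRFilterNode entirely.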

    Yes, you can create wavetable oscillators by using WaveShaper nodes creatively (sounds strange, but it's true!). Not sure if they're "advanced", but they are wavetable 😄
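A sketch of why the WaveShaper trick works: the node's curve is just a lookup table with linear interpolation over the input range [-1, 1], so driving it with a full-range sawtooth ramp scans the curve once per ramp cycle, i.e. the curve behaves as a single-cycle wavetable. Plain JS, names illustrative.

```javascript
// The curve lookup a WaveShaperNode performs, written out as plain JS:
// map x in [-1, 1] onto the curve array with linear interpolation.
function shape(curve, x) {
  const n = curve.length;
  const pos = ((x + 1) / 2) * (n - 1);
  const i = Math.min(Math.floor(pos), n - 2);
  const frac = pos - i;
  return curve[i] * (1 - frac) + curve[i + 1] * frac;
}

// A 257-point single-cycle sine "wavetable" used as the curve:
const table = Float32Array.from({ length: 257 },
  (_, i) => Math.sin((2 * Math.PI * i) / 256));

// Scanning the shaper input from -1 to 1 plays one cycle of the table,
// which is exactly what a sawtooth OscillatorNode feeding the shaper does:
const cycle = [];
for (let x = -1; x <= 1; x += 1 / 128) cycle.push(shape(table, x));
```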

    @SevenSystems
    Also note that JS engines are not the same as decades ago. It is now for many tasks on par with C regarding performance!

    wow that is really great news !

    Example:

    https://stackoverflow.com/questions/27432973/why-is-this-nodejs-2x-faster-than-native-c

    Of course I still love C and the ability to manually handle everything including memory allocation etc., but for quick, "cheap" development of large applications, the performance tradeoffs between C and JS are really not crucial anymore. Memory footprint is more problematic, but even that can be handled properly in JS if you're careful.
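To make "on par with C" concrete: the kind of tight, monomorphic, typed-array loop where modern JS JITs do well is exactly DSP inner-loop code. An illustrative (not rigorous) benchmark, a one-pole lowpass over a million samples:

```javascript
// One-pole lowpass: y[n] = y[n-1] + c * (x[n] - y[n-1]).
// Typed arrays plus a monomorphic numeric loop is the JIT-friendly case.
function onePole(input, coeff) {
  const out = new Float64Array(input.length);
  let y = 0;
  for (let i = 0; i < input.length; i++) {
    y += coeff * (input[i] - y);
    out[i] = y;
  }
  return out;
}

const x = new Float64Array(1_000_000).fill(1); // step input
const t0 = Date.now();
const y = onePole(x, 0.01);
console.log(`${Date.now() - t0} ms for 1M samples`);
// For a step input the output converges toward 1: y[k] = 1 - (1 - c)^(k + 1)
```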

  • very interesting, will definitely play with it, looks like it went a nice path forward since i checked it last time, thanks for info 👍🤝

  • @dendy said:
    very interesting, will definitely play with it, looks like it went a nice path forward since i checked it last time, thanks for info 👍🤝

    Sure. And regarding sound quality etc., the following track has been produced completely with standard Web Audio nodes, including mastering 😉

  • @SevenSystems said:

    @dendy said:
    very interesting, will definitely play with it, looks like it went a nice path forward since i checked it last time, thanks for info 👍🤝

    Sure. And regarding sound quality etc., the following track has been produced completely with standard Web Audio nodes, including mastering 😉

    sounds definitely good 👍👍👍

  • edited January 2

    .

  • wim
    edited January 4

    @SevenSystems said:
    I've researched this a bit, and it seems like GPLv3 is actually (now) compatible with the App Store. So no problems there!

    If it's not too much trouble, I'd love to see a link or two to the source(s) for that info. That's really good news.

  • @wim said:

    @SevenSystems said:
    I've researched this a bit, and it seems like GPLv3 is actually (now) compatible with the App Store. So no problems there!

    If it's not too much trouble, I'd love to see a link or two to the source(s) for that info. That's really good news.

    I think it’s not the license per se, but that it requires the permission of all contributors; that’s what keeps Blender out of the App Store. Just too much of a logistical headache to sign up every last one of hundreds…

  • @wim said:

    @SevenSystems said:
    I've researched this a bit, and it seems like GPLv3 is actually (now) compatible with the App Store. So no problems there!

    If it's not too much trouble, I'd love to see a link or two to the source(s) for that info. That's really good news.

    I'd be interested, too, as all past legal analyses indicated that GPLv3 is not compatible with the App Store, and that the reason Apple doesn't allow such apps is to avoid the legal hassle of being sued for violating the license.

  • edited January 4

    @Krupa said:


    I think it’s not the license per se, but that it requires the permission of all contributors; that’s what keeps Blender out of the App Store. Just too much of a logistical headache to sign up every last one of hundreds…

    That's how I understood the issue with other projects like Surge and SuperCollider. How would it work in this case, though? At the beginning, all the code has a single contributor who can give permission, but how would the project lead(s) ensure that future contributors consent as well? Is it a one-by-one kind of thing, or would a disclaimer in the source code / repository be enough? Also: is that permission being granted effectively a dual-licensing of the software? I've asked these questions before on the forum but I've never fully understood.

  • @SevenSystems Personally, what I am most interested in, for society, is open source software that can be easily adapted for educational purposes.

    If a novice programmer wanted to contribute to the project, or use its components to build their own DAW, they would probably need a lot of guidance.

    So, if two people agree to maintain it, and then life gets in the way...Apple updates will eventually stop the code from working, right?

    But on the other hand...
    if you have most of the components of a DAW,
    and the code is designed to be somewhat modular,
    and any novice could build their own DAW using markup and a tiny bit of code...

    Then you have something that could appeal to a wider audience.

    If it's already running in a web browser, that's great for publicity. When a new web browser DAW raises the bar, I always see videos about it on my YouTube feed.

    Rather than focus on adding more features, I would reduce it to the most minimalistic v0.1 program possible, and then add features until you get to 1.0. This would turn it into something like a "How to Draw Owls" book. The intermediate versions would act as educational resources, rather than representing the actual earlier versions. People would then be donating money for music tech education.

    Personally, I want to see the design of mobile DAWs continue to evolve. If you ask people what they want, they'll probably name features from desktop DAWs. But when something comes along with a much faster workflow, they'll probably make the jump. Maybe that program could have Xequence 2's DNA in it. Possible, or no?

  • @wim said:

    @SevenSystems said:
    I've researched this a bit, and it seems like GPLv3 is actually (now) compatible with the App Store. So no problems there!

    If it's not too much trouble, I'd love to see a link or two to the source(s) for that info. That's really good news.

    Sorry, I don't remember the sources anymore, but I did research it for half an hour or something and found several posts which seemed to confirm that GPLv3 is compatible with the App Store.

  • The user and all related content has been deleted.
  • @tja said:

    @SevenSystems said:

    Sorry, I don't remember the sources anymore, but I did research it for half an hour or something and found several posts which seemed to confirm that GPLv3 is compatible with the App Store.

    A discussion about this, with links.

    https://opensource.stackexchange.com/questions/9500/is-apple-allowed-to-distribute-gplv3-licensed-software-through-its-ios-app-store

    Notice that GPL’s primary author says in that discussion that the App Store and GPLv3 are incompatible.

    Somewhere I read a detailed analysis a few years ago by a lawyer in the field who explained the issues and why Apple wanted to avoid landing in court over the issue.

  • edited January 8

    Conveniently, any doubt regarding this issue has just been accidentally and authoritatively resolved:

    https://forum.audiob.us/discussion/59156/ot-scummvm-is-on-the-appstore#latest

    (OT: can't wait to play Loom in it!)

  • @tja said:

    A discussion about this, with links.

    https://opensource.stackexchange.com/questions/9500/is-apple-allowed-to-distribute-gplv3-licensed-software-through-its-ios-app-store

    Hah, that's actually my primary source, I think. But see below. There is now an actual GPLv3 app on the App Store, in the flesh (or whatever you say in English). The issue is settled.

    @5k3105 I would like to reinforce what I sense is your main desire: a self-contained DAW like NS2, but with continuing development and support.

    There are 2 things that NS2 provides that make it the only iOS DAW that is of use to me.
    1. Outstanding and intuitive midi editing
    2. Outstanding built in synths.

    Xequence already has #1. The lack of #2 keeps me from using it. I have tried many times to make it work for me, but all its benefits as a world-class sequencer are negated by this. The steps needed to add synths destroy the creative workflow. I often want to quickly add another sound and move on. Instead I have to stop, figure out if there are any MIDI channels left, go over to AUM, add a track and synth, set a patch, set the MIDI channel, then wire everything up. This is a buzz-kill for me. It also severely limits the number of tracks. I can easily have 40 or 50 tracks in NS2 without straining the CPU. I am told that's because of the inherent efficiency of built-in synths.

    @SevenSystems says he’s already almost at NS1. Releasing that with a flexible and efficient built in synth (like Obsidian) might bootstrap enough interest for other devs to jump in and assist in some way. What nobody wants is more abandonware because life intrudes on one developer.

    You can put me down to buy into a go fund me or one of the other solutions discussed here.

  • edited January 12

    @boomer said:
    @5k3105 . I would like to reinforce what I sense is your main desire: a self contained DAW like NS2 but with continuing development and support.

    There are 2 things that NS2 provides that make it the only iOS DAW that is of use to me.
    1. Outstanding and intuitive midi editing
    2. Outstanding built in synths.

    Xequence already has #1. The lack of #2 keeps me from using it. I have tried many times to make it work for me, but all its benefits as a world-class sequencer are negated by this. The steps needed to add synths destroy the creative workflow. I often want to quickly add another sound and move on. Instead I have to stop, figure out if there are any MIDI channels left, go over to AUM, add a track and synth, set a patch, set the MIDI channel, then wire everything up. This is a buzz-kill for me. It also severely limits the number of tracks. I can easily have 40 or 50 tracks in NS2 without straining the CPU. I am told that's because of the inherent efficiency of built-in synths.

    You really should look into working with templates.

    Make a Xequence project template that already has 16 instruments, set to MIDI channel 1 to 16, all already pointing at AUM.

    In AUM, make a project template with 16 channels, all having the MIDI filtering set up already to channel 1 to 16, and set to receive MIDI from 'AUM Destination'.

    Start every project by loading those two templates.

    All you have to do then is when you need a new sound, head over to AUM (one tap on the "AUM" button in Xequence if it's loaded as IAA into AUM), long-tap the top "circle" of an unused AUM channel, and choose "Replace". Tap on the Xequence icon in AUM to go back to Xequence. Boom.

    @SevenSystems says he’s already almost at NS1. Releasing that with a flexible and efficient built in synth (like Obsidian) might bootstrap enough interest for other devs to jump in and assist in some way. What nobody wants is more abandonware because life intrudes on one developer.

    You can put me down to buy into a go fund me or one of the other solutions discussed here.

    Thank you 🙏 also for the compliments.

  • @SevenSystems said:

    You really should look into working with templates. [...] All you have to do then is when you need a new sound, head over to AUM (one tap on the "AUM" button in Xequence if it's loaded as IAA into AUM), long-tap the top "circle" of an unused AUM channel, and choose "Replace". Tap on the Xequence icon in AUM to go back to Xequence. Boom.

    I didn't know about this. Works well.

  • @5k3105 said:

    I didn't know about this. Works well.

    Yes, in a setup like this, having all the technicalities ready-to-go from the start is a huge help.

  • edited February 20

    So I've been using Xequence for two years.
    I have a use case for it that may be quite different from others'. I use it to control mostly hardware, and it is my recorder for all MIDI data coming from other hardware sequencers, synths, and a MIDI keyboard.

    I don't wanna advocate for my own case alone. Still, I feel the app should stay focused on what it is. An audio engine, for me, would just add features - and thus menus and buttons - that I don't need. Hope that doesn't sound discouraging. It being a very focused app is part of its strength imo.

    As a MIDI sequencer alone, there are still quite a few features that could be added, or existing ones optimized, to make it perfect.
    My top ideas for that:

    • A note-sequencer mode. By that I mean being able to enter a sequence step by step, with a cursor moving forward for each MIDI note
    • Control of the transport and the editor through MIDI or qwerty shortcuts. For example, I would love to nudge notes with ctrl+left/right to move them on the grid within the piano roll, and ctrl+shift+left/right for nudging in microtiming
    • A "Drums to own tracks" feature, similar to the current "Controllers to own tracks". The idea would be to split a track linked to a drum map into separate tracks: "Kick", "Snare", etc.

    I'd have much more if a future or present community wants to know :)

    As for the open source idea, I honestly don't know what to say. Like others here, I'm a bit skeptical, having seen projects being open-sourced and barely carrying on. But there are enough positive examples as well, like the already mentioned Blender, AGS...

    Taking it to other platforms sounds like a very good idea to me. iOS is niche, and maintenance of apps seems to be more time-consuming than on other platforms, I feel - though I'm not saying that from dev experience.
