
VividShaper by VividSynths (Released)

Comments

  • Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja’s scripting engine for the composition).

  • This looks like a total hoot, and a much quicker way to get something interesting going with Lua than in Audulus.

  • edited July 2023

    @catherder said:
    Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja’s scripting engine for the composition).

    Absolutely, it’d be nice to have Sonic Pi on iOS. (https://sonic-pi.net/)

  • @ik2000 said:

    @catherder said:
    Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja’s scripting engine for the composition).

    Absolutely, it’d be nice to have Sonic Pi on iOS. (https://sonic-pi.net/)

    Wasn't there an iOS version of ChucK at some point in the past, or did I dream that?

  • There was a port of Csound on iOS called RunLoopSound, but it is no longer available for some reason. I really want SuperCollider on my iPad and Android!

  • @Gavinski said:

    @McD said:

    @Gavinski said:

    @McD said:
    Looking at the dev’s homepage, I noticed he’s running a Discord group to share code, ask questions, etc.

    It looks like a side hobby for the code-savvy, but it’s so much more powerful to turn knobs and follow the history and trends of modern synthesis than to create data structures and routines. Excepting the mangling of MIDI for me with Mozaic.

    Still, I will track this effort until I hear a demo that moves me.

    Do you have to program every note? Or does it accept MIDI? Who wants to fix the output in code?

    You don't have to program every note! You can program particular patches for particular notes if you want, but no, not necessary at all. It looks like an efficient and simple coding language.

    Thanks… I watched your video demo and took the plunge.

    Not able to listen to that at the moment, but I will later. As a Mozaic guy, you'll find Lua ridiculously easy.

    I started by hitting the HELP button and editing the ADSR values to see if I could make some "popping sounds"... I got some short sounds but haven't found the right settings for a true "pop". It makes me wish I could start from an uploaded sample.
    I suspect over time the developer will add that as a feature. I haven't found any kind of roadmap or approved feature-request list yet, but I suspect that kind of planning will emerge on the Discord channel.

    Still... there's something compelling in the delivered range of presets that validates the tech here. I love a synth that morphs, and that's what's being demoed in these presets... sounds that deliver an evolving soundscape over many seconds. That's something you typically won't get from samples, due to storage costs, but some synths are also good at it, and they are standouts for going beyond the typical preset landscape. It's something that Korg has been so good at over many years.

    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    Time to look for a Lua YouTube tutorial to see what's possible for looping, data structures, and modularity/functions.

    I'll report back.

  • @catherder said:
    Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja’s scripting engine for the composition).

    FWIW, Audulus is fully capable of note-related programming and processing.

  • wim
    edited July 2023

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

  • @wim said:

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

    I can’t believe you won’t test the waters. Programmers should learn a new language every year… this could be the year of “Lua”. I found a Lua tutorial on YouTube and pasted the code right into VividShaper to see what might happen, and it errored out, so it supports only a subset of Lua behaviors. No text output, for example, using print. I was hoping there might be a window for exchanging text prompts and input replies.

    I will wait for @espiegel123 and @_Ki. I think we can pull you into a look see. It’s similar to Mozaic in that you can be a user without writing a single line of code. The Presets are compelling. Maybe @thesoundtestroom will do a Preset survey video to show the power of the AUv3 Synth functionality.

    I tried setting up Preset switching in AUM but it didn’t seem to work. Maybe someone can crack that nut… I think AUM Parameter functionality is needed in another update.

  • @McD said:

    @wim said:

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

    I can’t believe you won’t test the waters. Programmers should learn a new language every year… this could be the year of “Lua”. I found a Lua tutorial on YouTube and pasted the code right into VividShaper to see what might happen, and it errored out, so it supports only a subset of Lua behaviors. No text output, for example, using print. I was hoping there might be a window for exchanging text prompts and input replies.

    I will wait for @espiegel123 and @_Ki. I think we can pull you into a look see. It’s similar to Mozaic in that you can be a user without writing a single line of code. The Presets are compelling. Maybe @thesoundtestroom will do a Preset survey video to show the power of the AUv3 Synth functionality.

    I tried setting up Preset switching in AUM but it didn’t seem to work. Maybe someone can crack that nut… I think AUM Parameter functionality is needed in another update.

    The dev watched my vid and, in response to some of my observations, has said that he will a) improve the preset system, at least adding batch import/export and tagging, and b) expose the AUv3 parameters 🔥

  • @McD: I learned Lua this year since Audulus can use it for creating custom graphic UI elements. They recently added the ability to use it for DSP coding, too.

    I’ve got my hands full with the projects I am already working on.

  • wim
    edited July 2023

    @McD said:

    @wim said:

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

    I can’t believe you won’t test the waters. Programmers should learn a new language every year… this could be the year of “Lua”.

    I program but I'm not a programmer. If programming is the only way to accomplish something, even if it means learning a new language, then I'll obsess over it until I do. Twiddling synth knobs doesn't fall into that category. 😉

  • @wim said:

    @McD said:

    @wim said:

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

    I can’t believe you won’t test the waters. Programmers should learn a new language every year… this could be the year of “Lua”.

    I program but I'm not a programmer. If programming is the only way to accomplish something, even if it means learning a new language, then I'll obsess over it until I do. Twiddling synth knobs doesn't fall into that category. 😉

    Still… you put out Mozaic code at a truly impressive rate when prompted with something worth writing. I think this might be a similar situation as people request interesting “waveforms” and would like a little help. It’s only $6.

  • wim
    edited July 2023

    @McD said:

    @wim said:

    @McD said:

    @wim said:

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

    I can’t believe you won’t test the waters. Programmers should learn a new language every year… this could be the year of “Lua”.

    I program but I'm not a programmer. If programming is the only way to accomplish something, even if it means learning a new language, then I'll obsess over it until I do. Twiddling synth knobs doesn't fall into that category. 😉

    Still… you put out Mozaic code at a truly impressive rate when prompted with something worth writing. I think this might be a similar situation as people request interesting “waveforms” and would like a little help. It’s only $6.

    Sorry, I just can't see it. There ain't nothing about a waveform that needs to be created from custom code. Helping people do things that are (IMO) of little use isn't my thing. I can't imagine there's anything that essential or unique about a waveform just because it was created with Lua. I'd sooner use a synth that lets you draw your own waveforms than bury my head in code to do it.

    And besides, how the heck would anyone describe what the heck they wanted in terms that someone else could turn it into code?? 😂

    Enough my friend. Good try. But no one needs to read about why I'm not interested in an app.

  • FYI: Lua 5.1 has 21 reserved keywords (5.4 has 22, with goto)… I have no idea how many will work in VividShaper, but that’s a good place to start learning how to create functions and data structures, I think.
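A quick plain-Lua sketch of both of those, functions and tables (Lua's one data structure), for anyone starting from zero; nothing here is VividShaper-specific:

```lua
-- Tables are Lua's single data structure: they act as arrays,
-- dictionaries, and records all at once.
local harmonics = { 1.0, 0.5, 0.25 }            -- array-style
local patch = { name = "pop", attack = 0.01 }   -- record-style

-- Functions are first-class values.
local function sumWeights(t)
  local total = 0
  for i = 1, #t do
    total = total + t[i]
  end
  return total
end

print(sumWeights(harmonics))  --> 1.75
print(patch.name)             --> pop
```

Inside VividShaper the same constructs should apply, minus anything that needs a console (like print, per the earlier posts).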

  • McD
    edited July 2023

    My first Preset is a simple modification of the 1st Factory Preset to add a little pop to the notes and more overtones to the spectrum. Changes to the numHarmonics and VSADSRE parameters only.

    -- Patch: Modified Additive Harmonics Evolution
    -- Fundamental sine plus harmonics 2..numHarmonics at a 1/i rolloff.
    wave[1] = VSSin(1, 0)
    
    -- Try different values of numHarmonics to add or reduce overtone complexity
    local numHarmonics = 64
    local time = gatetimeon + gatetimeoff
    
    for i = 2, numHarmonics do
        local harmonic = VSSin(i, 0)
        local weight = time / i
        wave[1] = VSAdd(wave[1], VSMul(harmonic, weight))
    end
    
    wave[1] = VSNorm(wave[1], 0.8, 0.8)
    vol[1] = velocity * VSADSRE(0.01, 0.1, 0.1, 0, 0.5, gatetimeon, gatetimeoff)
    updatefreq = 1024
    gvol = 0.5
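
(For readers without the app: the weight in the loop above is just a 1/i amplitude rolloff scaled by the gate time. The VSSin/VSAdd/VSMul calls are VividShaper's own; the falloff itself is plain Lua:)

```lua
-- Harmonic i gets amplitude gatetime / i, i.e. a 1/i rolloff
-- (roughly sawtooth-like), shown here with a gate time of 1.
local gatetime = 1.0
local weights = {}
for i = 2, 8 do
  weights[i] = gatetime / i
end

print(weights[2], weights[4], weights[8])  --> 0.5  0.25  0.125
```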
    

    AUM setup for the recording. Oops! I snapped the screen with VividSynth inactive. Don't do that!

    The Reverb used is AudioReverb “Chambers: Dark Chamber” preset.

    The MIDI Generators are 3 instances of Riffler all feeding one VividSynth. I spread the Rifflers across different octaves:

    Resulting audio recording:

  • hes
    edited July 2023

    @wim said:

    @McD said:

    @wim said:

    @McD said:

    @wim said:

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

    I can’t believe you won’t test the waters. Programmers should learn a new language every year… this could be the year of “Lua”.

    I program but I'm not a programmer. If programming is the only way to accomplish something, even if it means learning a new language, then I'll obsess over it until I do. Twiddling synth knobs doesn't fall into that category. 😉

    Still… you put out Mozaic code at a truly impressive rate when prompted with something worth writing. I think this might be a similar situation as people request interesting “waveforms” and would like a little help. It’s only $6.

    Sorry, I just can't see it. There ain't nothing about a waveform that needs to be created from custom code. . . .

    Yeah, it seems like the waveforms themselves might be the thing of least interest. What would be more interesting would be full programmatic control over the creation and modulation of envelopes, LFOs, and filters. VividShaper gives you this, right? You can use Mozaic to do modulation of those, at least, but I can see how having programmatic control inside the synth could be useful. For example, I assume there's then no problem with being limited to 128 MIDI values. Using Mozaic to control synth modulation rather than having it built in might start to feel downright clunky. (I guess Mozaic already feels clunky for this, but I feel lucky to have it available at all.)

    I wonder if the programmatic synth interface might be the kind of thing that makes some simple things harder than with a GUI, but makes some things that are hard with a GUI much easier.
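The envelope point is easy to picture: in code, an envelope is just a function of time, so you get full float resolution instead of 128 MIDI steps. A hypothetical plain-Lua sketch (this is not VividShaper's actual API; the app provides VSADSRE for this):

```lua
-- A minimal linear attack/decay/sustain envelope as a pure
-- function of time, in float resolution rather than 0..127 steps.
local function env(t, attack, decay, sustain)
  if t < attack then
    return t / attack                                -- rising 0 -> 1
  elseif t < attack + decay then
    return 1 - (1 - sustain) * (t - attack) / decay  -- falling 1 -> sustain
  else
    return sustain                                   -- holding
  end
end

print(env(0.005, 0.01, 0.1, 0.5))  --> 0.5 (halfway up the attack)
```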

  • edited July 2023

    @hes said:

    @wim said:

    @McD said:

    @wim said:

    @McD said:

    @wim said:

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

    I can’t believe you won’t test the waters. Programmers should learn a new language every year… this could be the year of “Lua”.

    I program but I'm not a programmer. If programming is the only way to accomplish something, even if it means learning a new language, then I'll obsess over it until I do. Twiddling synth knobs doesn't fall into that category. 😉

    Still… you put out Mozaic code at a truly impressive rate when prompted with something worth writing. I think this might be a similar situation as people request interesting “waveforms” and would like a little help. It’s only $6.

    Sorry, I just can't see it. There ain't nothing about a waveform that needs to be created from custom code. . . .

    Yeah, it seems like the waveforms themselves might be the thing of least interest. What would be more interesting would be full programmatic control over the creation and modulation of envelopes, LFOs, and filters. VividShaper gives you this, right? You can use Mozaic to do modulation of those, at least, but I can see how having programmatic control inside the synth could be useful. For example, I assume there's then no problem with being limited to 128 MIDI values. Using Mozaic to control synth modulation rather than having it built in might start to feel downright clunky. (I guess Mozaic already feels clunky for this, but I feel lucky to have it available at all.)

    I wonder if the programmatic synth interface might be the kind of thing that makes some simple things harder than with a GUI, but makes some things that are hard with a GUI much easier.

    At a fairly basic and fundamental level, a line of code that executes a maths equation over a series of frames to produce a varying number is about the most elegant (quick) way possible to produce infinitely differing shapes. Take, for example, the Daniel Shiffman book for Processing (among countless other examples) as a starting point, e.g. the oscillation chapter: https://natureofcode.com/book/chapter-3-oscillation/
    Which is to say, the programmatic way to "draw" geometric yet potentially complex shapes, certainly when repeated a mundanely large number of times, or recursively, is going to be factors easier in code, assuming you've grappled with this approach of plugging numbers into formulas and seeing what they do.
    In this example, https://natureofcode.com/book/chapter-8-fractals/, we see some wonderful fractals that result from a recursive function. Admittedly, if these were played by a wavetable they might sound horrible, and that's probably not the shape I would draw as a wavetable, but the point still applies: an infinite variety of forms can be made from very little code/maths.

    What might be harder? Perhaps still-life sketching.

    PS: as an aside for anyone interested, Lua, having been designed as a small and thus easily embeddable language, is also in Reaper, DaVinci Resolve, and every geek's favourite code editor, Neovim, amongst others.

    (EDIT:) I'm pretty sure this is not meant to be for everyone, though. I actually found this amount of attention for it surprising, and it's not the least bit surprising if most people aren't interested in coding wavetables. Why the hell would you, if you could be doing something else? :smiley: I do really like the idea personally, though, of this specifically, even more than other coding platforms: a very pure experiment in drawing all kinds of wave shapes.
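The "equation over a series of frames" idea translates directly to Lua: fill a table with samples computed from a formula, and changing the formula changes the shape. A minimal plain-Lua sketch (no VividShaper API assumed):

```lua
-- One cycle, 256 frames: a sine plus a quieter third harmonic.
-- Swap the formula and you swap the wave shape.
local frames = 256
local wave = {}
for n = 0, frames - 1 do
  local phase = 2 * math.pi * n / frames
  wave[n + 1] = math.sin(phase) + 0.3 * math.sin(3 * phase)
end
```

Make the formula recursive, or vary a coefficient per frame, and you are in the oscillation/fractal territory of the Nature of Code chapters linked in that post.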

  • @Bruques said:

    @hes said:

    @wim said:

    @McD said:

    @wim said:

    @McD said:

    @wim said:

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

    I can’t believe you won’t test the waters. Programmers should learn a new language every year… this could be the year of “Lua”.

    I program but I'm not a programmer. If programming is the only way to accomplish something, even if it means learning a new language, then I'll obsess over it until I do. Twiddling synth knobs doesn't fall into that category. 😉

    Still… you put out Mozaic code at a truly impressive rate when prompted with something worth writing. I think this might be a similar situation as people request interesting “waveforms” and would like a little help. It’s only $6.

    Sorry, I just can't see it. There ain't nothing about a waveform that needs to be created from custom code. . . .

    Yeah, it seems like the waveforms themselves might be the thing of least interest. What would be more interesting would be full programmatic control over the creation and modulation of envelopes, LFOs, and filters. VividShaper gives you this, right? You can use Mozaic to do modulation of those, at least, but I can see how having programmatic control inside the synth could be useful. For example, I assume there's then no problem with being limited to 128 MIDI values. Using Mozaic to control synth modulation rather than having it built in might start to feel downright clunky. (I guess Mozaic already feels clunky for this, but I feel lucky to have it available at all.)

    I wonder if the programmatic synth interface might be the kind of thing that makes some simple things harder than with a GUI, but makes some things that are hard with a GUI much easier.

    At a fairly basic and fundamental level, a line of code that executes a maths equation over a series of frames to produce a varying number is about the most elegant (quick) way possible to produce infinitely differing shapes. Take, for example, the Daniel Shiffman book for Processing (among countless other examples) as a starting point, e.g. the oscillation chapter: https://natureofcode.com/book/chapter-3-oscillation/
    Which is to say, the programmatic way to "draw" geometric yet potentially complex shapes, certainly when repeated a mundanely large number of times, or recursively, is going to be factors easier in code, assuming you've grappled with this approach of plugging numbers into formulas and seeing what they do.
    In this example, https://natureofcode.com/book/chapter-8-fractals/, we see some wonderful fractals that result from a recursive function. Admittedly, if these were played by a wavetable they might sound horrible, and that's probably not the shape I would draw as a wavetable, but the point still applies: an infinite variety of forms can be made from very little code/maths.

    What might be harder? Perhaps still-life sketching.

    PS: as an aside for anyone interested, Lua, having been designed as a small and thus easily embeddable language, is also in Reaper, DaVinci Resolve, and every geek's favourite code editor, Neovim, amongst others.

    (EDIT:) I'm pretty sure this is not meant to be for everyone, though. I actually found this amount of attention for it surprising, and it's not the least bit surprising if most people aren't interested in coding wavetables. Why the hell would you, if you could be doing something else? :smiley: I do really like the idea personally, though, of this specifically, even more than other coding platforms: a very pure experiment in drawing all kinds of wave shapes.

    Very interesting post!

    I'm also surprised by the level of interest. Wonder why!

  • edited July 2023

    @espiegel123 said:

    @catherder said:
    Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja’s scripting engine for the composition).

    FWIW, Audulus is fully capable of note-related programming and processing.

    I have not done a lot with Audulus, but I am aware of the DSP module for sound processing. But is it possible to program algorithm-based sequences in Audulus? And where would you start?

  • This is really neato. I think this is the year of Lua!
    Kontakt 7.5 just added Lua script support.
    It's already in Falcon.
    I must learn.
    I wonder if you can build in this app and export the code to continue in Kontakt or Falcon?
    I must learn.

  • hes
    edited July 2023

    @Bruques said:

    @hes said:

    @wim said:

    @McD said:

    @wim said:

    @McD said:

    @wim said:

    @McD said:
    I hope the tech savvy regulars here get on this train and show us the way to mastering this approach to synth "sound engineering". @wim, @espiegel123, @_Ki... are you checking this new VividSynth out and writing some "presets"?

    eh. No way. I'm not touchin' that one. The last thing I need is yet another geek distraction to keep me from actually, you know, uh ... making music. 🧐

    I can’t believe you won’t test the waters. Programmers should learn a new language every year… this could be the year of “Lua”.

    I program but I'm not a programmer. If programming is the only way to accomplish something, even if it means learning a new language, then I'll obsess over it until I do. Twiddling synth knobs doesn't fall into that category. 😉

    Still… you put out Mozaic code at a truly impressive rate when prompted with something worth writing. I think this might be a similar situation as people request interesting “waveforms” and would like a little help. It’s only $6.

    Sorry, I just can't see it. There ain't nothing about a waveform that needs to be created from custom code. . . .

    Yeah, it seems like the waveforms themselves might be the thing of least interest. What would be more interesting would be full programmatic control over the creation and modulation of envelopes, LFOs, and filters. VividShaper gives you this, right? You can use Mozaic to do modulation of those, at least, but I can see how having programmatic control inside the synth could be useful. For example, I assume there's then no problem with being limited to 128 MIDI values. Using Mozaic to control synth modulation rather than having it built in might start to feel downright clunky. (I guess Mozaic already feels clunky for this, but I feel lucky to have it available at all.)

    I wonder if the programmatic synth interface might be the kind of thing that makes some simple things harder than with a GUI, but makes some things that are hard with a GUI much easier.

    At a fairly basic and fundamental level, a line of code that executes a maths equation over a series of frames to produce a varying number is about the most elegant (quick) way possible to produce infinitely differing shapes.

    However, as you allude to, moving from the basic and fundamental level to the practical one: it's pretty elegant (quick) to produce different waveshapes by twiddling a knob (which may itself be tied to lines of code behind it that produce infinitely differing shapes). :)

  • @catherder said:

    @espiegel123 said:

    @catherder said:
    Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja’s scripting engine for the composition).

    FWIW, Audulus is fully capable of note-related programming and processing.

    I have not done a lot with Audulus, but I am aware of the DSP module for sound processing. But is it possible to program algorithm-based sequences in Audulus? And where would you start?

    Audulus is essentially a visual programming environment for music/audio. I don’t know how to describe how to start, other than to explore the tutorials, manual, and examples and watch some of the explanatory videos.

    The Lua DSP is not required for sound processing. It was implemented because certain types of audio processing are easier with it. 99.9% of the people using Audulus probably don’t use it.

  • edited July 2023

    @espiegel123 said:

    @catherder said:

    @espiegel123 said:

    @catherder said:
    Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja’s scripting engine for the composition).

    FWIW, Audulus is fully capable of note-related programming and processing.

    I have not done a lot with Audulus, but I am aware of the DSP module for sound processing. Is it possible to program algorithm-based sequences in Audulus? And where would you start?

    Audulus is essentially a visual programming environment for music/audio. I don’t know how to describe where to start other than to explore the tutorials, manual, and examples, and to watch some of the explanatory videos.

    The Lua DSP is not required for sound processing. It was implemented because certain types of audio processing are easier to express in code. 99.9% of the people using Audulus probably don’t use it.

    @espiegel123 thanks for the explanation. I am familiar with the modular structure of Audulus. What I am looking for is something as close to Sonic Pi as possible. That means writing code, and ideally using the same environment for both the synthesis (audio DSP) part and the composition / sequencing / arrangement (note-event manipulation) part. Ideally, such an environment would allow the code to be modified live.

    I already contacted the developers of two iOS Python IDEs and asked if they would like to include audio and MIDI libraries in their software. Although they sounded enthusiastic, I see no progress; their focus seems to be mainly on implementing AI libraries.
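    The "algorithm-based sequences" half of that wishlist needs no special library: in any embedded scripting language it boils down to a seeded generator mapping some rule onto scale degrees. A minimal Python sketch of the idea (the scale, root note, and seed are arbitrary assumptions, not any particular app's API):

```python
import random

MINOR_PENTATONIC = [0, 3, 5, 7, 10]  # semitone offsets from the root

def algo_sequence(root=57, steps=8, seed=1):
    """Deterministic pseudo-random melody quantized to a scale: the core
    move behind most live-coded sequencers."""
    rng = random.Random(seed)  # fixed seed -> repeatable phrase
    return [root + rng.choice(MINOR_PENTATONIC) + 12 * rng.randint(0, 1)
            for _ in range(steps)]

notes = algo_sequence()  # eight MIDI note numbers in A minor pentatonic
```

    In a live-coding setup the fun comes from re-evaluating this with a new rule or seed while the clock keeps running.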

  • @catherder said:

    @espiegel123 said:

    @catherder said:

    @espiegel123 said:

    @catherder said:
    Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja's scripting engine for the composition).

    FWIW, Audulus is fully capable of note-related programming and processing.

    I have not done a lot with Audulus, but I am aware of the DSP module for sound processing. Is it possible to program algorithm-based sequences in Audulus? And where would you start?

    Audulus is essentially a visual programming environment for music/audio. I don’t know how to describe where to start other than to explore the tutorials, manual, and examples, and to watch some of the explanatory videos.

    The Lua DSP is not required for sound processing. It was implemented because certain types of audio processing are easier to express in code. 99.9% of the people using Audulus probably don’t use it.

    @espiegel123 thanks for the explanation. I am familiar with the modular structure of Audulus. What I am looking for is something as close to Sonic Pi as possible. That means writing code, and ideally using the same environment for both the synthesis (audio DSP) part and the composition / sequencing / arrangement (note-event manipulation) part.

    I already contacted the developers of two iOS Python IDEs and asked if they would like to include audio and MIDI libraries in their software. Although they sounded enthusiastic, I see no progress; their focus seems to be mainly on implementing AI libraries.

    Although it would likely be tons of work to add a usable text interface, https://github.com/wdkk/iSuperColliderKit enables bundling SuperCollider (which is, of course, what Sonic Pi runs under the hood) into an iOS app, albeit more as the engine than as a front end to that engine.
    Here's the conference paper presentation on it:
    https://quod.lib.umich.edu/cgi/p/pod/dod-idx/isupercolliderkit-a-toolkit-for-ios-using-an-internal.pdf?c=icmc;idno=bbp2372.2015.047;format=pdf

    I know that's probably not much help unless you embark on building on top of it yourself, but it definitely has potential. Maybe we could crowdfund paying a dev; the end result would be Sonic Pi on iOS.

  • @catherder said:

    @espiegel123 said:

    @catherder said:

    @espiegel123 said:

    @catherder said:
    Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja's scripting engine for the composition).

    FWIW, Audulus is fully capable of note-related programming and processing.

    I have not done a lot with Audulus, but I am aware of the DSP module for sound processing. Is it possible to program algorithm-based sequences in Audulus? And where would you start?

    Audulus is essentially a visual programming environment for music/audio. I don’t know how to describe where to start other than to explore the tutorials, manual, and examples, and to watch some of the explanatory videos.

    The Lua DSP is not required for sound processing. It was implemented because certain types of audio processing are easier to express in code. 99.9% of the people using Audulus probably don’t use it.

    @espiegel123 thanks for the explanation. I am familiar with the modular structure of Audulus. What I am looking for is something as close to Sonic Pi as possible. That means writing code, and ideally using the same environment for both the synthesis (audio DSP) part and the composition / sequencing / arrangement (note-event manipulation) part. Ideally, such an environment would allow the code to be modified live.

    I already contacted the developers of two iOS Python IDEs and asked if they would like to include audio and MIDI libraries in their software. Although they sounded enthusiastic, I see no progress; their focus seems to be mainly on implementing AI libraries.

    If by writing code you mean using a text-based language, Audulus won’t be a fit for you. Audulus is a visual coding environment.

  • heshes
    edited July 2023

    @catherder said:

    @espiegel123 said:

    @catherder said:

    @espiegel123 said:

    @catherder said:
    Very interesting. I’m still waiting for a full “algorave” environment like SuperCollider, but for iOS. But maybe it’s a matter of combining apps (VividShaper or Audulus for synthesis and Mozaic or Wotja's scripting engine for the composition).

    FWIW, Audulus is fully capable of note-related programming and processing.

    I have not done a lot with Audulus, but I am aware of the DSP module for sound processing. Is it possible to program algorithm-based sequences in Audulus? And where would you start?

    Audulus is essentially a visual programming environment for music/audio. I don’t know how to describe where to start other than to explore the tutorials, manual, and examples, and to watch some of the explanatory videos.

    The Lua DSP is not required for sound processing. It was implemented because certain types of audio processing are easier to express in code. 99.9% of the people using Audulus probably don’t use it.

    @espiegel123 thanks for the explanation. I am familiar with the modular structure of Audulus. What I am looking for is something as close to Sonic Pi as possible. That means writing code, and ideally using the same environment for both the synthesis (audio DSP) part and the composition / sequencing / arrangement (note-event manipulation) part. Ideally, such an environment would allow the code to be modified live.

    Seems to me like you'd get a pretty cool live-coding environment by using Mozaic (for MIDI notes and CC control) in combination with VividShaper (for the audio). Two different tools, but they're both pretty optimized for their specific functions. Both AUv3.

    I already contacted the developers of two iOS Python IDEs and asked if they would like to include audio and MIDI libraries in their software. Although they sounded enthusiastic, I see no progress; their focus seems to be mainly on implementing AI libraries.

    I do know Pythonista, at least, used to support the midiutil (MIDI) and wavebender (audio) modules, and, as I remember, you can install any modules you want as long as they use pure Python code. Even if those were what you wanted, there are issues integrating Python solutions with the rest of the iOS audio ecosystem. E.g., good luck getting anything running as an AUv3, and Pythonista scripts have issues running in the background, since Pythonista isn't recognized as an audio app and will be terminated by iOS.

    Somehow I just got the urge to learn Lua. It’s the Portuguese word for moon, so there’s that. And it looks like it’s being adopted a lot recently. I’m excited to see it flourish.

  • You are all warned! 😂

  • @Luxthor said:
    You are all warned! 😂

    Haha, that's pretty neat, nice to see my channel logo in there 👻
