(Binaural) Reverb App Features

Hi everyone,

I'm planning to develop an Audiobus-enabled reverb app based on an advanced version of my binaural rendering algorithm used in the VirtualRoom app (https://appsto.re/ch/65IKE.i). Since I don't want to develop features that nobody is using, I thought I should do a little bit of market research here and ask you Audiobus users what kind of features are important to you in a reverb app (both traditional and binaural reverb).

Here are some of the features I have been thinking about lately:

  • Combining 3D positioning with traditional reverb types like Spring or Plate
  • Built-in Chorus/Flanger
  • Parameter modulation
  • Presets tuned by a sound engineer
  • Multichannel I/O

What do you think about these features? What are your most desired features for a reverb app?

Thanks a lot for your input and feedback,
Fritz

Comments

  • Hi Fritz, sounds great.
    My most desired feature of any audio-based effect these days is that it's an Audio Unit (AUv3) plugin. After that, a standalone Audiobus/IAA-compatible version is a bonus too.

  • Convolution, Audio Unit... I'm not an expert in reverb, but there are actually already a lot of reverbs on the store...

  • I'm not sure if it's what you were thinking of, but a convolution reverb using actual binaural impulse responses would be great. I've always felt that this would be a most interesting approach to creating a real sense of spatial ambience with a reverb. I'd have a go at creating the required IRs myself, but I don't have the gear to do so.

    Perhaps more back on topic, modulation in a verb is nice, but not (in my view) the primary thing. If you're combining spatialisation with the verb (HRTFs?), that would be most cool, but as I said earlier an actual 'binaural verb' would be even cooler. Anyway, I would be very interested to hear what you come up with!

  • I concur that au support would be greatly appreciated.

  • Seems like Audio Unit support is very popular. I'll definitely look into it. Which hosts are you using? Cubasis? GarageBand? It seems there is only a small number of possible hosts, or am I missing something?

    I can already say that my next app/plugin will not be a convolution reverb. The fun with binaural audio really starts when you can move sources around. With convolution reverb, that's nearly impossible (unless you have the time to record room impulse responses for every position you want to be able to place a source at, which even for a simple movement amounts to hundreds or thousands of positions).

    However, my goal is of course to achieve a sound that will be comparable to that of a convolution reverb, and I have already put quite some effort into improving the BR1 algorithm used in VirtualRoom to make it sound like some existing, bigger rooms. VirtualRoom itself just simulates a relatively small room, but from the feedback I got from audio professionals (including David Griesinger, the former Lexicon principal scientist, to whom I once showed this app at an AES convention), it does that pretty well. BTW, feel free to try it out, it's available for free on the App Store.
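    To sketch the problem concretely (a generic toy illustration, not the BR1 algorithm): a convolution reverb applies one fixed impulse response, so every source position needs its own measured IR, and the best one can do between two measured positions is a crude crossfade of the two convolved outputs.

```python
def convolve(signal, ir):
    """Direct-form convolution: y[n] = sum over k of ir[k] * signal[n - k]."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(ir):
            out[n + k] += s * h
    return out

def render_moving(signal, ir_a, ir_b, frac):
    """Approximate a position between two measured IRs by crossfading the
    two convolved outputs (frac = 0 -> position A, frac = 1 -> position B).
    A smooth trajectory would still need an IR per intermediate position."""
    ya = convolve(signal, ir_a)
    yb = convolve(signal, ir_b)
    return [(1.0 - frac) * a + frac * b for a, b in zip(ya, yb)]
```

    For a source sweeping through N distinct positions you would need N measured IRs per ear - exactly the explosion of positions described above.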

  • Hi Fritz,

    I agree that moving sounds around in a 'binaural space' is interesting - and your app does this very well - but wouldn't it be even more interesting if the reverberant element was also binaural? To add to an immersive spatial effect? We naturally always hear a sound in its acoustic environment.

    AUs seem to be gaining popularity, as you can use an effect in multiple instances.

  • will try, many thanks

  • @Igneous1 said:
    Hi Fritz,

    I agree that moving sounds around in a 'binaural space' is interesting - and your app does this very well - but wouldn't it be even more interesting if the reverberant element was also binaural? To add to an immersive spatial effect? We naturally always hear a sound in its acoustic environment.

    AUs seem to be gaining popularity, as you can use an effect in multiple instances.

    Hi Igneous1,

    My algorithm simulates the three components of natural reverberation: direct sound, early reflections, and diffuse reverberation. All of them are designed so that together they emulate a binaural room impulse response (BRIR), which is of course the "real thing". The decisions about which aspects of a BRIR are modeled are based on psychoacoustics research I had done previously (see https://www.researchgate.net/profile/Fritz_Menzer/publication/230757772_Investigations_on_an_Early-Reflection-Free_Model_for_BRIRs/links/0fcfd503f2e2eec7a8000000.pdf).

    So already with VirtualRoom, the reverb itself is binaural, and one of my goals is to integrate some classic reverbs like spring reverb into this binaural framework. As far as I know, this has never been done before, not on iOS, nor on Mac or PC.
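    The three-component split above is a standard way to synthesize a room impulse response. Here is a generic mono toy sketch, with made-up delay and gain values (not the BR1 algorithm; a binaural version would render each component per ear, e.g. via HRTF filtering and decorrelated tails):

```python
import math
import random

def synth_rir(fs=48000, direct_delay=0.003,
              reflections=((0.012, 0.5), (0.019, 0.35)),
              rt60=0.6, length=0.8):
    """Toy room impulse response built from the three classic components."""
    n = int(fs * length)
    ir = [0.0] * n
    # 1) direct sound: a single delayed impulse
    ir[int(fs * direct_delay)] += 1.0
    # 2) early reflections: a few sparse, discrete taps
    for t, gain in reflections:
        ir[int(fs * t)] += gain
    # 3) diffuse reverberation: exponentially decaying noise; the decay
    #    rate is chosen so the tail drops by 60 dB over rt60 seconds
    decay = math.log(1000.0) / (rt60 * fs)
    rng = random.Random(0)
    tail_start = int(fs * 0.025)
    for i in range(tail_start, n):
        ir[i] += 0.2 * rng.gauss(0.0, 1.0) * math.exp(-decay * (i - tail_start))
    return ir
```

    Convolving a dry signal with such an IR gives the reverberated output; in a binaural framework one IR per ear approximates the BRIR.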

  • @Fritz - AU is the latest craze and very useful for things like EQ and compression, but how useful would it be to have for binaural reverb? Would someone really want two, three or more of these reverbs going at once? I know what binaural is (kinda sorta) and I know reverb, but the two together is new to me. I already think running multiple (regular) reverbs can make things messy quickly, would binaural ones be different? I don't see the point in adding AU just to be in on the latest thing if it isn't suitable.

  • @MrNezumi said:
    @Fritz - AU is the latest craze and very useful for things like EQ and compression, but how useful would it be to have for binaural reverb? Would someone really want two, three or more of these reverbs going at once? I know what binaural is (kinda sorta) and I know reverb, but the two together is new to me. I already think running multiple (regular) reverbs can make things messy quickly, would binaural ones be different? I don't see the point in adding AU just to be in on the latest thing if it isn't suitable.

    Interesting point. Yes, in most use cases you only want one reverb/binaural renderer, but in the binaural case you'd probably like to have multiple inputs so you can place multiple sources at different locations. If AU is more flexible in that regard than Audiobus/IAA, then that's a reason to support it.
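    To make the multi-input idea concrete, a hypothetical sketch: each source is rendered binaurally for its own position (giving a stereo stream), and the renderer simply sums the streams, so one multi-input instance replaces several separate reverb instances:

```python
def spatial_mix(rendered_sources):
    """Sum independently positioned binaural renders into one stereo output.

    rendered_sources: list of streams, each a list of (left, right) samples
    already filtered with the HRTF/BRIR pair for that source's location."""
    n = max(len(s) for s in rendered_sources)
    left = [0.0] * n
    right = [0.0] * n
    for stream in rendered_sources:
        for i, (l, r) in enumerate(stream):
            left[i] += l
            right[i] += r
    return left, right
```

    This is the kind of routing a multi-input AU (or Audiobus setup) would have to expose to the host.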

  • edited June 2016

    @Fritz said:

    @MrNezumi said:
    @Fritz - AU is the latest craze and very useful for things like EQ and compression, but how useful would it be to have for binaural reverb? Would someone really want two, three or more of these reverbs going at once? I know what binaural is (kinda sorta) and I know reverb, but the two together is new to me. I already think running multiple (regular) reverbs can make things messy quickly, would binaural ones be different? I don't see the point in adding AU just to be in on the latest thing if it isn't suitable.

    Interesting point. Yes, in most use cases you only want one reverb/binaural renderer, but in the binaural case you'd probably like to have multiple inputs so you can place multiple sources at different locations. If AU is more flexible in that regard than Audiobus/IAA, then that's a reason to support it.

    You could design an AU extension app to support the positioning of sources in the reverb. That would be better than just selling a reverb app. This new app and its interface should give users a chance to move the positions of other tracks/channels. It could become a very welcome stereo spatial AU app for many users of Modstep, AUM, GB, MTS and Cubasis.

  • @Fritz said:

    Hi Igneous1,

    My algorithm simulates the three components of natural reverberation: direct sound, early reflections, and diffuse reverberation. All of them are designed so that together they emulate a binaural room impulse response (BRIR), which is of course the "real thing". The decisions about which aspects of a BRIR are modeled are based on psychoacoustics research I had done previously (see https://www.researchgate.net/profile/Fritz_Menzer/publication/230757772_Investigations_on_an_Early-Reflection-Free_Model_for_BRIRs/links/0fcfd503f2e2eec7a8000000.pdf).

    So already with VirtualRoom, the reverb itself is binaural, and one of my goals is to integrate some classic reverbs like spring reverb into this binaural framework. As far as I know, this has never been done before, not on iOS, nor on Mac or PC.


    I look forward to the release of your app. Larger spaces would be most interesting - springs/plates(?). I think some of my comments were a little off-topic in that I was referring to the utilisation of binaurally recorded IRs of real spaces to provide a 'binaural reverb'.

  • The JVC recording binaural headphones...
    1977?

  • edited June 2016

    AUx please. And Link support. And could we please load our own impulse responses? I like using Diego Stocco's Rhythmic Convolutions impulses to mangle audio... so I guess I'm actually asking for a convolution reverb mode too D:

  • @gonekrazy3000 said:
    AUx please. And Link support. And could we please load our own impulse responses? I like using Diego Stocco's Rhythmic Convolutions impulses to mangle audio... so I guess I'm actually asking for a convolution reverb mode too D:

    Ok, I see the interest in convolution reverb. But that would really be another product. I'll first concentrate on getting my current reverb technology out as a product, but if afterwards I can use parts of it to do something based on sampled impulse responses, I'll certainly look into it.

    Just a question: how important is Ableton Link for reverbs? For some rhythmic modulation I can see a use, but otherwise?

  • @Nathan said:

    @SpaceDog said:
    The JVC recording binaural headphones...
    1977?

    Yeah, it must be about that far back. Not sure what happened to them. Hopefully still in storage. If so I must look them out to use with my iTrack Dock, or straight into the iPad via the special cable. They were a lot of fun for recording, but heavy to wear for extended listening.

    Interesting. I've never heard of these headphones, but have used KEMAR mannequins a lot. How do they compare? Is the spatial image you get from a recording made with the JVC headphones realistic (distortion of left/right azimuth angle, front/back confusions, elevation perception)?

  • @Fritz - It is hard to say if Link is important or not. If there aren't any LFOs or other BPM features then it isn't necessary. People are in the habit now of asking for Link and AU even if they aren't appropriate for the app. If there is a way to move sound locations in tempo then Link would be great.

  • I suppose I should put my Quasonama page here: https://u0421793.github.io/quasonama/ just in case it helps influence anything, although there’s absolutely no predicting or control over conceptual association as time goes on. Cognisant life forms develop sensory models of related attributes, and mix them up, yet perhaps this was the flaw all along. Life would be far more glorious and happy with no mingling or superimposition or leakage or imprinting of parts or attributes of one concept upon another, that’s what’s responsible for everything incorrect in life.

  • @MrNezumi said:
    @Fritz - It is hard to say if Link is important or not. If there aren't any LFOs or other BPM features then it isn't necessary. People are in the habit now of asking for Link and AU even if they aren't appropriate for the app. If there is a way to move sound locations in tempo then Link would be great.

    In the desktop world, I'd say that moving sound locations in tempo would be the job of the host's parameter automation. But does this concept even exist in current iOS hosts?
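    As a hypothetical sketch of what tempo-synced movement could look like: a sine LFO whose phase is derived from the current beat position (which host automation, or a Link-style shared timeline, would supply), sweeping azimuth once per cycle:

```python
import math

def azimuth_deg(beat, cycle_beats=4.0, width_deg=90.0):
    """Tempo-synced sine LFO: one full left-right azimuth sweep every
    `cycle_beats` beats, spanning +/- `width_deg` degrees around center."""
    phase = (beat % cycle_beats) / cycle_beats
    return width_deg * math.sin(2.0 * math.pi * phase)
```

    With a 4-beat cycle, beat 0 is center, beat 1 is the full-right extreme, and beat 3 the full-left extreme; a host (or Link) only needs to provide `beat`.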

  • @Nathan said:
    I've never tried KEMAR mannequins, and so cannot comment as to a comparison. I'm trying to find out what the hell I did with my JVCs, and if I do I'll post a couple of photos. As I recall, the headphones had what looked like vaguely ear-shaped enclosures with twin microphones built in on either side, over the speakers. These did a pretty decent job of recording binaural sound. I could record part of a conversation and play it back, and it would sound very like the person recorded was speaking naturally.

    Here's a link with photos I found on the web for the JVC recording headphones: http://www.hifiengine.com/manual_library/jvc/hm-200e.shtml
    It seems to be a pretty smart design which should amplify front/back differences due to the size of the "outer ear", which probably houses the speaker of the headphone part.

  • @u0421793 said:
    I suppose I should put my Quasonama page here: https://u0421793.github.io/quasonama/ just in case it helps influence anything, although there’s absolutely no predicting or control over conceptual association as time goes on. Cognisant life forms develop sensory models of related attributes, and mix them up, yet perhaps this was the flaw all along. Life would be far more glorious and happy with no mingling or superimposition or leakage or imprinting of parts or attributes of one concept upon another, that’s what’s responsible for everything incorrect in life.

    Not sure what to make of this. I mean, yes, all reproduction systems are flawed in a way. And in many ways, we've adapted to their flaws (especially to amplitude-panned stereo over loudspeakers, which is so common and so flawed if you think about it, yet for most people that's their reference). However, I do think that the 5.1 setup is there for a reason (among other things, it's clearly an improvement over stereo due to the presence of a center channel, even if one is not interested in presenting sources from the back or the sides), and I doubt that any other setup with a similar number of speakers would have a chance to succeed. What is still open, though, is how to arrange speakers in 3 dimensions ("surround+height"). Anyway, thanks for pointing it out. If I ever make a multichannel reverb, I'll think about how to support your speaker layout.

  • @Fritz
    Tried out your 'running music' app - that could become a very interesting spatial panner with the means to get audio in and out of it (Audiobus / AudioShare).

    Link seems to be the flavour of the month recently, I don't use it myself, but I would imagine it may be less appropriate for what you may have in mind (?)

    I was much more informed about this whole area years ago, when I was doing my dissertation on the compositional use of spatial audio and going to concerts of electro-acoustic compositions being 'diffused' (I think this was the term used :) through these huge ambisonic speaker rigs and the like.

    As I say, I look forward to what you create.

  • edited June 2016

    @Fritz said:

    @u0421793 said:
    I suppose I should put my Quasonama page here: https://u0421793.github.io/quasonama/ just in case it helps influence anything, although there’s absolutely no predicting or control over conceptual association as time goes on. Cognisant life forms develop sensory models of related attributes, and mix them up, yet perhaps this was the flaw all along. Life would be far more glorious and happy with no mingling or superimposition or leakage or imprinting of parts or attributes of one concept upon another, that’s what’s responsible for everything incorrect in life.

    Not sure what to make of this. I mean, yes, all reproduction systems are flawed in a way. And in many ways, we've adapted to their flaws (especially to amplitude-panned stereo over loudspeakers, which is so common and so flawed if you think about it, yet for most people that's their reference). However, I do think that the 5.1 setup is there for a reason (among other things, it's clearly an improvement over stereo due to the presence of a center channel, even if one is not interested in presenting sources from the back or the sides), and I doubt that any other setup with a similar number of speakers would have a chance to succeed. What is still open, though, is how to arrange speakers in 3 dimensions ("surround+height"). Anyway, thanks for pointing it out. If I ever make a multichannel reverb, I'll think about how to support your speaker layout.

    Excellent, thanks — that’d be useful for the future. 5.1 is really only with us because of the motion picture film industry. What I’m more concerned with is the VR industry of the future. The same things are in question regarding stereoscopic 360° video capture — what’s the best way to do that? Lots of people are capturing with ‘cubes’ of cameras, and if you had cubes of stereo pairs of cameras, you can capture simultaneous left-right eye video, up, down, round and around. However, I suspect as the VR industry matures, we won’t need to be looking directly up at the zenith or down at the nadir all the time, so I suspect we’ll ramp down to cylindrical stereoscopic video capture. Similarly, we won’t be too concerned with stuff happening behind, so although the capture could be 360°, the user-agent needn’t present it that way — all it needs is an ‘immersive’ rather than panoramic viewport, so really we can ignore that bit behind us and just dynamically (using motion) map the 240° arc from the 360° pano source, and save a lot of bandwidth.

    For spatial cues, reverb is the thing.
