
Embarrassing things you don't know (synthesis/music related!)

Comments

  • I overthink it all the time. Way too much, wrecking perfectly good sounds in the process. :)

    I've also had good luck with the 'nope, start over' method, and in exactly the sort of scenario you describe. Like spending a bunch of time on the drums, basing the rest of the mix on that sound, and then getting frustrated/stuck. Turns out the original sonic decision was wrong. Funny, this was easier in analog land; 3 minutes to reset the board vs deleting a bunch of plugins and automation.

  • edited February 2015

    musikmachine — I’m not aware that there are any frequencies that sound incorrect together; they all sound like what they sound like. Further, it’s not really about which frequencies are played together; it’s more a question of “what’s next?” If I’m sounding a certain frequency, where do I go next?

    How long should this note last? Should it end on the same frequency as it started? How much of a gap should I leave? Upon which frequency should the next tone start? When? Should it follow a gradient to a different frequency? Over which interval of time? Should I even select the same frequency as this note? Does it always have to be a different one, to make it sound musical?

    Is it simply my opinion that all combinations of frequencies sound valid consecutively? Is everyone else incorrect? Why haven’t we abolished culture and belief by now so that we may live according to free will? Punk ethos? Pah! If I go from one frequency to another and it produces an emotive effect, isn’t this just cultural bollocks learned from a formative experience reinforced by that transition having structural analogy with traumatic events in early life? Is there another frequency I should then go to to make it all better and relaxed, like a plagal cadence?

    Why is there an expectation that analogies of fear, euphoria, positivism, trepidation, uncertainty and joy in simply stepping from one frequency to another, then another are shared among everyone? Maybe those perceptual differences demonstrate a difference between caesarian and vaginal birth setting up the initial sensory state?

    Or maybe it’s just gullibility. I think we’re told that these are the ways to feel, and it’s simply not true, but it’s plausible enough to never contest it, weigh it, test it, measure it and quantify it. Where’s the evidence? Same as religion. Same as colours — we’re lied to when we are young, that red is a warm colour and blue is a cool colour. What rubbish, this is incorrect. We’re lied to that the Jaws two-note sequence is scary, but it needn’t be if sharks were not in context (i.e., substitute the imagery with a Saturn V lift-off).

  • @u0421793 here's an article about the neurophysiology of how we perceive sound.

  • edited February 2015

    Thanks, that looks like a useful abstract. I’ll see if I can find a way of getting the full paper without it costing anything (I’m no longer university teaching staff this year). It implies, though, that the rate of sensory input plays a part, if there is such a thing as consonance and dissonance in an absolute rather than learned sense.

    It’s similar to our reaction to pain. We fear pain. We gravitate to pleasure. Why is this, I wonder? I suppose it’s just as well it’s that way round, and we probably descend from the earliest lifeforms that evolved that way round, rather than from those that were attracted to pain and repelled by pleasure, which presumably didn’t last long. So inherently we have a neurological bias toward interpreting complexity. Perhaps pain is merely “more information than we can comfortably process within the timeframe before even more comes in”, and nothing more than that. We perceive the output of a noise generator (white noise, pink noise, etc.) as a big complex sea of uninterpretable signal. Perhaps this is only what pain is — noise.

    Perhaps if we experienced pain (try this next dentist visit) as merely an overload of complexity, rate and quantity of signal, it wouldn’t necessarily repel us immediately, but rather afford analysis (although interpretation is a long shot), and there may be this thing or illusion we label as “beauty” within the pain signal. Or at least it becomes a tiny bit more “interesting”, and consequently slightly less repellent, to the intellectually-prone patient in the dentist’s chair. It wasn’t some gigantic syringe to fear, merely an analysable brief overload of sensory signal — I felt a prick.

    Well, it’s likely that our attitude to what we call dissonance is simply the same mechanism as the repellent response to what we call pain, at work again. But the signal or stimulus, before it is perceived, has no qualitative value in itself (we supply that bit); it simply is what it is. Any pair of consecutive frequencies is valid if we take the time to appreciate it, but often the circumstances of presentation compound the delivery such that time is not available, the senses overload, and the impulse is to move away rather than be drawn in.

    Back on topic, though, how do I get scalegen to give me the notes in between the notes in between the notes? And how do I put the result into another synth without it falling back into a keyboard’s worth of distances again? The pitchbending thing is quite a kludge. I must investigate OSC more (or at all).

  • @u0421793 if you want to learn more about creating scala (.scl) files, this site provides the details as well as how to map the scale you create to midi notes.
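    For anyone who doesn't want to click through first: a .scl file is just plain text, with a description line, a note count, then one pitch per line (read as cents if it has a decimal point, as a ratio otherwise). A minimal sketch in Python, with made-up quarter-tone values rather than anything from scalegen:

    ```python
    # Hypothetical example: write a tiny Scala (.scl) tuning file by hand.
    # The pitch values are illustrative quarter-tones, not a recommendation.
    pitches_cents = [50.0, 100.0, 150.0, 200.0, 1200.0]  # decimal point = cents

    lines = [
        "! quartertones.scl",          # lines starting with "!" are comments
        "!",
        "Example quarter-tone scale",  # description line
        f" {len(pitches_cents)}",      # number of notes (the 1/1 root is implicit)
        "!",
    ] + [f" {c:.5f}" for c in pitches_cents]

    with open("quartertones.scl", "w") as f:
        f.write("\n".join(lines) + "\n")
    ```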

  • That looks technical. I tried putting the output of scalegen into Sunrizer, but the pitchbending became very evident, and I still ended up with what sounded like normal distances between the frequencies, rather than the in-between ones I'd chosen. I’d better start a new thread of my own for this; it’s deviating from the original thread intent.

  • @u0421793 there is a report from a user that Sunrizer doesn't fully support the Scala format, which could explain what you experienced. If you have ThumbJam, you might want to try importing a Scala file into it.

  • @solador78 said:

    What a wonderful clip!

  • @PhilW said:
    What a wonderful clip!

    +1

    Very inspiring... That whole channel is awesome! Thanks for sharing

  • edited February 2015

    Yes, the Bobby McFerrin performance sounds convincing, but I disagree with it. There’s nothing innate or magically automatic about the pentatonic scale; in fact, it’s a particularly odd and uncomfortable progression of frequencies, with huge holes where the natural expectation would be not to jump so far away for the following frequency. I personally think that the McFerrin piece (which I first saw many years ago, and believed) is actually not a demonstration of how we’re all wired for the pentatonic progression, but rather of how the majority in a crowd can oppress and diminish individuality.

    I mean, listen to all the people in the crowd who did not sing out those particular notes, but instead the notes that they thought would be the correct ones. Exactly — you can’t hear their individual voices; all you can hear is a majority opinion, and as usual in such situations, the incorrect one. Francis Galton was famous for, among other things, the notion of the wisdom of crowds, the essential point being that a crowd could, in his example, guess the weight of a cow more accurately than any single member of the crowd did. The power of a crowd in forcing behavioural norms is significant. Take this example here. Loren Carpenter performed a very interesting and personally influential experiment, in which a crowd was able to work out how to play a game of Pong, and actually play it as a single crowd.

    Of course, the majority is often incorrect. The Nazi party was voted in voluntarily by a majority of the free-thinking public. Most machines and clothing are designed backwards due to the ridiculously domineering, selfish, rude influence of so many right-handed people. Most people seem to be religious, yet this is clearly ludicrous and indicative of the majority of contemporary human beings being mentally deficient. Workplaces are formed from teams. Clearly the individual is not only threatened by culture and tradition, but is oppressed and diminished. There is no room for the individual this century, it seems.

  • THE TRAGEDY OF THE COMMONS!

  • edited February 2015

    @JohnnyGoodyear said:
    THE TRAGEDY OF THE COMMONS!

    Yep, see, I mentioned Nazis without ever mentioning David Cameron. Oops, sorry if I offended anyone by referring to the Conservatives.

  • There is something innate about most common musical scales, in that they comprise notes that sound good together, and this is due to their having overtones from the harmonic series in common. Notes that do not share these overtones sound dissonant and clashing; that is not to say they cannot be used, but the tension they create usually needs to be resolved by moving to a more harmonious interval. In Western music, we have become used to hearing equal temperament, which adjusts the natural pitches by a small amount to allow different scales to be played on the same instrument.
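    To put a number on "adjusts the natural pitches by a small amount", here is a quick check of how far the equal-tempered intervals sit from the just ratios of the harmonic series (the interval choices are mine):

    ```python
    import math

    # Deviation of 12-TET intervals from just-intonation ratios, in cents
    # (1200 cents per octave, 100 per equal-tempered semitone).
    intervals = {"fourth": (4/3, 5), "fifth": (3/2, 7), "major third": (5/4, 4)}

    for name, (ratio, semitones) in intervals.items():
        just_cents = 1200 * math.log2(ratio)
        tet_cents = 100 * semitones
        print(f"{name}: just {just_cents:.1f}c vs 12-TET {tet_cents}c "
              f"({tet_cents - just_cents:+.1f}c)")
    # fifth: just 702.0c vs 12-TET 700c (-2.0c)
    # major third: just 386.3c vs 12-TET 400c (+13.7c)
    ```

    The fifth is nearly pure; the major third is audibly adjusted, which is the compromise being described.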

  • Does anyone consistently use a Limiter and/or a gate/expander during their mixing phase? Like Fabfilter or another?

    This isn't exactly OP context, but kinda.

    Or do you reserve these two things strictly for Mastering?

    Reason is, for some songs my wife recorded, her voice is really, really, really quiet. (Now this is nothing professional, just for a Spanish class she is teaching to kids.) I needed to bump up her volume a lot, just to be able to hear it well enough to put some EQ on it.

    Any other routes? Thanks!

  • But what is “good”? What is this “beauty” thing? Is it a norm? Is it taught? Is dogshit ugly or beautiful? Does it have inherent elegance and structure? Is it valid even without any consideration from a human being — even when the fridge door is shut? As I hinted at above when I was talking about perceiving pain as a sensory overload, I think the things people think of as ugly or evil or repelling or dissonant can, upon further and more analytical reflection, be perceived as, if not beautiful or positive or harmonious, then at least what they are. There is dogshit in the universe — it’s what it is. It’s valid.

  • edited February 2015

    @High5denied said:
    Does anyone consistently use a Limiter and/or a gate/expander during their mixing phase?

    Reason is, for some songs my wife recorded, her voice is really, really, really quiet. (Now this is nothing professional, just for a Spanish class she is teaching to kids.) I needed to bump up her volume a lot, just to be able to hear it well enough to put some EQ on it.

    Last year, when I was audio-annotating my blog posts for my motivational speaking, I found myself (on the Mac, not on the iPad, but the same would apply) recording the whole thing into Audacity, then taking phrases and normalising each line independently, then afterwards compressing the whole run. Sometimes I’d even pick out syllables or words to make the entire passage sound equal. I wouldn’t do an entire paragraph like this, but rather work on phrases within sentences as the unit to apply normalisation to. I would also chop out my intakes of breath, but that’s only because I had the graph in front of me, so why not.
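    For anyone wanting to script that per-phrase normalising rather than doing it by hand, a rough sketch, assuming the recording is already loaded as a float array and the phrase boundaries have been marked by eye (none of this comes from Audacity itself):

    ```python
    import numpy as np

    def normalise_phrases(audio, sr, phrases, target_peak=0.9):
        """Bring each marked phrase up to the same peak level independently.
        `audio` is a float array, `sr` the sample rate, and `phrases` a list
        of (start_seconds, end_seconds) pairs."""
        out = audio.copy()
        for start, end in phrases:
            a, b = int(start * sr), int(end * sr)
            peak = np.max(np.abs(out[a:b]))
            if peak > 0:
                out[a:b] *= target_peak / peak  # same peak for every phrase
        return out
    ```

    Compressing the whole run would then be a separate pass afterwards, as described.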

    I bought Hokusai to try the same on the iPad, but despite taking very good care of my iPad 2 for years, Hokusai has compelled me to throw the iPad against a wall a total of four times thus far. There’s pretty much no other option. Oh, and swearing, too. It is the single most annoying piece of software ever manufactured (although I also don’t use Microsoft products, so maybe I’m sheltered from the worst). Fortunately, the iPad has survived well in each case, because the case was closed. But still.

  • @u0421793 said:
    Yes, the Bobby McFerrin performance sounds convincing, but I disagree with it. The Nazi party was voted in voluntarily by a majority of the free-thinking public.

    Just wondering if you have any of your own tracks that I can listen to. I'm having a hard time taking your perspective seriously without a frame of reference. Thanks!

  • edited February 2015

    @solador78 said:
    Just wondering if you have any of your own tracks that I can listen to. I'm having a hard time taking your perspective seriously without a frame of reference. Thanks!

    Don’t accuse me of being serious. But yes, you raise a good point. Here’s a couple from recently:

    — a surprising amount of this was done on iPad

    — a lot of this video synthesis was also done on iPad

  • @High5denied said:
    Does anyone consistently use a Limiter and/or a gate/expander during their mixing phase? Like Fabfilter or another?

    This isn't exactly OP context, but kinda.

    Or do you reserve these two things strictly for Mastering?

    Reason is, for some songs my wife recorded, her voice is really, really, really quiet. (Now this is nothing professional, just for a Spanish class she is teaching to kids.) I needed to bump up her volume a lot, just to be able to hear it well enough to put some EQ on it.

    Any other routes? Thanks!

    -
    Yeah, for what you are doing (low input audio) I turn up the mic pre and run the mic through a hardware compressor/limiter. Play with the threshold setting on the compression side so it kicks in early.

  • edited February 2015

    @u0421793 I have watched your videos and decided that you're as mad as a hatter. I admire that in a man.

  • @u0421793 said:
    But yes, you raise a good point. Here’s a couple from recently:

    OMG LOLZ!! Now I understand why you're asking the software to bend to you and not the other way around. Don't ever change. That was great.

  • edited February 2015

    @serosin said:
    Modular synth tweaking and building, patch bay like on the ims20 and that kinda jazz.

    The real reason it is embarrassing is that there's countless tutorials and other resources on this. I've just been lazy.

    One perspective I might add would be to consider that a lot of synths we remember from the 80s onward featured what was called a “normalised” path. In other words, they were effectively pre-patched. The very early modulars (true modulars, where each module is a separate and unconnected, er, module, independently mounted in the cage, and sharing a power bus only) were not connected together — you had to do that.

    This meant that each combination was possible, even stupid combinations that don’t do anything useful. After a while, it was noticed that most people, for most common sounds, tended to patch the modules in a similar way, so the second generation of synths still had patch cables and jack sockets, but also had a “pre-patched” internal set of connections (like my Arp 2600 (which I must get down from the attic and fix (it’s got “Roger Glover” stencilled on the back of the case tolex))).

    This pre-patched connection set hooked up the outputs of any VC oscillators, noise generators, external input amplifiers and ring modulators into a mixer. That was wired into the VC filter, which was wired into the VC attenuator (or amplifier). That was pretty much the audio path that most people wanted most times, and this saved time and patch cables.

    Similarly with the control-voltage-making things: low frequency oscillators (a repeated transition of some shape or other), envelope generators (a kind of single-shot transient, but with variable symmetry shaping) and other trinkets like ramp generators (just goes up, or down, when triggered) and sample + hold (take an input and a regular clock pulse, and whenever the clock pulse occurs, take a measurement of the input voltage and hold that value until the next clock pulse; that held value is your output).
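    Of those, sample + hold is the easiest to show in a few lines of code; a minimal sketch (the names and numbers are mine, not from any particular synth):

    ```python
    import numpy as np

    def sample_and_hold(signal, clock_period):
        """At each clock pulse, measure the input and hold that value
        until the next pulse. `clock_period` is in samples."""
        out = np.empty_like(signal)
        held = signal[0]
        for i, x in enumerate(signal):
            if i % clock_period == 0:  # clock pulse: take a new measurement
                held = x
            out[i] = held
        return out

    # The classic random stepped-CV sound: sample-and-hold on noise.
    stepped_cv = sample_and_hold(np.random.uniform(-1, 1, 48000), 4800)
    ```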

    Those CV generating modules were often prepatched. An envelope gen would go into the VCA because you mostly always wanted it that way. Maybe another one would go into the VCF modulation input, to shove the centre frequency of the filter up a bit and then glide back down, upon each note trigger. The LFOs were often prepatched to the VCF and VCO (to make a wobbly sound, before dubstep was invented).

    On iMS-20 (and indeed, the MS-20), it can run entirely unpatched, because most of the useful connections are prepatched in a normalised layout. However, as all the sockets are breaking jacks (i.e., when you plug something in, you break the original prepatched connection it had and replace it with your new choice from the patch cable), you still have a lot of flexibility in terms of what you patch where. Not total flexibility, but quite reasonable.

    Anything that is a CV output, if you can get at it from a jack output, can go into a CV modulation input, if there’s a jack for that parameter. Similarly with audio: if a module generates audio, it can be patched into something that accepts audio as an input. In the old original synths, the audio levels were often ±10V or so, and so were the control voltages, so in a true sense you could mix up audio and CV signals without harm, and modulate something with an audio frequency if you wanted to. Hence FM, and cross-modulated VCOs (or just plain ring modulation, which is simply the product of one audio-frequency signal (X) and another (Y)). The result is the sum frequency (X+Y) and the difference frequency (X-Y), with the original X and Y suppressed. A clanging sound.
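    The sum-and-difference result is easy to verify numerically; with a one-second signal, the FFT bin index is the frequency in Hz (the two input frequencies here are arbitrary):

    ```python
    import numpy as np

    sr = 48000
    t = np.arange(sr) / sr
    x = np.sin(2 * np.pi * 440 * t)  # X = 440 Hz
    y = np.sin(2 * np.pi * 310 * t)  # Y = 310 Hz
    ring = x * y                     # ring modulation is just multiplication

    spectrum = np.abs(np.fft.rfft(ring))
    print(sorted(np.argsort(spectrum)[-2:]))  # -> [130, 750]: X-Y and X+Y only
    ```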

  • @Martygras said:
    Yeah, for what you are doing (low input audio) I turn up the mic pre and run the mic through a hardware compressor/limiter. Play with the threshold setting on the compression side so it kicks in early.

    Thanks Martygras. I don't have any hardware compressor/limiter equipment. My Scarlett 2i4, I think, allows me to turn the mic pre up.

  • @u0421793 said:
    Is there another frequency I should then go to to make it all better and relaxed, like a plagal cadence?

    I realize this is a prodding, rhetorical question, particularly with the choice of 'should' over 'would/could', but my answer is: it depends. If you want it to sound more relaxed there are notes and timing that would produce or suggest that feeling. Yes, some of it is cultural. Some of it is just science and math though.

    While you may feel musically constrained by cultural musical memories, or even science, I'd suggest it would be quite difficult to convey emotion without them. Not that I think conveying emotion is music's sole purpose.

    Also, punk rock used the 12 tone system (+/- the musician's tuning ability). :)

  • @syrupcore said:
    Also, punk rock used the 12 tone system (+/- the musician's tuning ability). :)

    Do you mean The Stranglers were devotees of Schoenberg et al., or that they used the same 12 notes we've all been using for the last 300-odd years?

  • Ha, yes. I meant Schoenberg, naturally. The original atonal punk rocker.

  • So, here's another hole in my jeans:

    I see this feature in the new AUFX thingie from JLil:

    Hard or soft knee with adjustable radius

    and realize/remember that while I nod at the concept of 'knees' I don't really understand what they are or why something like this is of use/benefit. Any (simplistic, remember your audience :) thoughts?

  • @JohnnyGoodyear said:
    So, here's another hole in my jeans:

    I see this feature in the new AUFX thingie from JLil:

    Hard or soft knee with adjustable radius

    and realize/remember that while I nod at the concept of 'knees' I don't really understand what they are or why something like this is of use/benefit. Any (simplistic, remember your audience :) thoughts?

    It’s to do with the gradient. A hard knee is simply an increasing ramp, which, as soon as it hits the set point, becomes that set-point value. A soft knee adjusts the gradient when it gets near the set-point to flatten out. The nearer, the flatter, until it blends into the set-point value.

    It’s probably easier to imagine it as a thermostat. I walk into a freezing room and turn the heating on. The heater is set to 21°. The heater is on, and keeps heating. The temperature rises from 0° linearly and goes up. 5° arrives, I take my coat off; 10° arrives, I take my fleece off; 15° arrives, I take my socks off; 20°, I take my shirt off. In a short while, we’ll hit 21° and I can consider taking my trousers off. What actually happens is that in a simple bimetallic-switch-based thermostat, it hits 21° and switches off, but the latent heat carries on going past the set point, and it probably takes getting to 25° or so before it stabilises, and there’s danger of me having to take my underpants off. The temp will drop back in due course, but goes back past the 21° set-point, which switches the heating back on, but the time delay is insufficient and it drops even further before it reverses and starts to climb back to 21° and overshoot yet again, but not quite as much this time.

    This up and down overshooting will eventually settle down and stabilise as an equilibrium. Minor shifts in temperature such as opening the fridge will be easily rectified. Major shifts, such as opening the back door and all the freezing cold wind rushes in will cause the same frantic linear heating then overshooting. This overshooting is technically called “ringing”. It’s what happens when you send a sharp impulse through a VCF when the filter Q is set just fractionally below oscillation. It becomes a resonant system.

    In food industries and other critical areas of process control, there may be no allowance to overshoot and then allow an unstabilised up-down-up ringing. It has to reach the set-point temp smoothly and then stay there precisely. For this, we can’t have the “hard knee” of simply ramping up linearly and then abruptly switching off the heater when the set point is reached. We use a more intelligent control system (often a PID — proportional-integral-derivative, featuring maths I simply don’t understand (not any particular maths, just maths, which I don’t understand)). As the set point is neared, the gradient is proportionately decreased. The nearer, the shallower, until at the point that the set point is hit, we’re almost going in a straight line anyway. There’s no ringing, it’s a system that has a far more stable equilibrium, and disturbances hopefully won’t do anything unpredictable. The soft knee is a more complex, damped approach, but won’t introduce extraneous harmonics.
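    A toy version of that thermostat story, for anyone who wants to watch the ringing happen (every constant here is invented; only the shape of the two curves matters):

    ```python
    def simulate(control, steps=600, setpoint=21.0):
        """Tiny thermal model: the heater responds slowly (latent heat),
        and the room constantly loses a little heat to the outside."""
        temp, heater, history = 0.0, 0.0, []
        for _ in range(steps):
            power = control(setpoint - temp)    # controller output, 0.0 to 1.0
            heater += 0.05 * (power - heater)   # heater output lags the switch
            temp += 0.3 * heater - 0.01 * temp  # heating in, losses out
            history.append(temp)
        return history

    # Bang-bang (the bimetallic switch): full on below the set point, off above.
    hunting = simulate(lambda err: 1.0 if err > 0 else 0.0)

    # Proportional control: back the power off as the set point approaches.
    damped = simulate(lambda err: min(1.0, max(0.0, 0.3 * err)))
    ```

    Plot the two histories: the bang-bang one overshoots and keeps hunting around 21° indefinitely, while the proportional one settles after a wiggle or two, slightly shy of the set point (that residual offset is what the integral term of a full PID removes).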

  • A simpler explanation, although that was thorough, is that a soft-knee compressor kicks in gradually as the signal approaches the threshold and continues gradually over the threshold, so you're not actually getting the ratio you chose (4:1, etc.) right at the threshold point, but a little bit over it. A hard knee does nothing until the threshold is reached, so it's "all or nothing", if that makes sense. If you look at the shapes, they explain it well.

    DBX's "OverEasy" compression is a soft-knee compression. A limiter is not soft knee at all; it kicks in and stays there (with a very high ratio) when you hit the threshold, but not before. If you have the FabFilter Pro-Q in Auria, it gives you a nice visual representation of what it's doing as it works.
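    If you want to draw those shapes rather than take them on faith, here is a sketch of the static input/output curves using the common quadratic soft-knee formula (the threshold, ratio and knee width are arbitrary picks, not anything from FabFilter or DBX):

    ```python
    import numpy as np

    def compressor_curve(level_db, threshold=-20.0, ratio=4.0, knee_db=10.0):
        """Static input->output curve in dB: untouched below the knee region,
        full ratio above it, and a quadratic blend inside it."""
        out = np.asarray(level_db, dtype=float).copy()
        over = 2 * (out - threshold)
        above = over > knee_db                   # past the knee: full ratio
        out[above] = threshold + (out[above] - threshold) / ratio
        inside = np.abs(over) <= knee_db         # inside the knee: blend in
        out[inside] += (1/ratio - 1) * (out[inside] - threshold + knee_db/2)**2 / (2*knee_db)
        return out

    levels = np.linspace(-60.0, 0.0, 121)
    hard = compressor_curve(levels, knee_db=1e-6)  # near-zero width = hard knee
    soft = compressor_curve(levels, knee_db=10.0)
    # The hard curve bends sharply at -20 dB; the soft curve starts bending
    # about 5 dB below the threshold, exactly as described above.
    ```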
