A way to use specific frequencies as triggers?
Was just wondering if there's a way via iOS sound apps to take a recording and specify either unique sounds or frequencies within the recording to trigger other sounds to play in sync via MIDI? i.e. I load a recorded sound, specify something unique or a specific portion of the waveform, and tell it to trigger similar notes and pitches on an external synth, or possibly similar sounds within another recording?
Comments
Or, maybe a way to scan a recording... map out the frequencies, notes, and pitch of the overall composition, then write a midi script that could be used to target a synth?
Does anything like this already exist for iOS or the desktop? If not, it'd be a lot cooler if it did. :)
You might want to look into audio to MIDI apps like Jam Synth. There is also Voxkit which can use sounds to trigger specific MIDI notes you designate. The developer recently incorporated the audio to MIDI functionality into his Infinite Looper app.
Never quite took to Infinite Looper (I know it's great and beloved, but never quite got into it) but I do have Aleph. There's a feature called "pitch to midi"; is that what you're talking about? Would it be better to use Voxkit? Or maybe his other one, Midimorphosis?
To clarify, I'm talking about taking a field recording of real-world sounds and breaking that down into data that can be assigned to trigger other sounds via MIDI instruction. Not guitar or other musical-instrument-based recordings or sounds. For example, a recording of several children laughing... scan the recording to break down and identify the various notes, pitches, etc. and write that data into MIDI instructions that could be read by a synth, or possibly even by another field recording that's also been mapped out with MIDI data. i.e. when a certain sound plays from the children-laughing composition, it triggers a similar-quality sound in a second recording of dogs barking that's also been mapped out to identify all of the separate notes/pitches, etc.?
MIDIMorphosis from @SecretBaseDesign has the ability to import a file, run an analysis on it, and produce MIDI notes. But it works best for instruments that produce a stable, clear signal, not for something like a soundscape. You could load up a file and see what happens. Aleph has the same functionality, but I don't think it opens files, so you'd need to find a way to feed it audio. That's all based on pitch detection, so not likely to be consistent in any way. I somewhat doubt the results would be what you want.
Voxkit by the same developer might work, after a fashion. I think it's based more on transient detection, so it's better at distinguishing something like a clap from a spoon hitting a glass. That might suit your needs better (say, for distinguishing a dog bark from a child laughing). Setup would be tedious, though.
I'm not sure of the developer's name, but I've heard of a piece of hardware called "the ear" I think. Some people hook this up to an obtuse, but sometimes effective piece of software called "the brain" (same developer), which they then route to up to 10 triggers (fingers? Or maybe toes? I forget, and I'm too lazy to google it). Ymmv
It would be intriguing to see how any of @SecretBaseDesign's apps manage, but I can't say that I think it'll work very well.
I can't think of an app or set of apps that would reliably turn something like children laughing into a predictable set of pitches. You could point it at ThumbJam or the Secret Base audio-to-MIDI converters, but with material that complex you might get a C4 for a given moment on one pass and an E5 the second time. You'll almost certainly get something interesting; it's just doubtful it'd be reliable enough to build a track around. You could just run the audio through 5-10 times, capture the MIDI from each pass, and then edit the MIDI into something you like.
If you're up for some programming, something like PD might give you enough flexibility to adjust and constrain the input audio and output midi.
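If anyone wants to prototype that kind of offline pass on the desktop first, here's a rough Python/librosa sketch of the pitch-detect-then-snap-to-MIDI-notes idea (the file name is just a placeholder, and on complex material most frames will simply come back unvoiced):

```python
# Rough offline sketch: frame-by-frame pitch estimate, snapped to MIDI notes.
# Desktop Python + librosa, not an iOS workflow.
import numpy as np
import librosa

y, sr = librosa.load("field_recording.wav", sr=None, mono=True)

# pyin returns a fundamental-frequency estimate per frame; on a soundscape
# many frames will be unvoiced (NaN) and get skipped below.
f0, voiced, _ = librosa.pyin(y,
                             fmin=librosa.note_to_hz("C2"),
                             fmax=librosa.note_to_hz("C7"),
                             sr=sr)

times = librosa.times_like(f0, sr=sr)
for t, hz, v in zip(times, f0, voiced):
    if v and not np.isnan(hz):
        note = int(round(librosa.hz_to_midi(hz)))  # nearest equal-tempered note
        print(f"{t:6.2f}s  MIDI note {note}")
```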
@syrupcore @wim
Can ThumbJam also import an audio recording and do a subsequent MIDI conversion?
The children laughing and dogs barking was just an example.
Let's say something more complex, like a field recording in the jungle. At first, it sounds like a chaotic cacophony of sound, but if you listen closely, you can sometimes hear something like structural composition. Now, some of this is likely my brain trying to find structure within chaos... as human nature is inclined, but I swear sometimes I can hear structure and familiar composition within some ambient recordings. Not always, but sometimes it sounds like everything is briefly syncing to something.
It'd be cool to find those moments, remap them into MIDI instructions, and see what the found composition sounds like with different, but similar-quality, synth sounds triggered. Then possibly tweak and refine, or sculpt the found composition into something more familiar or relatable.
This same concept is how Different Trains was created and performed. Field recordings of sounds from trains and interviews with people about trains were put together as a single recording and then the notes were mapped out on top of it. Don't think he had access to any tools to automatically convert it though—just a lot of labor in doing it by ear and noting it down so that Kronos could play along to it. Like, a whole lot of work. Outcome is really beautiful though! To me anyway. If you hear it in your field recordings, please go for it!
Full Kronos side here: (The above is mislabeled—Pat Metheny is actually solo on the B-side)
Fun fact: the Pat Metheny side is probably most famous because it was sampled by the Orb for Little Fluffy Clouds. Jump to about 10:20. https://www.youtube.com/watch?v=plL2VDAoThU
I think @syrupcore is realistic about the difficulty of getting MIDI pitch information from something as variable as children laughing. You might want to look into some sort of vocoder as an alternative for transforming children's laughter into dogs barking. If you learn more about creating soundfonts, you might get more insight into the many issues you're up against.
You could use something like the editor in Caustic to find the root note for a sample. If you take various pieces of your field recording, you could find the root of the sample and the relative loudness of the various samples which could then be assigned to MIDI pitch, note length, and velocity information. The process would be very tedious.
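If the by-hand approach gets too tedious, the same per-slice idea (rough root note plus relative loudness mapped to MIDI pitch and velocity) could be batched on the desktop. Here's a loose Python sketch; the slice length and velocity scaling are made up and would need tuning:

```python
# Sketch of the per-slice "root note + loudness -> pitch + velocity" idea,
# automated with Python/librosa instead of doing it by ear.
import numpy as np
import librosa

y, sr = librosa.load("field_recording.wav", sr=None, mono=True)
slice_len = int(0.5 * sr)                       # half-second slices, arbitrary

events = []
for start in range(0, len(y) - slice_len, slice_len):
    chunk = y[start:start + slice_len]
    # crude "root": the strongest bin of the slice's spectrum
    spec = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), 1.0 / sr)
    peak_hz = freqs[np.argmax(spec[1:]) + 1]    # skip the DC bin
    # loudness -> velocity (RMS scaled into 1..127; scale factor is a guess)
    rms = float(np.sqrt(np.mean(chunk ** 2)))
    velocity = int(np.clip(rms * 400, 1, 127))
    note = int(np.clip(round(librosa.hz_to_midi(peak_hz)), 0, 127))
    events.append((start / sr, note, velocity))

for t, note, vel in events:
    print(f"{t:6.2f}s  note={note:3d}  vel={vel:3d}")
```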
@syrupcore @InfoCheck
The Steve Reich stuff is somewhat in the direction that I was heading. I wasn't familiar with the trains recording. That's cool. I'm a fan of Steve Reich.
I just messed around with Aleph Looper (has the same Pitch to Midi feature as IL) and was getting some fun results. Took me a few minutes to sort out how it works and how to turn on the Pitch to Midi feature, etc.
Wasted some time trying to figure out how to pipe a recording into it. Looks like it listens to any input/output, be it playback of a recording in AudioShare, the built-in mic, or a guitar, etc. hooked up via an external interface.
I had been trying to feed it directly via AUM and that wasn't working out so well. Kinda worked, but kept losing all audio.
Finally, tried looping a recording from AudioShare with headphones on, Aleph Looper running, looking for MIDI and recording the pitches it picked up. Because it loops, you can change the transpose settings on the fly via the pitch to midi settings, i.e. low/mid/treb. Also, -36, -24, -12 through +12.
It doesn't pick up subtle changes in pitch, but I loaded several different recordings and let them loop playback one by one. First, a basketball dribbling: Aleph picked up a pitch and recorded the beat. I let Aleph continue its pitch to midi recording as I loaded up different field recordings, and it grabbed pitches from the others to build into the same track.
At the end, I'd built up a midi composition based on field recordings.
It's not exactly what I was after, because it doesn't really sense more subtle differences in pitch, but it'll be interesting to experiment with.
http://stevereichiscalling.com
Don't forget that you could have the same field recording duplicated. One is the original, which may or may not be a part of the final output. The other can be heavily, heavily processed in an attempt to accentuate the particular noises you're hoping Aleph will pick up. Extreme EQ, gating, whatever works.
This is why the internet exists.
It's my favorite piece by Reich. Lovely.
I absolutely love the Electric Counterpoint. NAILS the train
I don't think my apps are a good fit for this -- MIDImorphosis/Infinite/Aleph are looking for a direct pitch-to-MIDI note conversion, and something like laughing just doesn't have the consistent pitch that I'd need.
Voxkit looks for a volume spike, and then does a short term frequency sample -- and then does a best-fit match to one of four user-defined tones. It works for beat boxing, and a few different tones to trigger samples, but it wouldn't handle the laughter thing either.
Interesting idea for generating audio (and Steve Reich is a trip!); I'm not aware of an easy way to do it.
Skip
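For anyone curious what that spike-then-best-fit approach looks like in the abstract, here's a loose desktop Python sketch of the general idea (definitely not Voxkit's actual code; the target tones and analysis window are made up for illustration):

```python
# Very loose sketch of the general "volume spike -> quick pitch estimate ->
# nearest user-defined tone" approach described above.
import numpy as np
import librosa

# hypothetical user-defined tones: MIDI note to trigger -> target frequency (Hz)
TARGET_HZ = {60: 180.0, 62: 320.0, 64: 750.0, 65: 1400.0}

y, sr = librosa.load("percussive_take.wav", sr=None, mono=True)

# 1) find the volume spikes (onsets)
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="samples")

for start in onsets:
    # 2) short-term frequency sample right after the spike (~50 ms)
    chunk = y[start:start + int(0.05 * sr)]
    if len(chunk) == 0:
        continue
    spec = np.abs(np.fft.rfft(chunk))
    freqs = np.fft.rfftfreq(len(chunk), 1.0 / sr)
    peak_hz = freqs[np.argmax(spec[1:]) + 1]
    # 3) best-fit match to one of the user-defined tones
    note = min(TARGET_HZ, key=lambda n: abs(TARGET_HZ[n] - peak_hz))
    print(f"sample {start}: ~{peak_hz:.0f} Hz -> trigger MIDI note {note}")
```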
Sounds like a pretty cool idea you got going...
I think the challenging thing is that, as far as the iOS pitch to midi apps go, they have to look for something close to a pitch. Have a glance at this chart:
http://www.phy.mtu.edu/~suits/notefreqs.html
As you can see, presuming a pure sine wave, the notes in that chart run from 16.35 Hz (C0) up to 7902.13 Hz (B8); the full 128-note MIDI range extends a bit beyond that at both ends.
The challenge in pitch to midi is deciphering where the actual pitch is; while modern western music absolutely agrees A4 is 440Hz, we can also tell whether that same A was played on a guitar, a piano, a violin, or a sax, because of all the other frequencies that the instrument generates. And these apps must weed out all the characteristics of the instrument and decide what the dominant frequency/note is.
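For reference, the frequency-to-note mapping itself is the easy part once an app has settled on a dominant frequency; it's the standard equal-temperament formula (A4 = 440 Hz = MIDI note 69). A quick Python check against that chart:

```python
# Convert a detected frequency to the nearest MIDI note number (A4 = 440 Hz = 69).
import math

def hz_to_midi_note(freq_hz, a4=440.0):
    return int(round(69 + 12 * math.log2(freq_hz / a4)))

print(hz_to_midi_note(440.0))    # 69  (A4)
print(hz_to_midi_note(16.35))    # 12  (C0, the bottom of the chart)
print(hz_to_midi_note(7902.13))  # 119 (B8, the top of the chart)
```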
All the more challenging in your case. Not much time for me in jungles, but plenty of time in the Appalachians, and it is some full-spectrum sound! I'd have to say, especially near water, that whole range from 16 Hz to 8 kHz is pegged; and I totally agree with you, it's the human mind that can decipher and isolate rhythm, timbre and music out of that, in a way that software (may) never will....
So all that said -
My guess for making your project work would be processing your original files to narrow the bandwidth of what you run through a pitch to midi program. You mentioned laughter, so that could be EQing everything down except something between 500 Hz and 1k.
I can't recall if you jumped on Auria, but another option could be using the FabFilter multiband compressor to narrow what you want to use as triggers. I'm guessing that may, especially in the case of the jungle thing, boost what you need and kill the rest...
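If you'd rather do that narrowing on the desktop before feeding anything to a pitch to midi app, a simple band-pass does roughly the same job. Here's a SciPy sketch for the 500 Hz to 1 kHz example above (file names are placeholders):

```python
# Pre-process a field recording with a 500 Hz - 1 kHz band-pass before
# running it through a pitch-to-MIDI app (desktop sketch, not an iOS workflow).
from scipy.signal import butter, sosfiltfilt
import soundfile as sf

y, sr = sf.read("jungle_take.wav")
if y.ndim > 1:
    y = y.mean(axis=1)                       # fold to mono

sos = butter(4, [500, 1000], btype="bandpass", fs=sr, output="sos")
filtered = sosfiltfilt(sos, y)

sf.write("jungle_take_bandpassed.wav", filtered, sr)
```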
Intriguing idea, best of luck with it!
@SecretBaseDesign yes, I'm finding that's indeed the case. But I might be able to get close. Looks like I was mistaken about Aleph using any output, etc. to listen for the pitch to midi operation. Now it's acting as if the ONLY way it listens for audio to pitch/MIDI convert is from the built-in mic, or possibly an interface connected for input? Is that correct? Or is there a way to force it to analyze only the audio within a recording?
@syrupcore @wigglelights yes, I was thinking this earlier today, i.e. processing the audio first so that specific sounds are more isolated. I was also thinking of trying a recording of birdsong. I was recently in a place in Ecuador called Mindo. This place is very popular for adventure sports, and equally famous for its vast varieties of exotic birds. I'm not that into birds personally, but when I was recording, the various qualities of the bird sounds were impressive. I was thinking of trying to isolate and map the cacophony of birdsong into some kind of mapped MIDI data that could be redirected to target synths, then mixing that back with the original recording. Kind of like what Reich is doing with those train recordings, I think.
Something like this? https://www.youtube.com/watch?v=C8VeYWRBBZY
That's cool. And kind of like that but less accompaniment. Going to watch the others too for more ideas. Thx!
MIDImorphosis will process an audio file (but AFAIK, no one uses that feature!). Aleph/Infinite only do live processing of incoming audio.
Is MIDImorphosis 64-bit and safe from being cut from the store with iOS 11, etc.? I see its last update was in 2014. Is it more or less "retired"? Or will it see an update?
AFAIK, MIDImorphosis is in no danger with the upcoming iOS -- but if it does run into trouble, I'll push out an update. It's more-or-less retired, though -- there's a bundle for folks to upgrade to Infinite, and I'm migrating the functionality that's in MIDImorphosis into Infinite.
I do!
This weekend I used it to convert a Moodscaper piece into MIDI data for other synths. Pretty damn good results.
I love your MIDImorphosis. For me, it is almost as much a Swiss Army knife as AudioShare.
+1
There's just something about playing Model 15 with my Strat that makes me grin like an idiot every time. I hope you keep the file processing. I just like knowing it's there as a possibility.
Love Different Trains, really powerful and moving bit of music.
The only thing I can think of that might work for what you are trying to achieve is AudioStretch. You can load the sound file in, slow it down, and analyse what notes it is perceiving in your source material; the analysed notes appear on a keyboard interface.
Getting the rhythm to trigger the notes could be trickier. You could maybe put a gate on the audio to let just the peaks through, use MIDImorphosis to transpose that to MIDI, then MIDI-edit the notes to match what you analysed in AudioStretch.
So not simple, and probably not perfect, but some of the best sonic results can come from imperfect experimentation: trying to create one thing and making something else unique.
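The gate idea is also easy to rough out offline if that helps with experimenting. Here's a crude Python envelope gate that only lets the peaks through (the window and threshold are arbitrary; tune by ear):

```python
# Crude noise gate: only let the peaks through so a downstream
# pitch-to-MIDI pass has cleaner events to latch onto.
import numpy as np
import soundfile as sf

y, sr = sf.read("birdsong.wav")
if y.ndim > 1:
    y = y.mean(axis=1)

win = int(0.02 * sr)                              # 20 ms envelope window
envelope = np.convolve(np.abs(y), np.ones(win) / win, mode="same")

threshold = 0.2 * envelope.max()                  # arbitrary: keep only the top peaks
gated = np.where(envelope > threshold, y, 0.0)

sf.write("birdsong_gated.wav", gated, sr)
```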
While we are steve reiching, this is a remix I did of 2x5 in 2010
http://redskylullaby.com/track/2x5-red-sky-lullaby-remix
AudioStretch might be worth a try. About MIDImorphosis: does it work more accurately than the Pitch to Midi feature in Infinite Looper & Aleph Looper? I've done a lot of experimenting with Aleph Looper's Pitch to Midi and the results are all over the place. Even in a quiet room, a tap on the table generates a bunch of random notes.
Wondering if Midimorphosis' audio file processing works a little better?
I don't have the other 2 you mentioned. In the scenario you are looking to achieve, I was just talking about using MIDImorphosis to generate MIDI notes that hit at the peaks of the sounds, and then MIDI-editing that data to match the analysed notes you identify using AudioStretch. AudioStretch was designed for guitarists to slow down songs to figure out how to play them, so that concept could work even if it's a bit labour intensive.