What and where exactly is MIDI, when we look inside an iPad?
There are no actual cables or 5-pin DIN sockets inside an iPad. That’d be silly; iPads are too flat. But what constitutes MIDI inside an iPad? Is it the same series of note-on / note-off bytes that would have been sent through the cables and optoisolators of the conventional instruments of olde? Or do iOS and OS X turn it into some other form of signal? Does it even ‘think’ in the same MIDI as the MIDI spec? Is there such a concept as MTC, or something that timestamps the events? Do these live somewhere, or do they disappear as soon as the event is sent to whatever is on the other end of the ‘wire’ internally? I can imagine a sequencer storing the pile of MIDI messages as a file, but it’s hard to imagine what form the MIDI takes when it’s taken from the file and something is done with it.
Comments
I was just thinking along the same lines yesterday. I would like to know the answer as well... at least at a high level.
I tend to see MIDI inside the iPad as a 'communication protocol' or 'language' handled by Core MIDI, which keeps track of the messages. Apps can query it, or hand it instructions on how to proceed.
Not an expert, but I imagine Virtual MIDI relies on Core MIDI. And since that is used to talk with non-Apple hardware, I'm thinking it must be closely related to actual MIDI, even at a very low level of the machine interface.
Yes, the MIDI byte stream is received by apps (or sent by them) with the same data that hardware devices would see, with the exception that timestamps are available alongside each MIDI message packet. A given virtual MIDI port is basically equivalent to a hardware port. Obviously there is a lot more flexibility in terms of connections, though.
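For anyone who wants to see it, this is roughly the shape of that from an app's side: a minimal Swift sketch against Apple's CoreMIDI C API (the client and port names here are invented for illustration, and error checking is left out):

```swift
import Foundation
import CoreMIDI

// Create a MIDI client and publish a virtual source other apps can connect to.
var client = MIDIClientRef()
MIDIClientCreate("MyApp" as CFString, nil, nil, &client)

var virtualSource = MIDIEndpointRef()
MIDISourceCreate(client, "MyApp Out" as CFString, &virtualSource)

// A note-on is the same three bytes a 5-pin DIN cable would carry:
// status 0x90 (note on, channel 1), note 60 (middle C), velocity 100.
let noteOn: [UInt8] = [0x90, 60, 100]

var packetList = MIDIPacketList()
let packet = MIDIPacketListInit(&packetList)
MIDIPacketListAdd(&packetList, MemoryLayout<MIDIPacketList>.size, packet,
                  mach_absolute_time(),  // host-clock timestamp; 0 means "now"
                  noteOn.count, noteOn)

// Hand the packets to Core MIDI, which delivers them (timestamps included)
// to every app listening on this virtual source.
MIDIReceived(virtualSource, &packetList)
```

Whatever connects to that virtual source receives exactly those three status/data bytes, plus the timestamp.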
MIDI in an iPad is simulated via software, not unlike using software to emulate hardware synths. I would also add that there are variations in how MIDI is implemented from app to app, especially with MIDI sync. As to how close these emulations are to the official MIDI standards (a number of MIDI standards have been developed over the years in response to new technology), that must vary too. MIDI is a serial protocol, one message after another, whereas iOS does parallel processing with several things happening at the same time; these differences are a major consideration when trying to emulate MIDI on iOS.
Apps also vary in how much MIDI they recognize or respond to: some don't have clock, some don't do CC messages, sysex, or even note-off. Hardware can vary in how much MIDI it recognizes or responds to as well.
Quite a few developers have been using the MIDI software library created by the developer of MidiBus to implement MIDI in their apps.
If curious, you can use an app like Midiflow to see what MIDI messages are being sent to and between apps, or to MIDI hardware connected to your iOS device.
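And if you want to peek under the hood yourself, a bare-bones monitor is only a few lines. Here's a sketch of the idea in Swift (names invented; error handling and multi-packet lists skipped for brevity):

```swift
import Foundation
import CoreMIDI

// Minimal DIY monitor: log MIDI messages arriving from every source.
var client = MIDIClientRef()
MIDIClientCreate("MidiMonitor" as CFString, nil, nil, &client)

var inPort = MIDIPortRef()
MIDIInputPortCreateWithBlock(client, "Monitor In" as CFString, &inPort) { packetList, _ in
    // For brevity, dump only the first packet of each list as hex bytes.
    let packet = packetList.pointee.packet
    let bytes = withUnsafeBytes(of: packet.data) { Array($0.prefix(Int(packet.length))) }
    print("t=\(packet.timeStamp):",
          bytes.map { String(format: "%02X", $0) }.joined(separator: " "))
}

// Ask Core MIDI for every source it currently publishes (hardware or
// virtual) and connect our input port to all of them.
for i in 0..<MIDIGetNumberOfSources() {
    MIDIPortConnectSource(inPort, MIDIGetSource(i), nil)
}
```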
I'm in no way an expert, and maybe this doesn't even address the question adequately, but I think the inside of an iPad is the same as any digital device with a CPU. MIDI cables are just one way of getting the digital data (bits and bytes put together as messages that conform to the MIDI spec) from one digital device to another. It's just a different cable/adapter to connect to an iPad, but the same messages. "Inside" the iPad, an app must be programmed to operate on, send, and receive MIDI messages that conform to the spec, or it won't be compatible with other apps using MIDI. Otherwise, the programmer is working with data like any other, and can process, store, and retrieve it as they wish. Internally, an app can do whatever it wants to do, not limited to having to communicate with other apps/devices. So there's no limit on channels, and no problem with parameters unknown to MIDI, like for instance GeoShred with its unique expression.
^^
as @lovadamusic says
MIDI is the interface; by that I mean it is the definition of the messages and what those messages can contain.
How that interface is implemented is irrelevant, as long as any MIDI connection point (MIDI interface, virtual MIDI port, Core MIDI) uses messages that conform to the MIDI standard.
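To make "conform to the MIDI standard" concrete at the byte level, here's a purely illustrative Swift fragment:

```swift
// A note-on for middle C at velocity 100 on channel 1 is three bytes,
// whether it travels down a DIN cable or through a virtual port:
let message: [UInt8] = [0x90, 0x3C, 0x64]

let status   = message[0] & 0xF0         // 0x90 = note on
let channel  = (message[0] & 0x0F) + 1   // low nibble: channel 1-16
let note     = message[1]                // 0x3C = 60 = middle C
let velocity = message[2]                // 0x64 = 100
```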
@dwarman is the person to comment definitively on this, I'd say.
One of the mysteries is that GarageBand seems outwardly to eschew any acknowledgement that MIDI ever existed; it’s as if it does what it does using some other, superior, invented-at-Apple form of note description. That suspicion may not be the case, I now see.
Is it really a situation of each app having to send MIDI down a virtual piece of string to another app also connected to that piece of string? Or is there somewhere inside the iPad a kind of “atmosphere” or “gestalt” or “central nervous system” that “is” all the MIDI that is currently happening anywhere, to anything, inside the iPad at that particular slice of time? All the apps have to do is reach up and look at that atmosphere and see what’s going on as far as MIDI is concerned at this current tick of the clock. Or maybe use a periscope, or listen through straws, or smell the current MIDI aroma inside the iPad.
A midi cloud?
It would be easy for me to write a 300-page book about MIDI. Trouble is, nobody would enjoy reading it enough... I'm thinking about how to get it down to a couple of paragraphs here for you. Like coding, removing stuff is harder than creating it in the first place.
My question about MIDI on iPads is simple: why is it generally(*) so poor, given that it's a protocol that's existed for well over 30 years?
I'm still not getting much joy with MIDI sync between apps and DAWs, which is pretty frustrating given the quality of the individual apps.
(*) The obvious exception being Ableton Link, which is really great, but seems to me to be designed much more for live jamming than for actually composing whole tracks...
Or am I missing something?
Ah, actually a social question, not a technical one.
Very briefly: in the iPad synth and controller app ecosystem, we are seeing the results of about 20 years of loss of institutional knowledge about MIDI. Especially the loss of what used to be a rich set of use cases.
Today, the worst offenders appear to view the iPad as an ultra-configurable virtual box, i.e. a single synth or controller - but not both! - attached to other synths and controllers out in the Real World studio. True, the iPad 1 was capable of little more, but iPad capabilities have since advanced by at least an order of magnitude.
Perhaps due to my idiosyncratic background, I viewed the iPad as being much more: a single iPad can contain an entire virtual studio. So the use cases, from the starting point of using one particular box at a time up to an iPad containing multiple boxes and the communication networks between them, have grown exponentially, covering what all of the possible interconnects and interactions would look like, what they'd be useful for, what properties they'd have, and what user interaction interfaces they'd need.
And the box perspective pays little attention to how the box will behave in a network, if it pays any at all.
Wonderful sound, high-quality work; all the devs are way past my capabilities when it comes to the audio bit. But some of them clearly gave little thought to how their creation would be used in an actual band or studio environment. At the same time, there are others who really, really get it, and code their virtual boxes accordingly.
Unfortunately, at least with current CPU, GPU, and DSP architectures, there is no way for anyone but the dev to add communication capabilities after an app ships. If they missed something, they missed it. Some do pay attention to critiques and make the change, and even go on to invent way beyond what that took. And some do not. There are well-respected synths out there which shipped with defective MIDI designs and/or implementations years ago that are still unfixed. Critical ones, from an ecological perspective.
I think this is more than enough for now. I need to eat, anyway. This is all very hand-wavey; if you're wanting for concrete examples, I will unhappily provide them.