My current method for using headphones for mixing with iOS (Sonarworks - requires Mac/PC)
So, seeing requests for headphone recommendations prompted me to share my current setup, in the hope that more people will support this solution so we can have it ported to iOS with the new AU capability coming in iOS 9.
A couple of months ago I came across a company called Sonarworks through a post on SonicState, which I found while searching for an FRFR (full range, flat response) approach to mixing with headphones.
What Sonarworks recently released is a plugin for Mac and PC DAWs which applies a separate EQ curve to each channel (L/R) of a headphone. Sonarworks has a calibration method for deriving these measurements, and when you buy the plugin (after making sure your headphones are on their list), you can use a reference EQ curve averaged across all headphones of that model they have measured so far. Alternatively, you can pay a fee to send your headphones in for individual calibration, or they now also sell new pairs of headphones they have already calibrated.
I haven't decided between those last two options for my ATH-M50x's yet (sending them in, or buying a pre-calibrated pair from them and reselling mine), so for now I'm using the average EQ calibration for the ATH-M50x.
This plugin sits on the master track as the last item in your chain and is meant to be turned on only while mixing, or when critically listening to something as a reference. You would turn it off before you actually print the track, since it compensates for your own headphones.
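To make that concrete, here's a toy sketch of what a correction curve does. This is not Sonarworks' actual algorithm or measurement data: the band layout and dB values below are made up for illustration. The idea is simply that the plugin applies roughly the inverse of the headphone's measured deviation from flat, per channel, so the two cancel out.

```python
def correction_gains_db(measured_db):
    """Invert a measured deviation from flat (in dB) to get correction gains.

    Applying these gains on top of the headphone's own response yields an
    approximately flat result: measured + correction = 0 dB in every band.
    """
    return [-g for g in measured_db]

# Hypothetical left-channel deviation from flat, per band (dB),
# e.g. bands from low bass up to the highs
measured_left = [4.0, 1.5, 0.0, -2.0, 3.5]
correction_left = correction_gains_db(measured_left)

# Each band now sums to flat (0 dB)
residual = [m + c for m, c in zip(measured_left, correction_left)]
```

This is also why the plugin must be bypassed before bouncing: the correction is specific to your own headphones, not something that belongs in the printed mix.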
I bought the plugin immediately, have been using it for a few weeks now, and have been really impressed with it. One extra standout feature: once you reach their estimation of an FRFR environment, you can then simulate other headphones or speakers, such as the Yamaha NS10.
So, currently I'm using this plugin within Logic X > Apogee One > ATH-M50x.
Now that MusicIO has an AU, I've been able to bring it into Logic and, for the first time, get an FRFR view of the sounds it produces. Let me say, that process has been an emotional rollercoaster: some apps I swore sounded amazing came out awful, while others rose from the ashes as complete warriors. I don't necessarily want to make a complete list of who's who in that showdown, but I can say it is wildly enlightening. One universal thing I must say, though: every iOS music app needs its own master volume!
OK, so Sonarworks originally had a short list of supported headphones, but they are expanding it constantly; apparently people are sending in many different models, so they are gathering more data. They have now reached the point where they are beginning to review headphones for what they offer and the amount of "correction" that can be applied.
So far they've done the Audio-Technica ATH-M50x and the Sony MDR-7506 which I'll link here:
Now that I'm an owner and user of the plugin, I've asked them on Twitter if they would look at supporting an AU for iOS 9, so that when I want to, I can use an iOS mixing chain such as Auria > Apogee One > ATH-M50x with the same confidence in the results that I'm getting from my Logic chain now.
So, this is simply a request: if you have a Mac/PC DAW, test out the trial of the plugin, and if you like it, help me request an iOS 9 version as politely as possible.
They also offer a system (plugin and mic) for calibrating stereo monitors, which is the basis of this headphone version of their reference plugin.
Their main site is here:
Twitter is here:
https://twitter.com/Sonar_Works
My original tweet to them:
Comments
Interesting, anything that makes mixing on headphones more reliable is certainly something I would like to see.
They would need to support more headphones, though; those ATs are too bright for me (without calibration, that is). If they could support Focal Spirit Pros or Senn HD 650s, that would suit me. The video also mentions Beyer DT770s, but you would have to be out of your mind to mix on those; they are very far from neutral.
The DT 770s are closed cans and more meant for recording live instruments, but the DT 990s, with their open cans, sound very natural and are among the best headphones for mixing purposes.
Couldn't they just make a standalone app that goes in the Audiobus output (or FX) slot?
Then you could use Audiobus to route everything through their app before it hits your headphones.
Well, since they already have an AU, I believe it would be much easier for them to bring it over to the iOS AU spec than to create a brand new iOS app with all the fixings we enjoy.
I do believe that iOS 9 will need a simple app that can host plugins at different places throughout an Audiobus chain, and the best app already set up to do that is … Audiobus.
The reason is that any app attempting to do this needs to be a host-type platform itself, since iOS doesn't allow multiple instances of the same app.
If Audiobus can be a host which would allow multiple instances of AU plugins, that would be more ideal for that approach.
Not that a standalone version of the app wouldn't be interesting, but hosting options are usually superior for flexibility. On a Mac, you currently have to use something like Audio Hijack to put this at the end of all audio chains if you want it system-wide (not just in a DAW), which is annoying since Audio Hijack is not really designed for that. It would be great to have a better general host app on the Mac platform as well.
Does this modify only the frequency spectrum of each channel? To simulate speakers with headphones, I think it is necessary to introduce some crosstalk between the channels, because otherwise the perfect channel separation would lead to an unnatural stereo experience.
They do support Sennheiser HD650, but not Focal Spirit Pros yet it seems, though people have definitely been requesting them on gearslutz and kvr, so hopefully they add those soon.
Here's their current compatibility list:
http://sonarworks.com/headphones/supported-headphones/
And I agree: without the calibration, the ATH-M50x is definitely hyped for me on both the high end and the low end. I was having trouble finding the mids, which is what originally led me to search for calibration methods.
Also, I was looking to create a setup where I could get the same audio from both the iOS and Mac AU versions of Sunrizer; I was having a very hard time matching the two until I came up with this approach, where they now match in an A/B comparison.
A tip there: when you bring MusicIO into Logic and set up an audio track, be aware that Logic automatically creates two sends on all audio tracks, routed to delay/reverb buses, which will foil your attempts to A/B the two.
My approach there was to set up a system where I could make patches on the iPad version which could be faithfully represented in the AU version when I bring them over, and I'm glad I've achieved that, but it would be much more ideal to be able to do that without connecting the iPad to my Mac.
Since these iOS AUs will likely usher in multiple instances of music apps like Sunrizer, we might soon see most iOS music apps offering AUs for both iOS and Mac, and Mac plugins doing the same for iOS. I'm quite excited about the possibilities, depending on how it goes.
I believe it only applies a specific EQ curve for your headphones. The speaker simulation is only going to give you the general sound of that speaker, probably determined from an average of measured responses derived from their speaker-referencing kit.
It doesn't emulate the experience of listening to speakers, but I feel that as a mixer you should always be A/B-ing your mono and stereo anyway (which this plugin does offer as a quick toggle) to isolate issues susceptible to crosstalk, most likely phase cancellation.
And of course check your mix on any other speakers you have around.
So the speaker emulation in this case for headphones simply lets you focus your mixing on certain traditional frequency ranges useful for mixing/mastering, such as what you'd expect from an NS10.
I think to be clear I should have written "simulate" rather than "emulate" if I did anywhere on this page.
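That mono A/B check is easy to illustrate with a toy example: summing the channels cancels out-of-phase content, so anything that disappears in mono is at risk of phase cancellation on single-speaker playback. (The sample values here are just made-up illustrations.)

```python
def mono_sum(left, right):
    """Fold a stereo pair down to mono by averaging the two channels."""
    return [(l + r) / 2 for l, r in zip(left, right)]

# Identical channels survive the fold-down unchanged
in_phase = mono_sum([1.0, -0.5], [1.0, -0.5])      # [1.0, -0.5]

# Polarity-inverted channels cancel completely in mono
out_of_phase = mono_sum([1.0, -0.5], [-1.0, 0.5])  # [0.0, 0.0]
```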
For whatever reason, this question brings to mind this recent video by Ian Shepherd:
Thanks for the link; watching it currently. I've heard before that, because of the physiological properties of the human ear, headphones will never be able to replace monitor speakers. Perhaps I'm missing something fundamental, but I think in the end it is all about the sound waves hitting our eardrums, regardless of whether they were produced by headphones or speakers. Shouldn't it be possible to compute a headphone output that stimulates the eardrum exactly as if the sound came from monitors, via some kind of room simulation that also accounts for the direct sound reaching the other ear? As far as I understand, the geometry of our inner and outer ears is responsible for our ability to form a picture of a room and the placement of instruments on the x, y, and z axes. But that impression is ultimately rendered only by the movements of our eardrums, which could be stimulated in exactly the same way by headphones with some clever processing. Or is that a false assumption?
P.S.: I would understand that this is not possible in a perfect way right now, because of the limitations of algorithms and perhaps our knowledge of psychoacoustics. But I don't get why it should be physically impossible.
Hey guys,
We're looking into making our plug-in available to iOS apps. I can't promise anything so far, because it isn't an easy port; the UI will need to be optimized for touch interaction.
Currently our plug-in deals with AFR optimization, so HRTF needs to be handled by something else. The trick, however, is that you absolutely need to know the AFR of your transducer for a proper HRTF implementation; otherwise it works very unpredictably with various headphones, as you've probably heard. Currently we recommend using Isone TB for HRTF and will dedicate some R&D to it in the future.
Ask me if you have any further questions!
All the best,
Rudolfs
I see your point, but I think you are giving it far too much weight for practical reality. First, creating a speaker setup that completely eliminates the issues a room creates for your ears is very expensive and difficult for the average person, compared to using a decent set of calibrated headphones.
So, while there are advantages along the lines you mention, you would need an absolutely perfect setup to gain them without the disadvantages that would likely cancel them out in any lesser setup.
If you are suggesting that this plugin maker might consider "emulating" the perfect recording environment with headphones with the best possible setup, that sounds like a cool vision for their future development, though I don't know if they could afford it. Emulation is a very different beast than frequency curves.
The function of an NS10 isn't to sound "good" anyway; it is to sound as bad as the average consumer boombox of its time, but in as neutral a way as possible, allowing you to focus on the frequencies which translate to those devices.
And my take is this: I'm not sure a perfect speaker setup with perfect room treatment (or an emulation of one) will lead you to radically different mixing or mastering decisions than calibrated headphones will, if you use the many tools within today's DAWs, from loudness meters to stereo analyzers and many others, to verify your mix. There are also international loudness standards now if you want your work to be commercially available in certain markets.
The option this plugin gives you is a quick reference to NS10-like frequencies, and it works awesomely for my mixes. It is simply a reference point for ensuring your mix translates to lower-end consumer devices. Considering the age of most NS10s and what they need in order to be maintained, there is likely a lot of variance between individual aged NS10s, so I have to wonder what Sonarworks would really be trying to emulate other than the frequencies themselves.
Not that I don't get your point that there is some kind of "magic" to a room and a speaker, but the purpose of mixing/mastering is to reach a baseline that translates to all audio devices and their wildly different speakers of varying quality, and I believe headphones are more than capable of offering that translatability.
And it is becoming easier every year to test these things with Bluetooth connections in cars to check your headphone referenced mix in a vehicle, or many other consumer devices really. The focus is translation, not universal perfection for eardrum "magic" (which is likely completely unattainable for the vast majority of consumer speakers and headphones anyway).
That's simply my personal take, though. Just try not to fall into the corporation-driven notion that you need a $25,000 to $100,000 setup in order to "properly" mix and master music.
@AQ808 That's understandable. I only wondered whether there was in fact some physical reason preventing such a simulation. I've recently bought a pair of Yamaha NS7's and am really happy with them. I wouldn't even want to trade them for permanent headphone mixing, if that were possible at all.
But one thing often comes to my mind: as I've already mentioned, I think that with a little calculated crosstalk, the stereo experience with headphones would be much more realistic, even if that crosstalk isn't calculated with the highest fidelity. Of course, this could be done with some DIY, but I would welcome it in a ready-made plugin like the one introduced by Sonarworks.
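A minimal DIY sketch of that idea: feed a slightly delayed, attenuated copy of each channel into the opposite ear, the way a speaker's sound also reaches the far ear a moment later and a bit quieter. The gain and delay values here are arbitrary placeholders, not a calibrated model of real acoustic crosstalk.

```python
def crossfeed(left, right, gain=0.3, delay_samples=2):
    """Bleed an attenuated, delayed copy of each channel into the other.

    A crude stand-in for the acoustic crosstalk you get from speakers:
    each ear also hears the opposite speaker, later and quieter.
    """
    n = len(left)
    delayed_left = ([0.0] * delay_samples + left)[:n]
    delayed_right = ([0.0] * delay_samples + right)[:n]
    out_left = [l + gain * r for l, r in zip(left, delayed_right)]
    out_right = [r + gain * l for r, l in zip(right, delayed_left)]
    return out_left, out_right

# An impulse panned hard left now also shows up, delayed and quieter,
# in the right channel
l_out, r_out = crossfeed([1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0])
```

A real implementation would also low-pass the crossfed signal (the head shadows high frequencies more than lows), but the basic mixing step is this simple.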
Oh wow, somehow I didn't see that Rudolfs of Sonarworks had posted while I was writing that last comment.
I'm very appreciative of your efforts!
iOS 9 is a few months away anyway, and new versions usually take some time to stabilize before iOS musicians can trust them, so you definitely have time, but I'm glad that you are sincerely investing time into investigating it. Thank you!
I'm actually very glad you brought this up, as I had not fully realized its usefulness. It sounds like Sonarworks wants to R&D a solution in the future, so that is something to look forward to.
But definitely don't feel like you have to exchange one for the other. I simply want the best super-portable mixing environment so I can do more work in difficult situations, such as relatively noisy ones, and get more done on the go. I can get annoyed mixing in the same room all the time; I personally like to have a personal studio and an equally capable, fully portable solution to mix up my life as well.
Yes, that's a field with interesting perspectives, I think. IMO it is somewhat strange that we have fed our headphones the same signal as our speakers for decades. Even in analogue times it was quite possible to adjust the headphone signal a little to its different sonic properties. Seems almost like some traditional ignorance.
I bought sonarworks a few weeks ago and I'm using it with my ATH-M50x and it has resulted in my mixes sounding really really good. Love it.
What Sonarworks does is neutralize the default coloration of your headphones so that you can mix like a pro; that is, you start with a flat curve, meaning the amount of bass or highs you mix with is what actually comes out in the mix.
To understand this, consider an example: if you were mixing with Beats headphones, which have a bias toward heavy bass, you may end up with a bass-heavy mix because bass is enhanced in Beats headphones (at least, to my ears). With Sonarworks activated, your Beats headphones will have more of a flat response, so that you can add bass or highs as necessary.
I am not interested in the technology or physics of this system; all I know, in layman's terms, is that Sonarworks removes the coloration of your speakers or headphones so that you can decide how much of which frequencies you want in a mix. All the best studios in the world have calibrated monitoring systems, and this is what Sonarworks brings to headphones.
BTW, I'm using Sonarworks in Logic Pro X; all my iOS sounds go into Logic. This is how I work. My Alesis monitors might benefit from Sonarworks too, but as I love mixing with cans, I will not pay to have them calibrated.
It would actually be the other way around: if you mix with bass-heavy headphones, the exaggerated bass response fools you into creating a mix that is light on bass when played on other systems.
I tend to like warm-sounding headphones personally: not Beats-crazy levels of bass response, but warm like Sennheiser, rather than bright like Audio-Technica or AKG. So I've suffered from this problem: a mix that sounds good on my Senn 650s is too bright when played on speakers. I switched to Focal Spirit Pros to try to overcome this, and they are a little more reliable, but I still have to make sure my mixes sound very warm on the phones to ensure a balanced mix when played on speakers.
It's a shame that everything always sounds better on headphones, though. Maybe that's why neutral (i.e. boring) monitors are so effective: if a mix sounds good on them, it will be really awesome on a nice pair of headphones.