Is this possible with Audiobus (or at all on iOS)?
I'm trying to build an audio visualizer that listens to the device's output mix rather than a microphone. I achieved this relatively easily in Android using the following:
https://developer.android.com/reference/android/media/audiofx/Visualizer.html
The intended use / workflow is something like this:
1: User presses a "Start Visualization" button in my app.
2: User switches over to the YouTube App, Spotify, whatever - any app at all that generates any audio. There is no restriction at all on what's generating the sound. If the phone is playing it, my app is visualizing it.
3: My app uses the audio currently playing on the phone to perform the visualization.
The visualization runs on physical lights (Philips Hue) in the user's house. I do support the microphone on Android, but I consider it more of a gimmick / extra feature. The good visualization comes from Android's internal audio output mix.
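For what it's worth, the lights side is the simple part on any platform: each update is one HTTP PUT to the Hue bridge's local REST API. A rough Swift sketch, where the bridge IP, API username, and light ID are placeholders rather than real values:

```swift
import Foundation

// A rough sketch of the light side only: pushing one level (0.0...1.0) to a
// Philips Hue bulb through the bridge's local REST API. The bridge IP, API
// username, and light ID are placeholders; rate limiting and errors are ignored.
struct HueLight {
    let bridgeIP = "192.168.1.2"        // placeholder
    let username = "YOUR-HUE-API-KEY"   // placeholder
    let lightID = 1                     // placeholder

    func setBrightness(_ level: Double) {
        let bri = max(1, min(254, Int(level * 254)))   // Hue brightness range is 1...254
        let url = URL(string: "http://\(bridgeIP)/api/\(username)/lights/\(lightID)/state")!
        var request = URLRequest(url: url)
        request.httpMethod = "PUT"
        request.httpBody = try? JSONSerialization.data(withJSONObject: ["on": true, "bri": bri] as [String: Any])
        URLSession.shared.dataTask(with: request).resume()
    }
}
```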
I'm new to mobile development in general, so I figured I'd start with the Android side of things. I think the ease with which I achieved everything I was after left me completely unprepared for handling iOS. It has been a massive challenge at every step. I've been searching on and off for several days now and can't even accurately determine whether this same functionality is possible at all on iOS. After doing a bunch of reading and ending up here, I'm getting the distinct impression that the answer is "no".
Honestly - I'm just looking for a 100% definitive answer to this question:
Is there any way to visualize the phone's audio output mix on iOS? By visualize I mean "get FFT magnitudes, even if low quality and unsuitable for recording".
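To make the question concrete: the fallback I keep circling back to is a microphone tap, which on iOS looks roughly like the sketch below (an AVAudioEngine input tap feeding an Accelerate DFT). This is only a sketch: the FFT length and buffer size are arbitrary, and it captures the mic, not the output mix I actually want.

```swift
import AVFoundation
import Accelerate

// A rough sketch only: FFT magnitudes from the *microphone*, since iOS does not
// expose the system output mix to third-party apps. FFT length and buffer size
// are arbitrary; windowing, averaging, and mic-permission handling are omitted.
// NOTE: requires NSMicrophoneUsageDescription and a record-capable AVAudioSession.
final class MicVisualizer {
    private let engine = AVAudioEngine()
    private let fftSize = 1024
    private lazy var fftSetup = vDSP_DFT_zop_CreateSetup(nil, vDSP_Length(fftSize), .FORWARD)

    /// Calls `onMagnitudes` with fftSize/2 bin magnitudes for each captured buffer.
    func start(onMagnitudes: @escaping ([Float]) -> Void) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        input.installTap(onBus: 0, bufferSize: AVAudioFrameCount(fftSize), format: format) { [weak self] buffer, _ in
            guard let self = self,
                  let setup = self.fftSetup,
                  let samples = buffer.floatChannelData?[0] else { return }

            // Real input samples, zero imaginary part, zero-padded to fftSize.
            var inReal = [Float](repeating: 0, count: self.fftSize)
            var inImag = [Float](repeating: 0, count: self.fftSize)
            let n = min(Int(buffer.frameLength), self.fftSize)
            for i in 0..<n { inReal[i] = samples[i] }

            var outReal = [Float](repeating: 0, count: self.fftSize)
            var outImag = [Float](repeating: 0, count: self.fftSize)
            vDSP_DFT_Execute(setup, inReal, inImag, &outReal, &outImag)

            // Magnitude per bin; only the first fftSize/2 bins are unique for real input.
            var magnitudes = [Float](repeating: 0, count: self.fftSize / 2)
            for i in 0..<magnitudes.count {
                magnitudes[i] = (outReal[i] * outReal[i] + outImag[i] * outImag[i]).squareRoot()
            }
            onMagnitudes(magnitudes)
        }
        try engine.start()
    }
}
```

From there, the per-bin magnitudes (or a few averaged bands) would be what drives the lights.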
Bonus - since I'm becoming certain that the above is just not possible on iOS, could I use Audiobus to perform roughly the same workflow as described above with the "big time apps" like YouTube, Spotify, etc.? Do the "big apps" support Audiobus? The "compatible apps" page seems to suggest "no" as well.
I'm thinking that my particular app just isn't well suited to iOS, which is a huge bummer. I've been going back and forth with just doing a microphone implementation for iOS, but my Android users would be getting a totally different (and better) experience.
Can anyone help?
Comments
Hello!
I know there is a developer.audiob.us forum where the techie types will have more details. Your post here will be read mostly by end users of Audiobus. Some of us have happily been using Audiobus on iOS for several years.
I can give you a few answers to your questions, but this is not the definitive answer, just my understanding having followed the iOS music scene for quite a while.
in no particular order
1. This is a GREAT idea and would be very well-received if it can be done right. I don't believe there is much out there that will create visuals based on audio input in real time.
2. Rather than iOS system audio being the input, most of us around here would prefer Audiobus or AUv3, meaning the music we are creating live would be the sound generator. (Others may disagree, but if it is designed to work live, it can also work on a pre-recorded track played back live.)
3. Please, from the very beginning make sure MIDI control is implemented well. Musicians will take the app a lot more seriously if the video effects can be controlled or automated by external gear.
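For point 3, purely as an illustration of what "MIDI control" could mean in practice, a bare-bones CoreMIDI receive sketch (untested; the client/port names and the naive message parsing are placeholders) might look like this:

```swift
import CoreMIDI

// A minimal sketch, assuming one MIDI control-change message should drive one
// visual parameter (e.g. brightness). Names and the simplistic 3-byte parse
// (no running status, no SysEx handling) are placeholders for illustration.
final class MIDIControl {
    private var client = MIDIClientRef()
    private var inputPort = MIDIPortRef()

    /// `onControlChange` receives (controller number, value 0...127).
    func start(onControlChange: @escaping (UInt8, UInt8) -> Void) {
        MIDIClientCreateWithBlock("VisualizerMIDI" as CFString, &client, nil)
        MIDIInputPortCreateWithBlock(client, "VisualizerIn" as CFString, &inputPort) { packetListPtr, _ in
            for packet in packetListPtr.unsafeSequence() {
                let length = Int(packet.pointee.length)
                withUnsafeBytes(of: packet.pointee.data) { raw in
                    var i = 0
                    while i + 2 < length {
                        // 0xB0-0xBF = Control Change on MIDI channels 1-16.
                        if raw[i] & 0xF0 == 0xB0 {
                            onControlChange(raw[i + 1], raw[i + 2])
                        }
                        i += 3
                    }
                }
            }
        }
        // Connect every currently available MIDI source to the input port.
        for i in 0..<MIDIGetNumberOfSources() {
            MIDIPortConnectSource(inputPort, MIDIGetSource(i), nil)
        }
    }
}
```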
Those are my thoughts on this. Any more detailed discussion will have to come from some of our brilliant iOS developers.
I really hope you are able to get this going! And keep us "users/addicts" posted as you make progress if you don't mind.
I'll post a few examples:
Wizibel - this app creates its visuals offline rather than in real time. You enter your audio track and select the visual, then a video is created that can be viewed/saved/uploaded to the webz.
https://itunes.apple.com/us/app/wizibel-audio-visualizer/id1169944763?mt=8
https://itunes.apple.com/us/app/quantum-vj-hd/id1188572371?mt=8
Quantum VJ HD. I don't know a thing about this... but it looks similar to your idea. I don't love the output that much; I'd prefer waveforms and more organic-looking stuff.
I don't know if there is an app for this at the moment, since I'm ditching iOS for everyday use (keeping very few apps in my workflow/setup), but a while ago I proposed something similar and even asked @SecretBaseDesign about it.
https://forum.audiob.us/discussion/19113/appletv-audiovideo-fx-app
Thanks for the replies, everyone. Gives me somewhere to start.
Also, thanks for the developer link - that's where this post should have gone!
K-Machine is another app that can do this, and you can even write your own visualization code in addition to the ones it comes with. It supports IAA and Audiobus.
In terms of being able to integrate audio input from social media sources, you might want to look at the Workflow app by Apple. Apple bought the company out a couple of years ago, and it's designed to let users connect various services into their own workflows (similar to IFTTT). Your visualization app as part of the Workflow ecosystem would seem to be a good fit, and it would help your visibility on the App Store, especially if you integrate ARKit functionality.