iPad mini (and iPolysix) troubles
This discussion has been closed.
Comments
For me, another problem with the "256 fits all" solution is that the more powerful synths such as Animoog, Magellan & PPG Wave Generator have far more sophisticated algorithms driving their synthesis engines, which in turn take a far bigger hit on the CPU. Some Magellan patches wipe out an iPad 3's CPU in isolation, without any other background tasks (never mind earlier hardware). It's not a matter of those developers optimising their code; if you want that type of sound quality, the CPU has to do the math to deliver it!
The iPad has been criticised by 'serious' music people as being nothing more than a toy, but the innovative nature of the independent iOS development community has delivered instruments that have made the traditionalists sit up and take notice. The expressive nature of the likes of Animoog, Magellan & the PPG rivals that of modular synthesis systems costing many thousands of dollars, especially those patches that rely on the unique touch interface of iOS devices.

Incidentally, at the moment the only iOS sequencer capable of capturing a performance from something like Animoog is Genome, as it's the only one that can capture all the controller information (CCs) needed to play back the performance accurately. There's far more going on underneath the hood than simple note messages & velocity information. The truth of the matter is that music technology products have layers of complexity by their very nature, and those people who make an effort to understand what's going on underneath the hood get the most back from their investments.
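To make the "more than note messages" point concrete, here is an illustrative sketch (not taken from any app's actual source) of the raw bytes behind two common MIDI channel messages. An expressive patch generates a continuous stream of Control Change (CC) messages alongside each note, and a sequencer that only records Note On/Off data loses all of that:

```python
# Sketch of raw MIDI channel message bytes (per the MIDI 1.0 spec).
# Status byte = message type (high nibble) | channel (low nibble).

def note_on(channel, note, velocity):
    """Note On: status 0x90 | channel, then note number and velocity."""
    return bytes([0x90 | channel, note, velocity])

def control_change(channel, controller, value):
    """Control Change: status 0xB0 | channel, then controller number and value."""
    return bytes([0xB0 | channel, controller, value])

# A single note is just three bytes...
print(note_on(0, 60, 100).hex())  # middle C, velocity 100 -> "903c64"

# ...but an expressive gesture (here CC 1, the mod wheel) is a whole
# stream of messages that must also be captured for accurate playback:
for value in (0, 32, 64, 96, 127):
    print(control_change(0, 1, value).hex())
```

Each wiggle of a touch control can emit dozens of such CC messages per second, which is why only a sequencer that records full controller data can reproduce the performance.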
On my desktop system I moved from Logic to Ableton because I appreciate the simplicity of Ableton's approach to delivering powerful solutions. I hope that Audiobus takes a lead from Ableton's approach to simplicity rather than being overly simplistic, following the design/functionality cues of something like GarageBand. Audiobus customers have far more sophisticated needs than the average GarageBand user, even those who are new to the world of music applications; that sophistication is inherent in the possibilities delivered by Audiobus.
jm
http://soundcloud.com/leftside-wobble
This reminds me of the early days when the first Mac came out in the early 1980s, when you had to keep inserting a floppy into the drive to do even some of the most basic things because there was only so much you could do with 128K of RAM and no hard drive. People didn't complain because it was the only game in town. The users quickly formed user groups where they'd share information on how to leverage the most out of their Macs. A significant portion of this activity revolved around how to customize the OS to their taste, such as modifying system icons and messages, in addition to the more practical issues of how to run apps with limited resources without crashing the system.
I can understand why the Audiobus developers would want the music app developers who come on board to have a commitment to low latency, as it serves as a lowest common denominator for how Audiobus functions. Apps that don't meet these latency requirements can gum up the whole works and cause sound dropouts, crashing apps, and other highly undesirable behaviors.
By the same token, there must be some recognition of who will be using Audiobus and the nature of their culture. As others have pointed out, some kinds of audio inherently require more resources to create than others. Users want that functionality and will go to great lengths to find workarounds to accomplish their goals. I think music creators would prefer not to be burdened with figuring out how to navigate these workarounds, but will do so if there aren't other viable options.
I think the Audiobus development team and their music app partners would be wise to recognize and accommodate the trade-offs between complexity and high resource use on one hand and ease of use on the other. In these situations it's nice not to have to delve into the intricacies of an app if you don't need to in order to create your music, but it's nice to have that option if you do. Having to do lots of research to figure out how to get around developer-imposed limitations, or dealing with competing standards that aren't compatible, can be really annoying.
In some respects Audiobus attempts to be all things to all people while keeping the user experience as simple as possible, so users can focus on creating music rather than learning how to manipulate software. Perhaps the team can look to the development and implementation of MIDI as an example of how challenging it can be to meet these goals?
If the Audiobus team can't find a way to accommodate the diversity of iOS musicians, competing apps and standards will be put forth to satisfy user needs. This can get awfully messy, so hopefully places like this forum, as well as feedback from users who just want it to work, will lead to solutions that meet everyone's needs.
Hey, wow, sooo many comments after just a few hours!
I don't want to step into the AudioBus discussion since I'm too busy with some development stuff ;-) Just a few additional notes regarding NLog which may help:
There was a request to let the user tweak the buffer size in NLog. This is something which I am considering for an update.
Another question was whether NLog can be stopped from running after it has set the buffer size. You can do that, but according to my tests, the buffer size could then be changed by other apps again. So to be safe, you need to keep it running. Of course, NLog's master effects will eat some CPU; a good idea is then to make a patch where all of them are set to bypass.
The buffer size logic in NLog was not meant to cheat AudioBus, but to make NLog run best on each device. NLog's new recording mode made it necessary to have a good default for each device. Since in recording mode other apps cannot change the buffer size later on, I felt a new 'responsibility', in contrast to before, when NLog was just a playback app.
All this was discussed with Michael while testing. And of course, it's up to him and Sebastian to decide. However, I think that 512 isn't too bad in terms of latency. An option here wouldn't be a bad thing.
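For context on the 256-vs-512 debate, the latency contributed by one hardware buffer is simply frames divided by sample rate. A rough sketch of that arithmetic, assuming the 44.1 kHz sample rate standard on iOS devices (actual round-trip latency also depends on hardware and input buffering, so treat these as lower bounds):

```python
# Output latency of one audio buffer: frames / sample_rate.
# 44.1 kHz is an assumed sample rate, typical for iOS audio of this era.

SAMPLE_RATE = 44_100  # Hz

def buffer_latency_ms(frames, sample_rate=SAMPLE_RATE):
    """Milliseconds of latency contributed by one buffer of `frames` samples."""
    return frames / sample_rate * 1000

for frames in (256, 512, 1024):
    print(f"{frames:>5} frames -> {buffer_latency_ms(frames):5.1f} ms")
# 256 frames ->  5.8 ms; 512 -> 11.6 ms; 1024 -> 23.2 ms
```

So moving from 256 to 512 frames roughly doubles per-buffer latency from about 5.8 ms to about 11.6 ms, which is still within the range many players find acceptable for live performance.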
All opinions have been noted, but this public forum is not the place to solve this since there are certain things that we can't discuss in public (future features of Audiobus, limitations of sound engines of certain apps, etc.).
If you have any questions, please email me at sebastian at audiob dot us
If you're a developer please comment on the developer forum if you feel like you didn't get the chance to voice your opinion.
I'd like to assure everyone that we're listening to what has been said and that we are going to look to find the best solution.
Finally let me state that I've spoken with KORG about this since this is an issue caused by iPolysix. Sadly, we didn't have a chance to beta test iPolysix before its release to avoid this. But we're in contact with KORG to see what can be done about it. So far they seem to be a bunch of pretty awesome people.
I'm going to close this thread here since I feel like I would need to keep up with what's been said and there simply isn't enough time left in the day to do it properly.