A Hypothetical AB Forum Synth

As an experiment....

A hypothetical developer is going to build a new synth, and is asking AB Forum members to provide ideas for the features the new synth will have.

Every idea will be implemented in the final App.

The purpose of this thread is to test the boundaries of "Feature Creep".

No idea is too insane... go nuts.

The new synth will have....

A live audio, auto-capture, waveform/sample oscillator... Play sounds from other synths or sources of audio input. Hold down a capture button in the oscillator editor for the desired length while the sound is playing, and it becomes the waveform played by the oscillator. This oscillator can capture single wave cycles for simple oscillator waves, and auto-build wavetables from a series of captured single wave cycles. It can auto-adjust end points to create perfect loops. The capture engine can be programmed to sample and arrange waves using all sorts of capture algorithms, including taking wave samples from multiple different synths and sound sources and combining them as the user defines... The captured waves are playable instantly after being captured. Full waveform editing is available if desired too.
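
A rough sketch of how the "auto-adjust end points to create perfect loops" part could work: snap the captured buffer to upward zero crossings so the loop joins without a click. The function name and the plain [Float] buffer are illustrative assumptions, not anything from a real app.

```swift
// Hypothetical sketch: trim a captured buffer so it loops cleanly by
// snapping the start and end points to upward zero crossings.
func snapToLoopPoints(_ samples: [Float]) -> ArraySlice<Float> {
    guard samples.count > 2 else { return samples[...] }

    // First upward zero crossing (sample goes from negative to non-negative).
    guard let start = (1..<samples.count).first(where: {
        samples[$0 - 1] < 0 && samples[$0] >= 0
    }) else { return samples[...] }

    // Last upward zero crossing after the start point.
    guard let end = ((start + 1)..<samples.count).reversed().first(where: {
        samples[$0 - 1] < 0 && samples[$0] >= 0
    }) else { return samples[...] }

    // Looping start..<end avoids a click, because the waveform value
    // (and roughly its slope) match at both ends of the loop.
    return samples[start..<end]
}
```

A real capture engine would presumably also crossfade the join and look for a loop length near one pitch period; the zero-crossing snap is just the simplest version of the idea.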

The new synth will have....

Retrospective MIDI recording, so the user can save any ideas they may have played without having to press Rec and play them again.
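
Retrospective recording is essentially a rolling buffer of timestamped events that is always on. A minimal Swift sketch, with made-up types (MIDIEvent, RetrospectiveRecorder) standing in for whatever a real app would use:

```swift
import Foundation

// Minimal sketch of retrospective capture: every incoming event is kept in a
// rolling buffer, so "save what I just played" is only a copy of that buffer.
// The event type and the default window length are illustrative assumptions.
struct MIDIEvent {
    let timestamp: TimeInterval   // seconds since some shared reference clock
    let bytes: [UInt8]            // raw MIDI message, e.g. [0x90, 60, 100]
}

final class RetrospectiveRecorder {
    private var events: [MIDIEvent] = []
    private let window: TimeInterval      // how far back we remember, in seconds

    init(window: TimeInterval = 300) { self.window = window }

    // Called for every incoming event, whether or not "record" is armed.
    func receive(_ event: MIDIEvent) {
        events.append(event)
        let cutoff = event.timestamp - window
        events.removeAll { $0.timestamp < cutoff }   // drop anything too old
    }

    // "I liked that": return the recent history, re-based to start at zero,
    // ready to be written out as a clip or a MIDI file.
    func capture() -> [MIDIEvent] {
        guard let first = events.first else { return [] }
        return events.map { MIDIEvent(timestamp: $0.timestamp - first.timestamp,
                                      bytes: $0.bytes) }
    }
}
```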

Comments

  • The best synths I’ve used have a focused feature set. They do one set of things really, really well.

  • @NeuM said:
    The best synths I’ve used have a focused feature set. They do one set of things really, really well.

    In that case. The new synth will have....

    A modular mode where the user can include modules to create a focused feature set.

    The user will have full control over interface design for their focused feature set synths.

    The synth will be able to save focused feature set synths as presets. The one App can be any number of synths.

  • @horsetrainer said:

    @NeuM said:
    The best synths I’ve used have a focused feature set. They do one set of things really, really well.

    In that case. The new synth will have....

    A modular mode where the user can include modules to create a focused feature set.

    The user will have full control over interface design for their focused feature set synths.

    The synth will be able to save focused feature set synths as presets. The one App can be any number of synths.

    Already have that. It’s the miRack eurorack system app or Drambo app.

  • Because design-by-committee has never produced an inferior product!

  • The new synth will have....

    A built-in programming language and compiler with a WYSIWYG drag-and-drop GUI interface builder... So people who want to experiment with programming their own effects, modules, and features can do it all within the one synth.

  • @NeuM said:

    @horsetrainer said:

    @NeuM said:
    The best synths I’ve used have a focused feature set. They do one set of things really, really well.

    In that case. The new synth will have....

    A modular mode where the user can include modules to create a focused feature set.

    The user will have full control over interface design for their focused feature set synths.

    The synth will be able to save focused feature set synths as presets. The one App can be any number of synths.

    Already have that. It’s the miRack eurorack system app or Drambo app.

    This is a hypothetical synth that doesn't yet exist, and can have any features imaginable. Thus it can't possibly already exist, because it is based on a principle of infinite potential.

    Care to add a feature?

  • @celtic_elk said:
    Because design-by-committee has never produced an inferior product!

    This must be a group effort, otherwise we can't test the boundaries of feature creep.

    I'm debating if GPS can somehow be incorporated into sound design.

  • Where there's no real difference between 'oscillators/generators/modulators/processors'.

    Oscillator/LFO/Envelope/Sequencer are in practice 'the same', a waveform cycle or a 10+ minute sample with different playback options (one-shot/loop) with optional quantized output and input to control the phase position or multiply/add/subtract the input.

    Sure this would throw algorithmically created waveforms out the door but then again I love to finger-paint stuff...
    One 'app' that is a gift that keeps on giving and surprising me is Drambo, I feel like I can build just about anything :sunglasses:
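
A minimal sketch of the "no real difference between oscillator/LFO/envelope" idea above: one table of samples read back at a chosen rate, looping or one-shot. At audio rate it behaves like an oscillator, at sub-audio rate like an LFO, and played once like an envelope. The type and field names are invented for illustration:

```swift
// Sketch of one "generator to rule them all": a table of samples read back
// at a chosen rate. The names and fields are invented for illustration.
struct FunctionGenerator {
    var table: [Float]        // one wave cycle, or minutes of sampled audio
    var rateHz: Double        // passes through the table per second
    var loops: Bool           // true = oscillator/LFO, false = one-shot envelope
    var phase: Double = 0     // current read position, 0...1

    mutating func next(sampleRate: Double) -> Float {
        guard !table.isEmpty else { return 0 }
        if !loops && phase >= 1 { return table[table.count - 1] }  // hold the last value
        let index = Int(phase * Double(table.count)) % table.count
        let value = table[index]
        phase += rateHz / sampleRate
        if loops { phase = phase.truncatingRemainder(dividingBy: 1) }  // wrap around
        return value
    }
}
```

The same struct covers the whole list: rateHz at 440 is a tone, at 0.5 it's an LFO, and loops == false over a long table is an envelope; quantising the output or writing into phase from another generator would cover the sequencer and phase-control parts.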

  • @horsetrainer said:

    @celtic_elk said:
    Because design-by-committee has never produced an inferior product!

    This must be a group effort, otherwise we can't test the boundaries of feature creep.

    I'm debating if GPS can somehow be incorporated into sound design.

    Yes. It would need to be a community-sourced effort. When an artist plays in a town, a network of followers' cell phones continuously send GPS data to the sequencer module of the synth. This is then used by a neural engine to generate the sequences for the piece being played. (Oh, and it sells the data to advertisers to help support the tour.)

  • @horsetrainer said:

    @celtic_elk said:
    Because design-by-committee has never produced an inferior product!

    This must be a group effort, otherwise we can't test the boundaries of feature creep.

    You do realize that you've invalidated the experiment by stating that up front, right?

  • It needs to import AB user posts as wavetables.

    I want my synth to spit out what it sounds like when combining mozaic help posts, current app sales posts, and any snarky comment regarding the ar909 kick.

    Also make it IAA only, and include those grumbles

  • I’d like it to have a Siri-like feature to help me solve problems (yikes, I forgot... I have no problems!)...
    Well then... to teach me new stuff, like a personal mentor, so I can have problems.

  • @NeonSilicon said:

    @horsetrainer said:

    @celtic_elk said:
    Because design-by-committee has never produced an inferior product!

    This must be a group effort, otherwise we can't test the boundaries of feature creep.

    I'm debating if GPS can somehow be incorporated into sound design.

    Yes. It would need to be a community-sourced effort. When an artist plays in a town, a network of followers' cell phones continuously send GPS data to the sequencer module of the synth. This is then used by a neural engine to generate the sequences for the piece being played. (Oh, and it sells the data to advertisers to help support the tour.)

    Brilliant!

    The neural engine could track audience interest by detecting if they are in one place presumably listening, or walking out to their cars and driving off. Once the data reveals what interests the audience, new sequences can be constructed based on algorithms extrapolated from the evolving dataset.

  • @LinearLineman said:
    I’d like it to have a Siri-like feature to help me solve problems (yikes, I forgot... I have no problems!)...
    Well then... to teach me new stuff, like a personal mentor, so I can have problems.

    Like a Siri-type composing assistant, so if you play something imperfectly, you can ask it to fix it.
    The AI composing assistant will be able to sing too. So people who can't sing can add vocals to their music. You just have to tell the composing assistant the words, then play the melody to be used.

  • @Samu said:
    Where there's no real difference between 'oscillators/generators/modulators/processors'.

    Oscillator/LFO/Envelope/Sequencer are in practice 'the same', a waveform cycle or a 10+ minute sample with different playback options (one-shot/loop) with optional quantized output and input to control the phase position or multiply/add/subtract the input.

    Sure this would throw algorithmically created waveforms out the door but then again I love to finger-paint stuff...
    One 'app' that is a gift that keeps on giving and surprising me is Drambo, I feel like I can build just about anything :sunglasses:

    How about adding a brain wave interface...

    Place sensors on your head in different spots, then use the brain waveforms from the different sensors to make sounds in the audio oscillator, and other waves as modulation sources.

    You could go to sleep at night wearing the sensors, and it would create music all night long made from the brain waveforms of your dreams.

    They already do that with biofeedback as part of physical therapy and psychotherapy. This would just be a reapplication for the purpose of making music.

  • It could probably use a breathalyzer or something to turn itself off when I'm drunk and making shit music. Or at least tell me to drink some water and go to bed.

  • What would be helpful is to have phases:

    1. capture feature requests
    2. prioritize requested features using a voting/poll phase
    3. open comment phase to help more people see the value of a feature that isn't rated (pitching a developer on the benefits to make the app unique)

    Then wait a couple of years, unless someone like @BramBos (who has a large body of synth apps available to be re-purposed for the task) picks it up, and expect people to throw additional feature requests into the intro thread.

    I'm not sure there's an app that was developed by this process. Maybe someone with deep knowledge of music app history will share any relevant experiences.

  • Maybe use Siri to locate a preset from any of the installed apps that comes closest to the one that is being heard?
    This would require a sonic fingerprint of every single preset from every app installed on the device.

    Like

    Hey Siri, open the preset that sounds the closest to the lead/bass/pad sound in the currently playing piece of music...

    Or when looking for samples...
    Hey Siri find the snare drum from my library that sounds closest to the snare in the currently playing song...

    OR to kick it up a notch, Siri, find a set of drum samples that roughly match the currently playing beat and generate a midi-file and open it up in...

    Or, Hey Siri, analyze the currently playing sound, find out the synthesis method used, select an app and create a rough copy of the sound using its available synthesis parameters...

    Well, maybe this will be here in 10-15 years or so; for the most part the 'virtual assistants' are pretty useless for anything other than recipes, time, weather, directions and online shopping :sunglasses:
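
Feature extraction aside, the lookup half of the "sonic fingerprint" idea is just nearest-neighbour search over small per-preset feature vectors. A hedged sketch with invented types, leaving the actual audio analysis unspecified:

```swift
// Sketch of the lookup side only: every preset carries a small feature vector
// (band energies, brightness, attack time, ...) and a query sound is matched
// by smallest squared distance. All names here are invented for illustration.
struct PresetFingerprint {
    let appName: String
    let presetName: String
    let features: [Double]    // assumed to have the same length for every preset
}

func closestPreset(to query: [Double],
                   in library: [PresetFingerprint]) -> PresetFingerprint? {
    func distance(_ a: [Double], _ b: [Double]) -> Double {
        zip(a, b).map { ($0 - $1) * ($0 - $1) }.reduce(0, +)
    }
    return library.min { distance($0.features, query) < distance($1.features, query) }
}
```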

  • @McD said:
    What would be helpful is to have phases:

    1. capture feature requests
    2. prioritize requested features using a voting/poll phase
    3. open comment phase to help more people see the value of a feature that isn't rated (pitching a developer on the benefits to make the app unique)

    Then wait a couple of years, unless someone like @BramBos (who has a large body of synth apps available to be re-purposed for the task) picks it up, and expect people to throw additional feature requests into the intro thread.

    I'm not sure there's an app that was developed by this process. Maybe someone with deep knowledge of music app history will share any relevant experiences.

    This is an imaginary synth.

    The idea is to add the features you want it to have... No matter what those features are.

    It's all about the feature creep. But people can add good ideas, silly ideas, or both.

  • @Samu said:
    Maybe use Siri to locate a preset from any of the installed apps that comes closest to the one that is being heard?
    This would require a sonic fingerprint of every single preset from every app installed on the device.

    Like

    Hey Siri, open the preset that sounds the closest to the lead/bass/pad sound in the currently playing piece of music...

    Or when looking for samples...
    Hey Siri find the snare drum from my library that sounds closest to the snare in the currently playing song...

    OR to kick it up a notch, Siri, find a set of drum samples that roughly match the currently playing beat and generate a midi-file and open it up in...

    Or, Hey Siri, analyze the currently playing sound, find out the synthesis method used, select an app and create a rough copy of the sound using its available synthesis parameters...

    Well, maybe this will be here in 10-15 years or so; for the most part the 'virtual assistants' are pretty useless for anything other than recipes, time, weather, directions and online shopping :sunglasses:

    Still. Those are all good ideas.

  • I don’t really care about other features as long as it’s iPad only. Call it a smugness or discrimination module if you want to market it as a feature.

  • @wim said:
    I don’t really care about other features as long as it’s iPad only. Call it a smugness or discrimination module if you want to market it as a feature.

    Can it start as iPad-only and, after getting it right, ship a desktop product like Mark Watt did with SpaceCraft Granular? Or did that frustrate you because of the split in attention to maintenance and feature tweaking?

  • 1) Random mode... but with the choice over every single element which could be randomised (including layout, UX and colour scheme... because, why not?)

    2) MPE

    3) Full use of touchscreen to draw waveforms and envelopes. Also assignable XY pad

    4) Patches: fully renamable; bank creation (including "favourites"); easy patch/bank sharing

    5) Perfect (of course) emulations of every classic vintage synth filter (Moog, Oberheim, Korg, Roland, Elka, ARP, Buchla etc)

    6) Fully configurable FX setup (eg distortion, chorus, wah, overdrive, fuzz, reverb, ring modulator, phaser, tremolo etc) which can also be used as a stand-alone hosted AU

    7) Low CPU/resources

    8) Distinct monophonic and polyphonic modes (to emulate behaviour of old-school mono synths)

    9) Full MIDI control/mapping over every parameter

    10) In-built and accurate MIDI clock which can be used to drive other internal apps (incl via AUM) and also external hardware

    I shall leave out a looper... because "Loopy Pro".

  • @wim said:
    I don’t really care about other features as long as it’s iPad only. Call it a smugness or discrimination module if you want to market it as a feature.

    While we’re at it let’s make it an M1-exclusive

  • It can leverage all of those iPhones in the audience as true spatial voices.

  • Feed it a wav file and it generates the closest possible synthesized version. Feed it two sounds and it produces morphability between them.

  • Dynamically takes the average of all of the heart rates in the room to generate an evolving tempo.

    Would need an Apple watch tie-in.
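
The tempo part of this is simple arithmetic: average the incoming heart rates and ease the current tempo toward that average so it evolves rather than jumps. A tiny sketch, with the Apple Watch and networking side waved away:

```swift
// Sketch: ease the song tempo toward the crowd's average heart rate.
// The smoothing factor and the idea that heart rates arrive as plain numbers
// (rather than via HealthKit/networking) are assumptions for illustration.
func updatedTempo(currentBPM: Double,
                  heartRates: [Double],
                  smoothing: Double = 0.1) -> Double {
    guard !heartRates.isEmpty else { return currentBPM }
    let target = heartRates.reduce(0, +) / Double(heartRates.count)
    // Move a fraction of the way toward the crowd average on each update,
    // so the tempo evolves instead of jumping.
    return currentBPM + (target - currentBPM) * smoothing
}
```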

  • @NeonSilicon said:
    Dynamically takes the average of all of the heart rates in the room to generate an evolving tempo.

    Would need an Apple watch tie-in.

    That's crazy! People would want to participate in that because they become part of the music.... As a group experience.

    It would be music that evolves based on any number of data types you could monitor from each individual.

    People are the musical instrument.

    This is like Blade Runner future stuff... :)

  • It offers an optional "face" interface via the camera, in a way akin to what the theremin did with human hands.

    Vertical Angle of head = pitch (stepped in a scale or continuous)
    Horizontal direction of head = vibrato/tremolo
    Status of eyelids = volume
    Status of mouth = spectral filter (wah-wah, formants)
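
Assuming some vision layer already delivers normalised face metrics in the 0...1 range (the type and field names below are invented), the mapping described above is mostly just scaling:

```swift
// Sketch of the mapping only; it assumes a vision layer that already hands us
// normalised face metrics in 0...1. All type and field names are invented.
struct FaceMetrics {
    var headPitch: Double       // 0 = looking down, 1 = looking up
    var headYaw: Double         // 0 = turned far left, 1 = turned far right
    var eyeOpenness: Double     // 0 = closed, 1 = wide open
    var mouthOpenness: Double   // 0 = closed, 1 = wide open
}

struct FaceSynthParams {
    var noteNumber: Int         // MIDI note, could be snapped to a scale later
    var vibratoDepth: Double    // 0...1
    var volume: Double          // 0...1
    var filterCutoffHz: Double
}

func faceToParams(_ face: FaceMetrics) -> FaceSynthParams {
    // Vertical head angle picks the pitch over roughly two octaves.
    let note = 48 + Int((face.headPitch * 24).rounded())
    // Turning the head away from centre adds vibrato.
    let vibrato = abs(face.headYaw - 0.5) * 2
    // Eyelids control volume; opening the mouth opens the filter (a rough wah).
    let cutoff = 200 + face.mouthOpenness * 4800    // about 200 Hz ... 5 kHz
    return FaceSynthParams(noteNumber: note, vibratoDepth: vibrato,
                           volume: face.eyeOpenness, filterCutoffHz: cutoff)
}
```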

  • Three from me:

    • it should have a "slightly random" mode so I can save something as MIDI, then have it play back slightly differently every time. Kind of like the "humanise MIDI" processors, but also making subtle changes to the actual sound rather than just e.g. velocity and timing (see the sketch after this list)
    • it should save phrases that every user worldwide is creating, then let anyone pull them in randomly and use them in their own composition. Think of a giant Lego box of sounds & phrases that's always full of good & bad stuff, is free for everyone to use, and people can vote on them and categorise them however they see fit. Yep I'm aware we could never vote and categorise everything that gets created, so the emphasis would be on finding something out there in the ether that sounds magical for whatever you're creating
    • following on from the previous point, everything this synth creates should be copyright-free, and free for anyone to use however they see fit. The licence for this synth should specifically prevent you from copyrighting anything you create using the synth. From the music industry perspective, if you're using this synth you can sell access to live performances, but all forms of recording must be freely available
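
For the "slightly random" mode in the first bullet, a minimal sketch: each playback pass adds small random offsets to timing, velocity and one sound parameter, so no two passes are identical. The PhraseNote type and its timbre field are illustrative assumptions:

```swift
// Sketch of "slightly different every time": copy the saved phrase and nudge
// timing, velocity and one sound parameter by small random amounts per pass.
// PhraseNote and its timbre field are invented stand-ins.
struct PhraseNote {
    var startBeats: Double
    var velocity: Int         // 1...127
    var timbre: Double        // 0...1, e.g. filter cutoff or wavetable position
}

func humanise(_ phrase: [PhraseNote], amount: Double = 0.05) -> [PhraseNote] {
    phrase.map { note in
        var n = note
        n.startBeats += Double.random(in: -amount...amount)                 // timing drift
        n.velocity = min(127, max(1, n.velocity + Int.random(in: -8...8)))  // dynamics
        n.timbre = min(1, max(0, n.timbre + Double.random(in: -amount...amount)))  // subtle sound change
        return n
    }
}
```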