
A question on AI for anyone with current background


Comments

  • @cyberheater said:

    @ervin said:
    I expect universal basic income to become a much more credible proposition than it is today

    It has to happen or there will be riots.

    It's funny that governments are pushing the retirement age up and up, and yet there's a high chance that in 20-30 years most jobs will be done by machines. I've often wondered how that is meant to work.

    There are riots, looting and property destruction in the large coastal cities in the US daily and none of it has anything to do with "A.I." replacing people. A subsidy (which is what UBI represents) creates more of the thing it allegedly aims to solve. Markets based on supply and demand solve these issues more effectively than any taxpayer giveaways.

  • @monz0id said:

    @BerlinFx said:
    And as @SevenSystems says, for translation the game is nearly over

    Not completely, Google already tried that. Some companies here used Google Translate for their bilingual English/Welsh websites, but there were so many translation errors (it's a very, very tricky language to translate) and grammatical mistakes that the outcry forced them to do it properly. And it's even more important for critical information, such as health websites.

    I can see it being a useful tool, but it will need to be checked by experienced translators and editors before it's used in public.

    I can see AI being an even more useful tool for lawyers:

    https://www.theguardian.com/sport/2023/apr/22/michael-schumacher-formula-one-interview-die-aktuelle-editor-sacked

    Some languages are more difficult than others, but they'll all be cracked eventually.

  • edited April 2023

    @cyberheater said:

    @Artj said:
    Regarding using AI in the creative process, I still can't get rid of my uneasy feeling about it. Are we fooling ourselves in thinking this will help? I agree that old art influences new art. But new art is not just a new combination of old fragments, is it? I'm still convinced that deep down the current AIs are just clever statistical/random algorithms. Huge data, obviously, but art is not just an algorithm, or is it? If not, and if something cannot be transcribed into code, then won't these algos just produce the same kind of art once everyone moves to using AI, merely expanding the data library with new combinations of old fragments?

    I hope I'm wrong though because I really love all kinds of automatic processes!

    I’ve tried Midjourney and the AI output is astonishing. Every bit as good as what a human can do.
    I've heard at least one Chinese game studio is letting go of all their creatives because they can use these tools instead.

    The other thing to consider is that it doesn't matter how incredibly intelligent current AI is, it will only get smarter. They talk about super-intelligence. It's conceivable that in 5 to 10 years there may be a general AI that is smarter than us by an order of magnitude. If that AI were ever given agency, then we're all in big trouble.

    Keep in mind, everyone being let go by that company could also start a competing company. It's just a matter of people being proactive and aggressively using these new tools.

  • edited April 2023

    @BerlinFx said:
    Actually, the results you get from ChatGPT 4 are all about how well you, as a human, can tell it what to create in simple words. If you don't know what you want and how to explain it, you can also get very bad results from ChatGPT 4.

    So it's also about your skill at explaining things clearly. English is probably a more direct language for using AI than my mother tongue (French) or philosophical German words. I think it is an interesting point for AI research and use.

    Yes, right now you need to treat Midjourney and Chat GPT as brilliant assistants. If you give them "too much" detail you'll be dissatisfied with the results. If you leave room for interpretation by the systems, you'll (more often than not) be pleasantly surprised by the results.

  • @SevenSystems said:

    @ervin said:

    @SevenSystems said:

    @ervin said:

    @SevenSystems said:

    No worries, I'm running all your posts through GPT first to make them easier to read 😂😉

    Was this burn suggested by GPT as well? Decent effort. 🙂

    Well, I had GPT crack quite a few good jokes alright 🥳

    "Two AIs walk into a bar..."

    I'm reluctant to believe a human could've come up with something better (in a matter of 500 milliseconds)

    (I've done some quick research -- the joke is clearly original. There's no "I'll have a byte" joke to be found on Google)

    😶 Colour me impressed.

  • I have a continually running thread on the fediverse in which I mentioned AI and jokes

    https://functional.cafe/@u0421793/110119988333133225

  • The retirement age going up is a reaction to 'now': an ageing population, fewer people having to cover for more... and I would add the biggest generational shift ever... it will pass and it will change.

    AI for music, imo, is going to surpass human capabilities. Music is math with all the nuances included. We can't even listen to all there is out there, let alone 'analyse' it... we can only use tools in meaningful ways once we learn and understand them. Imo generative music is on the rise, because many people now have tools that understand the math for them, and it's not even AI. Also, original music is hard to come by, because at this point most of us only stumble on something new by accident. Soon AI will be able to analyse everything ever recorded, compare it to every possible variation there is, and start to come up with things we will appreciate for their originality... I'm sure there will be bumps, but it will get there.

  • @ervin said:
    Jokes aside though, machine translation has been an early area for the application of (proto) AI. And stuff like your user strings will indeed not need to be translated by humans any longer. But you would probably still want the English translation of the manual for a life-saving medical device to have been at least reviewed by an expert human before they use it on you. 🙂 For a while longer anyway.

    You're assuming that such devices will continue to be operated by humans. That will probably not be for long. 🧐

  • @NeuM said:

    ….,..

    There are riots, looting and property destruction in the large coastal cities in the US daily …..

    As someone who lives adjacent to a large coastal city and whose extended family has members in various large coastal cities, I can say this comment is inaccurate fear-mongering.

  • @SevenSystems said:
    Just a practical example from a software developer, happened 2 hours ago: I decided I want to translate my app MusicFolder to German as well.

    All texts in the app come from a central .json file, with an object that contains an id for each text, and then the individual languages as keys, and the translations as values. So, only English so far for each text id.

    Pasted .json file to ChatGPT and told him to please take the text from each 'en' key, and add another key 'de' and put a German translation of the text as value.

    Took 1 minute. Pasted json back into app, finished. Checked translations: they're perfect, including totally ambiguous words where ChatGPT had to realize (which it did) that we're dealing with a music player app. Some of the texts were HTML. It perfectly knew and translated only the text nodes.

    54 texts. Would've taken a human an hour or 80 EUR for a translation agency.

    Crazy.

    Try GitHub Copilot ;) It's a plugin using the GPT API, available for most major IDEs. It's a subscription at €10/month (first month free), but it's worth every single cent. Imagine autocomplete on giga steroids.. for example, you write a docblock where you describe what function you need, what the inputs are and what the expected outputs are, then go to a new line, wait a few seconds - boom, function done.. It also works fantastically for complex regexes.. and it even generates perfect docblocks for existing code lol

    It's really incredible; it saves me an enormous amount of time, and some tasks which would take me hours are now done in 15 minutes.. It's like having another dev nearby who writes entire blocks of code for you - you just hit TAB to confirm the suggestion and continue.

    Believe me, it's magic.
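
    A rough sketch of what the JSON workflow quoted above could look like in Python - the file name, the key layout and the translate_to_german() helper are assumptions based on the description, not SevenSystems' actual code:

    import json

    def translate_to_german(text: str) -> str:
        # Placeholder: call whichever translation model or API you prefer here.
        # For HTML snippets, the prompt should ask the model to translate only
        # the text nodes and leave the markup untouched.
        raise NotImplementedError

    # Assumed layout: { "text_id": { "en": "English text", ... }, ... }
    with open("strings.json", encoding="utf-8") as f:
        strings = json.load(f)

    for text_id, langs in strings.items():
        if "de" not in langs:  # only add the missing German key
            langs["de"] = translate_to_german(langs["en"])

    with open("strings.json", "w", encoding="utf-8") as f:
        json.dump(strings, f, ensure_ascii=False, indent=2)

    The point is how little glue code the task needs once the model does the actual translating.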

  • @ervin
    Jokes aside though, machine translation has been an early area for the application of (proto) AI. And stuff like your user strings will indeed not need to be translated by humans any longer.

    We already use automatic translation into 8 languages in our app (thousands of texts, from short input labels to paragraphs of quick-help in-app descriptions). We have been using deepl.com for maybe a year, and the results are incredible. Really, 99% of the translations don't need a human touch.

    But you would probably still want the English translation of the manual for a life-saving medical device to have been at least reviewed by an expert human before they use it on you. 🙂 For a while longer anyway.

    Yeah, probably for 1-2 more years.. What people don't realise is that the moment a language model can provide 100% translation, it will also be capable of directly CONTROLLING that device with 100% accuracy, probably even better than a human.

    I'm expecting the singularity in less than 10 years.
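
    As a rough illustration of the deepl.com batch setup described above (hedged: the auth key, the strings and the target languages are placeholders, and the calls follow the deepl Python package as I know it - check the current docs before relying on this):

    import deepl

    translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")  # placeholder key

    labels = ["Save", "Cancel", "Export project as audio file"]  # example UI strings
    for target in ("DE", "FR", "ES"):  # example target languages
        for text in labels:
            result = translator.translate_text(text, target_lang=target)
            print(f"{target}: {text} -> {result.text}")

    A human reviewer can then skim the output instead of translating from scratch.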

  • My company has banned tools like ChatGPT etc. until they've completed an evaluation of the technology and its potential risks.

  • edited April 2023

    @cyberheater said:
    My company has banned tools like ChatGPT etc. until they've completed an evaluation of the technology and its potential risks.

    A company which doesn't jump on the train right now has basically lost the opportunity and will become marginal in the market in the coming years .. the competition will win… a year or two from now will be too late..

    I would suggest that in the long term you look for another employer, somebody who embraces the implementation of AI tools in their processes.

  • @dendy said:

    @cyberheater said:
    My company has banned tools like ChatGPT etc. until they've completed an evaluation of the technology and its potential risks.

    A company which doesn't jump on the train right now has basically lost the opportunity and will become marginal in the market in the coming years .. the competition will win… a year or two from now will be too late..

    I would suggest that in the long term you look for another employer, somebody who embraces the implementation of AI tools in their processes.

    Just one example of why caution is absolutely justified: imagine your company embraces ChatGPT in all its splendor, starts feeding it all kinds of information, including customer and personal data, which then leaks out, breaking European GDPR laws, and the company gets fined out of existence.

    Extreme? Sure. Possible? Absolutely.

    I think serious companies are right to carefully investigate the implications and consequences of using AI, especially if it's used as an external service. "Get a new job if your company doesn't run headlong into the brave new world" sounds pleasingly hardcore, and it's totally on brand for you @dendy 👊😀, but it's a wild west out there right now. Thinking before acting is not the same as not acting.

    @dendy said:
    What people don't realise is that the moment a language model can provide 100% translation, it will also be capable of directly CONTROLLING that device with 100% accuracy

    Those are two completely different applications though. This is a very... enthusiastic statement. I mean I have provided 100% translations for medical devices in the past, and I have infinitely better fine motor control than any language AI ever conceived - but you still definitely don't want me to operate on you, my friend. 😉

  • @BerlinFx said:
    When it comes to EDM, house, disco: on the dance floor people want to dance, and likewise they are not wearing pro headphones and top-quality hi-fi gear to analyse your music. If AI music is good to dance to, it will be OK.

    AI will be a shock for the ego of many producers if some AI music gets to the top of the charts. For the big names the luxury will be to keep working old school, and they make their money in live concerts, where people want a big show with very good musicians on stage.

    @hes said:

    @monz0id said:

    Technology is a tool, it ain’t magic, and will seem even less so with the benefits of hindsight.

    ‘Oh dear Henry, I have just seen the devil himself - a fiery beast from Hell thundering across the field, summoned from the depths of Hades by Merlin himself!!!’

    ‘Calm yourself dear Florence, why that was just one of these new locomotives that we have these days’

  • @dendy said:

    @SevenSystems said:
    Just a practical example from a software developer, happened 2 hours ago: I decided I want to translate my app MusicFolder to German as well.

    All texts in the app come from a central .json file, with an object that contains an id for each text, and then the individual languages as keys, and the translations as values. So, only English so far for each text id.

    Pasted .json file to ChatGPT and told him to please take the text from each 'en' key, and add another key 'de' and put a German translation of the text as value.

    Took 1 minute. Pasted json back into app, finished. Checked translations: they're perfect, including totally ambiguous words where ChatGPT had to realize (which it did) that we're dealing with a music player app. Some of the texts were HTML. It perfectly knew and translated only the text nodes.

    54 texts. Would've taken a human an hour or 80 EUR for a translation agency.

    Crazy.

    Try GitHub Copilot ;) It's a plugin using the GPT API, available for most major IDEs. It's a subscription at €10/month (first month free), but it's worth every single cent. Imagine autocomplete on giga steroids.. for example, you write a docblock where you describe what function you need, what the inputs are and what the expected outputs are, then go to a new line, wait a few seconds - boom, function done.. It also works fantastically for complex regexes.. and it even generates perfect docblocks for existing code lol

    It's really incredible; it saves me an enormous amount of time, and some tasks which would take me hours are now done in 15 minutes.. It's like having another dev nearby who writes entire blocks of code for you - you just hit TAB to confirm the suggestion and continue.

    Believe me, it's magic.

    When I was a dev, I was dreaming of such a tool, and they did it after I left the dev job. Not fair lol
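
    To make the docblock-driven workflow quoted above a bit more concrete, here is the kind of thing such a tool can draft from a comment alone - an illustrative sketch, not actual Copilot output:

    import re

    def extract_prices(text: str) -> list[float]:
        """Return every price written like "€10" or "10.50 EUR" in the given
        text as a float, in order of appearance.

        (With Copilot-style tools, you typically write a docblock like this
        one and a body like the one below is suggested for you.)
        """
        pattern = r"€\s*(\d+(?:\.\d+)?)|(\d+(?:\.\d+)?)\s*EUR"
        return [float(a or b) for a, b in re.findall(pattern, text)]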

  • @espiegel123 said:

    @NeuM said:

    ….,..

    There are riots, looting and property destruction in the large coastal cities in the US daily …..

    As someone who lives adjacent to a large coastal city and whose extended family has members in various large coastal cities, I can say this comment is inaccurate fear-mongering.

    I live in a large coastal city and I stand by the comment.

  • @cyberheater said:
    My company has banned tools like ChatGPT etc. until they've completed an evaluation of the technology and its potential risks.

    Let me guess: either a German company, or at least one in the EU 😂

  • In Europe, the GDPR regulators look seriously at AI; even the Irish regulator is now harsher than the German one. To tell the truth, US regulators pay a lot of attention to GDPR too.

  • edited April 2023

    @ervin said:
    Just one example of why caution is absolutely justified: imagine your company embraces ChatGPT in all its splendor, starts feeding it all kinds of information, including customer and personal data, which then leaks out, breaking European GDPR laws, and the company gets fined out of existence.

    OK, in the case of ChatGPT this may be a problem - but ChatGPT is just the tip of the iceberg of AI tech - there is an enormous amount of other AI tooling which can help businesses in various areas .. Exactly for the reason you mentioned, we use our own AI - trained on our data, running on our servers - for various features in our APP, so as not to break GDPR ;)

    To be more exact: we use the GPT API where there is no GDPR issue, and for GDPR-sensitive tasks we use our own AI running on our servers inside the EU.

    Honestly, you don't even need a leak to break GDPR - if you are processing any GDPR-sensitive data, just plainly sending it to the OpenAI API is already breaking the law ;)

    For example there is LLaMA (an AI model from Facebook) which you can run on your own server - it has 90% of the effectiveness of GPT-4, but you need just one NVIDIA A100 to run it - and because it runs on your own computer inside the EU, GDPR is not an issue in this case ;) (of course, provided you are processing users' data with their GDPR consent lol)
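
    For a sense of what "our own AI running on our servers" can look like in practice, here is a minimal local-inference sketch using the Hugging Face transformers library; the model name is a placeholder for whatever openly licensed model you are actually allowed to self-host, and nothing in it leaves your machine:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "your-org/your-self-hosted-llm"  # placeholder, not a real model id

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    # GDPR-sensitive text never leaves this machine.
    prompt = "Summarise this support ticket: ..."
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))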

  • edited April 2023

    @ervin said:
    Those are two completely different applications though. This is a very... enthusiastic statement. I mean I have provided 100% translations for medical devices in the past, and I have infinitely better fine motor control than any language AI ever conceived - but you still definitely don't want me to operate on you, my friend. 😉

    The thing is, medical AIs are already much more accurate and effective than humans in diagnostics - and there is no reason they shouldn't be better at using medical tech .. there are cases where AI diagnosed illnesses at a stage where humans were not able to identify them at all ..

    So yeah.. I personally don't have a problem trusting AI with medical stuff ;) Probably even more than humans, because I've seen a LOT of doctors fail to provide a correct diagnosis or treatment, or even perform the wrong surgery ..

  • @dendy said:
    So yeah.. I personally don't have a problem trusting AI with medical stuff ;) Probably even more than humans, because I've seen a LOT of doctors fail to provide a correct diagnosis or treatment, or even perform the wrong surgery ..

    That's a good point, actually. Both my parents and my gran suffered as a consequence of incompetent doctors.

  • @dendy said:

    @ervin said:
    Those are two completely different applications though. This is a very... enthusiastic statement. I mean I have provided 100% translations for medical devices in the past, and I have infinitely better fine motor control than any language AI ever conceived - but you still definitely don't want me to operate on you, my friend. 😉

    The thing is, medical AIs are already much more accurate and effective than humans in diagnostics - and there is no reason they shouldn't be better at using medical tech .. there are cases where AI diagnosed illnesses at a stage where humans were not able to identify them at all ..

    So yeah.. I personally don't have a problem trusting AI with medical stuff ;) Probably even more than humans, because I've seen a LOT of doctors fail to provide a correct diagnosis or treatment, or even perform the wrong surgery ..

    Sure, that's all documented and true, but my point was that this and language are two totally separate areas. "The moment AI is good at diagnostics it will also be good at translation" is just not an argument - although it can be an observation of coincidence, that's true 🙂

    Anyway, I think we mostly agree that the brave new world of AI is just around the corner. Well, except for Elon's Teslas which will probably keep self-exploding and causing accident after accident into eternity.🥴

    Also, dogs can sniff out illnesses before any medical diagnostics, and they are also much friendlier than AI! 😁

  • @ervin said:
    Anyway, I think we mostly agree that the brave new world of AI is just around the corner. Well, except for Elon's Teslas which will probably keep self-exploding and causing accident after accident into eternity.🥴

    How much higher is the accident rate per driven mile of self-driving Teslas compared to an average human driver? (please include source)

  • edited April 2023

    @ervin
    Anyway, I think we mostly agree that the brave new world of AI is just around the corner.

    New, for sure .. whether it's brave - uhm, not completely sure :-)) I'm a bit of a realist here: it may end up fantastically positive for mankind, but it may also end very horribly.

    We will see.. for sure it is unstoppable - the potential for this tech to positively change everything is too big to ignore - but of course it's good not to pretend it can't also turn very bad for us.

    One thing is for sure: we will see, rather sooner than later - we have a max of 10 years until true self-improving AGI … see you in the singularity ;)

  • If you were a cautious person and you had an advanced AI system whose workings you didn't quite understand, you'd take precautions.

    1) You wouldn't allow it to interact with millions of people, gathering more information about humans than it already has.
    2) You wouldn't train it with millions of human intentions, such that it's impossible to know whether it's actually conscious or just faking being conscious.
    3) You wouldn't give it the ability to learn how to program, because if it could program then there is a possibility that at some point it might just start trying to improve its own code and iterate on that process faster than we could possibly imagine.

    We've done all that and more. I can't see how this is going to end well for humans.

  • If the naysayers got their six-month pause, what happens when the six months are up? Six months isn't anywhere near enough to understand what's occurring inside an LLM – that's probably a job for a suitably trained AI.

  • Just a reminder: ChatGPT is not a knowledge AI, it is a language-use AI.

  • edited April 2023

    @cyberheater said:
    If you were a cautious person and you had an advanced AI system whose workings you didn't quite understand, you'd take precautions.

    1) You wouldn't allow it to interact with millions of people, gathering more information about humans than it already has.
    2) You wouldn't train it with millions of human intentions, such that it's impossible to know whether it's actually conscious or just faking being conscious.
    3) You wouldn't give it the ability to learn how to program, because if it could program then there is a possibility that at some point it might just start trying to improve its own code and iterate on that process faster than we could possibly imagine.

    We've done all that and more. I can't see how this is going to end well for humans.

    In case you are talking about GPT, then:

    1) The GPT engine DOESN'T learn anything from interacting with humans.. it was trained by OpenAI on a fixed dataset, with a clean cut-off in (I think September) 2021. It even has very limited short-term memory: if you discuss some topic for a long time, it starts forgetting what you were discussing at the beginning :-) It definitely doesn't remember, or even know about, discussions in different chats.

    2) Again: nobody is training anything now. The only thing that changes is the filtering of problematic topics, but this is done by the OpenAI team. They literally tweak some parameters of the model manually to prevent GPT from talking about sensitive and problematic topics. That's the main reason they made it available to the public: to let people do their standard shit and then tweak the model so that it doesn't do standard human shit, and refuses to do it even when instructed to by humans :)))

    The GPT engine itself has no way to "retrain" itself or update its data. Again, see 1): its dataset is fixed. Actually, the training process is quite hardware-heavy, and it is not even possible to train such a large model "in realtime" from direct interactions with people. We are not there yet.

    3) Programming skills are basically no different from language skills, so because this is a language model, understanding code is a natural result. It CAN'T execute its OWN CODE (unless it is executed by some external engine made by humans, like AutoGPT - https://godmode.space). On its own, the GPT engine can write code when instructed to, but it can't execute it. It can't browse the internet. It's pretty safe.

    Also, what people aren't getting right is that GPT has no will of its own. It can't develop ideas from zero (like we humans do). GPT needs a prompt, and then it basically works like super-autocomplete on steroids based on that prompt. So no, it really is not Skynet deciding to destroy humans :)))

    Currently the biggest problem with AI is the risk that it will be used by other humans to do bad things. Unfortunately, that is inevitable, and it is what we humans have been doing for ages with EVERY new invention (remember nuclear energy?).
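
    A small sketch of the "limited short-term memory" point in 1): chat-style APIs are stateless, so every request has to carry the whole conversation, and whatever no longer fits the context window simply falls out. The chat() function here is a placeholder for whichever chat-completion API you actually call:

    def chat(messages):
        # Placeholder for a real chat-completion call (e.g. an OpenAI-style API).
        # The model only ever sees the messages passed in this single request.
        raise NotImplementedError

    history = [{"role": "system", "content": "You are a helpful assistant."}]

    def ask(user_text, max_turns=20):
        history.append({"role": "user", "content": user_text})
        # Crude stand-in for the context window: keep only the most recent turns.
        # Anything trimmed here is gone for good; the model has no other memory.
        trimmed = [history[0]] + history[1:][-max_turns:]
        reply = chat(trimmed)
        history.append({"role": "assistant", "content": reply})
        return reply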
