
Question for devs: do you find AI helpful in coding?


Comments

  • Every time I've messed with AI coding tools, they've impressed me when I was just experimenting, but I quickly found them useless when it came to helping with actual work. Though I'll continue to give them a try every now and again as things progress.

    Part of that may be that there’s not a widely available corpus of the kind of embedded programming that I currently do to train it on.

    I’ve worked on several projects over the years (wow…I suppose I have to say decades now) that applied AI and machine learning techniques to the problem domain. But not to writing code.

  • I work in Visual Studio all day, which has IntelliCode AI code completion enabled… and it's really hit and miss. If it just learned my idioms it would be much more useful! E.g., null-checking code is all over the place and follows a predictable pattern. But no.
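    For illustration, the kind of predictable idiom I mean, sketched in Python rather than my actual C# (names like load_config are made up for the example): every call site repeats the same guard, which a completion model that learned the project really ought to autocomplete.

    ```python
    # Hypothetical sketch of a repetitive None-guard idiom. In the real
    # codebase this is C# null checking; the shape is what matters: the
    # same few-line guard follows every call that can return nothing.
    import json
    from typing import Optional

    def load_config(path: str) -> Optional[dict]:
        """Pretend loader; returns None when the file is missing."""
        try:
            with open(path) as f:
                return json.load(f)
        except FileNotFoundError:
            return None

    def get_port(path: str) -> int:
        config = load_config(path)
        # The idiom: guard against None, fall back to a default. An
        # assistant that picked up this pattern could complete the
        # whole guard from the first line.
        if config is None:
            return 8080
        return config.get("port", 8080)
    ```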

  • @Gavinski said:

    @hes said:

    @garden said:
    And I emphasize model. It’s not AI in any meaningful sense of the term. It’s a pattern processing system. A good one, yes, but significant to us only because it’s been specifically designed to operate on language patterns, which we take to be strongly related to cognition.

    @SevenSystems said:
    What if "true" cognition is just "operation on language patterns"? ;)

    Yes, I would object similarly. If you're saying that ChatGPT/GPT-4 is "not AI in any meaningful sense of the term", then you have a very strange definition of AI. AI is "artificial intelligence". The algorithms on Spotify and Amazon that make song and product recommendations are AI, under any meaningful definition of the term. The generalized form of AI in GPT, much more so.

    It's important to note that AI is (or can be) something completely separate from notions of "sentience" or "consciousness" or "personhood". You can have AI without any of those things. Any basic resource on AI will draw attention to the distinction. E.g., https://www.linkedin.com/pulse/artificial-consciousness-vs-intelligence-stefan-korn

    Also, I'm pretty sure not even experts have a very good understanding of how the human brain works. But, yes, it's clear that pattern recognition is a huge part of it.

    Yes... But also, let's not focus solely on the brain. 'Embodied Cognition' is worth a Google for anyone interested.

    By the way, just to mention that everyone has free access to GPT-4 through Bing Chat (you used to have to use the Microsoft Edge browser or app to get access to that, but it now works directly in the Bing app). ChatGPT+ still has some features that people might find worth paying for.

    Also, for accuracy, Claude AI is supposed to be better than ChatGPT; in theory it should tell you when it can't give an answer instead of just making shit up. Have any coders seen whether they get better results from it?

    Thanks for this. I've been using a lot of different AI things recently and didn't realise. Will now switch focus a bit - so cool about GPT-4. It's bloody amazing for French conversation and so many things. I just order it to be a French speaker (note I don't say person 👀) and it helps daily practice so much. I think specifically with learning a language there is no better tool or facility. I know enough to pick up if something sounds dodgy most of the time, so I'll keep an eye out.

    On the other stuff here. The earth is a homeostatic system. Kind of a tautology. We could be pedantic and say new elements are introduced through the atmosphere - there is something akin to respiration on a planetary scale, with gases being filtered both ways by the atmosphere's specific composition. But let's just say it's a zero-sum system, because otherwise we can just expand the context to a solar system or a galaxy. The elements in that system move around and change, and the overall morphology may change somewhat, but what is there does not change, just the arrangement.

    So human beings and life in general are part of that singular arrangement. Just where systems have formed within systems. We are part of a landscape, not the whole of a portrait. It's just that we're defined, necessarily, by our experiential ignorance of that. So patterns are patterns are patterns, and the difference between recognition and a physical stimulus-reaction at a chemical level is one of complexity, not category.

    Anyway. Morning.

  • I think that “large language model” and “stable diffusion”, as have been offered here, along with other terms that specify and expose specific technologies, are very useful for unraveling the mystique a bit.

    Most of this technology is also a lot more approachable, at least at some level, than it used to be, and we’ve seen several examples of that here over the past months. It’s a real toolkit now, not just a magic box, and I urge you, if you’ve not done so already, to try your hand at it.

  • @fisherro said:
    Every time I've messed with AI coding tools, they've impressed me when I was just experimenting, but I quickly found them useless when it came to helping with actual work.

    Part of that may be that there’s not a widely available corpus of the kind of embedded programming that I currently do to train it on.

    @MadGav said:
    I work in Visual Studio all day, which has IntelliCode AI code completion enabled… and it's really hit and miss. If it just learned my idioms it would be much more useful! E.g., null-checking code is all over the place and follows a predictable pattern. But no.

    Exactly this. If you're working in a language that is well represented in public repositories, which is what the publicly available engines are trained on, then they do… OK. But if you get into something more rarefied, it just falls apart. Obviously this will improve as the megacorps that run these things find ways to access more and more source text, but it does expose both the method and the weakness.

    And I've observed that the code models have an effect similar to the plain-text models that now infect so much online writing, particularly company blogs, ad copy, and the like. They produce and enforce the common, not the original.

  • The user and all related content has been deleted.
  • @wingwizard said:
    It's bloody amazing for French conversation and so many things. I just order it to be a French speaker (note I don't say person 👀) and it helps daily practice so much. I think specifically with learning a language there is no better tool or facility. …

    Actually, as a language teacher myself, I can say with some authority that, while very useful, there is currently no substitute for a good teacher. The main problem is for people with subpar pronunciation, which includes many learners, and in some countries most learners. The way live voice chat works in ChatGPT, whatever you say is transformed into text before being processed by the AI. So if your accent is not spot on, what you say will often be converted to gibberish. This also means that ChatGPT cannot correct your pronunciation. Currently, in live voice conversations it also seems impossible for ChatGPT to slow down, and it doesn't give learners enough chance to answer, as even small pauses are taken as a sign that it should respond. For improving reading and writing, however, it is an invaluable and amazing tool!

    I showed the live chat capabilities of ChatGPT+ to a guy I met the other day, a Turkish speaker, with limited English. You should have seen his face light up when I told him he could speak Turkish to it!
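    For anyone curious why pronunciation gets lost, here's a minimal sketch of that voice pipeline, assuming the `openai` Python package (v1+) and a pre-recorded file (learner_utterance.wav is a made-up filename). The audio is collapsed to plain text before the language model ever sees it, so accent and prosody are simply discarded:

    ```python
    # Sketch of a speech-to-text chat turn. The chat model receives only
    # the transcript string; a mispronounced word just arrives as wrong
    # or garbled text, which is why the bot can't correct pronunciation.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Step 1: speech recognition (audio in, text out).
    with open("learner_utterance.wav", "rb") as audio:
        transcript = client.audio.transcriptions.create(
            model="whisper-1",
            file=audio,
        )

    # Step 2: the conversation happens purely in text.
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a patient French conversation partner."},
            {"role": "user", "content": transcript.text},
        ],
    )
    print(reply.choices[0].message.content)
    ```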

  • @tja said:
    Not a developer at all (don't even dabble around anymore), but after a kind soul gifted me three free months of GPT-4, I already know that I will continue this and pay for it!

    It can help me get a grasp of topics I would otherwise need to find webpages or papers for, can summarize those topics, formulate and rearrange information, and even replace a person to talk about such topics with - a bit, at least.

    The only sad thing is its controlled answers - it often just cannot continue like a person, but needs to repeat its cautious reply without getting to the point.

    And also, everything you say to this thing gets saved and stored and surely analyzed.
    I don't really feel good about this ...

    How do you gift it to someone? I couldn't really see an easy way.

  • That's true, I hadn't thought of pronunciation. However terrible my vocabulary and listening have been, for some reason I've always had really good pronunciation. I remember when I was in Paris and could barely string a sentence together, people still commented that they had assumed I was French (and so probably also incredibly stupid, given I could barely talk).

    For me, it's being able to check all kinds of contexts and get quite in-depth with it about specifics of construction and why and how, which would previously require a lot of googling or different apps.

    Listening is still brutal, but full flowing written conversation will help, I think, as it's more about volume.

  • edited November 2023

    My only experience so far has been with some free browser version for a bit when it was 3.5 (not OpenAI).

    What had really piqued my interest at the time was a beta with Khan Academy and ChatGPT-4. The only reason I didn't do it was because it was expensive; I don't remember how much, but at the time definitely more than $20/month.

    Curiosity got me to go check again just now, and I see Khan Academy has reduced the asking price to a $9/month donation. I see it's called Khanmigo. Apparently, you still simply use your current account.
    I think for me, not being a freelancer of anything, this will make more sense to try right now. I've been an on/off learner on Khan Academy for years. Numbers were always my weakness in high school. Barely graduated. Later in life, Khan has enabled me to play catch-up, and has even given me a slight edge, it seems, over other CNC operators.
    What I love about Khan is that it's set up to easily identify your weaknesses. Not understanding a problem? Click on something on the same page, which more or less reveals links/lessons to the principles needed to achieve a certain level of understanding of the related problem. Just keep going back as far as you need to. It's decent at tracking progress too.

    All this was free by the way, and before AI.
    I’m really curious to see how it is now!

    [Edit] Apologies, I thought I was inside @Gavinski's thread, regarding his interview.

  • The user and all related content has been deleted.

  • Use it every day; it's phenomenal. It does better at open-source development (Python, React, .NET Core, SQL, etc.) and business documentation, RFPs… as for iOS development, no way, it's lost. Have to do that the old-fashioned way, which involves much gnashing of teeth.

  • wim
    edited November 2023

    @realdawei said:
    … as for iOS development, no way, it's lost. Have to do that the old-fashioned way, which involves much gnashing of teeth.

    Ha! That makes sense. There's precious little documentation on AUv3 development for AI to consume. It's getting better now, no thanks to Apple, as amazingly generous developers have relatively recently been publishing source code. Hopefully that'll start giving AI some more to work with over time.

  • Yeah, it's a good reason why open source is good for LLMs and vice versa: more people will find it easier to use and learn the open-source language options, creating a snowball effect for AI.
    Will be interesting to see how that affects things in the future.

  • @wim said:

    @realdawei said:
    … as for iOS development, no way, it's lost. Have to do that the old-fashioned way, which involves much gnashing of teeth.

    Ha! That makes sense. There's precious little documentation on AUv3 development for AI to consume. It's getting better now, no thanks to Apple, as amazingly generous developers have relatively recently been publishing source code. Hopefully that'll start giving AI some more to work with over time.

    I was thinking it was the mass of GitHub project codebases that was more useful for training AI than mere documentation, but certainly in the case of AUv3, both are lacking.

  • edited November 2023

    @wim said:

    @realdawei said:
    … as for iOS development, no way, it's lost. Have to do that the old-fashioned way, which involves much gnashing of teeth.

    Ha! That makes sense. There's precious little documentation on AUv3 development for AI to consume. It's getting better now, no thanks to Apple, as amazingly generous developers have relatively recently been publishing source code. Hopefully that'll start giving AI some more to work with over time.

    Oh, I'm not doing AUv3 programming… more in the medical/health sector. But just in general, Swift/SwiftUI has been in such a constant state of flux, I doubt the LLM has been able to make sense of any of it.

  • @realdawei said:
    But just in general, Swift/SwiftUI has been in such a constant state of flux, I doubt the LLM has been able to make sense of any of it.

    Pretty much any software technology since roughly 2010 has been like that... that's why I can't really take most of the modern stuff seriously anymore, and instead developed my own frameworks and programming language, which are designed (reasonably) well ONCE and then don't need to be "updated" every few weeks 😄 (if you think Swift is bad, look at the flux and "choice" in web technologies! Python is also a bad offender).

    But maybe I'm just too old (fashioned) 🤷‍♂️

  • @wim said:
    There's precious little documentation on AUv3 development for AI to consume. …

    I'm not using GPT-4 yet, but from the info about recent updates in the last week, it appears it's now possible for a user to create a "private" GPT, e.g., by giving it documentation and/or code projects to work from. Then you can publish these private GPTs and make them available to anyone.

    If I understand it correctly, it seems like it should be quite easy to do this for Mozaic, Streambyter, etc. If nobody does this soon, I expect I'll experiment with it sometime soon.
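    In the meantime, the same idea can be approximated over the plain API by stuffing the documentation into the context window. A rough sketch, assuming the `openai` Python package (v1+) and a local text copy of the docs (mozaic_manual.txt is a hypothetical filename; a real manual might need chunking to fit the context limit):

    ```python
    # Rough sketch of a documentation-grounded assistant, as a stand-in
    # for a hosted "private GPT": the docs ride along in the system
    # prompt, so the model answers in terms of the scripting language
    # they describe rather than guessing from general training data.
    from openai import OpenAI

    client = OpenAI()

    with open("mozaic_manual.txt") as f:
        docs = f.read()  # assumes the manual fits in the context window

    question = "Write a script that transposes incoming MIDI notes up an octave."

    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Answer using only the scripting language described "
                        "in this documentation:\n\n" + docs},
            {"role": "user", "content": question},
        ],
    )
    print(reply.choices[0].message.content)
    ```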

  • edited November 2023

    @hes said:
    I'm not using GPT-4 yet, but from the info about recent updates in the last week, it appears it's now possible for a user to create a "private" GPT, e.g., by giving it documentation and/or code projects to work from. …

    This would be cool. I think this is the way things will be going, as consistency is the main problem in image and video. The solution is models trained on aspects of people's personal projects, then development of visual assets from that. I can't wait for proper apps to start being developed that are AI suites rather than video or graphics suites with AI functions. 3D modelling is starting to look cool now.

  • @hes said:
    If I understand it correctly, it seems like it should be quite easy to do this for Mozaic, Streambyter, etc. If nobody does this soon, I expect I'll experiment with it sometime soon.

    Keep in mind that decent training requires a huge corpus of solid high-quality work for the resulting model to generate good quality output.

  • @espiegel123 said:
    Keep in mind that decent training requires a huge corpus of solid high-quality work for the resulting model to generate good quality output.

    Yes, part of this will be learning about the limits/capabilities of the AI. I'm curious how far it gets with just documentation. Then, at least with Mozaic, there's a decent load of code in Patchstorage projects. Enough? I have no idea.

  • @hes said:
    Yes, part of this will be learning about the limits/capabilities of the AI. I'm curious how far it gets with just documentation. Then, at least with Mozaic, there's a decent load of code in Patchstorage projects. Enough? I have no idea.

    My understanding is that probably orders of magnitude more code would be needed than what is in the Patchstorage repository for the results to be reliable. And if the scripts are not first-rate, the results won't be either.

  • @espiegel123 said:
    My understanding is that probably orders of magnitude more code would be needed than what is in the Patchstorage repository for the results to be reliable. And if the scripts are not first-rate, the results won't be either.

    My understanding is much different. In fact, I'm pretty sure I remember seeing a video where a guy taught ChatGPT how to program in a new language. Your understanding is correct, I'm sure, for old specialized AI built for a specific purpose. These new AIs are generalized intelligence. They already know all about how programming languages work and have knowledge of MIDI and how it works; they just need some specific info about the new language. Or maybe not. I'm curious to find out.
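    If it helps make that concrete: "teaching" a chat model a new language this way is just few-shot prompting; nothing is trained, the spec and examples simply ride along in the prompt. A toy sketch (the mini-language below is invented for illustration, not real Mozaic):

    ```python
    # Toy few-shot prompt builder: a language spec plus two worked
    # examples, followed by the new task. A capable general model can
    # often generalize from this alone, which is the claim at issue.
    SPEC = """Language rules:
    - Programs are lists of commands, one per line.
    - NOTEON <pitch> sends a MIDI note-on for <pitch> (0-127).
    - WAIT <ms> pauses for <ms> milliseconds.
    """

    EXAMPLES = [
        ("Play middle C for half a second.",
         "NOTEON 60\nWAIT 500"),
        ("Play two notes a fifth apart, one second each.",
         "NOTEON 60\nWAIT 1000\nNOTEON 67\nWAIT 1000"),
    ]

    def build_prompt(task: str) -> str:
        """Assemble spec + worked examples + the new task into one prompt."""
        shots = "\n\n".join(f"Task: {t}\nProgram:\n{p}" for t, p in EXAMPLES)
        return f"{SPEC}\n{shots}\n\nTask: {task}\nProgram:"

    print(build_prompt("Play an ascending C major arpeggio, 250 ms per note."))
    ```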

  • edited November 2023
    • never mind - (why can’t we delete posts?)
  • https://openai.com/blog/introducing-gpts

    This is about the new ChatGPT feature mentioned above.

  • I've built a free ChatGPT-like app for iOS / iPadOS / macOS that has both OpenAI GPT support (via bring-your-own-key) and open-source models such as Llama 2, Mistral, and StableLM - the latest cutting-edge ones that run locally and privately on your own device.

    The app is native but the bots are open-source JavaScript that runs in a sandboxed environment. Later I'll be adding features for self-hosting these bots for your own use, plus in-app programmability so that you can extend bots with custom behaviors and share with the community on GitHub.

    Homepage: https://ChatOnMac.com

    TestFlight: https://testflight.apple.com/join/Xg6ZuTaD

    I've submitted it to Apple and hope to launch next week. I'd appreciate any feedback in the meantime.

  • @AlexForsyth said:
    AI has indeed become a helpful tool in coding for many developers. It can save time by generating code snippets, handling repetitive tasks, and even assisting in debugging. While it doesn't replace the creativity and expertise of developers, it can be a valuable assistant, especially when it comes to turning concepts into code.

    Thanks for this. I wondered if it might replace the more rote elements that had always prevented me from going a bit further with learning (there's only so much time :) ).

  • edited January 12

    The party will soon be over:

    (They're castrating GPT a little bit more every single day; it's slowly becoming useless for most tasks. Government intervention to prevent mass unemployment, money issues, I don't know...)

  • The more they do this, the quicker we'll get much better AI models from other companies/countries.

    I've been using unfiltered GPT for a while.
