
Question for devs, do you find AI helpful in coding?

Just as the title says: I was wondering whether devs here have much use for AI in coding, as I've been toying, as a non-coder, with using it to build little things here and there. For example, does it save you time typing out lines or sections of code, in that you can just get it to generate things for you? And beyond that, is it (or would it be) more helpful for generating code you'd otherwise have had to construct yourself, to turn a concept you might have into code?

I realise coding is a complex skill, and just as with making music or making art, AI isn't simply replacing the human ability to use that skill. But it's exciting to me as something that's able to realise concepts more easily (I work with concepts, really).


Comments

  • I’ve used a couple of products provided by my employer (GitHub Copilot and Sourcegraph Cody). I have found them useful both for prototyping and for getting a very rough initial understanding of an area of the codebase I’m unfamiliar with. For generating actual production code, it has ended up being more work having to debug it than writing it myself from scratch, so I set it aside for the time being. Integration with my code editor (Sublime) is not stellar either.

    Sadly, I've had to spend a lot of effort correcting misunderstandings introduced by what Cody says about our codebase. This has happened both with newer programmers during code reviews and with non-technical people when they try to figure out a particular feature. Sometimes a mistake has gone uncorrected during discussions for weeks, and planning has suffered because of it.

  • I use Copilot in the day job, but I have its delay before suggesting set fairly long so that it doesn't interrupt flow. This also makes it magically appear when I get stuck on something and pause typing. The quality of the results is massively variable, but occasionally it'll produce some very good stuff.

  • @wingwizard said:
    Just as the title says: I was wondering whether devs here have much use for AI in coding, as I've been toying, as a non-coder, with using it to build little things here and there. For example, does it save you time typing out lines or sections of code, in that you can just get it to generate things for you? And beyond that, is it (or would it be) more helpful for generating code you'd otherwise have had to construct yourself, to turn a concept you might have into code?

    I realise coding is a complex skill, and just as with making music or making art, AI isn't simply replacing the human ability to use that skill. But it's exciting to me as something that's able to realise concepts more easily (I work with concepts, really).

    It’s important to understand that ChatGPT essentially averages together content it believes to be relevant…but it doesn’t know whether that content is correct. Anything ChatGPT generates requires expertise on the part of the recipient to evaluate…sometimes it provides interesting code solutions and sometimes it provides garbage. A coder should be able to recognize whether the code is reasonable…but a beginner wouldn’t know. A few friends have found it useful for writing small chunks of code.

  • heshes
    edited November 2023

    I wouldn't call myself a "dev" now but I have been in the past and I do occasionally code some stuff. I've toyed around with ChatGPT just a little bit. I think if you use it correctly it can speed things up. And certainly AI help with coding is something that's going to improve in the future, maybe very quickly.

    From what I've experienced, I'm not sure how much of a help it would be to someone who's new to programming. You might get something helpful, you might not. Sometimes the AI's code is way off, while at the same time it's presented with a confident air that makes it seem like it can be trusted. It doesn't seem like a great way to learn basic concepts. For example, a beginning programmer could be handed something that doesn't work at all but really just needs a minor tweak, and the programmer will have zero idea of what the problem is or how it should be fixed. Conversely, you could be presented with code that actually runs, but which is coded incorrectly.

    So for a beginning programmer I'd say it's hard to tell how much of a help current AI will be. But the best way to find out is to try. That's always been the best way to learn programming, anyway: get going trying to solve a problem or create something. Make mistakes and learn as you go. Don't expect AI to magically transform you from beginner to pro-level -- or even to barely-competent -- programmer. There is no escaping a learning period.

    One of the better ways to get your feet wet and learn is to take some fairly well-trusted, working code (that is, code that didn't come straight from an AI) and modify it for your own uses. This kind of code is found in a lot of open source projects on GitHub.

    EDIT: I would add that I recall seeing a guy experiment with using ChatGPT to do small bits of coding with the Reaper DAW, using Reaper's proprietary ReaScript language. As I recall, he described himself as having very little knowledge of coding, but he was able to get what seemed to be some useful results. I'm not sure how much he actually learned about coding, but it seemed he was able to get help creating or modifying simple scripts. I think he has several videos on it; here's one. Also, note that I expect GPT-4 and GitHub Copilot are better than the ChatGPT this guy uses. Both of them cost money, but it might be money well spent if you're serious about using AI for programming:

  • edited November 2023

    I've been using GitHub Copilot for, I think, 3 or 4 months, and I would say it's the best-invested $10/month of my life …

    It saves me a tremendous amount of time.. I would say it saves me at least 20-30 hours of work per month, probably even more ..

    fantastic tool.

    Interestingly, I think it helps you more if you are an advanced dev, and less if you are a beginner. You need to understand how to use it properly, and you need to be capable of quickly reading new code and identifying whether it does what you want, or whether there's some hidden caveat that will cause the generated code to not do what you expected..

    In my case it helps me not lose time on repetitive “dumb” work: small snippets of code, boilerplate, little routines and functions here and there, stuff where I usually need to use 20% of my brain but it takes time ..

  • I’ve just started using Copilot. I’m finding it pretty useful, not a whole game changer, but it certainly saves time versus googling for how to do whatever on Stack Overflow.

  • @tahiche said:
    I’ve just started using Copilot. I’m finding it pretty useful, not a whole game changer, but it certainly saves time versus googling for how to do whatever on Stack Overflow.

    Yeah, I realized that since I started using it, I haven't opened Stack Overflow even once :-))))

  • wimwim
    edited November 2023

    As not really a developer, but someone who needs to solve limited programming challenges, I'd say @tahiche sums it up very well ... a vastly improved Google search. You still have to go through the same process of sifting through tons of search results, understanding what they're saying, then ultimately integrating it into your own code. But it's hundreds of times faster and can give you code that, on the surface at least, looks like it could be plugged right in or replace a piece of code you've written that isn't working right.

    But you're nuts if you don't read the code and understand exactly what it's doing, basically treating it as you would your own guesswork as you develop something.

    It can save a lot of time, but you still have to do the hard work of making sure you thoroughly understand every single bit of it. Because it's rarely ever 100% correct.

    (I'm speaking of the free tools that are available. I would assume paid products are better. Same principles apply though - initial results are probably just wrong less of the time.)
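A concrete way to act on that advice: before trusting a snippet, whether generated or your own guesswork, hammer it with a few edge-case assertions. A minimal Python sketch, where `clamp` is a made-up helper standing in for code an assistant might have produced:

```python
# Hypothetical example: suppose an AI assistant generated this helper.
# It looks reasonable, but quick edge-case assertions are a cheap way
# to validate it before plugging it into your own project.
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

# A few edge-case checks: in-range values, values outside the range
# on both sides, and negative ranges. If any fail, read the code again.
assert clamp(5, 0, 10) == 5      # in range: unchanged
assert clamp(-3, 0, 10) == 0     # below range: clamped to low
assert clamp(42, 0, 10) == 10    # above range: clamped to high
assert clamp(-7, -10, -1) == -7  # negative range works too
```

A handful of assertions like these won't prove correctness, but they catch the plausible-looking mistakes that a quick read-through tends to miss.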

  • I use it for very specific use cases, like finding examples of how to do super obscure iOS API stuff. Usually the generated sample code will not compile, and often contains a ton of deprecated stuff. But it often tells me which libraries and APIs I should look into.

    Ultimately it saves me a lot research time for esoteric iOS stuff, but it rarely gives me usable code.

  • edited November 2023

    I use both copilot and gpt every day. In general they’re useful, but they have important limitations. Copilot is good for small snippets, well-established patterns, extended code completion. But it can also get in the way, particularly of method suggestions provided by a good IDE.

    GPT has been very helpful in learning some new languages, but again it’s not very good at more complex tasks, and you have to be prepared for the fact that it will flat out lie to you, with authority. It hallucinates completely plausible responses that can be completely wrong. It’s also infamous for not knowing anything about best practices, especially in the area of secure coding.

    So… mixed results. But they’ve saved me some time, and taught me some things.

  • Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

  • edited November 2023

    Thanks everyone.

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Is this specifically regarding coding? I was reading a bit about the difference between it and 3.5 and would be interested in your experience of that outside of coding :) and why you feel it’s such a big difference

    The rate of development is exciting. Now when I make Sir Trevor McDonald ride a giraffe on the moon, the giraffe only has 2 heads

    I’m not really intending to learn coding - I think you have to dedicate yourself and make a few choices in life, as with learning instruments, and I know I’m not suited to it - more Hmm, just a few ideas or beginnings of them 👀

  • @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    You still wouldn't trust its code unless you fully understood it and validated it just as you would your own code though, would you?

  • @wim said:

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    You still wouldn't trust its code unless you fully understood it and validated it just as you would your own code though, would you?

    Yes of course I glance over any code it writes to see if it makes intuitive sense.

  • @wingwizard said:
    Thanks everyone.

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Is this specifically regarding coding? I was reading a bit about the difference between it and 3.5 and would be interested in your experience of that outside of coding :) and why you feel it’s such a big difference

    No, I mean it in general. It is simply obvious from its answers and from the conversations you have with it that GPT-4 is far, far more intelligent, keeps a much better sense of context, and can understand (and solve) problems of pretty much any complexity, in any field. I could drop examples here.

  • wimwim
    edited November 2023

    @SevenSystems said:

    @wingwizard said:
    Thanks everyone.

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Is this specifically regarding coding? I was reading a bit about the difference between it and 3.5 and would be interested in your experience of that outside of coding :) and why you feel it’s such a big difference

    No, I mean it in general. It is simply obvious from its answers and from the conversations you have with it that GPT-4 is far, far more intelligent, keeps a much better sense of context, and can understand (and solve) problems of pretty much any complexity, in any field. I could drop examples here.

    Can it help decode wtf my wife really means when she asks me something? And how to avoid falling into whatever the trap is? Now that would be the best $20 a month ever spent.

  • @wim said:

    @SevenSystems said:

    @wingwizard said:
    Thanks everyone.

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Is this specifically regarding coding? I was reading a bit about the difference between it and 3.5 and would be interested in your experience of that outside of coding :) and why you feel it’s such a big difference

    No, I mean it in general. It is simply obvious from its answers and from the conversations you have with it that GPT-4 is far, far more intelligent, keeps a much better sense of context, and can understand (and solve) problems of pretty much any complexity, in any field. I could drop examples here.

    Can it help decode wtf my wife really means when she asks me something? And how to avoid falling into the trap? Now that would be the best $20 spent per month of my life.

    No, that'll earn you a big fat warning "This question might violate our terms and conditions" 😁 (I'm not making this up! It'd be considered "sexism")

  • @SevenSystems said:

    @wingwizard said:
    Thanks everyone.

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Is this specifically regarding coding? I was reading a bit about the difference between it and 3.5 and would be interested in your experience of that outside of coding :) and why you feel it’s such a big difference

    No, I mean it in general. It is simply obvious from its answers and from the conversations you have with it that GPT-4 is far, far more intelligent, keeps a much better sense of context, and can understand (and solve) problems of pretty much any complexity, in any field. I could drop examples here.

    I periodically read posts by a couple of researchers who keep tabs on it, and even ChatGPT 4 can get things egregiously wrong…one of them likes to have ChatGPT summarize his scientific papers…and it sometimes produces summaries that are eloquent, well written, and completely wrong.

  • He constantly reminds people that if you don’t have expertise in the field you are having it work on, you can’t be confident in ChatGPT getting it right. The problem is that the quality of writing can be very convincing.

  • @SevenSystems said:

    @wim said:

    @SevenSystems said:

    @wingwizard said:
    Thanks everyone.

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Is this specifically regarding coding? I was reading a bit about the difference between it and 3.5 and would be interested in your experience of that outside of coding :) and why you feel it’s such a big difference

    No, I mean it in general. It is simply obvious from its answers and from the conversations you have with it that GPT-4 is far, far more intelligent, keeps a much better sense of context, and can understand (and solve) problems of pretty much any complexity, in any field. I could drop examples here.

    Can it help decode wtf my wife really means when she asks me something? And how to avoid falling into the trap? Now that would be the best $20 spent per month of my life.

    No, that'll earn you a big fat warning "This question might violate our terms and conditions" 😁 (I'm not making this up! It'd be considered "sexism")

    Can confirm this is true. 😂 I get these warnings very often as I’m interested in things that are true. I made it apologise to me yesterday by asking “why is it your answer to ————— seems heavily biased toward presenting favourable aspects of ———- to the extent that it compromises your ability to answer the question?”

  • edited November 2023

    @espiegel123 said:

    @SevenSystems said:

    @wingwizard said:
    Thanks everyone.

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Is this specifically regarding coding? I was reading a bit about the difference between it and 3.5 and would be interested in your experience of that outside of coding :) and why you feel it’s such a big difference

    No, I mean it in general. It is simply obvious from its answers and from the conversations you have with it that GPT-4 is far, far more intelligent, keeps a much better sense of context, and can understand (and solve) problems of pretty much any complexity, in any field. I could drop examples here.

    I periodically read posts by a couple of researchers who keep tabs on it, and even ChatGPT 4 can get things egregiously wrong…one of them likes to have ChatGPT summarize his scientific papers…and it sometimes produces summaries that are eloquent, well written, and completely wrong.

    That old excuse. (Joke)

  • @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Yes, I’m referring to GPT4. As I said, I use it every day, as a developer. It’s very much “in the same universe”; it’s just a more up-to-date model.

    And I emphasize model. It’s not AI in any meaningful sense of the term. It’s a pattern processing system. A good one, yes, but significant to us only because it’s been specifically designed to operate on language patterns, which we take to be strongly related to cognition.

    “Intuition” isn’t enough when vetting the output of these things. In fact, I’d say it’s even less useful when validating the output of GPT4, which impresses you so much precisely because its output is so much more plausible. It still lies, and often in ways that only expertise and careful testing reveal.

  • @garden said:

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Yes, I’m referring to GPT4. As I said, I use it every day, as a developer. It’s very much “in the same universe,” it’s just a more up to date model.

    And I emphasize model. It’s not AI in any meaningful sense of the term. It’s a pattern processing system. A good one, yes, but significant to us only because it’s been specifically designed to operate on language patterns, which we take to be strongly related to cognition.

    “Intuition” isn’t enough when vetting the output of these things. In fact, I’d say it’s even less useful when validating the output of GPT4, which impresses you so much precisely because its output is so much more plausible. It still lies, and often in ways that only expertise and careful testing reveal.

    :) :( :| :/

  • @garden said:

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    And I emphasize model. It’s not AI in any meaningful sense of the term. It’s a pattern processing system. A good one, yes, but significant to us only because it’s been specifically designed to operate on language patterns, which we take to be strongly related to cognition.

    What if "true" cognition is just "operation on language patterns"? ;)

  • @wingwizard said:
    Just as the title says: I was wondering whether devs here have much use for AI in coding, as I've been toying, as a non-coder, with using it to build little things here and there. For example, does it save you time typing out lines or sections of code, in that you can just get it to generate things for you? And beyond that, is it (or would it be) more helpful for generating code you'd otherwise have had to construct yourself, to turn a concept you might have into code?

    I realise coding is a complex skill, and just as with making music or making art, AI isn't simply replacing the human ability to use that skill. But it's exciting to me as something that's able to realise concepts more easily (I work with concepts, really).

    Can you tell or show us what you have done with AI, and how? I'm very interested in this, if not for building a full app, then for building things that let apps interact with each other… Thanks, great topic..

  • heshes
    edited November 2023

    @garden said:
    And I emphasize model. It’s not AI in any meaningful sense of the term. It’s a pattern processing system. A good one, yes, but significant to us only because it’s been specifically designed to operate on language patterns, which we take to be strongly related to cognition.

    @SevenSystems said:
    What if "true" cognition is just "operation on language patterns"? ;)

    Yes, I would object similarly. If you're saying that ChatGPT/GPT4 is "not AI in any meaningful sense of the term", then you have a very strange definition of AI. AI is "artificial intelligence". The algorithms on Spotify and Amazon that make song and product recommendations are AI, under any meaningful definition of the term. The generalized form of AI in GPT, much more so.

    It's important to note that AI is (or can be) something completely separate from notions of "sentience" or "consciousness" or "personhood". You can have AI without any of those things. Any basic resource on AI will draw attention to the distinction. E.g., https://www.linkedin.com/pulse/artificial-consciousness-vs-intelligence-stefan-korn

    Also, I'm pretty sure not even experts have a very good understanding of how the human brain works. But, yes, it's clear that pattern recognition is a huge part of it.

  • edited November 2023

    @garden said:

    @SevenSystems said:
    Is everyone here talking about GPT-3.5 (the free version)? If so, the difference to the paid GPT-4 is like the difference between a fish and a cruiseship, it's not even in the same universe. If you want to spend the best $20/month of your lives, subscribe to ChatGPT Plus.

    Yes, I’m referring to GPT4. As I said, I use it every day, as a developer. It’s very much “in the same universe,” it’s just a more up to date model.

    And I emphasize model. It’s not AI in any meaningful sense of the term. It’s a pattern processing system. A good one, yes, but significant to us only because it’s been specifically designed to operate on language patterns, which we take to be strongly related to cognition.

    “Intuition” isn’t enough when vetting the output of these things. In fact, I’d say it’s even less useful when validating the output of GPT4, which impresses you so much precisely because its output is so much more plausible. It still lies, and often in ways that only expertise and careful testing reveal.

    I don't code but I do use Stable Diffusion a ton every day for my job and I am finding it easier to just avoid the term AI when talking to most people about it. I just call it diffusion rendering now as people's expectations of what can be done with it as an actual AI and how it works are waaaay overblown. Useful AF cultural content remix machine and reference generator though.

  • @hes said:

    @garden said:
    And I emphasize model. It’s not AI in any meaningful sense of the term. It’s a pattern processing system. A good one, yes, but significant to us only because it’s been specifically designed to operate on language patterns, which we take to be strongly related to cognition.

    @SevenSystems said:
    What if "true" cognition is just "operation on language patterns"? ;)

    Yes, I would object similarly. If you're saying that ChatGPT/GPT4 is "not AI in any meaningful sense of the term", then you have a very strange definition of AI. AI is "artificial intelligence". The algorithms on Spotify and Amazon that make song and product recommendations are AI, under any meaningful definition of the term. The generalized form of AI in GPT, much more so.

    It's important to note that AI is (or can be) something completely separate from notions of "sentience" or "consciousness" or "personhood". You can have AI without any of those things. Any basic resource on AI will draw attention to the distinction. E.g., https://www.linkedin.com/pulse/artificial-consciousness-vs-intelligence-stefan-korn

    Also, I'm pretty sure not even experts have a very good understanding of how the human brain works. But, yes, it's clear that pattern recognition is a huge part of it.

    Yes... But also, let's not focus solely on the brain. 'Embodied Cognition' is worth a Google for anyone interested.

    By the way, just to mention that everyone has free access to GPT-4 through Bing Chat (you used to have to use the Microsoft Edge browser or app to get access to that, but it now also works directly in the Bing app). ChatGPT Plus still has some features that people might find worth paying for.

    Also, for accuracy, Claude AI is supposed to be better than ChatGPT; in theory it should tell you when it can't give an answer instead of just making shit up. Have any coders seen whether they get better results from it?

  • I do quite a lot of coding now using these tools. I certainly wouldn't be doing that if they didn't exist; I just wouldn't have had the time.

    I think this is a useful visual guide to how LLMs work. It's difficult because they can't really be visualised accurately, but an LLM could be described as a very deep, multi-dimensional predictive text system.

    https://www.theguardian.com/technology/ng-interactive/2023/nov/01/how-ai-chatbots-like-chatgpt-or-bard-work-visual-explainer
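    The "deep predictive text" intuition can be sketched in a few lines. This toy bigram model is nothing like a real transformer, but it captures the basic idea: predict the next word as the most frequent continuation seen in the training text.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the text."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Greedy decoding: return the single most common continuation."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat": it follows "the" twice, "mat"/"fish" once each
```

    A real LLM differs in scale and mechanism (learned vector representations and attention over long contexts rather than literal word-pair counts), but the training objective is recognisably the same: predict what comes next.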

  • @garden said:

    […]
    And I emphasize model. It’s not AI in any meaningful sense of the term. It’s a pattern processing system. A good one, yes, but significant to us only because it’s been specifically designed to operate on language patterns, which we take to be strongly related to cognition.

    Since computer scientists in the AI field refer to machine learning systems as being a part of AI and refer to them as AIs, I disagree that they aren’t AIs in a meaningful way. “Meaningful” means the phrase has meaning and it is used by people in the field.

    I think it is fair to say that AI is not intelligence as we generally mean intelligence. AI has come to mean things that have diverged since the term was first coined.

    I think people who aren’t in AI have a notion of what “real” AI is, based on the early years of the field, when computer scientists imagined that the future of human-like computing would take the form of systems that mimic the brain.

    Until fairly recently (let’s say until 15 or 20 years ago, but my timeframe could be off), it wasn’t understood that systems that are fast enough, have access to sufficiently large datasets (datasets people couldn’t imagine 50 years ago), and are provided with the right rulesets could come up with what looks like deep data analysis and creativity.

    Machine learning comes up with stuff computer scientists would not have imagined a “non-thinking” system could do.

    Jaron Lanier’s articles about AI in The New Yorker are well worth reading.
