
AI for auv3 development?

I've been goofing around with ChatGPT a bit, making scripts and stuff, both to do admin things with files and to automate some things in Illustrator and InDesign.

I'm thinking about using AI to get help porting some open source AU plugins I like to AUv3, just for my own use and to challenge myself, and to see if I learn something from it.
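
From what I understand so far, the target is roughly this shape: an AUv3 plugin is an AUAudioUnit subclass living in an app extension, and the original plugin's DSP ends up inside its render block. Something like this minimal sketch (the class name, bus setup, and the silent render body are just placeholders, not taken from any particular plugin):

```swift
import AVFoundation
import AudioToolbox

// Minimal sketch of the AUv3 entry point: the host instantiates an
// AUAudioUnit subclass and pulls audio through its internalRenderBlock.
// Bus setup is simplified; a real port also needs the app extension
// target, an Info.plist component description, and a parameter tree.
public class PortedFilterAU: AUAudioUnit {   // placeholder name

    private var outputBusArray: AUAudioUnitBusArray!

    public override init(componentDescription: AudioComponentDescription,
                         options: AudioComponentInstantiationOptions = []) throws {
        try super.init(componentDescription: componentDescription, options: options)

        // One stereo output bus at 44.1 kHz; a real plugin negotiates formats with the host.
        let format = AVAudioFormat(standardFormatWithSampleRate: 44_100, channels: 2)!
        let outputBus = try AUAudioUnitBus(format: format)
        outputBusArray = AUAudioUnitBusArray(audioUnit: self, busType: .output, busses: [outputBus])
    }

    public override var outputBusses: AUAudioUnitBusArray { outputBusArray }

    // Real-time render callback: this is where the ported DSP code would run.
    public override var internalRenderBlock: AUInternalRenderBlock {
        return { actionFlags, timestamp, frameCount, outputBusNumber, outputData, realtimeEventListHead, pullInputBlock in
            // Placeholder: write silence instead of calling the original plugin's process routine.
            let abl = UnsafeMutableAudioBufferListPointer(outputData)
            for buffer in abl {
                memset(buffer.mData, 0, Int(buffer.mDataByteSize))
            }
            return noErr
        }
    }
}
```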

So, assuming we understand the high rate of failure, would you recommend any particular LLM or environment for AI-assisted coding on this?

I'm not expecting miracles, and I'm aware that learning everything myself has its merits; I haven't ruled that out. But I managed to make my ideas a reality by using ChatGPT, and while I don't think it took me less time, it got me over hurdles I'd never cleared before.

Comments

  • Oh, and I respect people who just hate AI; I just don't really want to have that discussion, I've already had it.

  • edited February 18

    In my experience, the best LLM for general coding tasks is Claude 3.5 Sonnet.

    But keep in mind one important thing that many people get wrong.

    An LLM is NOT a database. It does not actually store entire API documentation or the other material that was used as training data. A plain LLM is not a replacement for Google or Stack Overflow.

    Do not ask it directly about API features, specific call/function parameters and so on - you risk getting an answer that is not actually true (i.e. a "hallucination").

    This is because these things work on a principle you could call "optimistic execution": the model gives you the first probable answer to your question that it manages to complete - which is not always the CORRECT answer.

    For example, if you ask about something that was NOT in the training data (or was removed from the network configuration during the learning phase - no space here to explain that in detail), it just tries to build an answer based on similar concepts. E.g. if you ask about some specific API, it can generate a plausible answer based on multiple OTHER APIs from the training data ;-). It literally tries to "guess" the right answer.

    TL;DR: If you want to ask how to build some specific algorithm or solve some specific problem - things which in general only need logical thinking and basic knowledge of the language - it is perfectly OK to ask an LLM (see the small sketch at the end of this comment).

    If you want to ask about some particular detail from API documentation, rather use something like Perplexity.AI, which uses an LLM just to understand what you are asking, then actually GOOGLES the internet for related answers and presents you a result, again parsed by the LLM from everything it found on the internet.

    Another good method for asking about specific details of some API is to just UPLOAD the entire API documentation into it (for example as a PDF) and then ask things about that API.

    But by default, a plain LLM is not a database where the entire internet is stored 1:1 :-) That's not the case.
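
    As a concrete (hypothetical) example of the kind of self-contained task an LLM usually handles well - no API lookups needed, just basic DSP and language knowledge - something like a one-pole low-pass filter:

    ```swift
    import Foundation

    // Hypothetical example (not from this thread) of an LLM-friendly,
    // self-contained task: a one-pole low-pass filter. It needs only
    // basic DSP knowledge, no API documentation lookups.
    struct OnePoleLowpass {
        private var z1: Float = 0      // previous output sample
        private let a: Float           // feedback coefficient, 0...1

        init(cutoffHz: Float, sampleRate: Float) {
            // Standard exponential-smoothing coefficient.
            a = exp(-2 * Float.pi * cutoffHz / sampleRate)
        }

        mutating func process(_ x: Float) -> Float {
            z1 = (1 - a) * x + a * z1
            return z1
        }
    }
    ```

    Asking an LLM for something like this usually works fine; asking it which exact property of some API class to override is where it starts to guess.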
