
Uploading synth manuals to AI…

This is probably the most honorable way to incorporate AI into your music production.

Upload your synth manuals to NotebookLM and then let the AI dig through the manual and help you find a solution… you can even turn your manual into a “podcast”.

Imma try it with a couple of synths and see if I can learn something new. I have a lot of iOS synths I never use, and this could bring a little life back to them.

I’m not really into reading manuals.

Comments

  • Digitakt 2 mindmap…

  • This is interesting .. will try it, thanks for the info

  • @dendy said:
    This is interesting .. will try it, thanks for the info

    It’s pretty fun. I just asked it a simple question… how to turn a single cycle waveform into a pad on the Digitakt 2, and it gave me a great response. The response has clickable links to the manual in case you forget how to do something.

  • An interactive manual, one you could speak to and ask questions of (instead of an entire podcast), seems to me like it might be more relevant for people.

  • @NeuM said:
    An interactive manual, one you could speak to and ask questions of (instead of an entire podcast), seems to me like it might be more relevant for people.

    You can do that!
    The podcast thing is just there, but not really needed, obviously.

  • @reasOne said:

    @NeuM said:
    An interactive manual, one you could speak to and ask questions of (instead of an entire podcast), seems to me like it might be more relevant for people.

    You can do that!
    The podcast thing is just there, but not really needed, obviously.

    As long as they give accurate answers and don't "hallucinate" their responses, that would be great.

  • @NeuM said:

    @reasOne said:

    @NeuM said:
    An interactive manual, one you could speak to and ask questions of (instead of an entire podcast), seems to me like it might be more relevant for people.

    You can do that!
    The podcast thing is just there, but not really needed, obviously.

    As long as they give accurate answers and don't "hallucinate" their responses, that would be great.

    NotebookLM’s answers cite the manual pages from which they are derived, so you can verify them if you wish.

  • @NeuM said:
    As long as they give accurate answers and don't "hallucinate" their responses, that would be great.

    LLMs do not hallucinate if you ask them questions about information from a specific data set / documents you provided them (in GPT you can also add documents and then query questions about those documents) - because in that case it works differently, it really "reads and understands" that document and answers you based on what's in that document..

    Hallucinations arise when you ask a general question without providing source data, so it answers just based on its pretrained configuration (which is NOT exact data).

    This is why, for asking questions about facts, it's better to use things like Perplexity.AI - which actually browses the internet and then uses an LLM just to analyse and extract data from the context found on the web - while a raw, plain LLM is better used just for processing provided context data or for things like general ideas / logical thinking.

    Additionally, to mitigate hallucinations you should explicitly mention in the prompt that if it doesn't know the correct answer, it should just admit "I don't know" - that's because these things by default run in a so-called "optimistic execution" mode, which means they are basically trying to answer you at all costs - this behaviour can be largely overridden by a proper prompt.

    But anyway - never use plain LLMs as a replacement for Google; for that purpose use Perplexity.AI.
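
    For example something like this - a minimal sketch using the OpenAI Python SDK (the model name, the file path and the question are just placeholder assumptions, adjust to whatever you use):

    # Sketch: ground the model in a provided document and tell it to
    # admit ignorance instead of guessing. Assumes the OpenAI Python SDK;
    # the model name, manual path and question are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    manual_text = open("synth_manual.txt", encoding="utf-8").read()

    system_prompt = (
        "Answer ONLY from the document below. If the document does not "
        "contain the answer, reply exactly: I don't know.\n\n"
        "--- DOCUMENT ---\n" + manual_text
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever chat model you have
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "How do I sync the LFO to the sequencer?"},
        ],
    )
    print(response.choices[0].message.content)

    The system prompt is the part doing the work here - it overrides that "answer at all costs" default.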

  • DeepSeek seems to read them.
    Xi says Hi and good luck with your synth.

  • @dendy said:

    @NeuM said:
    As long as they give accurate answers and don't "hallucinate" their responses, that would be great.

    LLMs do not hallucinate if you ask them questions about information from a specific data set / documents you provided them (in GPT you can also add documents and then query questions about those documents) - because in that case it works differently, it really "reads and understands" that document and answers you based on what's in that document..

    Hallucinations arise when you ask a general question without providing source data, so it answers just based on its pretrained configuration (which is NOT exact data).

    Hmm, ChatGPT seems to disagree with you. Here's how ChatGPT responded when I asked if what you said was true:

    Large language models (LLMs) can still hallucinate, even when answering questions about specific documents you provided them. The risk is lower, especially if the retrieval or context window is properly managed, but it’s not eliminated.

    🧠 Why Hallucinations Still Happen

    Even when a document is provided directly—whether by:
    • Uploading it to ChatGPT
    • Using tools like RAG (retrieval-augmented generation)
    • Putting it directly into the prompt

    the model can still:
    • Misinterpret context or skip over important nuance
    • Infer information that sounds plausible but isn’t stated
    • Blend in general world knowledge even when it contradicts the source
    • Fill in gaps when your question implies something the document doesn’t clearly address

    🧪 Example of When Hallucinations Still Occur

    Document says:
    “John Smith resigned in 2020 after ten years as CTO.”

    User asks:
    “When did John Smith become CEO?”

    → The model might “helpfully” guess based on the date and his role—even though the document never says he was CEO at all.

    🧰 In Technical Terms

    This is especially true for:
    • Longer documents, where the model may not attend to all parts equally
    • Complex reasoning, where inference chains are needed
    • Ambiguous or leading questions

    Even RAG-based systems, which retrieve and inject relevant passages, don’t eliminate hallucinations—they just reduce the odds, especially when paired with careful prompt engineering or fine-tuning.

    ✅ TL;DR:

    You’re right: even when “reading” from a specific document, hallucinations can still happen. The model doesn’t have a truth-checking mechanism—it just predicts the most likely continuation based on its training and inputs, not what is strictly true.

    Would you like an example, or to test this live with a sample document?
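
    For what it's worth, here is a toy sketch of the "retrieve and inject" step it describes - plain Python, with keyword-overlap scoring standing in for a real embedding search, and a hypothetical ask_llm() marking where the actual model call would go:

    # Toy retrieval-augmented generation: pick the chunk of the document
    # that best matches the question, then inject it into the prompt.
    # Keyword overlap stands in for a real embedding search; ask_llm()
    # is a hypothetical placeholder for an actual model call.

    def chunk(text: str, size: int = 500) -> list[str]:
        """Split a document into fixed-size character chunks."""
        return [text[i:i + size] for i in range(0, len(text), size)]

    def score(question: str, passage: str) -> int:
        """Count how many of the question's words appear in the passage."""
        words = set(question.lower().split())
        lowered = passage.lower()
        return sum(1 for w in words if w in lowered)

    def retrieve(question: str, document: str) -> str:
        """Return the chunk that best matches the question."""
        return max(chunk(document), key=lambda c: score(question, c))

    def build_prompt(question: str, document: str) -> str:
        """Inject only the retrieved passage, plus an 'admit ignorance' instruction."""
        passage = retrieve(question, document)
        return (
            "Answer only from this passage; if it doesn't contain the answer, "
            "say 'I don't know'.\n"
            f"Passage: {passage}\nQuestion: {question}"
        )

    # answer = ask_llm(build_prompt(question, manual_text))  # model call goes here

    Note that the model only ever sees the retrieved chunk, which is exactly why a weak retrieval step or a leading question can still produce a confident wrong answer.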

  • Yes, I found that these tools can sometimes be useful for manual summaries, but in many cases there are mistakes, often huge ones, and crucial omissions.

    Generally, for me, if I want to be sure my info is accurate, it's much better to just study the manual myself, in the traditional way. I'm also likely to retain the info and understand it on a deeper level.

    Difficulty has its rewards. The easy path is not always the best when it comes to learning.

    It's also often less time-consuming to just read the manual myself, as combing through and correcting the AI's output can itself be hella time-consuming. Often the old ways are still the best.

  • @res

    well ok.. you (or ChatGPT :)) provided an example where the question was about something that is actually not found in the document added to the context window ..

    Obviously this is where “optimistic execution” comes into play again .. this can easily be mitigated by telling the model to answer only based on information in the provided document, and to explicitly admit the lack of information if asked about something outside the context window data …

    While yes, there is still a very small (theoretical) likelihood of hallucinating, in reality with the latest models you will probably never experience it ..
    But yeah, if you want to nitpick, it is not 100% impossible, ok.

    Another important element is context window size. Just because some model states it has, for example, a 1 million token context window, that doesn't mean it's a good idea to upload a text of that size and then query it.. The context window is not like a “database” where everything is stored equally - even with a big context window, older data gets a bit “blurred” and may even be forgotten .. so in reality it's a good idea not to use documents that are too big (relative to the max context window). Personally I always try to stay below 30-40% of the officially stated context window size.
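
    A quick way to sanity-check a document before uploading is to count its tokens - a minimal sketch using the tiktoken library (the 128k window and the cl100k_base encoding are example assumptions, check your model's documentation):

    # Sketch: check a document against a conservative token budget before
    # putting it into the context window. Window size and encoding are
    # example assumptions, not universal values.
    import tiktoken

    CONTEXT_WINDOW = 128_000             # example: officially stated window
    BUDGET = int(CONTEXT_WINDOW * 0.35)  # stay below ~30-40% of it

    enc = tiktoken.get_encoding("cl100k_base")
    manual_text = open("manual.txt", encoding="utf-8").read()

    n_tokens = len(enc.encode(manual_text))
    print(f"{n_tokens} tokens against a budget of {BUDGET}")
    if n_tokens > BUDGET:
        print("Too big - split the manual into sections and upload them separately.")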

  • Perhaps you should ask the companies behind those various instruction manuals before you upload their copyrighted work into an LLM.

  • @mistercharlie said:
    Perhaps you should ask the companies behind those various instruction manuals before you upload their copyrighted work into an LLM.

    there is a huge difference between using it just as a document for the context window and using it as training data for network learning.. very different things.. uploading a document which you then query doesn't mean that data is used for training the model...

  • @reasOne this is an interesting use of AI tools. I wasn’t aware of NotebookLM. Thanks for sharing.

    My experience mirrors @Gavinski’s thoughts in some ways. I’ve been trying to limit the tools I use and learn how to use that limited set well. My process is to create a set of searchable, networked notes in Obsidian. The process of making myself write out an app’s functionality really solidifies my understanding. I do this the old-fashioned way by using the app, or watching tutorials, then taking notes. I could see a role in this process for having your own personalized GPT though, especially for complex apps with a lot of functionality (e.g. a DAW).

    Regarding copyright: I’m not a lawyer, but this limited personal use seems like fair use.

  • @dendy said:

    @mistercharlie said:
    Perhaps you should ask the companies behind those various instruction manuals before you upload their copyrighted work into an LLM.

    there is a huge difference between using it just as a document for the context window and using it as training data for network learning.. very different things.. uploading a document which you then query doesn't mean that data is used for training the model...

    👍

  • I'm sorry, but that kind of clickbait article meant to frighten people is nonsense. "A.I." is a tool which extends what individuals can do. Mind you, a person can still do everything the "A.I." can, but it could take them years or decades to master the skills necessary (whatever they are).

    No one's brain is being damaged or depleted by the existence of A.I. A person can choose to expend whatever time and energy they want in order to achieve the same results. A lazy person will still be lazy. A skilled person will still retain those skills.

  • @NeuM said:

    I'm sorry, but that kind of clickbait article meant to frighten people is nonsense. "A.I." is a tool which extends what individuals can do. Mind you, a person can still do everything the "A.I." can, but it could take them years or decades to master the skills necessary (whatever they are).

    No one's brain is being damaged or depleted by the existence of A.I. A person can choose to expend whatever time and energy they want in order to achieve the same results. A lazy person will still be lazy. A skilled person will still retain those skills.

    Nope, that is not a considered response. Tools do change our brains; there is copious evidence for that, and it would be very naive to contend otherwise. You would have to deny either that tools change the brain, or that AI is a tool. Neither claim would stand up to scrutiny.

    GPS weakens spatial memory over time, calculators reduce arithmetic fluency, and even the adoption of writing itself, over time, rewired how we humans store and process information. Similarly, AI shapes how we think, what we practice, and what we stop doing. This is not to say AI has no use or upsides. I use AI, it has pros and cons. But what I said above stands.

  • Everything on this forum turns into a debate 🤣 We are def looping pros… looping a debate in every conversation.

    Ima upload some manuals to this for quick reference and fast summaries of the manuals' information.
    I think it's cool 🆒
    If it's not someone's thing.. then ok!

  • @reasOne said:
    Everything on this forum turns into a debate 🤣 We are def looping pros… looping a debate in every conversation.

    Ima upload some manuals to this for quick reference and fast summaries of the manuals' information.
    I think it's cool 🆒
    If it's not someone's thing.. then ok!

    I think it’s cool too, just want to mention the other side of the coin. The tech is interesting, it’s exciting, but the downsides are worth a mention, that’s all!

  • @reasOne said:
    Everything on this forum turns into a debate 🤣 We are def looping pros… looping a debate in every conversation.

    Ima upload some manuals to this for quick reference and fast summaries of the manuals' information.
    I think it's cool 🆒
    If it's not someone's thing.. then ok!

    I’ve been using NotebookLM ever since another regular forum member posted about it a month or so ago, and I like it. It has been a very useful tool for me, because I still took the time to read parts of the manual so that I could learn the lexicon of the product or discipline I am learning. The same feature can have different names in different products. In disciplines involving formulas, there are terms, variables, and definitions one should become familiar with in order to ask good questions, prompt well, and obtain good results. So @Gavinski makes a good point.

  • Some time ago I uploaded the Digitone manual to Claude.AI and asked it to give me some recipes for guitar sounds. It got me to around 60-70% of what I was looking for, which was surprising. That was about a year ago; it should be even better now.

  • @reasOne said:
    This is probably the most honorable way to incorporate AI into your music production.
    Upload your synth manuals to NotebookLM and then let the AI dig through the manual and help you find a solution
    I’m not really into reading manuals.

    This has been the first real game changer for me in ages. It has opened up apps that I had given up on or forgotten.
    Right now I've dug out Polyphase, and it's like I finally understand it and am making use of it again.
    The AI is like having a tutor sitting patiently at your side; whenever I need help, it answers to the point immediately.
    In some cases where the actual info was missing or unclear in the manual, the AI has so far pointed that out. Or sometimes I have to reword my question to get the answer.

    Trying to find explicit answers in a long manual has frustrated me so many times, or even thrown me out of my groove when I was in the middle of something. Or just made me sleepy.

    I don't think AI will detract from the work of 'professional explainer' videos. I like @Gavinski's and others' for introducing me to something new, but I could never use those for hunting down particular bits of info when I needed them.

  • @MrStochastic said:

    @reasOne said:
    This is probably the most honorable way to incorporate AI into your music production.
    Upload your synth manuals to NotebookLM and then let the AI dig through the manual and help you find a solution
    I’m not really into reading manuals.

    This has been the first real game changer for me in ages. It has opened up apps that I had given up on or forgotten.
    Right now I've dug out Polyphase, and it's like I finally understand it and am making use of it again.
    The AI is like having a tutor sitting patiently at your side; whenever I need help, it answers to the point immediately.
    In some cases where the actual info was missing or unclear in the manual, the AI has so far pointed that out. Or sometimes I have to reword my question to get the answer.

    Trying to find explicit answers in a long manual has frustrated me so many times, or even thrown me out of my groove when I was in the middle of something. Or just made me sleepy.

    I don't think AI will detract from the work of 'professional explainer' videos. I like @Gavinski's and others' for introducing me to something new, but I could never use those for hunting down particular bits of info when I needed them.

    That's a good point. You wouldn't go to the A.I. (at least, not yet) for a presentation and demo of a plugin or hardware, but asking it to pull an answer out of a detailed manual is a perfect use case.
