
In the future, you will disbelieve everything, and be happy


Comments

  • @NeuM said:

    @Gavinski said:

    @Gavinski said:

    @NeuM said:

    @Gavinski said:

    @NeuM said:

    @Gavinski said:

    @Simon said:

    @Pictor said:
    I think all this shit is just symptoms of an unhealthy society. The issue is not in the tool or in the freedom it gives; imho.

    Society has always been unhealthy.

    The difference was back then the nutters couldn't get publishers to broadcast their views. Social media has given nutters an outlet that will publish anything, no matter how untrue, damaging or illegal it is.

    Traditional media had filtering. Social media does not.

    I'll shut up now.

    We're all agreed on how dangerous it is. As Pictor said though, it is incredibly hard at this point to regulate. Let's see what happens now in the UK following recent events. I imagine they will prosecute some people for the content of their tweets if it can be proven they were posted with malicious intent, were knowingly spreading misinformation, and can be shown to have incited crimes. Of course, that's just a rough sketch, I'm no lawyer lol.

    If people know they are likely to be fined or jailed if found guilty of maliciously spreading potentially dangerous lies they will self censor. Instead of limiting behaviour preemptively, punish bad behaviour after the fact. Exactly the way that libel and slander laws operate, for example.

    At least one person has already been arrested in the UK for posts on Facebook. Said another way… UK citizens are being arrested for having opinions. That is not a positive development.

    https://www.msn.com/en-us/news/world/police-arrest-woman-over-inaccurate-southport-social-media-post/ar-AA1otnee

    But it’s also not new. This from 2022:

    https://www.theverge.com/2022/2/7/22912054/uk-grossly-offensive-tweet-prosecution-section-127-2003-communications-act

    At least in the US, defamation can be countered by lawsuits. One who spreads malicious lies faces repercussions from those defamed. They aren’t arrested by the government unless they are making credible violent threats against others.

    There is a difference between 'having an opinion' and intentionally spreading lies with malicious intent. I hope she is treated with the full force of the law if found guilty.

    Don't defamation laws address this already in the UK?

    And any thoughts about this Labour councillor being arrested on suspicion of encouraging violence?

    https://x.com/darrengrimes_/status/1821568294437990576?s=61&t=EblTN1YExzME8eJ7t_dizQ

    It doesn't say exactly what she is being charged with, maybe it is defamation? No idea. But there has been tons of intentional malicious misinformation fueling these protests and those people should be punished if found to have been intentionally lying to incite attacks on innocent civilians. Anyway, this is not the place for this discussion.

    Briefly though, according to chatgpt:

    The person could be charged under the Public Order Act 1986, particularly for offenses related to stirring up racial hatred. This act makes it illegal to publish or distribute material that is threatening, abusive, or insulting, with the intent to stir up racial hatred. Given that the riots were linked to Islamophobic and anti-immigration sentiments, this law could be applied.

    Additionally, they might be charged with incitement to violence, a serious offense under UK law, which covers encouraging or assisting in the commission of an offense. Given the violence that ensued, charges related to conspiracy to commit violent disorder or even terrorism-related offenses could be considered if the intent was to create widespread fear and chaos.

    Perhaps this will be (and should be) the last post on this matter, but forcing citizens to shut their mouths instead of allowing them a forum to voice their dissent may not be the best move on the part of their elected government. The First Amendment in the US protects offensive speech, not speech which everyone can agree with.

    Agreed that it should be the last post on this, yes

  • @Gavinski said:

    Agreed that it should be the last post on this, yes

    LOL.

  • @NeuM said:
    Said another way… UK citizens are being arrested for having opinions.

    Not exactly true.

    She was arrested for "spreading false information about the identity of the Southport attacker on social media", not for having an opinion.

    It wasn't having an opinion that got her in trouble - it was publishing false "facts". A bit like shouting "fire" in a darkened cinema when there is no fire...

  • @Pictor said:
    Or.... just let's turn off any screen, for good. Problem solved!

    Honestly, yeah. As simple as it is, that is the solution.

    The digital realm has never been reality. It’s another world that people created and put stuff into it. The new age we’re entering is an extension of that.

    But even so, nature evolves. Dinosaurs that once roamed the earth terrorizing other creatures are now fossils used to make oil. Industrial evolution, moving into the digital age, is a sign that time is ever flowing like a river.

    So, what do we do? Seize the moment. Enjoy life as it is, and as it evolves. Because we are here on this earth for a limited time. Our experiences are ours, and uniquely ours, and nothing can take that from us. They can try and replicate it, but our own views, thoughts, interpretations, conscience that has lasted for our lifetime will remain ours.

  • @Simon said:

    @Pictor said:
    I think all this shit is just symptoms of an unhealthy society. The issue is not in the tool or in the freedom it gives; imho.

    Society has always been unhealthy.

    The difference was back then the nutters couldn't get publishers to broadcast their views. Social media has given nutters an outlet that will publish anything, no matter how untrue, damaging or illegal it is.

    Traditional media had filtering. Social media does not. This has to change.

    I'll shut up now. :smile:

    Who are the real nutters controlling society? Remember WMDs, amongst countless other 'facts'.

  • edited August 9

    @Pictor said:

    @kirmesteggno said:

    @Svetlovska said:
    My point is, a few years down the line, and we won’t be able to trust the objective truth about any kind of footage about anything… Who knew that postmodernism was such a bitch?

    There will be AI for fact checking anything on your screen in real time. Problems create new solutions.

    Hopefully you are right!
    I just have doubts because of the "probabilistic" nature of LLMs... hence they are good at creating doubts, but they are really bad at creating certainty, from what I've seen so far.
    Also, it's going to be really hard to "certify" a judgement from an algorithmic network if we don't develop serious tools for accurately debugging how these machines work. At the moment they are black boxes, with no chance to understand their processes.

    Competitive open source models do exist right now. Ironically, the best ones are made by a company infamous for disinformation, but they seem to be on a redemption arc lately!

    You may have heard of retrieval-augmented generation (RAG): basically a tool that runs alongside the LLM and corrects its output by instructing it to check additional sources. These tools might get baked into the models themselves at some point, or pre-training and fine-tuning may become much more affordable.

    Such a model would need to get updated and re-trained basically every day as new information is added to the curated sources.
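    A minimal sketch of the RAG idea described above (the mini-corpus, the word-overlap retriever, and the prompt format are all toy placeholders of my own; a real system would use embeddings and an actual LLM):

```python
# Toy RAG sketch: retrieve the most relevant documents for a query,
# then prepend them to the prompt so the model answers from sources
# rather than from its (possibly stale) training data.

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: -len(q_words & set(doc.lower().split())))
    return ranked[:k]

def augment_prompt(query, corpus):
    """Build the augmented prompt an LLM would receive."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical mini-corpus standing in for curated, regularly updated sources.
corpus = [
    "The suspect named in the police statement was born in Cardiff.",
    "Loopy Pro is a live looper and DAW for iPhone and iPad.",
    "RAG pairs a retriever with a language model at inference time.",
]

prompt = augment_prompt("Where was the suspect born?", corpus)
```

    Because the sources live outside the model, a daily update only means refreshing the corpus, not re-training the model, which is the main appeal of RAG over constant re-training.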

    I guess one way it could be done is by deconstructing audio and images into some kind of tree and timeline, checking it against a relatively trusted and curated source like Wikipedia, and calculating some kind of certainty score depending on how many matches it has found. If it can't find mentions of items, locations and people like those that can be seen in the OP video, it would show 0%.

    This way it could at least catch the obvious fakes. I assume that computers will have some kind of TPU hardware for LLMs quite soon.
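    The certainty-score idea above could look something like this as a sketch (the capitalised-word "entity extraction" and substring matching are deliberately naive stand-ins for real named-entity recognition and a real knowledge-base lookup):

```python
import re

def extract_entities(text):
    """Naively treat capitalised words as entities (people, places, items)."""
    return set(re.findall(r"\b[A-Z][a-z]+\b", text))

def certainty_score(claim, reference):
    """Fraction of the claim's entities found in the trusted reference text."""
    entities = extract_entities(claim)
    if not entities:
        return 0.0  # nothing checkable -> 0%, as suggested above
    matched = {e for e in entities if e in reference}
    return len(matched) / len(entities)
```

    Combining scores from several independent references, rather than a single one, would reduce the bias of trusting any one source.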

  • Trusted? Wikipedia, really? :#

    The only real way to have a reasonable grasp of truth is to do your own research, then ascertain who corroborates your own findings, then recheck and recheck; in this regard there is no shortcut.

    The alternatives are blissful ignorance, trust, or faith.

  • @knewspeak said:
    Trusted? Wikipedia, really? :#

    The only real way to have a reasonable grasp of truth is to do your own research, then ascertain who corroborates your own findings, then recheck and recheck; in this regard there is no shortcut.

    The alternatives are blissful ignorance, trust, or faith.

    Wikipedia is better than nothing, and it was just an example for catching the obvious stuff. Multiple sources could be combined to enhance the certainty of course.

    You can't research everything you come across; that's why such an automated solution makes sense when false media can be created and distributed by anyone at the click of a button. It's a better solution than regulating it to death.

  • @seonnthaproducer said:

    So, what do we do? Seize the moment. Enjoy life as it is, and as it evolves. Because we are here on this earth for a limited time. Our experiences are ours, and uniquely ours, and nothing can take that from us. They can try and replicate it, but our own views, thoughts, interpretations, conscience that has lasted for our lifetime will remain ours.

    I agree.
    Existentially speaking, the only true reality is the one that we subjectively experience from our relative point of view. And that's what matters. And we are free to direct it as we desire.
    The shared collective illusion of a single truth is actually a sub-reality that is part of our experience; it doesn't necessarily exist, and it is not necessarily true just because, in the realm of materialism, it seems cohesive (or follows the modern trend of using scientific data to hold on to an interpretation).

    I actually am happy that we have to face these philosophical topics, as a society. Before AI everybody thought I was either crazy or trying to gaslight them, when bringing up these topics 😂

  • edited August 9

    One surprising development in all this is that governments have managed to make citizens strongly and almost fanatically believe (and defend) that it's mostly CITIZENS that are lying and manipulating, NOT governments 🥴

  • @kirmesteggno said:
    Competitive open source models do exist right now. Ironically, the best ones are made by a company infamous for disinformation, but they seem to be on a redemption arc lately!

    You may have heard of retrieval-augmented generation (RAG): basically a tool that runs alongside the LLM and corrects its output by instructing it to check additional sources. These tools might get baked into the models themselves at some point, or pre-training and fine-tuning may become much more affordable.

    Such a model would need to get updated and re-trained basically every day as new information is added to the curated sources.

    I've read the term; I understand the idea. I'm just not sure how much more real "certainty" or "reliability" that would provide.
    But yeah, LLMs alone aren't enough, so wrapping them in some kind of vetting process might help.
    Although I would still not be sure how much I can trust the results :)
    And I'm still waiting for a decent solution that would allow live re-training fast enough to keep up with the infinite amount of new info produced every day. At the moment we lack that power. Retraining every day might not be attainable (with the current processes/tech we use).

    But sure, open source is doing great in the field, countering the monopolistic attitude of corps on AI.

    I guess one way it could be done is by deconstructing audio and images into some kind of tree and timeline, checking it against a relatively trusted and curated source like Wikipedia, and calculating some kind of certainty score depending on how many matches it has found. If it can't find mentions of items, locations and people like those that can be seen in the OP video, it would show 0%.

    Hey! Building a Wikipedia with a certainty score was my idea during the pandemic :smiley: , when I hoped to build a platform for neutrally confronting multiple pieces of info and scoring them. The idea was exactly that certainty is an illusion, and that we needed a healthier space where anything can be written but is then vetted and scored according to several key factors (verifiable, scientific, speculative, reliable, locally solid/sound). But it was pretty ambitious....
    Nowadays LLMs already do something similar (with a lot less control over the process).

    But giving 0% just because there was no previous mention is, imho, a biased result.
    Also, I personally don't fully trust Wikipedia. It's open and started as a nice project, and I also know people working on it, but it's still exposed to elitist/bipartisan selection of info. They too often make unavailable the discussions that led to choosing one "truth" instead of another (so it's not fully open). Plus, I had a couple of other personal experiences that made me seriously doubt the influence pyramid behind (that great tech that is) Wikipedia.
    Just to say that (for how I look at things) it's really hard to have reliable/stable info.

    This way it could at least catch the obvious fakes. I assume that computers will have some kind of TPU hardware for LLMs quite soon.

    We already do on recent iPads, don't we? :)

    @knewspeak said:

    Trusted, Wikipedia really :#

    The only real way to have a reasonable grasp of truth, is to do your own research, then ascertain who corroborates with your own findings, then recheck and recheck, in this regard there is no shortcut.

    The alternatives blissful ignorance, trust or faith.

    I absolutely agree ^^

    (also I believe that in the end, in this reality, we can't escape from eventually trusting or having faith in some premises; I mean, for me, science itself is also subject to this rule.)

    @kirmesteggno said:

    @knewspeak said:

    Wikipedia is better than nothing, and it was just an example for catching the obvious stuff. Multiple sources could be combined to enhance the certainty of course.

    I would just invert the reasoning of "enhance the certainty" by 180°..... imo certainty is a relativistic illusion.
    I'd rather try to "reduce the uncertainty" and, after that, keep considering it as uncertainty, without falling into the bias of believing it as if it were "almost certain" (that, imho, is one of the big biases of our modern times).

    You can't research everything you come across, that's why such an automated solution makes sense when false media can be created and distributed by anyone at the click of a button. It's a better solution than regulating it to death.

    "you can't research everything" is one of the reasons why I wanted to create that platform: alone is a mission impossible. The idea of collective research is great (the same as science or wikipedia enact).

  • edited August 9

    @SevenSystems said:
    One surprising development in all this is that governments have managed to make citizens strongly and almost fanatically believe (and defend) that it's mostly CITIZENS that are lying and manipulating, NOT governments 🥴

    Yes, that's a new kind of politics: dropping the responsibility and accountability on the population rather than on state representatives/roles/systems.
    I'm thinking that gaslighting us with cognitive dissonance, ambiguity and uncertainty is part of a long-term plan for changing the status quo of old-school democracies. That's why I am alarmed when govs (or cultural movements) try to break old principles and flip the narrative; it can lead to very detrimental effects with the passing of decades. (Philosophy is a delicate thing.)

  • @Pictor said:

    @kirmesteggno said:
    Competitive open source models do exist right now. Ironically, the best ones are made by a company infamous for disinformation, but they seem to be on a redemption arc lately!

    You may have heard of retrieval-augmented generation (RAG): basically a tool that runs alongside the LLM and corrects its output by instructing it to check additional sources. These tools might get baked into the models themselves at some point, or pre-training and fine-tuning may become much more affordable.

    Such a model would need to get updated and re-trained basically every day as new information is added to the curated sources.

    I've read the term; I understand the idea. I'm just not sure how much more real "certainty" or "reliability" that would provide.
    But yeah, LLMs alone aren't enough, so wrapping them in some kind of vetting process might help.
    Although I would still not be sure how much I can trust the results :)
    And I'm still waiting for a decent solution that would allow live re-training fast enough to keep up with the infinite amount of new info produced every day. At the moment we lack that power. Retraining every day might not be attainable (with the current processes/tech we use).

    But sure, open source is doing great in the field, countering the monopolistic attitude of corps on AI.

    Smaller models that run locally on mobile devices, e.g. as a browser plugin, could be enough for this because they can have a narrower focus.

    I guess one way it could be done is by deconstructing audio and images into some kind of tree and timeline, checking it against a relatively trusted and curated source like Wikipedia, and calculating some kind of certainty score depending on how many matches it has found. If it can't find mentions of items, locations and people like those that can be seen in the OP video, it would show 0%.

    Hey! Building a Wikipedia with a certainty score was my idea during the pandemic :smiley: , when I hoped to build a platform for neutrally confronting multiple pieces of info and scoring them. The idea was exactly that certainty is an illusion, and that we needed a healthier space where anything can be written but is then vetted and scored according to several key factors (verifiable, scientific, speculative, reliable, locally solid/sound). But it was pretty ambitious....
    Nowadays LLMs already do something similar (with a lot less control over the process).

    I think it mostly depends on the training data/sources and, of course, the freshness of the data. LLMs make "utopian" knowledge-work projects like that attainable when combined with the right tools.

    But giving 0% just because there was no previous mention is, imho, a biased result.

    Probably, just pulled it out of my ass at the time of writing, like the Wikipedia example.

    Also, I personally don't fully trust Wikipedia. It's open and started as a nice project, and I also know people working on it, but it's still exposed to elitist/bipartisan selection of info. They too often make unavailable the discussions that led to choosing one "truth" instead of another (so it's not fully open). Plus, I had a couple of other personal experiences that made me seriously doubt the influence pyramid behind (that great tech that is) Wikipedia.
    Just to say that (for how I look at things) it's really hard to have reliable/stable info.

    Wikipedia is part of the mainstream, and chicken or egg topics where there isn't one clear answer are of course fought over by those who benefit from it one way or another.

    This way it could at least catch the obvious fakes. I assume that computers will have some kind of TPU hardware for LLMs quite soon.

    We already do on recent iPads, don't we? :)

    Yeah, for smaller domain-specialized models. The bigger ones with general knowledge will use cloud computing. Those could be used together, though. Someone like Apple, with enough resources, could easily train and update a small model on a daily or at least weekly basis.

    @kirmesteggno said:

    @knewspeak said:

    Wikipedia is better than nothing, and it was just an example for catching the obvious stuff. Multiple sources could be combined to enhance the certainty of course.

    I would just invert the reasoning of "enhance the certainty" by 180°..... imo certainty is a relativistic illusion.
    I'd rather try to "reduce the uncertainty" and, after that, keep considering it as uncertainty, without falling into the bias of believing it as if it were "almost certain" (that, imho, is one of the big biases of our modern times).

    Would it really matter if it's 10% true or 90% false? 🧐

    You can't research everything you come across, that's why such an automated solution makes sense when false media can be created and distributed by anyone at the click of a button. It's a better solution than regulating it to death.

    "you can't research everything" is one of the reasons why I wanted to create that platform: alone is a mission impossible. The idea of collective research is great (the same as science or wikipedia enact).

    It's a good idea. A collective effort with multiple LLMs and multiple entities (with different world views) involved could be the best solution.
