
In the future, you will disbelieve everything, and be happy

I’ll just leave this here.


Comments

  • you can see where Akira got inspiration from

  • @Danny_Mammy said:
    you can see where Akira got inspiration from

    Nooo, lol - this is almost certainly AI-made. Cool find @Svetlovska!

  • edited August 6

    How incredibly disappointing. So this used Akira in its generation.

  • edited August 6

    It is AI. The channel specialises in this kind of thing. E.g:

    My point is, a few years down the line, and we won’t be able to trust the objective truth about any kind of footage about anything… Who knew that postmodernism was such a bitch?

  • That channel's AI work is great fun. The details, the background scenes, etc. But you're right about belief in objective truth. I don't know how we're going to survive this.
    I recently read this quote from Hannah Arendt: "If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer. And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please."

  • Those darn kids and their rocket races!

  • Just so you know, it isn’t all AI…

  • @Svetlovska said:
    It is AI. The channel specialises in this kind of thing. E.g:

    My point is, a few years down the line, and we won’t be able to trust the objective truth about any kind of footage about anything… Who knew that postmodernism was such a bitch?

    Postmodernism was always a bitch.

  • edited August 6

    The trend has been clear for several years, imho at least.
    We see politics, media, and culture becoming a lot more prone to just following along with things and beliefs, even when they are not certain, and even playing on uncertainty and ambiguity.
    With generative AI this will get really bad soon.
    Gaslighting is becoming a huge phenomenon. Fake news was just a timid appetizer.

    When I try to explain to friends what I predict, I tell them that we are getting into the "era of falsification"...
    I don't think it's a coincidence that the same society that pushes for AI is also gradually losing its spirituality (that is: believing in something without real proof, but at least institutionalized so everybody believes the same).

    I advise everybody to have a psychedelic experience as soon as possible and get used to that psychotic feeling of not being sure about reality. That's going to make you stronger (or crazy) and will be useful later on :wink:


    What worries me the most is the jurisdictional, legal, and informational level.
    If you can't trust the evidence, then it's gonna be a huge problem in the future.
    Same if you can't distinguish between a real and a fake signature, or recognize whether a video call for bank identification is real or generated.

    The only solution I see is big evil corps building chips to embed an author signature/watermark so everything digital becomes traceable and verifiable (but then the corps become "the judges"... and I don't trust corps after the NSA scandal). (A minimal sketch of that kind of signing follows this comment.)


    EDIT: sorry.... I made the thread depressing :smiley:
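
An illustrative aside on the signature/watermark idea above (nothing proposed in the thread, and not any vendor's actual scheme): content provenance of this kind usually reduces to signing the captured bytes with a device-held private key and letting anyone verify them against the matching public key. Below is a minimal sketch in Python, assuming the third-party cryptography package; the "device key" and the footage bytes are hypothetical stand-ins.

```python
# Minimal sketch of provenance-by-signature (illustrative only; not C2PA or any real product).
# Assumes: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # stand-in for a key embedded in a camera chip
public_key = device_key.public_key()        # published, so anyone can verify

footage = b"...raw video bytes straight off the sensor..."   # hypothetical content
signature = device_key.sign(footage)        # would ship with the file as provenance metadata

# Later, a viewer checks whether the bytes still match what the device signed.
try:
    public_key.verify(signature, footage)
    print("Signature valid: bytes unchanged since capture.")
except InvalidSignature:
    print("Signature invalid: altered, or not from this device.")
```

Note that this only proves the bytes were not changed after signing; it says nothing about whether they depict reality, which is exactly why the commenter worries that whoever controls the signing keys ends up as "the judges".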

  • @MrStochastic said:
    That channel's AI work is great fun. The details, the background scenes, etc. But you're right about belief in objective truth. I don't know how we're going to survive this.
    I recently read this quote from Hannah Arendt: "If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer. And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please."

    That's fascinating... I think we'll have to develop our gut feelings and increase our ability to hold on to doubt and to reason in dichotomies instead of single truths.
    Not too hard for a philosopher, a software developer, or a detective. But most people will slide into psychosis. I'm sorry for the new generations that will have to cope with the initial transition phase.

    Or... let's just turn off every screen, for good. Problem solved!

  • edited August 7

    @LinearLineman said “it isn’t all AI.” Indeed. Which is kind of my point. When computer graphics, AI, voice cloning and real footage can all be blended together so seamlessly, so much so that you can’t even tell where reality stops and fantasy begins, and the tools for doing so are widely dispersed, the ability to create false narratives about anything is boosted exponentially. (And that’s before ubertrolls like Elon Musk, currently stating that ‘civil war in the UK is inevitable’, sigh, go out of their way to amplify them.)

    The consequences, in a world of bad actors seeking to influence, exacerbate and divide, are profound. @MrStochastic’s Hannah Arendt quote is spot on. And she knew a thing or two about bad actors.

  • @MrStochastic said:
    That channel's AI work is great fun. The details, the background scenes, etc. But you're right about belief in objective truth. I don't know how we're going to survive this.
    I recently read this quote from Hannah Arendt: "If everybody always lies to you, the consequence is not that you believe the lies, but rather that nobody believes anything any longer. And a people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please."

    100%

  • The first step is to change the status of social media outlets from "a platform" to "publishers".

    That makes them accountable for all the stuff they publish.

    And let's not kid ourselves, they are publishing.

  • @Simon said:
    The first step is to change the status of social media outlets from "a platform" to "publishers".

    That makes them accountable for all the stuff they publish.

    And let's not kid ourselves, they are publishing.

    We need more accountability for Social Media for sure. Don't even want to derail this thread by going into how disturbed I am by what's currently happening to Twitter. But I don't think they can easily be classified as publishers. Publishers vet and edit, if needed, before publishing. Implementing a foolproof automated vetting system on social media platforms is likely easier said than done.

    Actually - I just now asked chatgpt, "Is it currently technically possible to pre-regulate posts fairly and accurately based on the words and images contained therein?" and got this answer:

    Technically, it is possible to pre-regulate posts to some extent using algorithms and machine learning models. These systems can analyze text and images to identify potentially harmful or inappropriate content. However, doing this fairly and accurately is a significant challenge.

    Algorithms can be quite effective at detecting obvious violations, such as explicit content or hate speech. They rely on natural language processing (NLP) for text and computer vision for images to flag potentially problematic content. Despite advancements, these systems still struggle with context, nuance, and cultural differences. This can lead to false positives, where harmless content is flagged, and false negatives, where harmful content slips through.

    Human moderation can help improve accuracy but is labor-intensive and raises concerns about consistency and bias. Pre-regulating posts in a way that is both fair and accurate requires a combination of advanced technology and human oversight, which is resource-intensive and challenging to implement on a large scale.

    Therefore, while there are tools to pre-regulate content, achieving fairness and accuracy remains a complex and ongoing issue.
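
To make the false-positive/false-negative point in that answer concrete, here is a deliberately tiny, purely illustrative sketch (not any platform's actual moderation system): a bag-of-words classifier trained on a handful of hypothetical labelled posts, which predictably misjudges wording it has never seen. scikit-learn and the example posts are assumptions, not anything from the thread.

```python
# Toy pre-moderation classifier (illustrative only; real systems are far larger and multimodal).
# Assumes scikit-learn is installed; the labelled examples are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "I will burn that shop down tonight",     # labelled harmful
    "these people deserve to be attacked",    # labelled harmful
    "lovely sunset at the beach today",       # labelled benign
    "the new looper update sounds great",     # labelled benign
]
labels = [1, 1, 0, 0]  # 1 = flag, 0 = allow

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Context and nuance are exactly where such models struggle:
tests = [
    "meet me there tonight and bring the petrol",                    # harmful intent, little shared vocabulary -> may slip through
    "the crowd attacked the chorus with gusto, what a lovely riot",  # benign, but 'attacked' may trip the flag
]
for text in tests:
    p = model.predict_proba([text])[0][1]   # probability of the 'flag' class
    print(f"flag probability {p:.2f} -> {text}")
```

Human review, appeal flows, and image models sit on top of this in practice, which is the "resource-intensive and challenging to implement on a large scale" part the quoted answer refers to.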

  • edited August 7

    @Gavinski said:
    But I don't think they can easily be classified as publishers. Publishers vet and edit, if needed, before publishing. Implementing a foolproof automated vetting system on social media platforms is likely easier said than done.

    I imagine the courts will have to decide this, but I would maintain that they are publishers. Yes, they don't edit before publishing but they are still publishing.

    If a print newspaper said "just send in your views and we'll print them without reading or editing them" then that would be the same thing as the social media companies are doing.

    If you publish something that is untrue or defamatory then claiming "we didn't read, edit or approve it" is no defense.

    A news company can write stories and edit them but if they don't print or broadcast those stories then they are not publishing them.

    The second they release it to the public they become publishers.

    The phone company doesn't control what is said over the phone lines, but that's fine because they are not publicly broadcasting it.

    Social media companies, by contrast, are publicly broadcasting content, which is why I think they count as publishers.

  • edited August 7

    Any controlling or censoring solution would be quite controversial.

    I understand the necessity on social media, but then the same regulation would naturally be extended to the whole (public) internet.
    Or, like Meta is doing lately (at least in the EU), require paying a fee for the service, as other publishers/newspapers have been doing for a few years now. At that point things might not be public anymore.

    In Italy, every few years somebody tries to pass laws requiring accountability “as journalists” for any blog post, sometimes even requiring every blogger to be entered in the journalists’ register (in Italy, most of those registers are exploited to become elitist sects and to allow gatekeeping). These laws would effectively make ordinary people accountable, censorable, and liable to be sued by entities with enough money to destroy you in court… it would easily become a totalitarian information system.
    And we already had fascism not too long ago.

    Regulating the internet is a crazy, unsolvable problem.
    As in finance, there will always be some country that makes exceptions to host huge corporations, granting them benefits or protection.
    I mean, even ThePirateBay has managed to get around legislation and keep going, after decades, with a legally ambiguous activity.

    I think we should avoid censoring.
    Maybe a system that prevents instant virality could have some positive effects by disincentivising bad actors. Nobody needs virality; it’s just a money thing, and virality of content is superfluous.

    Plus, I think we still don’t have a good digital education system. That’s probably the main issue nowadays: throwing kids, teenagers, and uneducated adults onto a system that promotes dopamine hits, addiction, unregulated marketing, and self-interest, with success disconnected from the honesty and quality of content.

    I think all this shit is just symptoms of an unhealthy society. The issue is not in the tool or in the freedom it gives; imho.

  • edited August 7

    @Pictor said:
    I think all this shit is just symptoms of an unhealthy society. The issue is not in the tool or in the freedom it gives; imho.

    Society has always been unhealthy.

    The difference was back then the nutters couldn't get publishers to broadcast their views. Social media has given nutters an outlet that will publish anything, no matter how untrue, damaging or illegal it is.

    Traditional media had filtering. Social media does not. This has to change.

    I'll shut up now. :smile:

  • @Simon said:

    @Pictor said:
    I think all this shit is just symptoms of an unhealthy society. The issue is not in the tool or in the freedom it gives; imho.

    Society has always been unhealthy.

    The difference was back then the nutters couldn't get publishers to broadcast their views. Social media has given nutters an outlet that will publish anything, no matter how untrue, damaging or illegal it is.

    Traditional media had filtering. Social media does not.

    I'll shut up now.

    We're all agreed on how dangerous it is. As Pictor said though, it is incredibly hard at this point to regulate. Let's see what happens now in the UK following recent events. I imagine they will prosecute some people for the content of their tweets if it can be proven they were posted with malicious intent, were knowingly spreading misinformation, and can be shown to have incited crimes. Of course, that's just a rough sketch, I'm no lawyer lol.

    If people know they are likely to be fined or jailed if found guilty of maliciously spreading potentially dangerous lies they will self censor. Instead of limiting behaviour preemptively, punish bad behaviour after the fact. Exactly the way that libel and slander laws operate, for example.

  • @Gavinski said:

    These systems can analyze text and images to identify potentially harmful or inappropriate content. However, doing this fairly and accurately is a significant challenge.

    That's the kicker isn't it? Who gets to decide what is "potentially harmful or inappropriate content"? I sure don't trust my government to do it...

  • @lasselu said:

    @Gavinski said:

    These systems can analyze text and images to identify potentially harmful or inappropriate content. However, doing this fairly and accurately is a significant challenge.

    That's the kicker isn't it? Who gets to decide what is "potentially harmful or inappropriate content"? I sure don't trust my government to do it...

    Exactly. Well, I'd rather have the UK govt doing it than Elon fricking Musk, mind you

  • @Gavinski said:

    @lasselu said:

    @Gavinski said:

    These systems can analyze text and images to identify potentially harmful or inappropriate content. However, doing this fairly and accurately is a significant challenge.

    That's the kicker isn't it? Who gets to decide what is "potentially harmful or inappropriate content"? I sure don't trust my government to do it...

    Exactly. Well, I'd rather have the UK govt doing it than Elon fricking Musk, mind you

    Yeah well, I don't really trust either of them... :)

  • @Svetlovska said:
    My point is, a few years down the line, and we won’t be able to trust the objective truth about any kind of footage about anything… Who knew that postmodernism was such a bitch?

    There will be AI for fact checking anything on your screen in real time. Problems create new solutions.

  • edited August 8

    @Gavinski said:
    We're all agreed on how dangerous it is. As Pictor said though, it is incredibly hard at this point to regulate. Let's see what happens now in the UK following recent events. I imagine they will prosecute some people for the content of their tweets if it can be proven they were posted with malicious intent, were knowingly spreading misinformation, and can be shown to have incited crimes. Of course, that's just a rough sketch, I'm no lawyer lol.

    If people know they are likely to be fined or jailed if found guilty of maliciously spreading potentially dangerous lies they will self censor. Instead of limiting behaviour preemptively, punish bad behaviour after the fact. Exactly the way that libel and slander laws operate, for example.

    There should just be accountability. Full stop.

    I mean [aaarrgh Pictor! Didn't you say "full stop"??]... somebody doing shit to others, and me being able only to watch it happen and do nothing other than complain or blame, with no enforcement of "justice" in sight, is something I've been witnessing every day... since the bombing of Gaza started. (Just a mention, as an analogy; I seriously don't wanna get into the topic.)
    That's a government; there have even been rulings of guilt and war crimes.
    But nobody has the (legal) authority to stop that.

    It has always been the same for the internet, since the very beginning.
    Step into the wrong jurisdiction and you'll pay for your crimes on the internet. But if you stay anonymous, or never set foot in the "wrong" jurisdiction, you can get away with it.
    That's true on the internet and off it, and it always has been.
    The internet is just another (blurrier) jurisdiction.

    Dunno, I'd prefer to see prosecution with mediators and debate, rather than entitled cancel culture. The premises are already in our system; we should just follow them.
    Also, it's critically important not to impose our vision on others as the "correct" one, in a space as diverse as the internet.

    We have to accept that there is no perfect solution, just as we accept that in the real world.
    And we have to accept that totalitarian control never really solves the issue (because totalitarianism is also ideological, hence partisan, hence unjust) and instead creates other, even bigger threats to democratic societies.

    Trying to solve problems is one thing, but we should try to distance ourselves from the control-freak, entitled attitude that the (imho Western) world has adopted in recent decades (especially because of the internet).
    I mean... it would be sad if we had to pass through authoritarian systems again, after the 20th century.

    The principles of law are already there; what's needed is to find a balance for the digital world.
    (And I don't see worldwide agreement on digital space, at least not without a globally recognized regulator; and that would also be pretty dangerous in the long term.)
    AI's potential is scary, but we can certainly get to a system of deterrence and accountability (as you suggested, @Gavinski), without having to push for too much control or a breaking of privacy principles.

    @lasselu said:
    That's the kicker isn't it? Who gets to decide what is "potentially harmful or inappropriate content"? I sure don't trust my government to do it...

    Well, you already entrust them to judge any legal issue. Why not on the internet? (Assuming it falls within their jurisdiction.)
    I'm not saying I like that, but there must be some kind of "order", and republics delegate to executive, legislative and judicial institutions precisely so we don't have to take care of that directly.
    I also don't trust governments; but maybe I trust an anarchic society even less.

  • edited August 8

    @kirmesteggno said:

    @Svetlovska said:
    My point is, a few years down the line, and we won’t be able to trust the objective truth about any kind of footage about anything… Who knew that postmodernism was such a bitch?

    There will be AI for fact checking anything on your screen in real time. Problems create new solutions.

    Hopefully you are right!
    I just have doubts because of the "probabilistic" nature of LLMs... they are good at creating doubt, but really bad at creating certainty, from what I've seen so far.
    Also, it's gonna be really hard to "certify" a judgement from an algorithmic network unless we develop serious tools for accurately debugging how these machines work. At the moment they are black boxes, with no way to understand their processes.

  • edited August 8

    @Gavinski said:

    We're all agreed on how dangerous it is. As Pictor said though, it is incredibly hard at this point to regulate. Let's see what happens now in the UK following recent events. I imagine they will prosecute some people for the content of their tweets if it can be proven they were posted with malicious intent, were knowingly spreading misinformation, and can be shown to have incited crimes. Of course, that's just a rough sketch, I'm no lawyer lol.

    If people know they are likely to be fined or jailed if found guilty of maliciously spreading potentially dangerous lies they will self censor. Instead of limiting behaviour preemptively, punish bad behaviour after the fact. Exactly the way that libel and slander laws operate, for example.

    At least one person has already been arrested in the UK for posts on Facebook. Said another way… UK citizens are being arrested for having opinions. That is not a positive development.

    https://www.msn.com/en-us/news/world/police-arrest-woman-over-inaccurate-southport-social-media-post/ar-AA1otnee

    But it’s also not new. This from 2022:

    https://www.theverge.com/2022/2/7/22912054/uk-grossly-offensive-tweet-prosecution-section-127-2003-communications-act

    At least in the US, defamation can be countered by lawsuits. One who spreads malicious lies faces repercussions from those defamed. They aren’t arrested by the government unless they are making credible violent threats against others.

  • edited August 8

    @NeuM said:

    At least one person has already been arrested in the UK for posts on Facebook. Said another way… UK citizens are being arrested for having opinions. That is not a positive development.

    https://www.msn.com/en-us/news/world/police-arrest-woman-over-inaccurate-southport-social-media-post/ar-AA1otnee

    But it’s also not new. This from 2022:

    https://www.theverge.com/2022/2/7/22912054/uk-grossly-offensive-tweet-prosecution-section-127-2003-communications-act

    At least in the US, defamation can be countered by lawsuits. One who spreads malicious lies faces repercussions from those defamed. They aren’t arrested by the government unless they are making credible violent threats against others.

    There is a difference between 'having an opinion' and intentionally spreading lies with malicious intent. I hope she is treated with the full force of the law if found guilty.

  • edited August 8

    @Gavinski said:

    There is a difference between 'having an opinion' and intentionally spreading lies with malicious intent. I hope she is treated with the full force of the law if found guilty.

    Don't defamation laws address this already in the UK?

    And any thoughts about this Labour councilor being arrested on suspicion of encouraging violence?

    https://x.com/darrengrimes_/status/1821568294437990576?s=61&t=EblTN1YExzME8eJ7t_dizQ

  • @NeuM said:

    Don't defamation laws address this already in the UK?

    And any thoughts about this Labour councilor being arrested on suspicion of encouraging violence?

    https://x.com/darrengrimes_/status/1821568294437990576?s=61&t=EblTN1YExzME8eJ7t_dizQ

    It doesn't say exactly what she is being charged with, maybe it is defamation? No idea. But there has been tons of intentional malicious misinformation fueling these protests and those people should be punished if found to have been intentionally lying to incite attacks on innocent civilians. Anyway, this is not the place for this discussion.

  • @Gavinski said:

    It doesn't say exactly what she is being charged with, maybe it is defamation? No idea. But there has been tons of intentional malicious misinformation fueling these protests and those people should be punished if found to have been intentionally lying to incite attacks on innocent civilians. Anyway, this is not the place for this discussion.

    Briefly though, according to chatgpt:

    The person could be charged under the Public Order Act 1986, particularly for offenses related to stirring up racial hatred. This act makes it illegal to publish or distribute material that is threatening, abusive, or insulting, with the intent to stir up racial hatred. Given that the riots were linked to Islamophobic and anti-immigration sentiments, this law could be applied.

    Additionally, they might be charged with incitement to violence, a serious offense under UK law, which covers encouraging or assisting in the commission of an offense. Given the violence that ensued, charges related to conspiracy to commit violent disorder or even terrorism-related offenses could be considered if the intent was to create widespread fear and chaos.

  • @Gavinski said:

    Briefly though, according to chatgpt:

    The person could be charged under the Public Order Act 1986, particularly for offenses related to stirring up racial hatred. This act makes it illegal to publish or distribute material that is threatening, abusive, or insulting, with the intent to stir up racial hatred. Given that the riots were linked to Islamophobic and anti-immigration sentiments, this law could be applied.

    Additionally, they might be charged with incitement to violence, a serious offense under UK law, which covers encouraging or assisting in the commission of an offense. Given the violence that ensued, charges related to conspiracy to commit violent disorder or even terrorism-related offenses could be considered if the intent was to create widespread fear and chaos.

    Perhaps this will be (and should be) the last post on this matter, but forcing citizens to shut their mouths instead of allowing them a forum to voice their dissent may not be the best move on the part of their elected government. The First Amendment in the US protects offensive speech, not just speech which everyone can agree with.
