Comments
Agreed that it should be the last post on this, yes
LOL.
Not exactly true.
She was arrested for "spreading false information about the identity of the Southport attacker on social media", not for having an opinion.
It wasn't having an opinion that got her in trouble - it was publishing false "facts". A bit like shouting "fire" in a darkened cinema when there is no fire...
Honestly, yeah. As simple as it is, that is the solution.
The digital realm has never been reality. It's another world that people created and put stuff into. The new age we're entering is an extension of that.
But even so, nature evolves. The dinosaurs that roamed the earth terrorizing other creatures are now fossils used to make oil. The industrial revolution, and now the move into the digital age, is a sign that time is ever flowing like a river.
So, what do we do? Seize the moment. Enjoy life as it is, and as it evolves. Because we are here on this earth for a limited time. Our experiences are ours, and uniquely ours, and nothing can take that from us. They can try and replicate it, but our own views, thoughts, interpretations, conscience that has lasted for our lifetime will remain ours.
Who are the real nutters, controlling society? Remember WMD’s amongst countless other ‘facts’.
Competitive open source models do exist right now. Ironically, the best ones are made by a company infamous for disinformation, but they seem to be on a redemption arc lately!
You may have heard of retrieval augmented generation (RAG): basically tools that run alongside the LLM and correct its output by instructing it to check additional sources. These tools might get baked into the models themselves at some point, or pre-training and fine-tuning might become much more affordable.
Such a model would need to get updated and re-trained basically every day as new information is added to the curated sources.
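The RAG idea above can be sketched in a few lines. This is a minimal toy, not a real implementation: retrieval here is plain keyword overlap, whereas real systems embed documents into vectors and search a vector store, and the `ask_llm` step is omitted entirely since it depends on whichever model you wrap.

```python
# Toy sketch of retrieval augmented generation (RAG).
# Assumption: retrieval is naive keyword overlap; production systems
# use embedding similarity against a vector database instead.

def retrieve(query, documents, top_k=2):
    """Rank documents by how many lowercase words they share with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query, documents):
    """Prepend the retrieved sources so the LLM is forced to check them."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        f"Answer using ONLY these sources:\n{context}\n\n"
        f"Question: {query}"
    )

docs = [
    "The Southport attacker was identified by police on 1 August.",
    "Loopy Pro is a live looper for iPad.",
    "RAG systems retrieve documents before generation.",
]
prompt = build_prompt("Who identified the Southport attacker?", docs)
```

The point of the pattern is exactly what the post describes: the model's answer gets grounded in sources fetched at query time, so the sources can be updated daily without retraining the model itself.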
I guess one way it could be done is by deconstructing audio and images into some kind of tree and timeline, checking that against a relatively trusted and curated source like Wikipedia, and calculating some kind of certainty score depending on how many matches it has found. If it can't find mentions of items, locations and people like those that can be seen in the OP video, it would show 0%.
This way it could at least catch the obvious fakes. I assume that computers will have some kind of TPU hardware for LLMs quite soon.
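The scoring step of that idea is simple to sketch. Big assumption here: some upstream model has already extracted the entities (people, places, objects) from the audio/video — this snippet only covers the last step, counting how many of them a trusted text actually mentions.

```python
# Sketch of the certainty-score step only.
# Assumption: `entities` were already extracted from the media by an
# upstream model; we just check them against trusted source text.

def certainty_score(entities, trusted_text):
    """Fraction of extracted entities mentioned in the trusted source (0..1)."""
    if not entities:
        return 0.0
    text = trusted_text.lower()
    hits = sum(1 for e in entities if e.lower() in text)
    return hits / len(entities)

wiki = "The Eiffel Tower in Paris was completed in 1889."
score = certainty_score(["Eiffel Tower", "Paris", "Godzilla"], wiki)
# two of three entities found -> score of 2/3
```

Zero matches gives 0%, as described in the post; the obvious caveat is that absence from the source proves nothing on its own, which a later comment in the thread also points out.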
Trusted? Wikipedia, really?
The only real way to have a reasonable grasp of truth is to do your own research, then ascertain who corroborates your own findings, then recheck and recheck. In this regard there is no shortcut.
The alternatives are blissful ignorance, trust, or faith.
Wikipedia is better than nothing, and it was just an example for catching the obvious stuff. Multiple sources could be combined to enhance the certainty of course.
You can't research everything you come across, that's why such an automated solution makes sense when false media can be created and distributed by anyone at the click of a button. It's a better solution than regulating it to death.
I agree.
Existentially speaking, the only true reality is the one that we subjectively experience from our relative point of view. And that's what matters. And we are free to direct it as we desire.
The shared collective illusion of a single truth is actually a sub-reality that is part of our experience; it doesn't necessarily exist, and it is not necessarily true just because, in the realm of materialism, it seems cohesive (or follows the modern trend of using scientific data to hold on to an interpretation).
I actually am happy that we have to face these philosophical topics, as a society. Before AI everybody thought I was either crazy or trying to gaslight them, when bringing up these topics 😂
One surprising development in all this is that governments have managed to make citizens strongly and almost fanatically believe (and defend) that it's mostly CITIZENS that are lying and manipulating, NOT governments 🥴
I've read the term; I understand the idea. I'm just not sure how much more real "certainty" or "reliability" that would provide.
But yeah, LLMs alone aren't enough, so wrapping them in some kind of vetting process might help.
Although I would still not be sure how much I could trust the results.
And I'm still waiting for a decent solution that makes live re-training fast enough to keep up with the infinite amount of new info produced every day. At the moment we lack that power. Retraining every day might not be attainable (with the current processes/tech we use).
But sure, open source is doing great in the field, contrasting the monopolistic attitude of corps on AI.
Hey! Building a Wikipedia with a certainty score was my idea during the pandemic, when I hoped to build a platform to neutrally compare multiple pieces of info and score them. The idea was exactly that certainty is an illusion, and that we needed a healthier space where anything can be written but then vetted and scored according to several key factors (verifiable, scientific, speculative, reliable, locally solid/sound). But it was pretty ambitious....
Nowadays LLMs already do something similar (with a lot less control over the process).
But giving 0% just because there was no previous mention is, imho, a biased result.
Also, I personally don't fully trust Wikipedia. It's open and started as a nice project, and I know people working on it, but it's still exposed to elitist/bipartisan selection of info. They too often make the discussions that led to choosing one "truth" over another unavailable (so it's not fully open). Plus I had a couple of other personal experiences that made me seriously doubt the influence pyramid behind (that great tech that is) Wikipedia.
Just to say that (for how I look at things) it's really hard to have reliable/stable info.
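The multi-factor scoring platform described above can be sketched as a weighted combination. The factor names come straight from the post; the weights are made-up placeholders, and a real platform would calibrate them (and probably crowd-source the per-factor ratings).

```python
# Sketch of the multi-factor vetting idea from the post.
# Assumption: the weights below are illustrative placeholders only.

WEIGHTS = {
    "verifiable": 0.3,
    "scientific": 0.25,
    "reliable": 0.25,
    "locally_sound": 0.15,
    "speculative": -0.05,  # speculation pulls the score down
}

def vet(claim_scores):
    """Combine per-factor ratings (each 0..1) into one score clamped to 0..1."""
    raw = sum(WEIGHTS[f] * claim_scores.get(f, 0.0) for f in WEIGHTS)
    return max(0.0, min(1.0, raw))

score = vet({
    "verifiable": 1.0,
    "scientific": 0.8,
    "reliable": 0.9,
    "locally_sound": 0.5,
    "speculative": 0.2,
})
```

Clamping to the 0..1 range keeps the output an "uncertainty reduced" estimate rather than a claim of certainty, which fits the framing in the surrounding comments.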
We already do on recent iPads, don't we?

> @knewspeak said:
I absolutely agree ^^
(Also I believe that, in the end, in this reality, we can't escape from eventually trusting or having faith in some premises; I mean, for me, even science itself is subject to this rule.)

> @kirmesteggno said:
I would just invert the "enhance the certainty" reasoning by 180°..... imo certainty is a relativistic illusion.
I'd rather try to "reduce the uncertainty" and, after that, keep treating it as uncertainty, without falling into the bias of believing it as if it were "almost certain" (that, imho, is one of the big biases of our modern times).
"You can't research everything" is one of the reasons why I wanted to create that platform: alone, it's mission impossible. The idea of collective research is great (the same thing science or Wikipedia enact).
Yes, that's a new kind of politics: dumping the responsibility and accountability on the population rather than on state representatives/roles/the system.
I'm thinking that gaslighting us with cognitive dissonance, ambiguity and uncertainty is part of a long-term plan for changing the status quo of old-school democracies. That's why I am alarmed when govs (or cultural movements) try to break old principles and flip the narrative; it can lead to very detrimental effects over the passing of decades. (Philosophy is a delicate thing.)
Smaller models that run locally on mobile devices, e.g. as a browser plugin, could be enough for this because they can have a narrower focus.
I think it mostly depends on the training data/sources and, of course, the freshness of the data. LLMs make "utopian" knowledge-work projects like that attainable when combined with the right tools.
Probably, just pulled it out of my ass at the time of writing, like the Wikipedia example.
Wikipedia is part of the mainstream, and chicken or egg topics where there isn't one clear answer are of course fought over by those who benefit from it one way or another.
Yeah, for smaller domain-specialized models. The bigger ones with general knowledge are going to use cloud computing. Those could be used together, though. Someone like Apple, with enough resources, could easily train and update a small model on a daily or at least weekly basis.
@kirmesteggno said:
Would it really matter if it's 10% true or 90% false? 🧐
It's a good idea. A collective effort with multiple LLMs, and multiple entities (with different world views) involved, could be the best solution.