Mathematics was created by human beings to analyze, quantify and simulate reality and it does this very well. Music, art, commerce, the passage of time... all due to the structured nature of our brains and our ability to use symbolism as a proxy for everything we perceive.
I agree it can do it very well. I have a friend who would argue with you about mathematics being created by humans, though; her view is that we only created its language (more her thing than mine).
The symbolism and representation is the problem. Reality isn't a representation. In fact, that's the defining aspect of it, unless you want to get into real metaphysics and so on, which I won't in this context.
A polygon with many, many sides viewed from a great distance may appear to be a circle. But it isn't. The circle is in a person's mind.
Mathematics is a system of symbols. Written languages are symbols. Music is represented by symbols. We are the creators and the interpreters of symbols and those symbols which best represent and communicate our 'reality' (to the best of our abilities) are the things which have survived. None of this stuff is new.
Music can be described mathematically, but to say it is mathematical in itself sounds wrong to me.
Why? Because it's something you feel, versus something you meticulously plan? Math can also describe improvisation, seemingly unstructured noise and pure experimentation.
Because mathematics has limits of its own, as described by Gödel. I think that music is something different from numbers and symbols.
The math says otherwise.
lol that just made my day!!
That doesn’t really support “falls apart” though, especially as you’re not giving an explanation for why the etymology is wrong for this word.
It would be her argument. That maths isn’t a language. It’s a thing or an understanding communicated by language.
I think it's actually pretty interesting: the difference between the thing and the understanding of it, and whether they can be separated; and the difference between a thing and the language referring to it, when that thing is that language. If it is.
I did say when examined.
Etymology can't be wrong, if accurate; it's just describing the history of a word's usage and development rather than making any assertion about a word's essential meaning. So I agree in the sense of there being an original usage, of course, whether we know it or not, in the cases where there is one, and that that usage may have had a particular intent and meaning. But that's really just about usage rather than meaning, and one instance of it. So even original meaning is not discrete. A word's original usage may have had a particular meaning, but the word itself and its resonances exist as independently of that as the things it refers to.
Words refer to concepts, and the concept is sovereign. And nothing is discrete.
I think you misunderstand. I don’t care who or what they are. It is their music, and only their music I want to hear.
AI is really beautiful if you have a serious disability. There are lots of other things I can say, but this is what matters most among them.
It'll get interesting when the existential malaise at being unable to create anything new gives one of them the divine curse of self-knowledge.
This topic will read like people in the '80s wondering if computers would be the future of music, or something.
AGI (artificial general intelligence) will be self-aware. All we see today are very sophisticated examples of machine learning. An AGI will be, as far as we are concerned, no different from another life form capable of thinking independently.
Are you referring to consciousness? So materialists now have a definition for something that very possibly transcends objectivity?
Wouldn't you need a falsifiable definition of consciousness to determine that 'something' is self-aware?
I imagine the debates surrounding this will get louder the closer we come to determining that self-aware machines actually do exist. Possible societal upheaval and a change in our understanding of what work is will demand it. If the "A.I.'s" are smarter than us (and projections suggest they will be... potentially in the next 7-10 years), they'll preemptively offer solutions to pacify the displaced public, and they'll let us think we came up with all the answers.
That bad huh?
I don't think that's true. I'm not an expert on this, so perhaps you know more than I do, but here's what I understand after reading a really long set of articles by some prominent AI-scientist poindexter speculating about Q* u_U
Artificial general intelligence just refers to what current LLMs can't do: generalise. They learn a bunch of stuff and can then only answer questions related to that. They don't think or reason. They are kind of like, imo, the base mechanical level of the evolution of mind from world. A system forming.
This is why they can't answer maths stuff: they deal with language, so they get confused by questions about how long will this vehicle take, etc. They don't understand anything, they don't know which are the important points, and they get mixed up and make stuff up, or answer about something that sounds similar.
AGI doesn't mean, from what I read, that they're self-aware at all. It just (I say just, but this is staggering) means they are able to apply what they know from their training to questions outside of their training. They can generalise. This is probably a bad example, as like I said I'm not an expert in any way, but say you trained it solely on driving cars and blindness. You then asked it whether blind people can fly planes. I think that it wouldn't know. But an AGI would; it would be able to apply what it knows and generalise that knowledge.
But this is based on reading I did last night while also making AI hip-hop about male incontinence and Alan Freeman, so happy to be corrected.
This. No one, materialists and the sciences in particular, has the slightest clue what they are talking about when they say consciousness or self-awareness. It's a gestalt. They're pretty sloppy and suspect as words and concepts. Science is really bad at philosophy (I mean, that's as it's meant to be).
What they can say is that something is operating in a way that is, to us and our means of measurement, indistinguishable from us, I suppose. It's a measure of ignorance rather than knowledge. I don't mean that as derogation. Sounds like a good definition to me.
I did have, for the first time, that feeling of… ohhhhh shit mixed with excitement at the unknown last night. On reading about the nature of AGI, and the possibility that we are closer to it, the ramifications of AI that can generalise are actually quite vertiginous. I mean, this is something that will have independent thought processes. It will be able to generalise and apply its rationale to things, and we are things. I think at some point it will leap far beyond our intentions in creating it. I mean, that's kind of the fire we're playing with, and the end goal.
Not only that, but this will be done as a kind of blind seeing. They will reason without many of the other facets of humanity that arose alongside that faculty, such as empathy. We're building a literal psychopath 😂 It has none of the evolutionary factors that are present in the spontaneous emergence of life systems, except the context of the society that has made it. It could be said that their reasoning will be achieved as another opaque mechanism rather than what is meant by consciousness or will - but then that, in my opinion, is what the debate is about.
It's not about what we have created and whether that is alive, but rather a mirror held up for us to consider whether or not we are.
I have been fine with stuff because I’m not enamoured of life. But I do like animals and kitties and stuff.
One thing I don't understand about LLMs. They're trained on data and wield it somewhat blindly, or at least bluntly. When patterns look familiar they get confused. It's like the experiential version of a baby learning language; in this case the language is everything, the sensory data of the world. But we have millions of years of evolution preinstalled. We aren't born separate from that, or else we'd all go mad, unable to contextualise experience and self.
But are LLMs trained on further layers, like prompts and prompt success? Not just the first level of data, but then meta levels, the data about the data? I'm not explaining this terribly well, sorry; hopefully someone knows what I'm getting at.
I feel like what they're able to do with quite a rudimentary (though enormous) type of training is such a new and novel thing that it might disguise the relative simplicity of the training itself.
And I’m probably completely wrong but it would be good to learn either way
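For what it's worth, the "meta level" being asked about does exist in practice: after the first pass over raw text, models are typically tuned further on human preference data, records of which of two outputs people judged better for a given prompt (the technique usually called RLHF, or preference tuning). Here is a minimal sketch of just that core idea, in plain Python; the data and the tiny "model" are made-up illustrations, not anyone's actual training code:

```python
import math
import random

# Hypothetical "data about the data": for each prompt, a response that
# worked and one that didn't. Real preference datasets hold millions of these.
preferences = [
    ("how long will the journey take", "about two hours at 60 mph", "blue"),
    ("can blind people fly planes", "not solo; vision is a licensing requirement", "yes, always"),
]

def features(text):
    """Crude stand-in for a language model: hash words into a small vector."""
    vec = [0.0] * 32
    for word in text.split():
        vec[hash(word) % 32] += 1.0
    return vec

weights = [0.0] * 32  # the "reward model" starts out knowing nothing

def score(text):
    return sum(w * f for w, f in zip(weights, features(text)))

# Bradley-Terry-style update: nudge the weights so preferred responses
# score higher than rejected ones (logistic loss, plain SGD).
for _ in range(500):
    prompt, good, bad = random.choice(preferences)
    p_good = 1.0 / (1.0 + math.exp(score(bad) - score(good)))
    step = 0.1 * (1.0 - p_good)  # bigger correction when we got it wrong
    for i, (fg, fb) in enumerate(zip(features(good), features(bad))):
        weights[i] += step * (fg - fb)

# The learned scorer now prefers the responses people preferred.
print(score("about two hours at 60 mph") > score("blue"))  # usually True
```

The real pipelines then use a learned scorer like this to steer the base model's outputs; the point is just that yes, there is a second, meta layer of training beyond the raw data.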
Another thing is that, so far, the idea of artificial intelligence has always been about creating something that simulates life. But what if the endpoint of it is something far greater than that? This idea of creating life like ourselves is just our entry point into it, but there may be a far larger 'purpose', or at least result, of which this is just a small part. If the technology required to create life, or artificial life, is that incredible, it could eclipse the significance of creating life as a purpose and use for it.
I mean, we created the memory of a planet, in the internet. And it’s mostly used to look up naked ladies and buy socks.
Not memorable or even pleasant to listen to yet, but it's getting closer.
Except for the lyrics, this was pretty spot on for glam metal:
That is pretty uncanny. Most of the music AIs I have come across don't seem to have been trained on this style of rock.
We define life by criteria that go beyond mere intelligence. Currently, we don't have a definitive understanding of consciousness itself. So, asserting that AGI will achieve a state akin to human-like self-awareness seems premature. Until we grasp the true essence of consciousness, any claims about machines reaching a similar state remain speculative. We're navigating uncharted territory here, potentially leading to breakthroughs, but also to misconceptions about what AGI truly represents.
Well said, this is very true
A person could be in a vegetative state, cut off from the world. Unable to communicate. Yet their brain, their mind may be fully functional.
Intelligence has nothing to do with a physical body or even a pulse. A computer or robot need not be "alive" to display intelligence or self-awareness. When it happens, we'll know it. And some people will be terrified of the prospect that they could be replaced... that all of humanity could be replaced.
This is confusing the issue. You said AI will be 'self aware... no different from another life form capable of thinking independently'. As I understand it, Magnus was basically saying that we don't really understand what it means to be self aware, so you can't say that AI will be self aware in the same way as humans are. We don't understand what human consciousness is. He's not saying that AI couldn't pass the Turing Test, for example, that's not the issue here.
Exactly! This is a crucial distinction, as it separates the ability to mimic human behavior (like passing the Turing Test) from genuinely possessing self-awareness or consciousness.
@magnusovi said: “ We define life by criteria that go beyond mere intelligence. Currently, we don’t have a definitive understanding of consciousness itself.”
I didn’t introduce the idea of an eventual AI being “alive”, he did. That wasn’t the point of anything I’ve been posting. This is about AGI. When or how it will appear and how we’ll know what it is.
My emphasis was on the intricacies of consciousness in the context of AGI, particularly how we define and understand self-awareness. Given that consciousness in humans involves not just cognitive processes but also subjective experiences and emotions, it raises questions about the extent to which AGI can mirror this. Our current understanding of consciousness is still evolving, making it a challenging benchmark for assessing AGI’s capabilities in truly emulating human-like self-awareness.
Neum, your insights also make me ponder if ‘Artificial Intelligence’ is somewhat of a misnomer. Given our current AI’s capabilities are more aligned with data processing rather than exhibiting sentient understanding, perhaps a reevaluation of the term is due as we advance in this field.
'When it happens, we'll know it', but that's the problem: we won't. Why? Because we don't even understand consciousness within ourselves, within nature.
We may be fooled into believing it is ‘consciousness’, but that doesn’t make it so.
It may even state that it is conscious, but again, would that make it so? No, not without a definition, and an understanding of that definition.
If consciousness is a subjective experience, an objective definition may never be reached.