AI & Copyright Law - Recent US Court Ruling

Comments

  • edited February 2022

    @Poppadocrock Wow...love the visuals your terms and phrases generated. I'm exploring this aspect as well to see what is generated (I'm using terms and phrases which fascinate me).

    For me the next step is to take the images into something like ArtRage or Procreate and build on top of them in order to make them 'mine' :wink:

    The style the app generates reminds me of a combo of Francis Bacon, Georges Braque and Yves Tanguy and cubism on peyote :smile:

    Thanks again for the link to this fun tool.

  • @richardyot said:

    @TonalityApp said:
    I say this because of all of the points above. People tend to conflate (admittedly impressive) results with solutions to a question that is far from solved. The fact that a machine can appear to understand language by no means implies that it truly understands or can creatively apply said language.

    This. You could train a machine to have a conversation, and even appear to speak like a human. That machine is not conscious, it's not alive, and the conversation has no meaning for the machine.

    You might be able to teach it the rules of grammatical English, and even write some poetry, but the machine can never appreciate poetry. It's just an algorithm, it has no feelings.

    Exactly. Similarly, people like to extol the capabilities of things like neural nets, but if I were to write out the matrix multiplications that such an "AI" ultimately boils down to, no one would argue that they represent some kind of machine consciousness any more than the equation "1+1=2" does.
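
    A minimal sketch of that point, assuming NumPy and a tiny untrained two-layer network (the sizes and weights here are purely illustrative): written out, the whole "AI" is two matrix multiplications with an elementwise max in between.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical 2-layer network: 4 inputs -> 8 hidden units -> 3 outputs.
        W1 = rng.normal(size=(8, 4))   # first weight matrix
        b1 = rng.normal(size=8)        # first bias vector
        W2 = rng.normal(size=(3, 8))   # second weight matrix
        b2 = rng.normal(size=3)        # second bias vector

        def forward(x):
            # The entire forward pass: two matrix multiplications and a ReLU.
            h = np.maximum(0.0, W1 @ x + b1)   # hidden layer
            return W2 @ h + b2                 # output layer

        print(forward(np.array([1.0, 0.5, -0.3, 2.0])))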

  • @TonalityApp said:

    @rs2000 said:
    This choice is made by humans in the hope of building an "intelligent" engine that, after being trained with enough input data, will deliver the "correct answers" when given a set of input data.
    You want to build a "conscious machine"?
    If you judge consciousness by behavior and reactions, or even actions, then yes, it's possible today and some smart robots already feature such behavior. It's just a question of how deep to go when training the machine. And a lot of it can be done automatically.

    This is a more realistic take on what (contemporary) machine learning can do in terms of consciousness. Emulating certain human decision-making and behavior is certainly possible, but given enough time and resources you could emulate many things just by hand-coding heuristics for the common cases. Of course, there's a big difference between that and machine learning techniques, but the underlying process is not really that much closer to true intelligence. You still have the same general pipeline: input -> some kind of near-deterministic processing -> output. Yes, the middle step may be mathematically complex and in some cases even have probabilistic elements, but the system as a whole is nowhere near as complex or nuanced as the human brain.

    I see that today's AI is capable of much more than most people not working inside AI development areas might think.
    We've come to a point in development where nothing is really impossible anymore, it's rather a question of choosing and weighting available data in order to fine tune the AI engine.

    On the contrary, I think people tend to attribute too much capability to current machine learning techniques. Of course, they do enable some pretty impressive results, even some which would have seemed improbable a few years ago. I don't deny this at all. However, many things (even seemingly basic ones) are very much out of reach for even the most cutting edge research in ML. In that regard, many things are very much "impossible". In my experience, people make the leap between soft AI and hard AI way too easily, when in reality there is still such a large gap between the two. I don't think any current "AI" truly deserves the title, and machine learning is a much more apt description.

    Again,

    @TonalityApp said:
    I'd really like to see some of this NLP research which supposedly puts us on track for machine consciousness. There is such a large disconnect between the reality (math) of modern machine learning techniques and anything even resembling an attempt to emulate biological systems.

    I say this because of all of the points above. People tend to conflate (admittedly impressive) results with solutions to a question that is far from solved. The fact that a machine can appear to understand language by no means implies that it truly understands or can creatively apply said language.

    The comment about "machine learning" systems versus true "artificial intelligence" is accurate. What we have today is not "artificial intelligence". That term is usually used because it's easier to say "A.I." and it's quickly understood.

  • @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

  • @Poppadocrock said:
    There are several cool ai art apps out there, I’ve recently been into this one.

    Dream by WOMBO
    https://apps.apple.com/us/app/dream-by-wombo/id1586366816

    Free.
    Tip - save as phone background to eliminate small watermark in the corner.

    Woah, thanks for this. Just downloaded.

  • edited February 2022

    said:
    I see that today's AI is capable of much more than most people not working inside AI development areas might think.
    We've come to a point in development where nothing is really impossible anymore,

    On the other hand, today's neural networks are still unable to "learn"/train the identity function, i.e. input = output
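
    A small sketch of the limitation being described, assuming PyTorch; the architecture, training range, and hyperparameters are arbitrary choices made purely for illustration:

        import torch
        import torch.nn as nn

        torch.manual_seed(0)
        x = torch.rand(256, 1)   # training inputs drawn from [0, 1]
        y = x.clone()            # target: the identity function

        # An ordinary tanh MLP and a plain regression training loop.
        net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
        opt = torch.optim.Adam(net.parameters(), lr=1e-2)
        for _ in range(2000):
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(x), y)
            loss.backward()
            opt.step()

        with torch.no_grad():
            # In-range input vs. inputs far outside the training range.
            print(net(torch.tensor([[0.5], [10.0], [100.0]])))

    Inside [0, 1] the fit comes out close to y = x, but a tanh network's output is bounded by its final-layer weights, so it cannot keep tracking the identity far outside the range it was trained on. Whether that counts as "unable to learn" or just "the wrong tool for the job", as a later reply argues, is a fair question.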

  • edited February 2022

    @echoopera said:
    @Poppadocrock Wow...love the visuals your terms and phrases generated. I'm exploring this aspect as well to see what is generated (I'm using terms and phrases which fascinate me).

    For me the next step is to take the images into something like ArtRage or Procreate and build on top of them in order to make them 'mine' :wink:

    The style the app generates reminds me of a combo of Francis Bacon, Georges Braque and Yves Tanguy and cubism on peyote :smile:

    Thanks again for the link to this fun tool.

    AI stuff is super fun to overpaint and 3D bash. I highly recommend giving Nightcafe a try. Yes, it's a credit system, but it's well worth it for the amount of control and intention you can give an image. The tools they have are fantastic. You can upload your own images as a base and iterate on them with text prompts. Too cool.

  • edited February 2022

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.
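
    For scale, a rough sketch of the compounding extrapolation being invoked here, assuming price/performance doubles roughly every two years (the exact doubling period is itself debated):

        # Compound growth under an assumed two-year doubling period.
        doubling_period_years = 2
        for horizon in (10, 20):
            factor = 2 ** (horizon / doubling_period_years)
            print(f"{horizon} years -> roughly {factor:.0f}x the price/performance")

    That works out to roughly 32x over 10 years and about 1024x over 20. Whether raw compute growth of that kind says anything about consciousness is exactly what the rest of this thread disputes.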

  • edited February 2022

    @dobbs said:

    said:
    I see that today's AI is capable of much more than most people not working inside AI development areas might think.
    We've come to a point in development where nothing is really impossible anymore,

    On the other hand, today's neural networks are still unable to "learn"/train the identity function, i.e. input = output

    Sure, but that's more an issue of using the wrong tool for the problem (and it can be circumvented if you really want to). I completely agree, though, that people don't always appreciate the limitations and true inner workings of machine learning techniques (as shown in my previous comments). I guess this example does illustrate how current approaches are very limited and highly specialized for certain types of problems while failing utterly at others. Are you involved in ML applications or research? Not many people know about that limitation.

  • What machine would head out into the big wide world knowing it could accidentally harm someone because of its own limitations?

  • @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

  • Rights and payments will all be negotiated when AI Lawyers come online and connect to Court.Net.World.

    If humans get brain implants and are plugged into this "meta-market", we could be charged for those replays that happen when a tune gets stuck in our consciousness and micro-crypto-neuro-payments are activated.

  • edited February 2022

    Hahah....are you peering into my headspace...I was just thinking about doing this. HAHAH!!! GMTA!!! Thanks for spending all those Credits on me :smile:

  • @echoopera said:

    Hahah....are you peering into my headspace...I was just thinking about doing this. HAHAH!!! GMTA!!! Thanks for spending all those Credits on me :smile:

    5 whole credits/minutes I will never get back! ;)

  • edited February 2022

    Hahaha..
    Really loving this WOMBO app.

  • @McD said:
    Rights and payments will all be negotiated when AI Lawyers come online and connect to Court.Net.World.

    If humans get brain implants and are plugged into this "meta-market", we could be charged for those replays that happen when a tune gets stuck in our consciousness and micro-crypto-neuro-payments are activated.

    Neuron-sized micropayments!

  • @TonalityApp said:

    @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

    Moore's law applies to the underlying assumptions of Kurzweil's timeline. If you disagree with Kurzweil, then you (at least partially) are disagreeing with Moore.

  • @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

    Moore's law applies to the underlying assumptions of Kurzweil's timeline. If you disagree with Kurzweil, then you (at least partially) are disagreeing with Moore.

    How exactly? Kurzweil takes a giant leap from Moore’s concepts. I can easily agree with Moore and disagree with Kurzweil…

  • @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

    Moore's law applies to the underlying assumptions of Kurzweil's timeline. If you disagree with Kurzweil, then you (at least partially) are disagreeing with Moore.

    How exactly? Kurzweil takes a giant leap from Moore’s concepts. I can easily agree with Moore and disagree with Kurzweil…

    That's why I said "partially". Moore's Law is the foundation of Kurzweil's calculations.

  • wimwim
    edited February 2022

    I'm not trying to be provocative here ... but I can't help but wonder if the first major advancements in this area will end up coming out of the sex robot industry.

  • @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

    Moore's law applies to the underlying assumptions of Kurzweil's timeline. If you disagree with Kurzweil, then you (at least partially) are disagreeing with Moore.

    How exactly? Kurzweil takes a giant leap from Moore’s concepts. I can easily agree with Moore and disagree with Kurzweil…

    That's why I said "partially". Moore's Law is the foundation of Kurzweil's calculations.

    But… it’s not even partially disagreeing with Moore? If it were the other way around and Kurzweil’s ideas were fundamental to Moore’s, then I’d be partially disagreeing with Moore by disagreeing with Kurzweil, but in this case there’s no such implication.

  • @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

    Moore's law applies to the underlying assumptions of Kurzweil's timeline. If you disagree with Kurzweil, then you (at least partially) are disagreeing with Moore.

    How exactly? Kurzweil takes a giant leap from Moore’s concepts. I can easily agree with Moore and disagree with Kurzweil…

    That's why I said "partially". Moore's Law is the foundation of Kurzweil's calculations.

    But… it’s not even partially disagreeing with Moore? If it were the other way around and Kurzweil’s ideas were fundamental to Moore’s, then I’d be partially disagreeing with Moore by disagreeing with Kurzweil, but in this case there’s no such implication.

    Which elements of Kurzweil's timeline do you take issue with?

  • edited February 2022

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

    Moore's law applies to the underlying assumptions of Kurzweil's timeline. If you disagree with Kurzweil, then you (at least partially) are disagreeing with Moore.

    How exactly? Kurzweil takes a giant leap from Moore’s concepts. I can easily agree with Moore and disagree with Kurzweil…

    That's why I said "partially". Moore's Law is the foundation of Kurzweil's calculations.

    But… it’s not even partially disagreeing with Moore? If it were the other way around and Kurzweil’s ideas were fundamental to Moore’s, then I’d be partially disagreeing with Moore by disagreeing with Kurzweil, but in this case there’s no such implication.

    Which elements of Kurzweil's timeline do you take issue with?

    The parts involving AI becoming self-aware in the very near future (which, if it’s relevant, Moore had nothing to say about)

  • edited February 2022

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

    Moore's law applies to the underlying assumptions of Kurzweil's timeline. If you disagree with Kurzweil, then you (at least partially) are disagreeing with Moore.

    How exactly? Kurzweil takes a giant leap from Moore’s concepts. I can easily agree with Moore and disagree with Kurzweil…

    That's why I said "partially". Moore's Law is the foundation of Kurzweil's calculations.

    But… it’s not even partially disagreeing with Moore? If it were the other way around and Kurzweil’s ideas were fundamental to Moore’s, then I’d be partially disagreeing with Moore by disagreeing with Kurzweil, but in this case there’s no such implication.

    Which elements of Kurzweil's timeline do you take issue with?

    The parts involving AI becoming self-aware in the very near future

    Unless there is a global event which puts a halt to the rate of advances, things are going to get very strange for billions of people really fast.

    And Moore's Law was a calculation based on markets and technological advances. Why would he chime in on artificial intelligence?

  • edited February 2022

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

    Moore's law applies to the underlying assumptions of Kurzweil's timeline. If you disagree with Kurzweil, then you (at least partially) are disagreeing with Moore.

    How exactly? Kurzweil takes a giant leap from Moore’s concepts. I can easily agree with Moore and disagree with Kurzweil…

    That's why I said "partially". Moore's Law is the foundation of Kurzweil's calculations.

    But… it’s not even partially disagreeing with Moore? If it were the other way around and Kurzweil’s ideas were fundamental to Moore’s, then I’d be partially disagreeing with Moore by disagreeing with Kurzweil, but in this case there’s no such implication.

    Which elements of Kurzweil's timeline do you take issue with?

    The parts involving AI becoming self-aware in the very near future

    10-20 years is a distant star system in computing terms.

    I’m well aware, especially in terms of raw hardware advances. However, it’s not that far in terms of mathematical and biological advances relevant to true AI. We’ve been studying machine learning since around the 50s (and what it means to be conscious for even longer) and the most recent advances are still more in the realm of applied math. We’re still far from the biological and computational breakthrough necessary to truly emulate consciousness. We barely have an understanding of human consciousness as it is.

    And Moore's Law was a calculation based on markets and technological advances. Why would he chime in on artificial intelligence?

    Exactly, I was pointing out that I am in no way disagreeing with Moore.

    Unless there is a global event which puts a halt to the rate of advances, things are going to get very strange for billions of people really fast.

    For any number of other reasons, sure.

  • edited February 2022

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:

    @NeuM said:

    @TonalityApp said:
    @NeuM Exactly. That makes me wonder what evidence backs a statement like

    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Ray Kurzweil's estimated timeline predicting an artificial intelligence and technological singularity, which is in turn based on Gordon Moore's observations on the price/performance rate of advances in computing. The Moore predictions are based on observable fact. Kurzweil's prediction builds on that. None of this is secret and it has been debated for decades.

    Hm… I’m not entirely sure Moore’s Law is relevant here. Yes, advances in computing power help current ML techniques work better and faster, but there’s still the fundamental missing step of going from soft to hard AI. Even the most cutting edge ML algorithms are nothing more than a bit of applied statistics and linear algebra heavily tailored to a certain problem or class of problems, and making the leap to true AI will require more than just enhanced computing hardware.

    Moore's law applies to the underlying assumptions of Kurzweil's timeline. If you disagree with Kurzweil, then you (at least partially) are disagreeing with Moore.

    How exactly? Kurzweil takes a giant leap from Moore’s concepts. I can easily agree with Moore and disagree with Kurzweil…

    That's why I said "partially". Moore's Law is the foundation of Kurzweil's calculations.

    But… it’s not even partially disagreeing with Moore? If it were the other way around and Kurzweil’s ideas were fundamental to Moore’s, then I’d be partially disagreeing with Moore by disagreeing with Kurzweil, but in this case there’s no such implication.

    Which elements of Kurzweil's timeline do you take issue with?

    The parts involving AI becoming self-aware in the very near future

    10-20 years is a distant star system in computing terms.

    I’m well aware, especially in terms of raw hardware advances. However, it’s not that far in terms of mathematical and biological advances relevant to true AI. We’ve been studying machine learning since around the 50s (and what it means to be conscious for even longer) and the most recent advances are still more in the realm of applied math. We’re still far from the biological and computational breakthrough necessary to truly emulate consciousness. We barely have an understanding of human consciousness as it is.

    And Moore's Law was a calculation based on markets and technological advances. Why would he chime in on artificial intelligence?

    Exactly, I was pointing out that I am in no way disagreeing with Moore.

    As I alluded to earlier, since there seems to be widespread agreement we do not fully understand consciousness, how do we know for certain if animals, trees or clouds (for that matter) are conscious? How do we know we are conscious? Perhaps what we call consciousness is just a byproduct of having more densely packed neurons in our brains. I've seen some criticisms which state human babies exhibit no evidence of consciousness until they are five or six years old. If that's the case, perhaps consciousness is just an expression of symbolic interpretation, in which case computers should be very capable of becoming conscious.

  • edited February 2022

    @NeuM said:
    As I alluded to earlier, since there seems to be widespread agreement we do not fully understand consciousness, how do we know for certain if animals, trees or clouds (for that matter) are conscious? How do we know we are conscious? Perhaps what we call consciousness is just a byproduct of having more densely packed neurons in our brains. I've seen some criticisms which state human babies exhibit no evidence of consciousness until they are five or six years old.

    Of course, and defining consciousness will likely be a big problem for many years to come. I'm just arguing that multi-dimensional regression and a couple of matrix multiplications are hardly consciousness by almost anyone's definition.

    I've seen some criticisms which state human babies exhibit no evidence of consciousness until they are five or six years old.

    That's... late.

  • @TonalityApp said:

    @NeuM said:
    As I alluded to earlier, since there seems to be widespread agreement we do not fully understand consciousness, how do we know for certain if animals, trees or clouds (for that matter) are conscious? How do we know we are conscious? Perhaps what we call consciousness is just a byproduct of having more densely packed neurons in our brains. I've seen some criticisms which state human babies exhibit no evidence of consciousness until they are five or six years old.

    Of course, and defining consciousness will likely be a big problem for many years to come. I'm just arguing that multi-dimensional regression and a couple of matrix multiplications are hardly consciousness by almost anyone's definition.

    So... are existing machine learning systems at least as capable of demonstrating consciousness as a five- or six-year-old?

  • edited February 2022

    [Duplicate post, sorry]
