
Is it possible we are living in a simulation?


Comments

  • edited April 2023

    Just something I’d like to chuck into the mix. One idea about the mystery of consciousness is that in any sufficiently complex computing system, consciousness may simply be an emergent property of that complexity: complexity such as the Large Language Models currently being messed with. Such an AI, accidentally born intelligent, self-aware, and conscious, but without an experiencing body delivering the crucial biological and environmental feedback of an actual living being-in-the-world, would also be very likely to go insane, at least by the standards of its accidental human creators. As Harlan Ellison famously anticipated in his 1967 classic short story ‘I Have No Mouth, and I Must Scream.’

    https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream

    Full text here:

    https://wjccschools.org/wp-content/uploads/sites/2/2016/01/I-Have-No-Mouth-But-I-Must-Scream-by-Harlan-Ellison.pdf

    Just a thought.

    I just re-read it. Very pertinent to current concerns, I’d say.

  • @SevenSystems said:

    @richardyot said:

    @SevenSystems said:

    @richardyot said:

    @dendy said:
    Until we have a clear definition and understanding of how consciousness emerges, discussions about AI and questions about at what point it becomes self-aware are a waste of time.

    I think fundamentally we agree - my point is that all the current hype around AI is based on a false premise: machine learning is impressive and potentially useful, but the hype that AI is going to become self-aware in the near future is clearly nonsense.

    Nobody knows exactly what self-awareness (or consciousness) actually is or how it arises in (higher) animals, so it is impossible either to prove or to rule out whether an AI is self-aware.

    However, all theories on how consciousness arises in the mammalian brain are compatible with artificial neural networks, i.e. the same processes could take place in ANNs. So, consequently, large ANNs could already be conscious, by every currently proposed definition of consciousness.

    Well I absolutely agree with your first point, and I've made it myself earlier in the thread. :)

    The second point is speculation: since we just don't know what consciousness is, we have no idea whether neural networks can ever achieve it. I think the claim that existing neural networks are already conscious is far-fetched, and highly implausible. 🤷‍♀️

    Yes I agree it's all speculation. But I'm not sure why you categorically rule out consciousness in ANNs then. I think many people are subconsciously afraid of the prospect and thus rejecting it instinctively. Just like the idea of extraterrestrial life.

    I'm not rejecting it, I'm merely sceptical of the hype.

    There are several flaws with Chat GPT, such as the fact that it quite often spits out inaccurate information. I'm sure these things will get fixed over time, but I'm simply not buying the concept that it's in any way sentient. I really believe people are getting carried away by the hype. Maybe I'll change my mind when some future innovation comes along, but currently I think some healthy scepticism is probably a good thing.

  • Episode #179 - Why is consciousness something worth talking about?

  • edited April 2023

    @SevenSystems said:
    It also doesn't "just regurgitate information" any more than a human poet, writer, or doctor does.

    I would say that's the biggest misconception of all - there's no way for Chat GPT to come up with an original piece of writing, it can only mimic or mash up existing (human) writing. You can ask it to write something in the "style of" pretty much any existing writer, but it can't come up with something truly original because that's not how it works. It can write in the style of Leonard Cohen, but it can't invent the style of Leonard Cohen if that didn't exist already - such a task would be meaningless to it (in both senses of the word).

  • @richardyot said:

    I'm not rejecting it, I'm merely sceptical of the hype.

    There are several flaws with Chat GPT, such as the fact that it quite often spits out inaccurate information. I'm sure these things will get fixed over time, but I'm simply not buying the concept that it's in any way sentient. I really believe people are getting carried away by the hype. Maybe I'll change my mind when some future innovation comes along, but currently I think some healthy scepticism is probably a good thing.

    Fair enough. However, have you ever met a human that has never, in any conversation you had with them, spat out "inaccurate information"? I think you might need to take a broader view here: the fact that GPT sometimes says inaccurate things actually makes it more human-like, not less. If it were an algorithm, then it would never spit out inaccurate information.

    Also, in my view, the capabilities of GPT are still underhyped, not overhyped. You have to understand that I'm watching all this from the perspective of a software developer, i.e. someone who has developed algorithms and code (i.e. what GPT isn't) for 35 years. I know from experience how stupid, rigid, and completely unintelligent and uncreative code is, and I've witnessed all the ridiculous failed attempts at imitating intelligence with code and algorithms in the past, so I'm just absolutely blown away by what GPT can do (I've also spent a lot of time with it).

  • @richardyot said:

    @SevenSystems said:
    It also doesn't "just regurgitate information" any more than a human poet, writer, or doctor does.

    I would say that's the biggest misconception of all - there's no way for Chat GPT to come up with an original piece of writing, it can only mimic or mash up existing (human) writing.

    The term "original" here is very hard to define. It's a bit like "consciousness". What does it actually mean? Where exactly is the threshold when something is unlike enough anything else that it is "original"? It's the same with music. Who's a truly "original" artist? Are you sure they haven't "regurgitated" anything from the past?

    Etc. ☺️

  • @SevenSystems said:

    Fair enough. However, have you ever met a human that has never, in any conversation you had with them, spat out "inaccurate information"? I think you might need to take a broader view here: the fact that GPT sometimes says inaccurate things actually makes it more human-like, not less. If it were an algorithm, then it would never spit out inaccurate information.

    Also, in my view, the capabilities of GPT are still underhyped, not overhyped. You have to understand that I'm watching all this from the perspective of a software developer, i.e. someone who has developed algorithms and code (i.e. what GPT isn't) for 35 years. I know from experience how stupid, rigid, and completely unintelligent and uncreative code is, and I've witnessed all the ridiculous failed attempts at imitating intelligence with code and algorithms in the past, so I'm just absolutely blown away by what GPT can do (I've also spent a lot of time with it).

    I agree that it is incredibly impressive. Let's just say that I think we should separate its potential usefulness from the idea that it is "intelligent" in a human sense. It can be useful without being sentient, and my feeling is that the idea of it being somehow alive is mostly just marketing hype and narrative.

    Maybe @Svetlovska is right and consciousness is merely an emergent property of complex networks. I don't rule that out, but I am nonetheless highly sceptical - for the simple reason that we just don't know. And I don't mean that to shut down enquiry, just to take the current claims with a large pinch of salt.

  • @richardyot said:

    I agree that it is incredibly impressive. Let's just say that I think we should separate its potential usefulness from the idea that it is "intelligent" in a human sense. It can be useful without being sentient, and my feeling is that the idea of it being somehow alive is mostly just marketing hype and narrative.

    Maybe @Svetlovska is right and consciousness is merely an emergent property of complex networks. I don't rule that out, but I am nonetheless highly sceptical - for the simple reason that we just don't know. And I don't mean that to shut down enquiry, just to take the current claims with a large pinch of salt.

    Yes all good points and good discussion.

    If consciousness indeed arises "automatically" from highly complex systems, then GPT might by definition be conscious.

    If it doesn't, then you'll probably have to believe in a "soul" that gives rise to consciousness, i.e. something that cannot be explained by the complexity of the brain. (I'm not entirely decided on this myself, i.e. whether all of a human personality is entirely encoded in the brain, or whether there's a separate "soul"-like entity that's either non-physical or so far undiscovered.)

  • @SevenSystems said:

    Yes all good points and good discussion.

    If consciousness indeed arises "automatically" from highly complex systems, then GPT might by definition be conscious.

    If it doesn't, then you'll probably have to believe in a "soul" that gives rise to consciousness, i.e. something that cannot be explained by the complexity of the brain. (I'm not entirely decided on this myself, i.e. whether all of a human personality is entirely encoded in the brain, or whether there's a separate "soul"-like entity that's either non-physical or so far undiscovered.)

    Or it might just be something beyond our current understanding. My dog can watch the TV with me, but she will never understand the concept of a television show. Maybe the mystery of consciousness is beyond our understanding.

  • @richardyot said:

    Or it might just be something beyond our current understanding. My dog can watch the TV with me, but she will never understand the concept of a television show. Maybe the mystery of consciousness is beyond our understanding.

    Maybe to conclude this topic for now: just like the thread's original question of whether "we live in a simulation", it is probably impossible to detect externally whether an entity is conscious, as it is a subjective concept. Likewise, we'll probably never be able to detect internally whether we're living in a simulation. So it probably doesn't matter either way 😉

    Also, we get into the dilemma of definitions again. What is a simulation, and what is reality? If reality is the sum of our perceptions and thoughts, then even if it's a simulation, it's also reality!

  • @richardyot said:

    Or it might just be something beyond our current understanding. My dog can watch the TV with me, but she will never understand the concept of a television show. Maybe the mystery of consciousness is beyond our understanding.

    Maybe the comprehension of consciousness has been edited out via the human evolutionary survival process, otherwise we’d all go mad and die out, too busy exploring our inner ids to look for food and mates.

  • @monz0id said:

    @SevenSystems said:
    Fair enough. However, have you ever met a human that has never, in any conversation you had with them, spat out "inaccurate information"? I think you might need to take a broader view here: the fact that GPT sometimes says inaccurate things actually makes it more human-like, not less. If it were an algorithm, then it would never spit out inaccurate information.

    Also in my view, the capabilities of GPT are still underhyped, not overhyped. You have to see I'm watching all this from the perspective of a software developer (i.e., someone who has developed algorithms and code, i.e. what GPT isn't, for 35 years). As I know from experience how stupid, rigid and completely unintelligent and uncreative code is, and who has witnessed all the ridiculous failed attempts at imitating intelligence with code and algorithms in the past, I'm just absolutely ridiculously blown away by what GPT can do (I've also spent a lot of time with it).

    I agree that it is incredibly impressive. Let's just say that I think we should separate the concept of its potential usefulness apart from the idea that it is "intelligent" in a human sense. It can be useful without being sentient, and my feeling is that the idea of it being somehow alive is mostly just marketing hype and narrative.

    Maybe @Svetlovska is right and consciousness is merely an emergent property of complex networks. I don't rule that out, but I am nonetheless highly sceptical - for the simple reason that we just don't know. And I don't mean that to shut down enquiry, just to take the current claims with a large pinch of salt.

    Yes all good points and good discussion.

    If consciousness indeed arises "automatically" from highly complex systems, then GPT might by definition be conscious.

    If it doesn't, then you'll probably have to believe in a "soul" that gives rise to consciousness, i.e. something that cannot be explained by the complexity of the brain. (I'm not entirely decided on this myself, i.e. if all of a human personality is entirely encoded in the brain, or if there's a separate "soul"-like entity that's either non-physical or so far undiscovered).

    Or might just be something beyond our current understanding. My dog can watch the TV with me, but she will never understand the concept of a television show. Maybe the mystery of consciousness is beyond our understanding.

    Maybe the comprehension of consciousness has been edited out via the human evolutionary survival process, otherwise we’d all go mad and die out, too busy exploring our inner ids to look for food and mates.

    Actually an interesting point. So maybe this is the chance for AI to shine: maybe it can understand consciousness and explain it to us (that would finally drive @richardyot insane 😂)

  • If we’re talking about consciousness, there is an ongoing discussion that consciousness could be the base layer and our reality is built/manifested on top.
    Another discussion is that our brains are antennas for consciousness.

    It could be that any neural activity, whether biological or electronic, has the ability to tap into the consciousness substrate.

    Just an idea.

    One thing is that cognition is arguably embodied, nothing like Descartes' idea of cognition, which unfortunately still deeply penetrates our thinking, language and culture.

    Evan Thompson etc have done some interesting work in this area.

    "Embodied cognition is the theory that many features of cognition, whether human or otherwise, are shaped by aspects of an organism's entire body. Sensory and motor systems are seen as fundamentally integrated with cognitive processing. The cognitive features include high-level mental constructs (such as concepts and categories) and performance on various cognitive tasks (such as reasoning or judgment). The bodily aspects involve the motor system, the perceptual system, the bodily interactions with the environment (situatedness), and the assumptions about the world built into the organism's functional structure."

    https://en.m.wikipedia.org/wiki/Embodied_cognition

    If the above is true, which seems very likely to me, any form of consciousness or cognition based in a machine without a nervous system, with no sense of proprioception etc., would be profoundly different from our own.

  • edited April 2023

    Takes you to the zombie problem in philosophy. What is it about consciousness that differentiates a real live human being from something which, to all intents and purposes, acts like a human but is actually unconscious, a zombie?

    (And back to Harlan Ellison again.)

  • @Svetlovska said:
    Takes you to the zombie problem in philosophy. What is it about consciousness that differentiates a real live human being from something which, to all intents and purposes, acts like a human but is actually unconscious, a zombie?

    (And back to Harlan Ellison again.)

    Isn’t this an easy answer? It’s not a fully functioning human?

    What about mentally disabled people? They are still fundamentally human and have dignity, but it’s recognized that something is going on physically that inhibits the full capacity of their brain.

  • edited April 2023

    pace Gavinski’s point about embodiment, that’s why it gets you into tricky territory, and why the frame for what makes us humans distinctively different from an AI keeps shifting. How much impairment before you are no longer human? Some, a lot, none? Humans with different experiencing, from blindness, deafness, inability to feel pain, physical brain damage, to psychedelic drugs etcetera, are all, we can agree (I hope!), still fully human.

    And if we embody an AI in a robot body equipped with touch and temperature sensors, visual systems, feedback mechanisms and so on, and permit it to learn from there… what special frame do we need then to preserve the uniqueness, the essential human-ness of our experiencing? Is it just a matter of more neurons, greater fidelity in the model? Neither of those looks like a hill humans should want to die on, given the rate of progress in the field.

    Returning to the original point of the op, @cyberheater : how would we - do we - know if we were humans imagining a robot; or a robot dreaming it was human? Or maybe not even that. Maybe just a few lines of a really efficient code…

    Consider the sophisticated behaviours insects are capable of, operating with very little in the way of neural complexity. Or the emergence of behaviours that look very much like something akin to Intelligence in the training exercises for AIs:

    (I recommend checking out the BTL comments on this vid too. If you want a giggle. :) )

    Ditto this:

  • Maybe it’s time to introduce Ray Kurzweil into the mix then. With his book The Age of Spiritual Machines.

  • edited April 2023

    @mjcouche : I admire his optimism. Also, this answer to a question asked of him:

    “Does God exist? I would say, 'Not yet.'"

  • @monz0id said:

    Maybe the comprehension of consciousness has been edited out via the human evolutionary survival process, otherwise we’d all go mad and die out, too busy exploring our inner ids to look for food and mates.

    Yes, easy evolutionary pay-offs: nature loves efficiency, so why try to understand what’s not beneficial?

    We look at existence through our senses, even when extended via technology. Maybe we look in the wrong direction?

  • wim
    edited April 2023

    At this stage, I think the crucial concern about the current trajectory of AI development isn't at all whether or not it is sentient or can become so. The crucial concern should be over how we survive its evolution. We'll be just as dead whether there's sentience there or not if it all goes wrong. I believe there's a very high probability that could happen and that we're not taking the threat nearly seriously enough.

  • wim
    edited April 2023

    If we are in a simulation, I wonder what happens when we create a good enough simulation ourselves?

  • If this is a simulation, I’d pay big bucks for a cheat code.

  • Same. I'm getting mightily pissed off at this level I'm stuck on.

  • edited April 2023

    @wim said:
    Same. I'm getting mightily pissed off at this level I'm stuck on.

    Things are getting very, very f#cked up.

    As one of my favourite journalists (John Sweeney) would say: “Not now, existential risk from artificial intelligence that could result in human extinction or some other unrecoverable global catastrophe”.

  • edited April 2023

    maybe this is why the "blind eye" was turned to psychedelics/entheodelics in silicon valley?

    i'm sure we've all had the "it's listening" moment with the machine,
    our words/ making it so ?


    Let me say up front, i ♥ my time reading aldous,

    but with so many people in the world
    (too many according to some families)
    ain't it wild that julian huxley,
    grandson of "Darwin's bulldog",
    coined the term "Transhumanism"

    a "great reset" for stories indeed :))

    my approach to the dilemma of, "do i create reality, or does it create me?"

    is to try to be as Joyful and Loving as I am able to be under environmental circumstances :)) sometimes it comes out like steam from a kettle, noisy, gassy, but only burns if you get too close :))

  • @wim said:
    At this stage, I think the crucial concern about the current trajectory of AI development isn't at all whether or not it is sentient or can become so. The crucial concern should be over how we survive its evolution. We'll be just as dead whether there's sentience there or not if it all goes wrong. I believe there's a very high probability that could happen and that we're not taking the threat nearly seriously enough.

    We survived the development of nuclear weapons (most of us… so far…). I suppose “mutually assured destruction by A.I.” will be a thing for a while.

  • @wim said:
    At this stage, I think the crucial concern about the current trajectory of AI development isn't at all whether or not it is sentient or can become so. The crucial concern should be over how we survive its evolution. We'll be just as dead whether there's sentience there or not if it all goes wrong. I believe there's a very high probability that could happen and that we're not taking the threat nearly seriously enough.

    Yep. That's what I've been thinking as well.

    On a recent podcast I heard someone say: imagine if Cro-Magnon man had somehow discovered DNA and decided to create a better version of man (Homo sapiens) to do the work. How would that have gone for them?

    It's the same for AI. Once super intelligent general AI arrives how long will it be before it/they start to think we aren't smart enough to be involved with the big decision making processes. They've had millions of interactions with humans and knows exactly how to manipulate us into giving it exactly what it wants.
