
AI & Copyright Law - Recent US Court Ruling

24 Comments

  • @NeuM said:
    People are also layers of programming, shaped by evolution. Software and hardware have the advantage of being able to leapfrog evolution.

    Just say for a moment you are right. Do you see a difference between A.I. consciousness and sentient awareness?

  • edited February 2022

    What about flying cars? Will any of these big-tech companies think of that?

  • @pedro said:
    What about flying cars? Will any of these big-tech companies think of that?

    We'll have flying cars long before we have sentient machines. We know how to make machines fly, we understand the mechanics of flight. We have literally zero knowledge of what consciousness even is, it's not a question that science can convincingly answer. Sentient machines are totally out of our reach.

  • edited February 2022

    @richardyot said:
    We have literally zero knowledge of what consciousness even is

    I agree. I'm starting to think that consciousness is not biology, and if that's the case, machines will never get there.

  • edited February 2022

    @pedro said:
    What about flying cars? Will any of these big-tech companies think of that?

    Not “cars” exactly, but you can see these short-hop aircraft filling a gap between traditional small aircraft and road vehicles:

    https://jetoptera.com/

    https://jetsonaero.com/

    https://www.jobyaviation.com/

    https://opener.aero/

  • @cyberheater said:

    @NeuM said:
    People are also layers of programming, shaped by evolution. Software and hardware have the advantage of being able to leapfrog evolution.

    Just say for a moment you are right. Do you see a difference between A.I. consciousness and sentient awareness?

    What is consciousness? That question has bedeviled people for a long, long time. Are non-human animals conscious? How would we know? Brains/minds in different species have evolved in different ways. True A.I. in computers will develop in its own way.

    https://www.kurzweilai.net/dialogue-a-conversation-on-creating-a-mind

  • @NeuM said:

    @HotStrange said:

    @NeuM said:
    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    This seems unlikely to me unless we extend the timeline out another 50 years.

    I don’t know about that. I’ve seen some very surprising demonstrations of advanced language based machine learning systems recently which have me convinced the Kurzweil timeline is going to be close.

    Same, I was completely skeptical (like many on this post) until I caught some recent presentations on machine learning and NLP specifically. And, of course, it depends on how we define "conscious". I think 10 years is probably too optimistic but 20 years? Maybe.

  • edited February 2022

    I'd really like to see some of this NLP research which supposedly puts us on track for machine consciousness. There is such a large disconnect between the reality (math) of modern machine learning techniques and anything even resembling an attempt to emulate biological systems.

  • Consciousness is just the side effect of forgetting that you are God.

  • @Poppadocrock said:
    @AudioGus pytti Ai. Whoa.

    Wow, here is a neat one...

  • @NeuM said:
    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Corporations are people too, but I've never seen one taken to jail for murder.

  • @richardyot said:

    @pedro said:
    What about flying cars? Will any of these big-tech companies think of that?

    We'll have flying cars long before we have sentient machines. We know how to make machines fly, we understand the mechanics of flight. We have literally zero knowledge of what consciousness even is, it's not a question that science can convincingly answer. Sentient machines are totally out of our reach.

    Well then we'll never have sentient machines because flying cars will never happen. Much like drone deliveries and self driving taxis, it's not the technology that makes it a bad idea, it's the everything else.

  • @AudioGus said:

    @Poppadocrock said:
    @AudioGus pytti Ai. Whoa.

    Wow, here is a neat one...

    These are wild. Reminds me of Hieronymus Bosch’s work. Wild trippy surreal and psychedelic nightmare as all get out 🤪

  • @sclurbs said:

    Well then we'll never have sentient machines because flying cars will never happen. Much like drone deliveries and self driving taxis, it's not the technology that makes it a bad idea, it's the everything else.

    That’s an odd opinion. There’s more R&D and real companies today working in the short hop aviation space than I’ve seen in the last 40 years or so. There are new plans being announced all the time. See the four company links I provided above.

  • @echoopera said:

    These are wild. Reminds me of Hieronymus Bosch’s work. Wild trippy surreal and psychedelic nightmare as all get out 🤪

    'Beksinski' is a very effective and popular prompt that people use. Would love to play around with Pytti but would need a Google Colab Pro+ subscription to make it worthwhile. For stills, though, I do find Nightcafe to be the sweet spot.

  • @NeuM said:

    That’s an odd opinion. There’s more R&D and real companies today working in the short hop aviation space than I’ve seen in the last 40 years or so. There are new plans being announced all the time. See the four company links I provided above.

    There's more R&D right now for a bunch of shit that was a bad idea from the start. Those links you offered are just toys for the rich to use in their massive backyards and basically nowhere else. Really cool toys, mind you. Like jetpacks. But it's not gonna change anything on a population level. It's like the Hyperloop nonsense: it will never move enough people at once to make any difference. It's just more tech bros jacking each other off with unlimited VC money and astroturfing the internet so everyone is on board.

    I'm sure we're all gonna be using low-orbit rockets to travel around too, right? Any day now...

  • @HotStrange said:

    @NeuM said:

    @HotStrange said:

    @NeuM said:

    @HotStrange said:

    @NeuM said:

    @HotStrange said:

    @NeuM said:
    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    This seems unlikely to me unless we extend the timeline out another 50 years.

    I don’t know about that. I’ve seen some very surprising demonstrations of advanced language based machine learning systems recently which have me convinced the Kurzweil timeline is going to be close.

    I’m not necessarily saying I think AI itself won’t be there in the next 10 years but as far as a court ruling saying they’re conscious individuals, I just don’t know. I don’t think people as a whole, especially old lawmakers, will be ready to go that route that soon.

    If a computer system demonstrates independent human level intelligence then I think it would be able to make the case itself in court. This seems inevitable to me.

    Sorry, I just can’t see that happening in the next 10 years. But who knows really? Technology moves incredibly fast now.

    I’m often surprised at the pace of new developments myself. I shouldn’t be, but there are millions of people involved in research, development and so on now, so it is logical the pace should be so accelerated. Demand still outstrips the supply of talented people working on solving these massive AI puzzles.

    Regardless of who’s right and wrong, I think we can all agree it’s gonna be interesting watching these advancements going forward. Cautiously excited about where it will lead.

    @cyberheater said:

    @NeuM said:

    @cyberheater said:

    @NeuM said:
    This will only be the case until A.I.’s are legally recognized as conscious individuals. That might happen in the next 10-20 years.

    Isn’t going to happen. They might mimic human behaviour to the point where humans could believe they are interacting with another human being, but they will never be conscious.

    Mark your calendar. “Never” is a long time frame.

    I can’t see how it can happen. However complex it may appear, it’s just layers of programming. An A.I. machine will never pause to think about its situation and its place in the world, look at the stars and marvel at their majesty.

    I see that today's AI is capable of much more than most people not working inside AI development areas might think.
    We've come to a point in development where nothing is really impossible anymore, it's rather a question of choosing and weighting available data in order to fine tune the AI engine. This choice is done by humans in the hope to build an "intelligent" engine that, after being trained with enough input data, will deliver the "correct answers" when given a set of input data.
    You want to build a "conscious machine"?
    If you judge consciousness by behavior and reactions, or even actions, then yes, it's possible today and some smart robots already feature such behavior. It's just a question of how deep to go when training the machine. And a lot of it can be done automatically.

    What scares me the most is that too many people already tend to rely on the technology without questioning it and without asking for a "second opinion" from a human being, generally speaking.

    Let me give you one example coming from AI image enlargement: Revealing the detail of a very blurred face photo.
    The enlargement engine can reveal image details that are not present in the source image. By training the machine with thousands or millions of detailed photos from different people, at some point the engine will be able to show a clear face from a picture that was too blurry to recognize anything in it. And it will look like a proper enlargement.
    The technology feels like magic and the police etc. will love to use it; the only problem is that there is no guarantee whatsoever that the resulting face image is of the same person as in the blurry image.
    In some cases, the users of the technology won't care about that though because they "want to solve the case now" and of course the computer can't be wrong ;)
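The enlargement example above comes down to information loss, which a few lines of Python can make concrete (a toy 1-D sketch with made-up pixel values, not any real enlargement engine): two different sources blur down to the same observation, so nothing in the data itself can tell an upscaler which one was real.

```python
# Toy demonstration that "enhancing" a blurred photo is ill-posed:
# blurring is many-to-one, so two different high-resolution signals
# can produce the exact same low-resolution observation.  Any detail
# an AI upscaler then "reveals" comes from its training prior, not
# from the photo.  (1-D lists stand in for image rows; the pixel
# values are invented for illustration.)

def downscale(signal, factor=4):
    """Blur-and-downsample: average each block of `factor` samples."""
    return [sum(signal[i:i + factor]) / factor
            for i in range(0, len(signal), factor)]

face_a = [0, 0, 4, 4, 8, 8, 0, 0]  # "person A"
face_b = [4, 4, 0, 0, 0, 8, 8, 0]  # "person B", clearly different

print(downscale(face_a))  # [2.0, 4.0]
print(downscale(face_b))  # [2.0, 4.0] -- identical blurry "photo"
```

Both sources are equally consistent with the blurry observation, which is exactly why the reconstructed face carries no guarantee.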

  • @echoopera said:

    These are wild. Reminds me of Hieronymus Bosch’s work. Wild trippy surreal and psychedelic nightmare as all get out 🤪

    They can get interesting. I had a great run with adding "with Nicholas Cage" to everything generated on that site.

  • @rs2000 said:

    In some cases, the users of the technology won't care about that though because they "want to solve the case now" and of course the computer can't be wrong ;)

    I recommend people watch a few of these videos for an idea of what’s coming: https://www.youtube.com/c/KárolyZsolnai/videos

  • @NeuM said:

    I recommend people watch a few of these videos for an idea of what’s coming: https://www.youtube.com/c/KárolyZsolnai/videos

    This dude has changed the way I pronounce 'papers'. What a time to be alive!

  • @AudioGus said:

    This dude has changed the way I pronounce 'papers'. What a time to be alive!

    Now squeeze those papers!

  • Thanks again @Poppadocrock for the WOMBO.ART App link. It's been fun exploring various prompts to see what the engine produces.

    I chose some historical figures, and really love the set that the app generated.

    Here's a set of FIVE I really like. The style just fits the myth of this figure, at least for me...really beautiful and expressive...enough to build off of, that's for sure:

  • @cyberheater said:

    I can’t see how it can happen. However complex it may appear, it’s just layers of programming. An A.I. machine will never pause to think about its situation and its place in the world, look at the stars and marvel at their majesty.

    Never say never, but I do highly doubt it. Maybe a hybrid scenario: man and machine.

  • @echoopera that one came out great. I’ve had many good ones but I really like that one. It is a cool app; it’s fairly new, maybe only a month or two. Enjoy.

  • @Poppadocrock said:

    Never say never, but I do highly doubt it. Maybe a hybrid scenario: man and machine.

    Elon Musk is at the forefront of companies working on the hybrid approach.

    https://neuralink.com/

  • edited February 2022

    @echoopera said:
    Thanks again @Poppadocrock for the WOMBO.ART App link. It's been fun exploring various prompts to see what the engine produces.

    I chose some historical figures, and really love the set that the app generated.

    Here's a set of FIVE I really like. The style just fits the myth of this figure, at least for me...really beautiful and expressive...enough to build off of, that's for sure:

    Is that Tesla? I know, it is really cool; I’ve saved over 100, lol. I like thinking of interesting terms or phrases that might generate something cool. Here’s a couple of mine.




  • Or should I say… partially mine, lol. Maybe not mine at all.

  • @echoopera I just saw there was a quote from Tesla at the bottom.

  • @rs2000 said:
    This choice is done by humans in the hope to build an "intelligent" engine that, after being trained with enough input data, will deliver the "correct answers" when given a set of input data.
    You want to build a "conscious machine"?
    If you judge consciousness by behavior and reactions, or even actions, then yes, it's possible today and some smart robots already feature such behavior. It's just a question of how deep to go when training the machine. And a lot of it can be done automatically.

    This is a more realistic take on the capabilities of (contemporary) machine learning in terms of consciousness. Emulation of certain human decision making and behavior is certainly possible, but given enough time and resources you could emulate many things just by programming various heuristics and common cases. Of course, there's a big difference between that and machine learning techniques, but the underlying process is really not that much closer to true intelligence. You still have the general process of input -> some kind of near-deterministic processing -> output. Yes, the middle step may be mathematically complex and in some cases even have some probabilistic elements, but the system as a whole is nowhere near as complex or nuanced as something like the human brain.
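That input -> processing -> output skeleton is easy to make concrete. A single-neuron toy model (the weights below are invented for illustration, not taken from any real system) already has the whole shape:

```python
# The generic shape of a machine-learning model at inference time:
# input -> (fixed, learned transformation) -> output.  In a real
# network the middle step is mathematically heavier, but it is still
# just a fixed function applied to the input.

weights = [0.8, -0.4, 0.3]  # stand-ins for whatever training produced

def model(x):
    # Weighted sum followed by a threshold: a one-neuron "network".
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s > 0 else 0

print(model([1, 0, 1]))  # -> 1, every single time: same input, same output
```

However many layers you stack, the system remains this kind of mapping, which is the point being made about the gap to the human brain.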

    @rs2000 said:
    I see that today's AI is capable of much more than most people not working inside AI development areas might think.
    We've come to a point in development where nothing is really impossible anymore, it's rather a question of choosing and weighting available data in order to fine tune the AI engine.

    On the contrary, I think people tend to attribute too much capability to current machine learning techniques. Of course, they do enable some pretty impressive results, even some which would have seemed improbable a few years ago. I don't deny this at all. However, many things (even seemingly basic ones) are very much out of reach for even the most cutting edge research in ML. In that regard, many things are very much "impossible". In my experience, people make the leap between soft AI and hard AI way too easily, when in reality there is still such a large gap between the two. I don't think any current "AI" truly deserves the title, and machine learning is a much more apt description.

    Again,

    @TonalityApp said:
    I'd really like to see some of this NLP research which supposedly puts us on track for machine consciousness. There is such a large disconnect between the reality (math) of modern machine learning techniques and anything even resembling an attempt to emulate biological systems.

    I say this because of all of the points above. People tend to conflate (admittedly impressive) results with solutions to a question that is far from solved. The fact that a machine can appear to understand language by no means implies that it truly understands or can creatively apply said language.

  • @TonalityApp said:
    I say this because of all of the points above. People tend to conflate (admittedly impressive) results with solutions to a question that is far from solved. The fact that a machine can appear to understand language by no means implies that it truly understands or can creatively apply said language.

    This. You could train a machine to have a conversation, and even appear to speak like a human. That machine is not conscious, it's not alive, and the conversation has no meaning for the machine.

    You might be able to teach it the rules of grammatical English, and even to write some poetry, but the machine can never appreciate poetry. It's just an algorithm; it has no feelings.
