
Chat GPT-4o is my buddy

Comments

  • @cyberheater said:

    @offbrands said:

    @cyberheater said:

    @offbrands said:
    The dog is the LLM, that is about to burst a bubble, the “AI” bubble. Bubbles gonna burst. Relatively soon.

    AI has just started. Not sure what you are referring to relative to a burst.

    Goldman Sachs has already questioned its viability, and therefore its worth.

    I think about how crypto and the Metaverse were each supposed to be the next big thing. Now they run away from talking about them.

    They keep moving the goalposts for unlimited growth in the tech sector. They’ve run out of ideas, and this AI (which, to be clear, I know is generative LLMs) is the next wool they’re pulling over the customers’ eyes. It’s all bullshit.

    Side note: AI has not just started. Machine learning and LMs have been around for decades.

    The Goldman Sachs article is over a year old and well out of date.
    LLMs have come a long way since then, and advances show no sign of slowing down. Quite the contrary.

    I've been following advances very closely. Huge amounts of money and effort are being plowed into this. AI is going to be deeply embedded into every aspect of our lives whether we want it or not.

    Yes, they will be embedded whether we want them or not. One of the things Jaron Lanier discusses in some of his articles is a common misunderstanding of how LLMs work (even among computer scientists who are not experts in machine learning): you can do all kinds of refining to improve what something like ChatGPT does -- but you can't categorically change what it does, only how effectively it does it. There is a lot of smoke and mirrors from people with billions and billions at stake, who want to convince everyone (particularly investors) that particular AI systems are something other than what they are. You can improve how well an LLM does what it was set up to do (again: it is a massively effective predictive text generator that produces text responsive to your prompts, drawing on an enormous corpus and generating the expected response extrapolated from it). Its builders can refine how it weights and filters things -- but there is nothing in it that determines truth. And as people with no money to gain point out: the more that LLM output (inadvertently) becomes part of the corpus it trains on, the more prone to error it will become as erroneous data enter the corpus.

    So much money is at stake that you need to take anything any interested party says about future capabilities with a healthy dose of skepticism. Lanier pointed out early on that there would be massive apparent improvement that is just the natural fruition of a better corpus, better filtering of that corpus, and tweaking of the algorithms used -- and that LLMs will hit a wall, at which point a new generation of systems will need to be developed. They are amazingly powerful tools -- they just aren't quite the tool that some people want us to think.

  • @cyberheater said:

    @offbrands said:
    Here’s the new one, apologies. This is within the last couple weeks.

    Thanks for the article. In conclusion it does say that AI will pay off, but that at the moment it's constrained by GPU availability. There's a bit in there stating (conservatively) that within 10 years 25% of human jobs will be replaced. That's quite a decent return on investment. I think it will be quicker than that.

    @offbrands said:
    The money being plowed to it makes no difference to what I’m saying. Believe what you will, I’ll do the same.

    Don't get me wrong. AI is going to be a bigger disrupter, with more impact on humans, than any other technology. I'm not hugely optimistic that we will handle the transition well.
    But no. The bubble isn't about to burst. There is no bubble. It's only unrelenting progress. Quite frightening, really.

    Unfortunately these companies don’t deal in decades. They’re short-sighted, as they constantly prove time and time again.

    They’re run by CEOs who have a vested interest in shareholder value being as high as possible as quickly as possible.

    The market will not wait until then, imo. Therefore the huge hype and spend has turned it into a bubble. It’s gonna burst.

    The fact that these ghouls hate creatives as much as they do should make every single creative using it treat these companies as opponents rather than allies. Their products were, again, trained on people’s creations without explicit permission, and are therefore stolen.

    They’ll never be able to afford the copyrights. They’ll never be able to get the usage rights under the necessary conditions to justify the cost before the market comes for their profits.

    I’m cynical, I realize. But since it seems you’re somewhat aware of the situation: in what world has the market ever waited 10-plus years for profits?

  • edited July 14

    @espiegel123 said:

    You can improve how well an LLM does what it was set up to do [...] They are amazingly powerful tools -- they just aren't quite the tool that some people want us to think.

    Well said. I literally have nothing to add, but this is exactly what I’m trying to say: we are being sold a product that’s marketed dishonestly by MBAs at the C-suite level of their companies.

    They have little to no knowledge of the product; all they want is a number to go up. They’ll lie egregiously to get there.

  • @offbrands said:
    As for the “AI” convo, Chat GPT is an LLM. Being impressed with it is like meeting a fully trained dog and asking it to sit. It’ll do it. Ask it to play dead, it’ll do that too. Mildly impressive.

    But now imagine that dog has the necessity of using all the energy a country can produce for a single year in order to do these few tricks. Then imagine that dog is currently driving the American stock market in the billions of market cap while not being profitable and also producing a shit product that venture capitalists are currently calling out.

    The dog is the LLM, that is about to burst a bubble, the “AI” bubble. Bubbles gonna burst. Relatively soon.

    Those AI models have stolen creatives’ works from the internet, so image generation is just plagiarism with extra steps, as are its prose and writing, and of course the facts it spits out. These people don’t believe creatives should exist. I’ll take one on the chin and admit to currently assuming that most of the people here are creatives. Creatives who want to create, and not have a shitty LLM produce work while stealing jobs from other creatives.

    Those AI models blatantly lie, which the industry has labeled “hallucinations”, and there is no known solution to this. Conflating humans with a machine is a fool’s errand. Yes, humans lie, mislead, and manipulate, but people are able to call this out. With AI it just gets labeled a “hallucination”. Go eat a rock, it’s fine.

    These companies have no way of protecting your data either, as E2EE is currently impossible for them. So if you’re completely comfortable giving any information to a product that will use it to train future models, go right ahead.

    Entertaining this trash product so it can blatantly lie to you, while the new leaders of Silicon Valley lie to us and burn valuable energy during our climate crisis to enrich themselves, because they’ve run out of ideas, is, in a word, fucked.

    There are no more worlds for them to conquer. So these MBAs are selling AI to us in order to grow, and they’ll get rich while we, the poors, suffer the consequences of their egregious actions.

    I implore the lot of you to do some research into why it’s a horrible product.

    Or don’t, it’s your life.

    Either way, we all are, quite literally, being scammed.

    I don't disagree. But on the bright side, it's saved me a lot of time and effort at work.

  • @offbrands said:
    As for the “AI” convo, Chat GPT is an LLM. Being impressed with it is like meeting a fully trained dog and asking it to sit. It’ll do it. Ask it to play dead, it’ll do that too. Mildly impressive.

    But now imagine that dog has the necessity of using all the energy a country can produce for a single year in order to do these few tricks.

    The NY Times clearly doesn't understand how this works. Chips improve on a less than annual cycle. Their processing power improves and their power requirements drop every single time advances are made. This is how technology cycles work. The imaginary "all the energy a country can produce" claim is completely bogus. They need to look up Moore's Law.

  • @NeuM said:

    The NY Times clearly doesn't understand how this works. Chips improve on a less than annual cycle. Their processing power improves and their power requirements drop every single time advances are made. This is how technology cycles work. The imaginary "all the energy a country can produce" claim is completely bogus. They need to look up Moore's Law.

    Okay, the NYT linked a peer-reviewed article with a plethora of sourcing from many people in academia. Your vague mention of Moore’s Law as a gotcha is wild.

    Please cite any proof that Moore’s Law trumps this peer-reviewed study as it currently stands.

  • @Wrlds2ndBstGeoshredr said:

    I don't disagree. But on the bright side, it's saved me a lot of time and effort at work.

    Apathy is a feeling I yearn for, lucky you.

  • @offbrands said:

    Apathy is a feeling I yearn for, lucky you.

    Erroneously presumptive. My default affect tends toward acceptance and non-attachment.

  • edited July 14

    @offbrands said:

    Okay, the NYT linked a peer-reviewed article with a plethora of sourcing from many people in academia. Your vague mention of Moore’s Law as a gotcha is wild.

    Please cite any proof that Moore’s Law trumps this peer-reviewed study as it currently stands.

    "Alex de Vries is a PhD candidate at the VU Amsterdam School of Business and Economics and the founder of Digiconomist, a research company dedicated to exposing the unintended consequences of digital trends. His research focuses on the environmental impact of emerging technologies and has played a major role in the global discussion regarding the sustainability of blockchain technology."

    Yeah, that sounds really "unbiased." Also, whatever the results of this "peer-reviewed study" are, they are hidden behind a paywall, so the alleged results are not open to discussion or debate.

  • edited July 14

    @Wrlds2ndBstGeoshredr said:

    Erroneously presumptive. My default affect tends toward acceptance and non-attachment.

    Hit a chord, it seems?

    So you’re stating that you’re accepting yet non-attached... if that’s not indifference... which is the definition of apathy... then I suppose you’re well within your rights to call it what you want. That doesn’t change the definition of the word.

    I’m not here to judge, and to be honest I couldn’t care less what your position is, or anyone else’s. I’m just sharing what I’ve seen, with sources where relevant, because there are a lot of creatives here who should be in the know. Embracing this technology is ridiculous for any creative to be doing. Full stop.

    You made a point of saying that you’ve gotten work done, after I posted many sources about the harms of AI. You quoted me, and your response was “saved me time and effort at work”.

    That’s apathetic, but as I’ve stated previously, I’m well aware self-reflection is a hard thing to do. It’s all good either way.

    I’ll continue to be “erroneously presumptive”, given the very little information you gave in a quoted response to me that was, yet again, apathetic.

    Cheers.

  • edited July 15

    @NeuM said:

    "Alex de Vries is a PhD candidate at the VU Amsterdam School of Business and Economics and the founder of Digiconomist, a research company dedicated to exposing the unintended consequences of digital trends. His research focuses on the environmental impact of emerging technologies and has played a major role in the global discussion regarding the sustainability of blockchain technology."

    Yeah, that sounds really "unbiased." Also, whatever the results of this "peer-reviewed study" are, they are hidden behind a paywall, so the alleged results are not open to discussion or debate.

    Buddy. You’re whinging that a PhD candidate’s work aims to show the unintended consequences of digital trends. We all live in a capitalist society. This person has a business and markets themselves to get research funding to publish papers showing exactly that. This is how it works. Their papers are peer reviewed. Marketing experts, by contrast, can literally say whatever they want.

    By your logic, anyone who studies anything at length is biased. That’s delusional. If I didn’t know better, I would assume you work for an AI company or big tech.

    But considering you find research biased, that’s the end of this conversation for me. You’ve lost the plot.

    Best.

  • @HolyMoses said:
    …or, as this woman claims:

    (Favorite in reprise)

    Completely true.

  • edited July 15

    @offbrands said:

    Buddy. You’re whinging that a PhD candidate’s work aims to show the unintended consequences of digital trends. We all live in a capitalist society. This person has a business and markets themselves to get research funding to publish papers showing exactly that. This is how it works. Their papers are peer reviewed. Marketing experts, by contrast, can literally say whatever they want.

    By your logic, anyone who studies anything at length is biased. That’s delusional. If I didn’t know better, I would assume you work for an AI company or big tech.

    But considering you find research biased, that’s the end of this conversation for me. You’ve lost the plot.

    Best.

    LOL. You cited a NYT link to a source which has no evidence... unless you pay for it. That's not evidence.

    In the interest of getting back to less controversial topics, I think this exchange is over.

    Have a nice day.

  • @offbrands said:
    As for the “AI” convo, Chat GPT is an LLM. Being impressed with it is like meeting a fully trained dog and asking it to sit. [...] Either way, we all are, quite literally, being scammed.

    This post should be stickied as it was written like a grad school term paper (and no one asked him to).

  • edited July 15

    @NeuM said:

    @offbrands said:

    @NeuM said:

    @offbrands said:

    @NeuM said:

    @offbrands said:
    As for the “AI” convo, Chat GPT is an LLM. Being impressed with it is like meeting a fully trained dog and asking it to sit. It’ll do it. Ask it to play dead, it’ll do that too. Mildly impressive.

    But now imagine that dog has the necessity of using all the energy a country can produce for a single year in order to do these few tricks.

    The NY Times clearly doesn't understand how this works. Chips improve on a less than annual cycle. Their processing power improves and their power requirements drop every single time advances are made. This is how technology cycles work. The imaginary "all the energy a country can produce" claim is completely bogus. They need to look up Moore's Law.

Okay, the NYT linked a peer-reviewed article with a plethora of sourcing from many people in academia. Your vague mention of Moore’s Law as a gotcha is wild.

Please cite any proof that Moore’s Law trumps this peer-reviewed study as it currently stands.

    "Alex de Vries is a PhD candidate at the VU Amsterdam School of Business and Economics and the founder of Digiconomist, a research company dedicated to exposing the unintended consequences of digital trends. His research focuses on the environmental impact of emerging technologies and has played a major role in the global discussion regarding the sustainability of blockchain technology."

Yeah, that sounds really "unbiased." Also, the results of this "peer-reviewed study" are hidden behind a paywall, so the alleged results are not subject to discussion or debate.

Buddy. You’re whinging about a PhD candidate’s work being to show the unintended consequences of digital trends. We all live in a capitalist society. This person has a business and markets themselves to get research funding to publish papers showing that. This is how it works. They’re peer reviewed. Unlike marketing experts, who can literally say whatever they want.

    By your logic anyone who studies anything at length is biased. That’s delusional. If I didn’t know better I would assume you work for AI or big tech.

    But considering you find research biased, that’s the end of this conversation for me. You’ve lost the plot.

    Best.

    LOL. You cited a NYT link to a source which has no evidence... unless you pay for it. That's not evidence.

    In the interest of getting back to less controversial topics, I think this exchange is over.

    Have a nice day.

    “You can’t fire me I quit!”

    Lol. Hilarious.

  • edited July 15

    @offbrands said:

    “You can’t fire me I quit!”

    Lol. Hilarious.

    What happened to "...that’s the end of this conversation for me"?

    Why don't you contact me via DM if you have additional issues you need to work out? This is not the place for this kind of derailing conversation.

  • edited July 15

Not true that ‘there is no bubble’. The rate of development is impressive, but many highly informed observers state that it is by no means a certainty that the problem of “hallucinations” is solvable. See people like Gary Marcus on Twitter, for example, for more informed critique of optimistic outlooks.

If the problem of hallucinations is not solved, AI will never end up disrupting society anywhere near as much as was earlier expected, and this will lead to a MASSIVE drop in the value of shares in AI tech companies, exactly as happened with crypto and metaverse projects. None of this is to say that AI is not useful, but we may have to limit our expectations drastically.

Lots of other points Offbrands made also seem accurate, including the energy issues. I still find ChatGPT and Claude valuable and inspiring in some ways, but their usefulness is drastically limited by the problem of hallucination (as well as by the poor ability of most humans to use them to their full potential).

    @cyberheater said:
    There is no bubble. It's only unrelenting progress. Quite frightening really.

  • wimwim
    edited July 15

    My immediate reaction is there is some conflation going on between "success," as defined as profitability and viability as a business investment vs. disruptive impact. I don't believe there's a determinative correlation.

    Agreed it would be a crap shoot to expect return on investment. Profits could happen. It could look like they're happening and then crash. It could never take off. Some people are going to make fortunes. Some are going to lose their shirts. (I'm not touching related investments at this point, myself. It's too uncertain.)

    But that isn't going to stop AI gaining huge, scary, dangerous, wildly beneficial, practical, and absolutely disruptive capabilities. That much is guaranteed regardless of return on investment. To shrug that off based on whether it's a good investment or not is a mistake.

  • @ahallam said:

    @HolyMoses said:
…or, as this woman claims:

    (Favorite in reprise)

    Completely true.

I'm greatly fond of my Shark robot vacuum, "Buddy". He's reasonably smart, never complains, and feels like a little friend puttering around doing the floors while I tidy up the house. He's respectful toward the cats, but also doesn't let them intimidate him, much to their annoyance.

    I look forward to a future with a couple other helpers like him. I prefer them not speech enabled, but I'm weird that way. Don't like talking (to humans either). 😂

  • @Gavinski said:
    Not true that ‘there is no bubble’. The rate of development is impressive, but many highly informed observers state that it is by no means a certainty that the problem of “hallucinations” is solvable.

The next big leap is AI companies working on reasoning technologies. Couple that with improvements in high-quality training data and you get the next step. It’s going to be an iterative, never-ending process, but in the short term it will deliver AI that can be relied on to perform a good subset of human tasks autonomously, reliably, and with trustworthy results. Things will rapidly improve after that.

AI scientists who are deeply embedded in working out the problems of AI say there’s no slowing down of the technology and that superintelligence is just a matter of time. Observers of AI tech companies can state what they wish relative to AI advances, but if they talk about bubbles then they don’t understand how much effort is being poured into this endeavour and the huge advances happening at a global level on a near-daily basis.

  • @cyberheater said:

    @Gavinski said:
    Not true that ‘there is no bubble’. The rate of development is impressive, but many highly informed observers state that it is by no means a certainty that the problem of “hallucinations” is solvable.

The next big leap is AI companies working on reasoning technologies. Couple that with improvements in high-quality training data and you get the next step. It’s going to be an iterative, never-ending process, but in the short term it will deliver AI that can be relied on to perform a good subset of human tasks autonomously, reliably, and with trustworthy results. Things will rapidly improve after that.

AI scientists who are deeply embedded in working out the problems of AI say there’s no slowing down of the technology and that superintelligence is just a matter of time. Observers of AI tech companies can state what they wish relative to AI advances, but if they talk about bubbles then they don’t understand how much effort is being poured into this endeavour and the huge advances happening at a global level on a near-daily basis.

I don’t think people understand how fast things will change when AI is embedded in the millions of laptops, desktops, tablets and smart devices around the world. Up until now, AI has been opt-in or service-based. While it has scraped the internet for what it can find, the greater advance is having the world actively feeding it data on how people talk, and think, and feel, and connect. The things they search for and share. Schedules, mails, messages, location data, and all these other things that even at the lowest level of invasiveness will help it grow constantly.

  • edited July 15

    @chocobitz825 said:
I don’t think people understand how fast things will change when AI is embedded in the millions of laptops, desktops, tablets and smart devices around the world. Up until now, AI has been opt-in or service-based. While it has scraped the internet for what it can find, the greater advance is having the world actively feeding it data on how people talk, and think, and feel, and connect. The things they search for and share. Schedules, mails, messages, location data, and all these other things that even at the lowest level of invasiveness will help it grow constantly.

You raise a good point that I hadn't considered: when all these AI systems are embedded into our laptops, phones, etc., there is going to be a vast amount of additional high-quality training data available. That's really going to catapult progress.

  • edited July 15

    @offbrands said:

I’m just sharing what I’ve seen, with sources where it seems relevant, because there are a lot of creatives in this community who should be in the know. Embracing this technology is ridiculous for any creative to be doing. Full stop.

The main reason I was impressed with this current version of ChatGPT with voice input was that it works extremely well at helping me extract data in a very quick and human way... just by talking.

All the creative stuff so far has been amusing at best and completely useless to me personally.

I consider myself a creative, but I'm not worried about AI in that field. Up until now AI has been really boring to me; using it as an information extraction tool via voice, however, is incredible.

  • edited July 15

Microsoft CTO Kevin Scott says we are not at diminishing returns, that scaling laws will continue to extend, and he debunks the Twitter trolls who state it's all a scam.

    So no. There is no bubble.

Be very cautious about treating what technologists with billions of dollars at stake have to say about the state of AI and its benefits. The CTO of Microsoft, for example, has a very vested interest in having people (investors) think that AI should be in every product and has only upside and untapped potential… oh, and by the way, we are making enormous profit on something that relied 100% on data for which we didn’t compensate anyone… and we’ll worry about the enormous energy burden later.

  • edited July 15

    @espiegel123 said:
Be very cautious about treating what technologists with billions of dollars at stake have to say about the state of AI and its benefits. The CTO of Microsoft, for example, has a very vested interest in having people (investors) think that AI should be in every product and has only upside and untapped potential… oh, and by the way, we are making enormous profit on something that relied 100% on data for which we didn’t compensate anyone… and we’ll worry about the enormous energy burden later.

    Here's a citation from an engineering-centric source regarding energy consumption and A.I.

    https://www.prnewswire.com/news-releases/epri-study-data-centers-could-consume-up-to-9-of-us-electricity-generation-by-2030-302157970.html

But even this source bases its 9% of US electricity generation by 2030 on current trends for something which is relatively new. I sincerely doubt the likelihood of these projections, especially since populations across much of the world are in decline. Declining populations mean reduced power consumption needs. And reduced power requirements will also come from more efficient programming and more efficient, lower-power processors.
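For what it's worth, projections like the one in that EPRI release are basically compound-growth arithmetic. Here is a minimal sketch of how such a share-of-generation figure can be computed — the base share and growth rates below are hypothetical illustrations, not numbers taken from the EPRI study:

```python
# Back-of-envelope share-of-generation projection (hypothetical inputs).
# Assumes data-center demand compounds at a fixed annual rate while total
# generation compounds at a slower one; neither rate comes from EPRI.

def projected_share(base_share, demand_growth, generation_growth, years):
    """Data-center share of total generation after compounding both rates."""
    demand = base_share * (1 + demand_growth) ** years
    generation = (1 + generation_growth) ** years
    return demand / generation

# e.g. a 4% share today, demand growing 15%/yr, generation 1%/yr, 6 years out
share_2030 = projected_share(0.04, 0.15, 0.01, 6)
print(f"{share_2030:.1%}")  # roughly 8.7% under these assumed rates
```

The point of the sketch is that the outcome is extremely sensitive to the assumed demand growth rate — which is exactly why extrapolating a new technology's current trend out to 2030 is shaky.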

  • edited July 15

    @cyberheater said:
Microsoft CTO Kevin Scott says we are not at diminishing returns, that scaling laws will continue to extend, and he debunks the Twitter trolls who state it's all a scam.

    So no. There is no bubble.

The CTO of a company that’s funneled billions of dollars into this exact technology says we haven’t hit diminishing returns, then proceeds to explain scaling laws while debunking Twitter trolls.

    Must be true! That settles that! No bubble.

    This is exactly my point, jokes aside.

ANY C-suite executive whose company has a blatant stake in the technology succeeding is going to sing, dance, and bullshit their way into getting people to believe them.

    Silicon Valley is full of people who all want to be a version of Steve Jobs. The problem is only Steve Jobs was Steve Jobs. Most of these people I argue shouldn’t even be on camera. But their egos need to be fed.

Break down what you said again and explain to yourself, and then to this thread, how that wouldn’t arguably be the most biased, opinionated source posted here. I’m not trying to be rude; I just want you and anyone reading this thread to think more critically.

    EDIT: this is a perfect opportunity for my 2nd main rule in life, if a man has his facial hair cut like fucking Colonel Sanders unironically, turn the video off immediately. They aren’t to be trusted.

  • @espiegel123 said:
Be very cautious about treating what technologists with billions of dollars at stake have to say about the state of AI and its benefits. The CTO of Microsoft, for example, has a very vested interest in having people (investors) think that AI should be in every product and has only upside and untapped potential… oh, and by the way, we are making enormous profit on something that relied 100% on data for which we didn’t compensate anyone… and we’ll worry about the enormous energy burden later.

    Pin this, sticky it, hang it in the Smithsonian.

  • @offbrands said:
    EDIT: this is a perfect opportunity for my 2nd main rule in life, if a man has his facial hair cut like fucking Colonel Sanders unironically, turn the video off immediately. They aren’t to be trusted.

That genuinely made me chuckle. Thanks :smile:

  • edited July 15

    @Danny_Mammy said:

    @offbrands said:

I’m just sharing what I’ve seen, with sources where it seems relevant, because there are a lot of creatives in this community who should be in the know. Embracing this technology is ridiculous for any creative to be doing. Full stop.

The main reason I was impressed with this current version of ChatGPT with voice input was that it works extremely well at helping me extract data in a very quick and human way... just by talking.

All the creative stuff so far has been amusing at best and completely useless to me personally.

I consider myself a creative, but I'm not worried about AI in that field. Up until now AI has been really boring to me; using it as an information extraction tool via voice, however, is incredible.

I understand. My views and sharing were more of a: I really love this forum, and I want to make sure anyone who comes across it knows that it isn’t an echo chamber for AI talk.

I personally know people who have lost jobs in creative fields. Voice actors mostly, but a couple of copywriters as well. Soon it’ll be many more. I just don’t even want to engage with that technology at all.

The CTO of OpenAI, Mira Murati, has some pretty weird views on creativity. I imagine that’s because she’s never been a creative. She sees creativity as a barrier that only a few talented people can access. I find that the most baffling thing I’ve ever heard about creativity.

    She goes on to say the jobs overtaken will be ones that never should have existed in the first place.

    That for me, was the end of ever using these chat bots.

    (For some reason the time stamp won’t work - it’s at 27:53 for anyone interested)

Sign In or Register to comment.