
Chat GPT-4o is my buddy


Comments

  • @SevenSystems said:

    @Gavinski said:

    @SevenSystems said:

    @Danny_Mammy said:

    @SevenSystems

    Everything is only remembered per-session. In the most recent models, I think it's 8192 tokens (roughly equivalent to words). So anything either of you said more than 8192 tokens "ago" is "forgotten" by the model.

    Yeah, all the data it uses comes from data up until Jan 2022.

    It can't make any further adjustments to its learning until OpenAI gives it more data in the next version.

    This is also why it has no real-time data to pull from, like the schedule for a basketball game... unless you wanna know the schedule from 2021.

    New discoveries after Jan 2022 will not be known by ChatGPT-4o.

    I was referring to it remembering your personal conversations with it and what you taught it -- that is the 8192 token limit.

    The "static" "knowledge" it has is up to a certain point in time too yeah -- you can actually just ask it something like "What date is your knowledge cutoff" and it'll happily respond :)

    Now, ChatGPT remembers all info across your chat history - this was added in a recent update!

    Oh! Thanks... is this different to Custom Instructions? (apparently I should read those more often 😂)

    It is different, yes. It simply remembers your entire conversation history. You can ask it anything about what you previously discussed and it should be able to recall it, analyse it, etc. It also now has a feature where it will kind of create its own custom instructions based on things you say. For example, if you casually mention that you're a software dev, you might see a message pop up after that which says something like 'memorising'. It is marking that info as significant, to improve the relevance of its future answers. But... it still tends to waffle on, and answers are often still prone to hallucination. Far from perfect, still useful in skilled hands tho! (For the curious, a sketch of the token-limit trimming mentioned above follows.)
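    Here is a minimal sketch of that token-limit "forgetting": chat front-ends keep a running list of messages and drop the oldest turns once a token budget is exceeded. The 8192 figure is just the number quoted in this post, and count_tokens is a crude stand-in assumption -- a real client would use a proper tokenizer library such as OpenAI's tiktoken.

    ```python
    # Minimal sketch: why a chat session "forgets" old messages.
    # count_tokens() is a crude stand-in; real clients use a proper
    # tokenizer library (e.g. tiktoken) rather than a word count.

    CONTEXT_BUDGET = 8192  # token limit quoted in the post above

    def count_tokens(text: str) -> int:
        return len(text.split())  # rough word count, not real tokens

    def trim_history(messages: list[str]) -> list[str]:
        """Drop the oldest turns until the conversation fits the budget."""
        while sum(count_tokens(m) for m in messages) > CONTEXT_BUDGET:
            messages.pop(0)  # the oldest message is "forgotten" first
        return messages
    ```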

  • @MrStochastic said:
    In the latest Harper’s magazine, an article pointed out this idea:

    “Over the past year, several AI companies have advertised positions for writers and poets. As it becomes more difficult to discreetly swallow immense quantities of copyrighted material, the dataset needs new high quality inputs. Why would a tech company pay for content, given the ocean of data still liberally accessible on the internet? Industry leaders realize that, more and more, the texts available online will be co-written or simply re-written, by their own tools, inevitably degrading the quality of future iterations of the model.”

    My interpretation of that is there will be a kind of “in-breeding” without new input of real human imagination.

    Don’t give them too many ideas for hybridisation. ;)

  • @Wrlds2ndBstGeoshredr said:

    @NeuM said:

    @Wrlds2ndBstGeoshredr said:

    @NeuM said:

    @Wrlds2ndBstGeoshredr said:
    I want a cute little robot, not some scary thing that looks like it could kill me if it ever got the notion! And all because I was too lazy to take the dishes from the table and put them in the dishwasher.

    Speaking of which, there’s a new show on Apple TV+ called “Sunny” which addresses this issue of “homebots” in a sort of sci-fi drama series. It’s excellent so far (set in Japan, starring Rashida Jones and Japanese actors, mostly speaking English).

    I watched that British show "Humans" about lifelike androids a few years back. The show lost me when it presented a guy screwing a sexbot as being just inherently wrong. Isn't that why we build sexbots to begin with?

    I think you’ll like this series.

    What bugged me about Humans is that they centered the plot around the key question: At what level of sentience does a being deserve full civil rights? Obviously there is no clear answer to that. But instead of exploring the issue, the writers decided that of course robots deserve rights. The good guys supported robot rights; the bad guys opposed them. It was that simplistic. I prefer smart writers who assume their audience is equally smart.

    I’d say we have to sort out ‘human rights’ before anything else, certainly before the rights of a mechanical mannequin.

  • @Danny_Mammy said:

    @Poppadocrock said:
    Not that late, @Danny_Mammy. Came out maybe a month or so ago.

    Is version 4o only in the paid version?

    @Nuggetz

    For free, it seems like a little more than 10 messages, and every 3 hours it's renewed, but basically you run out quick.

    Even paid it's limited, so not ideal.

    It's the most impressive AI tool I've used so far. The art generation wasn't very useful to me at all, but this voice conversation with AI is extremely useful. Google search is dead. (A rough sketch of how that quota might work follows this post.)

    Right on.
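    Purely as an illustration of the quota described above: the free tier behaves roughly like a sliding-window rate limit. The numbers below (10 messages per 3 hours) are the poster's estimate, not documented values, and this is not OpenAI's actual mechanism.

    ```python
    # Hypothetical sketch of a "10 messages per 3 hours" quota,
    # matching the estimate in the post above.
    import time
    from collections import deque

    WINDOW_SECONDS = 3 * 60 * 60  # quota window: 3 hours
    MAX_MESSAGES = 10             # poster's estimate for the free tier

    sent: deque[float] = deque()  # timestamps of recent messages

    def can_send(now: float | None = None) -> bool:
        now = time.time() if now is None else now
        while sent and now - sent[0] > WINDOW_SECONDS:
            sent.popleft()  # messages older than the window free up quota
        if len(sent) < MAX_MESSAGES:
            sent.append(now)
            return True
        return False
    ```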

  • @SevenSystems said:

    @Ailerom said:

    @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

    Yep. Actually it's a basic requirement for evolutionary survival. It's part of the wider concept of "pattern recognition", one of the most fundamental parts of cognition. Just goes to show you how crazy the times we live in are in the West, where any such endeavour is branded as "discrimination" and essentially means the societal death penalty 😃 (but I digress)

    The word discrimination was never used and there was no death-penalty persecution happening. I still maintain that the person being so assumptive was wrong, but as a person who’s done plenty of work on myself, I could read the room and recognize no one was feeling that, so I let it go.

    There is no societal death penalty for anyone, they just rebrand, the world shines shit and calls it gold.

    You’re conflating pattern recognition with my point, which was about unnecessary assumptions being voiced. I ask anyone to put yourself in that original tweet’s shoes and tell yourself you would love seeing those kinds of assumptions made about you on a forum. It’s a bad look. Full stop.

    My intentions were to be kinder to people who, for all we know, could happen upon this forum and I still stand it was a bad look to let that person just pull shit out of thin air.

    Honestly I’m completely over it. If you need the last response, it’s yours. Cheers.

  • @offbrands said:
    My intentions were to be kinder to people who, for all we know, could happen upon this forum and I still stand it was a bad look to let that person just pull shit out of thin air.

    Honestly I’m completely over it. If you need the last response, it’s yours. Cheers.

    I think you covered it. Assumptions are unavoidable. But as you say we should all strive to be kinder by considering what we say and exercising some self control when typing away. I try to be better but it's small steps.

    @offbrands said:
    There is no societal death penalty for anyone, they just rebrand, the world shines shit and calls it gold.

    BTW, I think it was established in the 1986 documentary "Christine" where Will Darnell explained it so well:

  • @Ailerom said:

    @offbrands said:
    My intentions were to be kinder to people who, for all we know, could happen upon this forum and I still stand it was a bad look to let that person just pull shit out of thin air.

    Honestly I’m completely over it. If you need the last response, it’s yours. Cheers.

    I think you covered it. Assumptions are unavoidable. But as you say we should all strive to be kinder by considering what we say and exercising some self control when typing away. I try to be better but it's small steps.

    @offbrands said:
    There is no societal death penalty for anyone, they just rebrand, the world shines shit and calls it gold.

    BTW, I think it was established in the 1986 documentary "Christine" where Will Darnell explained it so well:

    Great scene. Forgot about it.

    Thanks for your input. Appreciate it. Cheers.

  • @chocobitz825 said:
    It’s not really lying… it doesn’t know truth from fiction… which is why it can present both as fact… it needs to provide an answer, but it doesn’t really know if the answer is right or not. Contextually, if it seems right, that’s good enough. That’s why people can’t be lazy and treat it like a search engine.

    I understand what you said, but it did create a story to fill the void when it could not find the answer.

    I asked it the following question: "Provide a detailed analysis of the third movement of Bartok's String Quartet No.4." I did the analysis myself (but never published it.) I searched the web for a good analysis of this movement, but couldn't find one that's good enough—basically, no real "detailed" analysis exists, thus the question. FYI the movement is tempo marked as "Non troppo lento" (slow, but not too much), and begins with a slow E Pentatonic scale descending passage, alternately played by the 2 violins and viola. All notes were held to form a sustained, static chord accompanying a melody played by the cello.

    And here's part of the answer:
    "...The first section, marked "Allegro" (fast and lively), begins with a lively and energetic melody in the first violin, which is accompanied by rapid arpeggios in the other three instruments..."

    No human error, no matter how amateur its author, would rival this. Again, I agree with you completely that it's up to us to check whether ChatGPT gets its answer from reliable sources, etc., but in this case, I don't believe any human source for the analysis could be this wrong. It looks more like it could not find an analysis of this specific movement of the quartet, and so pulled from analyses of other quartets to create the answer instead.

  • It’s almost like ChatGPT makes some assumptions.

  • @Ailerom said:

    @offbrands said:
    My intentions were to be kinder to people who, for all we know, could happen upon this forum and I still stand it was a bad look to let that person just pull shit out of thin air.

    Honestly I’m completely over it. If you need the last response, it’s yours. Cheers.

    I think you covered it. Assumptions are unavoidable. But as you say we should all strive to be kinder by considering what we say and exercising some self control when typing away. I try to be better but it's small steps.

    @offbrands said:
    There is no societal death penalty for anyone, they just rebrand, the world shines shit and calls it gold.

    BTW, I think it was established in the 1986 documentary "Christine" where Will Darnell explained it so well:

    …but you can roll it in glitter

  • No need for forum wars, guys. I know we are all old geezers, but let's give a good showing for the AI.

  • @Danny_Mammy said:
    No need for forum wars, guys. I know we are all old geezers, but let's give a good showing for the AI.

    Would you put on a good show for the reflection in the mirror, or rather the origination of that image?

  • @knewspeak said:

    @Danny_Mammy said:
    No need for forum wars, guys. I know we are all old geezers, but let's give a good showing for the AI.

    Would you put on a good show for the reflection in the mirror, or rather the origination of that image?

    I'd rather impress the AI than the humans; don't bite the future hand that will feed you.

    I'm just jesting a little, no need for forum wars regardless.

  • There is no need for a forum war. I will extend an invitation to any particular individual(s) to discuss my assumptions in a PM.

  • My dad died last year and I had a screenshot of his medical prescription. I wanted to find out which meds were associated with a heart condition, so I posted the screenshot and asked ChatGPT which meds were heart meds.

    It gave me a detailed synopsis of each one and gave a summary of only the ones that were associated with a heart condition. It did it in seconds. Would have taken me a fair bit of time to pull that information myself. It's really quite amazing technology.
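    For anyone curious what that kind of request looks like programmatically, here is a rough sketch using OpenAI's Python client and its image-input message format. The model name, file name, and prompt are illustrative assumptions, and, per the rest of this thread, the answer still needs checking against the actual prescription.

    ```python
    # Hypothetical sketch of asking a vision-capable model about a
    # screenshot, as described in the post above. Assumes the `openai`
    # Python package; the file name and prompt are made up.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("prescription.png", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Which of these medications are heart medications?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)
    ```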

  • @cyberheater said:
    My dad died last year and I had a screenshot of his medical prescription. I wanted to find out which meds were associated with a heart condition, so I posted the screenshot and asked ChatGPT which meds were heart meds.

    It gave me a detailed synopsis of each one and gave a summary of only the ones that were associated with a heart condition. It did it in seconds. Would have taken me a fair bit of time to pull that information myself. It's really quite amazing technology.

    Sorry to hear about your Dad. RIP.

    I haven't got into the visual side yet, but it sounds like the same mind-blowing feedback I'm getting from voice to text.... with the caveat of the (perceived) lying if it doesn't have the specific correct data.

  • Implicit in a lot of this thread is a notion that LLMs (of which ChatGPT is one) are designed for fact/truth discrimination. They are not; it is not what they are designed to do. They aren’t “intelligent” in the sense of being designed to analyze information for truth. They are designed to generate language consistent with the corpus they were trained on.

    They are essentially predictive text engines trained on an ENORMOUSLY (really really really enormous) LARGE amount of source material. If the corpus they are trained on has any bad information in it, that information will make its way into what it returns.

    LLMs are very good at generating text that SOUNDS accurate—and for the average person, the quality of the sentences will be better than what they might write themselves in terms of style. But they often spit out convincing sentences that are factually wrong.

    I have a few friends that find it useful for programming because the corpus seems to have sufficient material that it supplies reasonably relevant code —because these friends are expert coders, they quickly recognize when it gives them bad code. A couple of friends, also expert coders, work in areas for which the corpus must not have much relevant code, because they have found it not very useful except for code they don’t need help with.

    Little discussed is how much benefit these systems are compared to the energy they consume (lots) or the ethics of companies making profit that is 100% reliant on other people’s work (the corpus on which these systems train).
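    To make the "predictive text engine" framing concrete, here is a toy sketch of my own -- not how GPT models are actually implemented (real LLMs use neural networks over subword tokens, not bigram counts). The point it demonstrates: the model emits whatever continuation its corpus makes likely, true or false, with no fact-checking step anywhere.

    ```python
    # Toy "predictive text engine": a bigram model over a tiny corpus.
    # Real LLMs are vastly more sophisticated, but share the trait shown
    # here: bad information in the corpus flows straight into the output.
    import random
    from collections import defaultdict

    corpus = (
        "the quartet opens with a slow melody . "
        "the quartet opens with rapid arpeggios . "  # bad data in...
    )

    # Count which words follow which.
    follows = defaultdict(list)
    tokens = corpus.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev].append(nxt)

    def generate(start: str, length: int = 8) -> str:
        out = [start]
        for _ in range(length):
            out.append(random.choice(follows.get(out[-1], ["."])))
        return " ".join(out)

    print(generate("the"))  # fluent-sounding, possibly false (...bad data out)
    ```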

  • heshes

    @espiegel123 said:
    Implicit in a lot of this thread is a notion that LLMs (of which ChatGPT is one) are designed for fact/truth discrimination. They are not; it is not what they are designed to do. They aren’t “intelligent” in the sense of being designed to analyze information for truth. They are designed to generate language consistent with the corpus they were trained on.

    It seems that there's also, implicit in your making this point, a suggestion that humans have some advantage over AI because humans have some innate superiority at identifying "truth". I would suggest that this is not an advantage humans have over LLM-AI. Or, if humans do have some advantage, far more is required to establish that than simply to say LLM-AIs "are not designed for fact/truth discrimination." Humans have evolved to adopt beliefs that maximize fitness, not truth. See, e.g., https://www.scientificamerican.com/article/did-humans-evolve-to-see-things-as-they-really-are/

  • @espiegel123 said:
    Implicit in a lot of this thread is a notion that LLMs (of which ChatGPT is one) are designed for fact/truth discrimination. They are not; it is not what they are designed to do. They aren’t “intelligent” in the sense of being designed to analyze information for truth. They are designed to generate language consistent with the corpus they were trained on.

    They are essentially predictive text engines trained on an ENORMOUSLY (really really really enormous) LARGE amount of source material. If the corpus they are trained on has any bad information in it, that information will make its way into what it returns.

    LLMs are very good at generating text that SOUNDS accurate—and for the average person, the quality of the sentences will be better than what they might write themselves in terms of style. But they often spit out convincing sentences that are factually wrong.

    I have a few friends that find it useful for programming because the corpus seems to have sufficient material that it supplies reasonably relevant code —because these friends are expert coders, they quickly recognize when it gives them bad code. A couple of friends, also expert coders, work in areas for which the corpus must not have much relevant code, because they have found it not very useful except for code they don’t need help with.

    Little discussed is how much benefit these systems are compared to the energy they consume (lots) or the ethics of companies making profit that is 100% reliant on other people’s work (the corpus on which these systems train).

    It's designed to give data to the user; if it gives false data, then that is a fault in the system.

    It's just annoying how the algo responds to questions when the data is incorrect or absent. It gives a perceived image of lying. Obviously, people understand it's just a machine and not lying.

    It's something the devs have to work on. In fact, perceived lying is actually useful, but not when you use a tool for precise work or if you wanna modulate to the correct key in music. Clearly this is an issue.

  • @Danny_Mammy said:

    @espiegel123 said:
    Implicit in a lot of this thread is a notion that LLMs (of which ChatGPT is one) are designed for fact/truth discrimination. They are not; it is not what they are designed to do. They aren’t “intelligent” in the sense of being designed to analyze information for truth. They are designed to generate language consistent with the corpus they were trained on.

    They are essentially predictive text engines trained on an ENORMOUSLY (really really really enormous) LARGE amount of source material. If the corpus they are trained on has any bad information in it, that information will make its way into what it returns.

    LLMs are very good at generating text that SOUNDS accurate—and for the average person, the quality of the sentences will be better than what they might write themselves in terms of style. But they often spit out convincing sentences that are factually wrong.

    I have a few friends that find it useful for programming because the corpus seems to have sufficient material that it supplies reasonably relevant code —because these friends are expert coders, they quickly recognize when it gives them bad code. A couple of friends, also expert coders, work in areas for which the corpus must not have much relevant code, because they have found it not very useful except for code they don’t need help with.

    Little discussed is how much benefit these systems are compared to the energy they consume (lots) or the ethics of companies making profit that is 100% reliant on other people’s work (the corpus on which these systems train).

    It's designed to give data to the user; if it gives false data, then that is a fault in the system.

    It's just annoying how the algo responds to questions when the data is incorrect or absent. It gives a perceived image of lying. Obviously, people understand it's just a machine and not lying.

    It's something the devs have to work on. In fact, perceived lying is actually useful, but not when you use a tool for precise work or if you wanna modulate to the correct key in music. Clearly this is an issue.

    LLMs simply are not designed to do this -- any attempt to add 'truth discrimination' is essentially a hack. If you read the writing about LLMs from people who are both hugely knowledgeable AND have no vested interest (i.e. no profit motive), they have a lot of enlightening things to say about what this technology can and can't do -- even with refinement. Jaron Lanier has written some really good pieces going through this. There are a lot of technologists with a vested interest in selling LLMs as delivering more than they do or can -- because they have a huge profit motive.

  • @hes said:

    @espiegel123 said:
    Implicit in a lot of this thread is a notion that LLMs (of which ChatGPT is one) are designed for fact/truth discrimination. They are not; it is not what they are designed to do. They aren’t “intelligent” in the sense of being designed to analyze information for truth. They are designed to generate language consistent with the corpus they were trained on.

    It seems that there's also, implicit in your making this point, a suggestion that humans have some advantage over AI because humans have some innate superiority at identifying "truth". I would suggest that this is not an advantage humans have over LLM-AI. Or, if humans do have some advantage, far more is required to establish that than simply to say LLM-AIs "are not designed for fact/truth discrimination." Humans have evolved to adopt beliefs that maximize fitness, not truth. See, e.g., https://www.scientificamerican.com/article/did-humans-evolve-to-see-things-as-they-really-are/

    Humans with expertise in a field are by no means infallible -- but they are able to identify errors in a way that LLMs cannot. There are certainly areas where various types of AIs are less fallible than individual humans.

    When one switches freely between discussing AI and LLMs (a very particular, if impressive, application of machine learning/AI), it can be confusing. LLMs are a particular category of tool with particular limitations. Other AI tools have other applications and limitations. I think it is important not to treat LLMs as AI writ large. LLMs are amazing at what they were designed to do -- but they are not designed for the kind of analysis that experts in a field do. They just aren't. I don't mean "they aren't there yet"; I mean that isn't what that tool does.

  • @Danny_Mammy said:
    I haven't got into the visual side yet, but it sounds like the same mind-blowing feedback I'm getting from voice to text.... with the caveat of the (perceived) lying if it doesn't have the specific correct data.

    The next big push in A.I is reasoning and eliminating hallucinations, which I believe OpenAI is currently working on. The goal is to establish A.I as reliable and trustworthy.

  • As for the “AI” convo, ChatGPT is an LLM. Being impressed with it is like meeting a fully trained dog and asking it to sit. It’ll do it. Ask it to play dead, it’ll do that too. Mildly impressive.

    But now imagine that dog needs all the energy a country can produce in a single year in order to do these few tricks. Then imagine that dog is currently driving billions in market cap on the American stock market while not being profitable and producing a shit product that venture capitalists are currently calling out.

    The dog is the LLM, that is about to burst a bubble, the “AI” bubble. Bubbles gonna burst. Relatively soon.

    Those AI models have stolen creatives’ works from the internet, so image generation is just plagiarism with extra steps, as are its prose and writing, and of course the facts it spits out. These people don’t believe creatives should exist. I’ll take one on the chin and admit to currently assuming that most of the people here are creatives. Creatives who want to create and not have a shitty LLM produce work while stealing jobs from other creatives.

    Those AI models blatantly lie, which the industry has labeled “hallucinations”; there is no solution to this. Conflating humans with a machine is a fool’s errand. Yes, humans lie, mislead, manipulate, but people are able to call this out. With AI it just gets labeled a “hallucination”. Go eat a rock, it’s fine.

    These companies have no way of protecting your data either, as E2EE is currently impossible. So if you’re completely comfortable giving any information to a product that will use it to train future models, go right ahead.

    Entertaining this trash product so it can blatantly lie to you, while the new leaders of Silicon Valley lie to us and use up valuable energy during our climate crisis to enrich themselves, having run out of ideas, is, in a word, fucked.

    There are no more worlds to conquer for them. So these MBAs are selling AI to us in order to grow, and they’ll get rich while we, the poors, suffer the consequences of their egregious actions.

    I implore the lot of you to do some research into why it’s a horrible product.

    Or don’t, it’s your life.

    Either way, we all are, quite literally, being scammed.

  • @offbrands said:
    The dog is the LLM, that is about to burst a bubble, the “AI” bubble. Bubbles gonna burst. Relatively soon.

    AI has just started. Not sure what you are referring to relative to a burst.

  • @offbrands said:
    As for the “AI” convo, ChatGPT is an LLM. Being impressed with it is like meeting a fully trained dog and asking it to sit. It’ll do it. Ask it to play dead, it’ll do that too. Mildly impressive.

    But now imagine that dog needs all the energy a country can produce in a single year in order to do these few tricks. Then imagine that dog is currently driving billions in market cap on the American stock market while not being profitable and producing a shit product that venture capitalists are currently calling out.

    The dog is the LLM, that is about to burst a bubble, the “AI” bubble. Bubbles gonna burst. Relatively soon.

    Those AI models have stolen creatives’ works from the internet, so image generation is just plagiarism with extra steps, as are its prose and writing, and of course the facts it spits out. These people don’t believe creatives should exist. I’ll take one on the chin and admit to currently assuming that most of the people here are creatives. Creatives who want to create and not have a shitty LLM produce work while stealing jobs from other creatives.

    Those AI models blatantly lie, which the industry has labeled “hallucinations”; there is no solution to this. Conflating humans with a machine is a fool’s errand. Yes, humans lie, mislead, manipulate, but people are able to call this out. With AI it just gets labeled a “hallucination”. Go eat a rock, it’s fine.

    These companies have no way of protecting your data either, as E2EE is currently impossible. So if you’re completely comfortable giving any information to a product that will use it to train future models, go right ahead.

    Entertaining this trash product so it can blatantly lie to you, while the new leaders of Silicon Valley lie to us and use up valuable energy during our climate crisis to enrich themselves, having run out of ideas, is, in a word, fucked.

    There are no more worlds to conquer for them. So these MBAs are selling AI to us in order to grow, and they’ll get rich while we, the poors, suffer the consequences of their egregious actions.

    I implore the lot of you to do some research into why it’s a horrible product.

    Or don’t, it’s your life.

    Either way, we all are, quite literally, being scammed.

    What an exemplary post with all reference articles, love that part of it and wish more people here (and elsewhere) would link to their claims. Not sure I agree with your take on it but that's another story.

  • @cyberheater said:

    @offbrands said:
    The dog is the LLM, that is about to burst a bubble, the “AI” bubble. Bubbles gonna burst. Relatively soon.

    AI has just started. Not sure what you are referring to relative to a burst.

    Goldman Sachs has already questioned its viability, and therefore its worth.

    I think about how Crypto and the Metaverse were each supposed to be the next big thing. Now they run away from talking about them.

    They keep pushing the goal posts for unlimited growth in the tech sector. They’ve run out of ideas, and this AI (which, to be clear, I know is generative LLMs) is the next wool they’re pulling over the customers’ eyes. It’s all bullshit.

    Side note - AI has not just started. Machine learning and LMs have been around for decades.

    Updated Goldman Sachs article:

    https://www.goldmansachs.com/intelligence/pages/gs-research/gen-ai-too-much-spend-too-little-benefit/report.pdf?ref=404media.co

  • @Pxlhg said:

    @offbrands said:
    As for the “AI” convo, ChatGPT is an LLM. Being impressed with it is like meeting a fully trained dog and asking it to sit. It’ll do it. Ask it to play dead, it’ll do that too. Mildly impressive.

    But now imagine that dog needs all the energy a country can produce in a single year in order to do these few tricks. Then imagine that dog is currently driving billions in market cap on the American stock market while not being profitable and producing a shit product that venture capitalists are currently calling out.

    The dog is the LLM, that is about to burst a bubble, the “AI” bubble. Bubbles gonna burst. Relatively soon.

    Those AI models have stolen creatives’ works from the internet, so image generation is just plagiarism with extra steps, as are its prose and writing, and of course the facts it spits out. These people don’t believe creatives should exist. I’ll take one on the chin and admit to currently assuming that most of the people here are creatives. Creatives who want to create and not have a shitty LLM produce work while stealing jobs from other creatives.

    Those AI models blatantly lie, which the industry has labeled “hallucinations”; there is no solution to this. Conflating humans with a machine is a fool’s errand. Yes, humans lie, mislead, manipulate, but people are able to call this out. With AI it just gets labeled a “hallucination”. Go eat a rock, it’s fine.

    These companies have no way of protecting your data either, as E2EE is currently impossible. So if you’re completely comfortable giving any information to a product that will use it to train future models, go right ahead.

    Entertaining this trash product so it can blatantly lie to you, while the new leaders of Silicon Valley lie to us and use up valuable energy during our climate crisis to enrich themselves, having run out of ideas, is, in a word, fucked.

    There are no more worlds to conquer for them. So these MBAs are selling AI to us in order to grow, and they’ll get rich while we, the poors, suffer the consequences of their egregious actions.

    I implore the lot of you to do some research into why it’s a horrible product.

    Or don’t, it’s your life.

    Either way, we all are, quite literally, being scammed.

    What an exemplary post with all reference articles, love that part of it and wish more people here (and elsewhere) would link to their claims. Not sure I agree with your take on it but that's another story.

    Thank you, that’s kind. I wish I had linked more. Lord knows I have a bunch of them. But either way, even while not agreeing, I appreciate you taking the time to respond in kind.

  • @offbrands said:

    @cyberheater said:

    @offbrands said:
    The dog is the LLM, that is about to burst a bubble, the “AI” bubble. Bubbles gonna burst. Relatively soon.

    AI has just started. Not sure what you are referring to relative to a burst.

    Goldman Sachs has already questioned its viability, and therefore its worth.

    I think about how Crypto and the Metaverse were each supposed to be the next big thing. Now they run away from talking about them.

    They keep pushing the goal posts for unlimited growth in the tech sector. They’ve run out of ideas, and this AI (which, to be clear, I know is generative LLMs) is the next wool they’re pulling over the customers’ eyes. It’s all bullshit.

    Side note - AI has not just started. Machine learning and LMs have been around for decades.

    The Goldman Sachs article is over a year old and well out of date.
    LLMs have come a long way since then, and advances show no sign of slowing down. Quite the contrary.

    I've been following advances very closely. Huge amounts of money and effort are getting plowed into this. A.I is going to be deeply embedded into every aspect of our lives whether we want it or not.

  • @cyberheater said:

    @offbrands said:

    @cyberheater said:

    @offbrands said:
    The dog is the LLM, that is about to burst a bubble, the “AI” bubble. Bubbles gonna burst. Relatively soon.

    AI has just started. Not sure what you are referring to relative to a burst.

    Goldman Sachs has already questioned its viability, and therefore its worth.

    I think about how Crypto and the Metaverse were each supposed to be the next big thing. Now they run away from talking about them.

    They keep pushing the goal posts for unlimited growth in the tech sector. They’ve run out of ideas, and this AI (which, to be clear, I know is generative LLMs) is the next wool they’re pulling over the customers’ eyes. It’s all bullshit.

    Side note - AI has not just started. Machine learning and LMs have been around for decades.

    The Goldman Sachs article is over a year old and well out of date.
    LLMs have come a long way since then, and advances show no sign of slowing down. Quite the contrary.

    I've been following advances very closely. Huge amounts of money and effort are getting plowed into this. A.I is going to be deeply embedded into every aspect of our lives whether we want it or not.

    Here’s the new one, apologies. This is within the last couple weeks.

    The money being plowed into it makes no difference to what I’m saying. Believe what you will; I’ll do the same.

    Also, AI and machine learning have been part of our lives for decades. If you actually believe people will use this new wave of generative AI technology in a way that is viable, sustainable, and profitable, I invite you to post proof from unbiased insiders with no vested interest in this being the next big thing.

  • @offbrands said:
    Here’s the new one, apologies. This is within the last couple weeks.

    Thanks for the article. In conclusion it does say that A.I will pay off, but at the moment it's constrained by GPU availability. There's a bit in there that states (conservatively) that in 10 years 25% of human jobs will be replaced. That's quite a decent return on investment. I think it will be quicker than that.

    @offbrands said:
    The money being plowed into it makes no difference to what I’m saying. Believe what you will; I’ll do the same.

    Don't get me wrong. A.I is going to be a bigger disrupter of, and have more impact on, humans than any other technology. I'm not hugely optimistic that we will handle the transition well.
    But no. The bubble isn't about to burst. There is no bubble. It's only unrelenting progress. Quite frightening really.
