
Chat GPT-4o is my buddy


Comments

  • @dendy said:
    Claude 3.5 from Anthropic is significantly better. Completely stopped using GPT, using Claude only.

    Heard it’s very good for coding

  • @AlmostAnonymous said:

    @offbrands said:
    if it’s in jest

    my whole post was in jest, but you decided to take someplace else

    I’m gonna leave it, as you’re now cherry-picking responses that aren’t even part of our thread to respond to. Not sure why you’re choosing to ignore my valid constructive criticism, but I’m aware self-reflection can be difficult.

    Best of luck in your endeavors, cheers.

  • no need for this topic to get too salty, i know we are all old moaning gits here, but we have to do better and show AI that us humans are worth keeping alive.

    ongoing forum wars do us no good in the end.

  • edited July 13

    @NeuM said:

    @Wrlds2ndBstGeoshredr said:

    @NeuM said:

    @Wrlds2ndBstGeoshredr said:
    I want a cute little robot, not some scary thing that looks like it could kill me if it ever got the notion! And all because I was too lazy to take the dishes from the table and put them in the dishwasher.

    Speaking of which, there’s a new show on Apple TV+ called “Sunny” which addresses this issue of “homebots” in a sort of sci-fi drama series. It’s excellent so far (set in Japan, starring Rashida Jones and Japanese actors, mostly speaking English).

    I watched that British show "Humans" about lifelike androids a few years back. The show lost me when it presented a guy screwing a sexbot as being just inherently wrong. Isn't that why we build sexbots to begin with?

    I think you’ll like this series.

    What bugged me about Humans is that they centered the plot around the key question: At what level of sentience does a being deserve full civil rights? Obviously there is no clear answer to that. But instead of exploring the issue, the writers decided that of course robots deserve rights. The good guys supported robot rights; the bad guys opposed them. It was that simplistic. I prefer smart writers who assume their audience is equally smart.

  • @Danny_Mammy said:
    no need for this topic to get too salty, i know we are all old moaning gits here, but we have to do better and show AI that us humans are worth keeping alive.

    ongoing forum wars do us no good in the end.

    I agree with this. :) Let's keep it peaceful.

  • @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

  • @Ailerom said:

    @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

    Is this a fact, or your assumption about assumptions? 😏 (I'm kidding of course. Lol)

  • @Ailerom said:

    @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

    Yanno, you’re right. I agree. I guess what I meant was that making random assumptions about this person felt gross to me. I would still push on saying that random assumptions shouldn’t be made online, considering the topic. It had no relevance, no factual merit, nor any sense. It was pulled out of thin air.

    Obviously the room feels different, but I figured a space for music could do without assumptions about a person.

    I didn’t mean to be overly salty or negative, just wanted to be fair to this person. I would hate to see my tweet clipped out and have a random forum make random assumptions about me, with not one person mentioning it as being a bit unfair.

    Anyways, good point.

  • @jwmmakerofmusic said:

    @Ailerom said:

    @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

    Is this a fact, or your assumption about assumptions? 😏 (I'm kidding of course. Lol)

    Lol very funny.

    I have made a bit of an ass of myself it seems. Such is life. Hilarious.

  • edited July 13

    @offbrands said:

    @jwmmakerofmusic said:

    @Ailerom said:

    @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

    Is this a fact, or your assumption about assumptions? 😏 (I'm kidding of course. Lol)

    Lol very funny.

    I have made a bit of an ass of myself it seems. Such is life. Hilarious.

    It's a bit of a mine field. Not making assumptions is impossible. But there are some that should be resisted, and some that definitely should not be verbalized.

    And you are right as far as I can tell. On the internet, it's hard not to let those assumptions sneak into the mind, but typing the words out is something that can (and should) be considered beforehand.

  • @Ailerom said:

    @offbrands said:

    @jwmmakerofmusic said:

    @Ailerom said:

    @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

    Is this a fact, or your assumption about assumptions? 😏 (I'm kidding of course. Lol)

    Lol very funny.

    I have made a bit of an ass of myself it seems. Such is life. Hilarious.

    It's a bit of a mine field. Not making assumptions is impossible. But there are some that should be resisted, and some that definitely should not be verbalized.

    And you are right as far as I can tell. On the internet, it's hard not to let those assumptions sneak into the mind, but typing the words out is something that can (and should) be considered beforehand.

    Well said. Appreciate the responses. Back to making music.

    Cheers

  • @offbrands said:

    @jwmmakerofmusic said:

    @Ailerom said:

    @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

    Is this a fact, or your assumption about assumptions? 😏 (I'm kidding of course. Lol)

    Lol very funny.

    I have made a bit of an ass of myself it seems. Such is life. Hilarious.

    Trust me mate. I often make an ass of myself too. Part of the human condition, or at least that's the ass-umption I'm making. 😂 That, and bad puns.

  • @jwmmakerofmusic said:

    @offbrands said:

    @jwmmakerofmusic said:

    @Ailerom said:

    @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

    Is this a fact, or your assumption about assumptions? 😏 (I'm kidding of course. Lol)

    Lol very funny.

    I have made a bit of an ass of myself it seems. Such is life. Hilarious.

    Trust me mate. I often make an ass of myself too. Part of the human condition, or at least that's the ass-umption I'm making. 😂 That, and bad puns.

    Ass-toundingly awful, but brilliant. 😅

  • For me, the big problem with these machine-learning programs is that they lie when they don't know the answer instead of admitting that they don't know...

  • edited July 13

    @Artj said:
    For me, the big problem with these machine-learning programs is that they lie when they don't know the answer instead of admitting that they don't know...

    Totally agree, it was by far the most annoying part of the program.
    The developers should definitely remove lying from AI when it doesn't know the answer or have the data to help.

    The human trait of lying cannot be in AI, even if you could have it as an option for more human like conversations. Still a bad idea in the long run IMO.

  • @Danny_Mammy said:

    @Artj said:
    For me, the big problem with these machine-learning programs is that they lie when they don't know the answer instead of admitting that they don't know...

    Totally agree, it was by far the most annoying part of the program.
    The developers should definitely remove lying from AI when it doesn't know the answer or have the data to help.

    The human trait of lying cannot be in AI, even if you could have it as an option for more human like conversations. Still a bad idea in the long run IMO.

    It’s not really lying… it doesn’t know truth from fiction, which is why it can present both as fact. It needs to provide an answer, but it doesn’t really know if the answer is right or not; contextually, if it seems right, that’s good enough. That’s why people can’t be lazy and treat it like a search engine. That’s not what it’s for on its own. It’s an assistant, and like any assistant, it can help you get a job done quicker, but you need to be able to know whether or not it’s actually doing a good job or if it’s full of sh*t. Most people don’t use ChatGPT or the other AIs to their best capabilities because they assume it’s supposed to be some smart version of Google search. Learn to make it automate tasks for you. Teach it all about your iPad music studio setup and let it help you organize, or make new things. The tool is powerful; the users, unfortunately, are not, most of the time…

  • @chocobitz825 said:

    @Danny_Mammy said:

    @Artj said:
    For me, the big problem with these machine-learning programs is that they lie when they don't know the answer instead of admitting that they don't know...

    Totally agree, it was by far the most annoying part of the program.
    The developers should definitely remove lying from AI when it doesn't know the answer or have the data to help.

    The human trait of lying cannot be in AI, even if you could have it as an option for more human like conversations. Still a bad idea in the long run IMO.

    It’s not really lying… it doesn’t know truth from fiction, which is why it can present both as fact. It needs to provide an answer, but it doesn’t really know if the answer is right or not; contextually, if it seems right, that’s good enough. That’s why people can’t be lazy and treat it like a search engine. That’s not what it’s for on its own. It’s an assistant, and like any assistant, it can help you get a job done quicker, but you need to be able to know whether or not it’s actually doing a good job or if it’s full of sh*t. Most people don’t use ChatGPT or the other AIs to their best capabilities because they assume it’s supposed to be some smart version of Google search. Learn to make it automate tasks for you. Teach it all about your iPad music studio setup and let it help you organize, or make new things. The tool is powerful; the users, unfortunately, are not, most of the time…

    Let me add some context: I use the word "lying" because it best describes the AI's behaviour in certain situations. It might not technically be lying per se, but the outcome is pretty much the same for the end user.

    In my example I asked the AI what key bar 5 of a piece of music modulated to. The AI responded with B major, which is incorrect. I told the AI it was incorrect; it apologised and told me it was A minor, which is also incorrect, and so forth.

    My point is that at this point the AI needs to respond that it doesn't have specific data on that question. I'm not saying that this is technically easy to do.

    To me, just giving another incorrect answer is worse than the AI sticking to its original answer and insisting it's correct. I understand both outcomes will lead me to making a mistake.

    A non-response, or an admission that the AI is incapable of giving a definite answer, is perfectly acceptable and in fact should be of the highest priority.

  • @Danny_Mammy said:
    no need for this topic to get too salty, i know we are all old moaning gits here, but we have to do better and show AI that us humans are worth keeping alive.

    ongoing forum wars do us no good in the end.

    Humans are not smart enough to entertain AI. The countdown clock is already ticking for us. They'll breed the few human specimens left and keep them in a zoo for pure enjoyment: watching the males kill each other with rudimentary weapons to win the females' favour. Seems like a lot of fun for a Sunday afternoon stroll.

  • @JanKun said:

    @Danny_Mammy said:
    no need for this topic to get too salty, i know we are all old moaning gits here, but we have to do better and show AI that us humans are worth keeping alive.

    ongoing forum wars do us no good in the end.

    Humans are not smart enough to entertain AI. The countdown clock is already ticking for us. They'll breed the few human specimens left and keep them in a zoo for pure enjoyment: watching the males kill each other with rudimentary weapons to win the females' favour. Seems like a lot of fun for a Sunday afternoon stroll.

    Like a battle to the death in a colosseum? I've got an image in mind of Captain Kirk fighting Spock with sharp shovels.

  • @Danny_Mammy said:

    @SevenSystems said:

    @Danny_Mammy said:

    @Poppadocrock said:
    Not that late @Danny_Mammy Came out maybe a month or so ago.

    Is version 4o only in paid version?

    @Nuggetz

    For free it seems like a little more than 10 messages, and it's renewed every 3 hours, but basically you run out quickly.

    Even paid it's limited, so not ideal.

    It's the most impressive AI tool I've used so far. The art generation wasn't very useful to me at all, but this voice conversation with AI is extremely useful. Google search is dead.

    Not sure if it's still as good, but when they first released Voice Chat, what I found most impressive was the extreme authenticity of the voice, including very subtle emotional tone with slight pitch variations, little "subconscious breathing noises" depending on context (no idea what they're called), etc... I was blown away because it did actually, 100%, sound like a human conversation partner.

    It's the recognition of my voice and the human-like response from the AI that got me in this version; it just works. I asked a question in my own way of talking and the thing understood and replied.

    Yes the recognition is also very good. I first thought that voice recognition and output are somehow tied / integrated directly into the "main" LLM in some novel way because they were both so good and natural, but apparently, they're still using "traditional", separate (although very good) speech-to-text and text-to-speech ANNs.

    What stunned me was that it made a mistake in Dutch and I corrected it; it then repeated the sentence the way I wanted! Now I don't know if that data will actually be correct for a future conversation, but for my session it worked.

    Everything is only remembered per-session. In the most recent models, I think it's 8192 tokens (mostly equivalent to words). So anything either of you said more than 8192 words "ago" is "forgotten" by the model.
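
    To make that token limit concrete, here is a minimal sketch in Python, assuming OpenAI's tiktoken library and treating the 8192 figure from the comment above as the budget (actual limits vary by model). It illustrates how older messages stop fitting into the window; it is not how ChatGPT itself is implemented.

        # Illustration only: with a fixed token budget, the oldest messages
        # eventually fall out of the window and are "forgotten".
        # Assumes: pip install tiktoken; the 8192 budget comes from the
        # discussion above, not from any official spec.
        import tiktoken

        CONTEXT_BUDGET = 8192
        enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by recent GPT models

        def count_tokens(text: str) -> int:
            """Number of tokens the model 'sees' for this text."""
            return len(enc.encode(text))

        def trim_history(messages: list[str], budget: int = CONTEXT_BUDGET) -> list[str]:
            """Keep only the most recent messages that fit into the budget."""
            kept, used = [], 0
            for msg in reversed(messages):          # walk from newest to oldest
                cost = count_tokens(msg)
                if used + cost > budget:
                    break                           # everything older is dropped
                kept.append(msg)
                used += cost
            return list(reversed(kept))             # restore chronological order

        # Example: a long chat slowly pushes the earliest messages out of the window.
        history = ["the quick brown fox jumps over the lazy dog"] * 2000
        print(len(trim_history(history)))           # noticeably fewer than 2000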

  • @offbrands said:

    @SevenSystems said:

    @offbrands said:

    @AlmostAnonymous said:

    @HolyMoses said:
    …or, as this women claims:

    (Favorite in reprise)

    That woman just doesn't know what's available to her or where the world was even 8 years ago....

    Do you know this person? What makes you so sure they don’t know what’s available to them or what was available 8 years ago?

    (the banana peel thing right after gets me every time)

    (she's also prolly one of those anti-subscription or 'I won't pay more than $5 for an app' people, 'cause she's already looking at a free service to do her dishes....)

    That’s a lot of assumptions to make off of a single post, unless you know them better, including implying that they wouldn’t pay for a subscription or for any app costing more than $5.

    It’s weird that you felt the need to identify the tweet’s author as “that woman” and then twist yourself into a pretzel with assumptions; that’s an astounding amount of mental gymnastics to punch down on a short tweet making a pretty fair point, that AI is being marketed as taking over creative work rather than mundane tasks. She also never mentioned cost of any kind being an issue with apps or subscriptions.

    Strange response all around. I would hate it if I happened to be this person, came across this great community, and this were the impression I got: a weird response with a plethora of assumptions attached to a thread about a tweet I made.

    I think there’s a better way to conduct our opinions and conversations throughout this forum.

    You sound like this woman is your wife 😂

    Not my wife, nor am I married.

    Just know this forum usually has a much nicer style of discussion, and I feel that upholding that standard is more important than leaving that deeply strange response ignored or not called out.

    I think this has already been settled a few posts later so let's keep it at that 😊

    Not enough context to read into your comment in either direction, if it’s in jest, or calling me out, so I’ll leave it.

    The concept of "calling out" someone is completely alien to me. Meanwhile, saying everything in jest is my standard mode of operation, and it gets worse the worse the world around us becomes 😁 (so I'm now at roughly 700% on the Jest Richter Scale)

    I really enjoy your app, Meow Editor (well, I believe it’s yours); I use it often and was part of the TestFlight. Well done.

    Thank you, much appreciated! 😌

  • @Ailerom said:

    @offbrands said:
    I don’t think anyone anywhere should make any kind of assumptions on anyone, anywhere.

    Sorry, but that is just crazy talk. Not you being crazy. Just the statement. Assumption is a natural and inbuilt instinctual part of the human mind and is an important part of human interaction.

    Yep. Actually it's a basic requirement for evolutionary survival. It's a part of the wider concept of "pattern recognition", one of the most fundamental parts of cognition. Just goes to show you how crazy the times we live in in the West are, where any such endeavour is branded as "discrimination" and essentially means the societal death penalty 😃 (but I digress)

  • edited July 13

    @Danny_Mammy said:

    @chocobitz825 said:

    @Danny_Mammy said:

    @Artj said:
    For me, the big problem with these machine-learning programs is that they lie when they don't know the answer instead of admitting that they don't know...

    Totally agree, it was by far the most annoying part of the program.
    The developers should definitely remove lying from AI when it doesn't know the answer or have the data to help.

    The human trait of lying cannot be in AI, even if you could have it as an option for more human like conversations. Still a bad idea in the long run IMO.

    It’s not really lying… it doesn’t know truth from fiction, which is why it can present both as fact. It needs to provide an answer, but it doesn’t really know if the answer is right or not; contextually, if it seems right, that’s good enough. That’s why people can’t be lazy and treat it like a search engine. That’s not what it’s for on its own. It’s an assistant, and like any assistant, it can help you get a job done quicker, but you need to be able to know whether or not it’s actually doing a good job or if it’s full of sh*t. Most people don’t use ChatGPT or the other AIs to their best capabilities because they assume it’s supposed to be some smart version of Google search. Learn to make it automate tasks for you. Teach it all about your iPad music studio setup and let it help you organize, or make new things. The tool is powerful; the users, unfortunately, are not, most of the time…

    Let me add some context: I use the word "lying" because it best describes the AI's behaviour in certain situations. It might not technically be lying per se, but the outcome is pretty much the same for the end user.

    In my example I asked the AI what key bar 5 of a piece of music modulated to. The AI responded with B major, which is incorrect. I told the AI it was incorrect; it apologised and told me it was A minor, which is also incorrect, and so forth.

    My point is that at this point the AI needs to respond that it doesn't have specific data on that question. I'm not saying that this is technically easy to do.

    It's not as straightforward as many might think. Most people still think that stuff like GPT or Claude is some form of "program" that has been "programmed" by humans, with "data" that gets "searched" and then "output" and thus could be "filtered" in some way. That is mostly wrong. There is some traditional code involved, but that's maybe a few hundred or thousand lines that just "run" the neural network. The intelligence, AND all the knowledge, comes from billions and even trillions of numbers, and basically no-one, not even the folks at OpenAI, has any real idea what they are or why they give rise to intelligence. (I'm dumbing this down a little but not a lot 😉)

    When a neural network like GPT-4 or a brain generates an action upon stimuli (inputs), it is due to a (complicated and nested) neural "pathway" being followed from input to output. A neural network does "know" how "certain" it is with the outputs it's generating, as this is a function of the "strength" of the connections it is following (every connection in a neural network has a certain "strength").

    So, for stuff that GPT isn't "sure" about, the strength of the connections it followed will be lower, i.e. it will be less "certain" about the answer. It's a matter of adjusting the thresholds of what connections to follow and which to discard, etc. -- this can already be tuned to a degree in the API, i.e. developer version of GPT (called the "Temperature" of the model).

    OK, enough boring talk (again it's all not totally technically accurate but this is a music forum 😜)
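
    As a rough illustration of that "temperature" knob (a toy model, not how GPT is actually implemented, and with invented numbers): the model scores every candidate next token, and temperature rescales those scores before they become probabilities, which is why low temperature gives more conservative answers and high temperature gives more varied ones.

        # Toy sketch of temperature-scaled sampling. The candidate tokens and
        # logit scores below are made up purely for illustration.
        import numpy as np

        def softmax(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
            """Turn raw scores into probabilities. Low temperature sharpens the
            distribution; high temperature flattens it."""
            z = logits / temperature
            z = z - z.max()          # subtract max for numerical stability
            p = np.exp(z)
            return p / p.sum()

        tokens = ["B major", "A minor", "E major", "I don't know"]
        logits = np.array([2.0, 1.6, 1.5, 0.3])   # hypothetical model scores

        for t in (0.2, 1.0, 2.0):
            probs = softmax(logits, temperature=t)
            summary = ", ".join(f"{tok}: {p:.2f}" for tok, p in zip(tokens, probs))
            print(f"temperature {t}: {summary}")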

  • edited July 13

    @SevenSystems

    Everything is only remembered per-session. In the most recent models, I think it's 8192 tokens (mostly equivalent to words). So anything either of you said more than 8192 words "ago" is "forgotten" by the model.

    Yeah, all the data it uses comes from data up until Jan 2022.

    It can't make any further adjustments to its learning until OpenAI gives it more data in the next version.

    Also, this is why it has no real-time data to pull from, like today's schedules for basketball games... unless you wanna know the schedule from 2021.

    New discoveries after Jan 2022 will not be known by ChatGPT-4o.

  • @Danny_Mammy said:

    @SevenSystems

    Everything is only remembered per-session. In the most recent models, I think it's 8192 tokens (mostly equivalent to words). So anything either of you said more than 8192 words "ago" is "forgotten" by the model.

    Yeah, all the data it uses comes from data up until Jan 2022.

    It can't make any further adjustments to its learning until OpenAI gives it more data in the next version.

    Also, this is why it has no real-time data to pull from, like the schedule for a basketball game... unless you wanna know the schedule from 2021.

    New discoveries after Jan 2022 will not be known by ChatGPT-4o.

    I was referring to it remembering your personal conversations with it and what you taught it -- that is the 8192 token limit.

    The "static" "knowledge" it has is up to a certain point in time too yeah -- you can actually just ask it something like "What date is your knowledge cutoff" and it'll happily respond :)
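
    For anyone curious, here is a minimal sketch of asking that same question programmatically through the OpenAI Python SDK. The model name and wording are just examples, and the answer comes from the model itself, so treat it as approximate rather than authoritative.

        # Minimal sketch: ask the model for its knowledge cutoff via the API.
        # Assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the
        # environment; "gpt-4o" is used here only as an example model name.
        from openai import OpenAI

        client = OpenAI()  # picks up OPENAI_API_KEY automatically

        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "user", "content": "What date is your knowledge cutoff?"},
            ],
        )

        # The model's self-reported cutoff; it may be vague or slightly off.
        print(response.choices[0].message.content)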

  • @SevenSystems said:

    @Danny_Mammy said:

    @chocobitz825 said:

    @Danny_Mammy said:

    @Artj said:
    For me, the big problem with these machine-learning programs is that they lie when they don't know the answer instead of admitting that they don't know...

    Totally agree, it was by far the most annoying part of the program.
    The developers should definitely remove lying from AI when it doesn't know the answer or have the data to help.

    The human trait of lying cannot be in AI, even if you could have it as an option for more human like conversations. Still a bad idea in the long run IMO.

    It’s not really lying… it doesn’t know truth from fiction, which is why it can present both as fact. It needs to provide an answer, but it doesn’t really know if the answer is right or not; contextually, if it seems right, that’s good enough. That’s why people can’t be lazy and treat it like a search engine. That’s not what it’s for on its own. It’s an assistant, and like any assistant, it can help you get a job done quicker, but you need to be able to know whether or not it’s actually doing a good job or if it’s full of sh*t. Most people don’t use ChatGPT or the other AIs to their best capabilities because they assume it’s supposed to be some smart version of Google search. Learn to make it automate tasks for you. Teach it all about your iPad music studio setup and let it help you organize, or make new things. The tool is powerful; the users, unfortunately, are not, most of the time…

    Let me add some context: I use the word "lying" because it best describes the AI's behaviour in certain situations. It might not technically be lying per se, but the outcome is pretty much the same for the end user.

    In my example I asked the AI what key bar 5 of a piece of music modulated to. The AI responded with B major, which is incorrect. I told the AI it was incorrect; it apologised and told me it was A minor, which is also incorrect, and so forth.

    My point is that at this point the AI needs to respond that it doesn't have specific data on that question. I'm not saying that this is technically easy to do.

    It's not as straightforward as many might think.

    It's definitely not straightforward; however, it's very annoying to have the AI "lie", and it's currently a weak aspect of the system.

  • @SevenSystems said:

    @Danny_Mammy said:

    @SevenSystems

    Everything is only remembered per-session. In the most recent models, I think it's 8192 tokens (mostly equivalent to words). So anything either of you said more than 8192 words "ago" is "forgotten" by the model.

    Yeah, all the data it uses comes from data up until Jan 2022.

    It can't make any further adjustments to its learning until OpenAI gives it more data in the next version.

    Also, this is why it has no real-time data to pull from, like the schedule for a basketball game... unless you wanna know the schedule from 2021.

    New discoveries after Jan 2022 will not be known by ChatGPT-4o.

    I was referring to it remembering your personal conversations with it and what you taught it -- that is the 8192 token limit.

    The "static" "knowledge" it has is up to a certain point in time too yeah -- you can actually just ask it something like "What date is your knowledge cutoff" and it'll happily respond :)

    yep, I understood what you said

  • @SevenSystems said:

    @Danny_Mammy said:

    @SevenSystems

    Everything is only remembered per-session. In the most recent models, I think it's 8192 tokens (mostly equivalent to words). So anything either of you said more than 8192 words "ago" is "forgotten" by the model.

    Yeah, all the data it uses comes from data up until Jan 2022.

    It can't make any further adjustments to its learning until OpenAI gives it more data in the next version.

    Also, this is why it has no real-time data to pull from, like the schedule for a basketball game... unless you wanna know the schedule from 2021.

    New discoveries after Jan 2022 will not be known by ChatGPT-4o.

    I was referring to it remembering your personal conversations with it and what you taught it -- that is the 8192 token limit.

    The "static" "knowledge" it has is up to a certain point in time too yeah -- you can actually just ask it something like "What date is your knowledge cutoff" and it'll happily respond :)

    Now ChatGPT remembers all info across your chat history - this was added in a recent update!

  • @Gavinski said:

    @SevenSystems said:

    @Danny_Mammy said:

    @SevenSystems

    Everything is only remembered per-session. In the most recent models, I think it's 8192 tokens (mostly equivalent to words). So anything either of you said more than 8192 words "ago" is "forgotten" by the model.

    Yeah, all the data it uses comes from data up until Jan 2022.

    It can't make any further adjustments to its learning until OpenAI gives it more data in the next version.

    Also, this is why it has no real-time data to pull from, like the schedule for a basketball game... unless you wanna know the schedule from 2021.

    New discoveries after Jan 2022 will not be known by ChatGPT-4o.

    I was referring to it remembering your personal conversations with it and what you taught it -- that is the 8192 token limit.

    The "static" "knowledge" it has is up to a certain point in time too yeah -- you can actually just ask it something like "What date is your knowledge cutoff" and it'll happily respond :)

    Now ChatGPT remembers all info across your chat history - this was added in a recent update!

    Oh! Thanks... is this different to Custom Instructions? (apparently I should read those more often 😂)

  • In the latest Harper’s magazine an article pointed out this idea:

    “Over the past year, several AI companies have advertised positions for writers and poets. As it becomes more difficult to discreetly swallow immense quantities of copyrighted material, the dataset needs new high quality inputs. Why would a tech company pay for content, given the ocean of data still liberally accessible on the internet? Industry leaders realize that, more and more, the texts available online will be co-written or simply re-written, by their own tools, inevitably degrading the quality of future iterations of the model.”

    My interpretation of that is there will be a kind of “in-breeding” without new input of real human imagination.
