Comments
GPT-4 used to have access to the internet including Bing searches, but it was removed due to abuse.
All that means is its database needs more information.
I don’t use the OpenAI products themselves, but I do have GPT4All installed on my i9 MacBook Pro. It runs currently fashionable LLMs, which you download yourself; there’s a choice of them, and the choice changes often as new ones emerge.
Being a local LLM that doesn’t use the GPU, it’s hilariously slow, but that’s okay. I set it off and come back, or watch the words gradually appear one by one.
A few months ago, after I was made redundant but before I got really busy, I experimented with using it to write a film treatment I fancied doing. It isn’t capable of writing the thing for me (good), but it does suggest things I might have forgotten about, along with the overall structure, rules, etc. I was doing all the work, though. It’s kind of like having a co-writing assistant who just came out of school.
Now that I’m much busier on a publishing project, I tried it out on dummy article copy, but it’s really too middle-of-the-road and generic. All I wanted was to flood-fill galleys on the page with something a bit more relevant than lorem ipsum, and for that it was okay. For real publishing work it’s the same as above: a good reminder of structure and strategy, but very vapid on content.
I won’t be using it in any recognisable form for actual writing. Any contribution it makes comes at a much earlier stage, and by the final edit it’ll all be the human author.
Well, nothing, actually. While it obviously has its uses, I don’t see the point if you want accurate information. If you have to go and double check it’s not spouting rubbish, you might as well just look the info up normally. Why do the job twice?
You haven't used it yet, have you?
ChatGPT “hallucinations” are well documented. I don’t know if this has been mitigated:
https://en.wikipedia.org/wiki/Hallucination_(artificial_intelligence)
No, it has not. What will have to happen is some sort of "ground truth" will have to be provided for these systems so they aren't always making up things just to fit a request.
Sorry. Yes
Nvm, completely misread. I’m hallucinating, sorry.
@NeuM: I take it you weren't trying to imply that ChatGPT is a reliable source of accurate information, but that it does have its uses, and it's fun.
As one example of why you wouldn't want to rely on it, here's a little exchange I just whipped up trying to teach ChatGPT to play a simple number game. I won't go into the blatant and silly errors in its answers, which are several. Fun to play with, just don't rely on it:
https://chat.openai.com/share/06f032d8-434b-4db4-810b-9ff571ca1166
With math it’s a bit more understandable because it’s just a language model, but it can be “dangerous” to rely on it in other areas as well.
https://fortune.com/2023/06/23/lawyers-fined-filing-chatgpt-hallucinations-in-court/
I would blame the lawyers themselves.
If you're using it to supplement your coding on a project for a nuclear power plant's control systems, you might want to triple-check everything first. But if you're using it for non-life threatening uses today, you'll probably be fine.
Yes, math inaccuracies are more understandable to anyone who has some understanding of what a language model is and isn't. Not sure what percentage of ChatGPT users fit that description.
It seems to me that's what the problem is: these AI systems are being rolled out for use by people who have basically zero conception of what they are. (And what's more, and potentially more dangerous than their unreliability, these systems are trained in a way that has them mimic human emotions in an effort to make them feel more "human".)
Yeah, if you use ChatGPT for help with something simple, like remodeling your kitchen, nobody's going to die. Or if you, say, have it do scheduling for you and it ends up leaving something out, or scheduling you twice for the same event. Nobody's going to die. But you might end up with wasted time, effort, and money. Why not just use a reliable method from the start, instead of something you have to double- or triple-check?
All that's not to say that there aren't uses for ChatGPT, just that the lack of reliability is often a problem. Lots of uses don't require reliability (e.g., generate a plot for this short story I'm thinking of). It's even very useful for lots of coding, where reliability is important, but where you have more expertise than ChatGPT and can review and test its code before committing it.
Why use these systems today? Use them and find out for yourself. Their value should be self-evident. And if they hold no value for your purposes, just move on.
Have you looked at the two math problems I posted above? It has solved both perfectly. (Granted, this was GPT-4).
I think in general it's important to emphasize that GPT-4 is at least an order of magnitude more capable and smarter than GPT-3.5 (what everyone uses here in this thread). So, discussing the future impact of AI with GPT-3.5 as a reference is already "outdated" so to speak.
GPT-4 has instantly solved this problem perfectly:
It has its uses in brainstorming ideas. I asked it a bunch of hypothetical questions about the world, like what would happen if x, y, and z. Nothing too specific, but it did a good job of breaking down the scenarios and listed some bullet-point possibilities which I found believable. Again, it was all hypothetical. I wouldn't rely on it for actual facts. It might be good at telling you what sorts of sources would be best to consult for those facts, however. You gotta get to know it a bit, basically: what it's good and not so good at.
Yes, I understand GPT-4 has improved things a lot. The problem is, though, I'm sure I could trip GPT-4 up. Probably it would be on some more difficult problem, maybe one for which I actually don't know the answer or how to figure it out. Should I then trust GPT-4's answer? Or should I maybe use a different method of finding the answer that I can actually rely on? (Maybe GPT-4 itself can show how to check its work -- that would be good, if I can understand and thus trust it -- but maybe it won't be able to do that.)
I'm not suggesting that AI is not useful, even in its current state. I'm just suggesting that its lack of reliability is a problem, and a problem that shouldn't be glossed over.
I guarantee in the next several iterations (version 5 or 6) almost no one will be able to trip these models up with anything you can throw at them.
Well, but is a human expert 100% reliable? I think some form of expectation management is needed here. We have, in the span of 7 months, gone from basically "real AI doesn't exist" (apart from categorizing cat photos) to "AI can understand and solve almost every conceivable cognitive problem with 95% reliability".
I think people tend to forget that all of this was completely unthinkable 7 months ago. It's just the beginning of the journey, and it will be awesome!
(look again at what you said there: You're asking an AI to figure out a problem that you as a human can't solve in an effort to "trip" it. Read that again! 😄)
Haha.
But it's also useful for more "mundane" stuff. I think it will take time until people realize all its uses.
Sometimes it can help solving problems that are doable for an averagely intelligent person, but just very tedious.
For example, the other day I wanted to know how much money I could save by taking cold instead of hot showers.
While this question is relatively trivial to answer, it is not completely obvious, especially not if you don't have at least some basic physics knowledge.
It automatically looked up all the data, asked for all necessary clarifications, showed me the whole calculation including all conversions between kWh, °C, and other physical units, and then told me (in euros) how much I could save by taking cold showers.
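For anyone curious, the underlying physics is just the specific heat of water times the temperature rise. Here's a rough sketch of that calculation in Python; the shower length, flow rate, temperatures, and electricity price are all illustrative placeholders (the original post doesn't give its numbers), so plug in your own:

```python
# Rough estimate of annual savings from switching to cold showers.
# Assumed placeholder values (NOT from the original post):
#   8-minute daily shower at ~8 L/min, water heated from 12 °C to 38 °C,
#   electric heating at ~0.30 EUR per kWh.

SPECIFIC_HEAT_WATER = 4186   # J per (kg * K); 1 L of water ~ 1 kg
J_PER_KWH = 3.6e6            # joules in one kilowatt-hour

def annual_savings(minutes=8, flow_l_per_min=8.0,
                   cold_c=12.0, warm_c=38.0,
                   eur_per_kwh=0.30, days=365):
    liters_per_day = minutes * flow_l_per_min
    # Energy to heat that water: mass * specific heat * temperature rise
    joules_per_day = liters_per_day * SPECIFIC_HEAT_WATER * (warm_c - cold_c)
    kwh_per_day = joules_per_day / J_PER_KWH
    return kwh_per_day * eur_per_kwh * days

print(f"~{annual_savings():.0f} EUR per year")
```

With these placeholder numbers it comes out to a couple hundred euros a year, which matches the intuition that the question is trivial in principle but not obvious without a bit of physics.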
That is the real genius of these things... amassing disparate datasets and sources and "creatively" combining information to get answers.
This summer I wrote an OER textbook using ChatGPT. Its output was not good enough on its own, but it gave me a first draft I could rewrite, and it saved me a lot of time.
I also used it to write employee self-assessment and departmental self-assessment documents. Its output was much better than what I could do, in the sense that it is vapid, overly verbose, and totally full of shit. Which is right on the money for this kind of pointless paperwork administrivia.
Newsflash: I expect millions of people are going to be asking AI questions that they themselves can't solve. Just like people do math on their calculators (or spreadsheets) that they couldn't actually do themselves.
The issue isn't whether I can find a 100% reliable answer some other way. It's whether there's a suitable way to find an answer that's more reliable than the one I'd get from, e.g., ChatGPT. And the answer as of now is often, "Yes".
I would also add that it seems a pretty big problem that ChatGPT seems unable to indicate when it might be wrong. It's like it often says, "Here you go! I got it!" then spews out some wrong answer. It would be much more helpful if it said, "I'm not sure, but this is at least an attempt at an answer." I assume all this will come as AI gets more advanced.
On a different note, here's an exchange with ChatGPT that I actually thought was quite helpful. It doesn't give me anything I couldn't have found fairly easily in other ways, but it does package things up neatly to give mostly just what I wanted:
"Can you help me build a synthesizer with a teensy board?"
https://chat.openai.com/share/ee1875b0-2c76-4879-94ed-d6cc283fca48
I used it the other day to remind me of the steps I needed to take in order to 3D print a part.
Yes. There's a word that experts use to describe this kind of thing: "Intelligence"
Now don't forget that 50% of white-collar jobs are "pointless paperwork administrivia", and watch out for the next unemployment statistics.
Yes that's a good example of what I mean by more "mundane" things. Even if you just use it to collate information which you would otherwise have to painstakingly Google every bit of yourself, it's still very useful.