The AI bloodbath is upon us.

If you can, watch to the end. It’s frankly unbelievable.

I thought it would be another 2 years before we got here.

Comments

  • I’ll watch the above tomorrow as I’m just waiting for jam night pickup, but this AI story amused me greatly earlier:

    https://futurism.com/screenplay-chatgpt-procrastinated

  • That felt like watching a sales pitch to join some kind of cult.

  • Coming soon from the folks at AI... "The Great Depression Mk II".

    And when the military starts to use AI, it will be a double feature with "World War III".

    Can't wait...

  • @Hel said:
    That felt like watching a sales pitch to join some kind of cult.

    Indeed!

    Her parting reveal is a bit scary. I wonder who she thinks all those “competitors” are that the smart AI users will supposedly win over. Won't they also all be using the same AI tools? It's going to be AI vs. AI in the great “wealth transfer” she's imagining.

  • @MrStochastic said:

    @Hel said:
    That felt like watching a sales pitch to join some kind of cult.

    Indeed!

    Her parting reveal is a bit scary. I wonder who she thinks all those “competitors” are that the smart AI users will supposedly win over. Won't they also all be using the same AI tools? It's going to be AI vs. AI in the great “wealth transfer” she's imagining.

    It's just a sales pitch. "Make AI great again".

    I wonder where her 100 writers are now...? The dole office?

  • wimwim

    @Simon said:
    And when the military starts to use AI, it will be a double feature with "World War III".

    Yeh, somehow I don't think having jobs or businesses fall by the wayside is the main thing we need to worry about. Just wait until someone tasks it with taking over our energy or communications infrastructure.

    Forget nukes. The next "war" is gonna be a ransomware attack.

  • An excerpt from this article about OpenAI's o3:

    https://theconversation.com/an-ai-system-has-reached-human-level-on-a-test-for-general-intelligence-heres-what-that-means-246529

    We do know that OpenAI started with a general-purpose version of the o3 model (which differs from most other models, because it can spend more time “thinking” about difficult questions) and then trained it specifically for the ARC-AGI test.

    French AI researcher Francois Chollet, who designed the benchmark, believes o3 searches through different “chains of thought” describing steps to solve the task. It would then choose the “best” according to some loosely defined rule, or “heuristic”.

    This would be “not dissimilar” to how Google’s AlphaGo system searched through different possible sequences of moves to beat the world Go champion.

    You can think of these chains of thought like programs that fit the examples. Of course, if it is like the Go-playing AI, then it needs a heuristic, or loose rule, to decide which program is best.

    There could be thousands of different seemingly equally valid programs generated. That heuristic could be “choose the weakest” or “choose the simplest”.

    However, if it is like AlphaGo then they simply had an AI create a heuristic. This was the process for AlphaGo. Google trained a model to rate different sequences of moves as better or worse than others.

    What we still don’t know
    The question then is, is this really closer to AGI? If that is how o3 works, then the underlying model might not be much better than previous models.

    The concepts the model learns from language might not be any more suitable for generalisation than before. Instead, we may just be seeing a more generalisable “chain of thought” found through the extra steps of training a heuristic specialised to this test. The proof, as always, will be in the pudding.

    Almost everything about o3 remains unknown. OpenAI has limited disclosure to a few media presentations and early testing to a handful of researchers, laboratories and AI safety institutions.
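    The "search over chains of thought, then pick one by a heuristic" idea described in the excerpt can be sketched in toy form. Everything below — the candidate set, the cost values, and the "choose the simplest" rule — is an illustrative assumption; OpenAI has not disclosed o3's actual mechanism.

```python
# Toy sketch of generate-many-candidates-then-select-by-heuristic.
# Candidates stand in for sampled "chains of thought"; each is a
# grid-transforming function plus a rough size used as its cost.

def candidate_programs():
    """Stand-ins for sampled chains of thought."""
    return [
        (lambda g: [row[::-1] for row in g], 1),        # mirror each row
        (lambda g: [row[::-1] for row in g[::-1]], 2),  # rotate 180 degrees
        (lambda g: [list(row) for row in zip(*g)], 2),  # transpose
    ]

def fits_examples(program, examples):
    """A candidate survives only if it reproduces every worked example."""
    return all(program(inp) == out for inp, out in examples)

def choose_best(examples):
    """Filter candidates against the examples, then apply the heuristic:
    among the programs that fit, prefer the lowest-cost ("simplest") one."""
    survivors = [(prog, cost) for prog, cost in candidate_programs()
                 if fits_examples(prog, examples)]
    if not survivors:
        return None
    best_program, _ = min(survivors, key=lambda pc: pc[1])
    return best_program

# One ARC-style worked example: the hidden rule is "mirror each row".
examples = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]
program = choose_best(examples)
print(program([[5, 6]]))  # → [[6, 5]]
```

    In this toy run only the row-mirroring candidate reproduces the example, so the heuristic never has to break a tie; with thousands of equally valid candidates, as the article suggests, the choice of heuristic (or a learned ranking model, as in AlphaGo) would do the real work.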

  • A couple of old guys talk about AI:

  • @cyberheater said:
    If you can, watch to the end. It’s frankly unbelievable.

    I thought it would be another 2 years before we got here.

    Beware of this channel. I've noticed she often doesn't seem to understand the topic much; she just parrots phrases she heard somewhere, and often misinterprets details or takes things out of context.

    o3 is definitely NOT AGI. In almost all benchmarks it is only 20–30% better than o1; it excels in just ONE particular benchmark. But there is a big problem with how those benchmarks are designed: the fact that it is great in one benchmark doesn't mean it will be great at general problem solving.

    All the hype around o3 is VASTLY overblown. It's a nice update, but it is NOT revolutionary, nor game-changing, and definitely NOT EVEN REMOTELY CLOSE to AGI.

    I am sticking with my five-year-old guess for true AGI: 2030. Nothing I've seen so far makes me believe it will come sooner. Pure LLMs will NEVER lead to AGI; a different approach is needed. An interesting candidate is the new LCM (Large Concept Model) from Meta, but that on its own is not the final solution for AGI either. In my opinion, the solution will be combining multiple different types of models (with multimodality), and that is still at least 2–3 years away, I believe…

  • As we get closer, the technical definition of AGI is going to get blurred, but in general I wouldn’t be surprised if we get something robust enough to replace a human on a huge array of tasks this year.

  • Google CEO says over 25% of new Google code is generated by AI

    https://arstechnica.com/ai/2024/10/google-ceo-says-over-25-of-new-google-code-is-generated-by-ai/

    And this is now. I wouldn’t want to be a student starting a degree in programming.

  • @cyberheater said:
    Google CEO says over 25% of new Google code is generated by AI

    https://arstechnica.com/ai/2024/10/google-ceo-says-over-25-of-new-google-code-is-generated-by-ai/

    And this is now. I wouldn’t want to be a student starting a degree in programming.

    I wouldn't want to be a student of anything. Wish I were twenty years older actually.

  • @cyberheater said:
    Google CEO says over 25% of new Google code is generated by AI

    https://arstechnica.com/ai/2024/10/google-ceo-says-over-25-of-new-google-code-is-generated-by-ai/

    And this is now. I wouldn’t want to be a student starting a degree in programming.

    I would say the same number (maybe even more) for my code in recent months. Claude Sonnet 3.5 is pretty amazing at generating code :smile: It saves me a tremendous amount of time (so I have more time to shitpost on social networks).

    And this is now. I wouldn’t want to be a student starting a degree in programming.

    Wrong angle of view. Yeah, traditional coding will slowly fade away over the next 5–10 years. But people with analytical thinking (which is the core of learning to be a good coder) will do a different kind of work with LLMs (and the other model types still to come). The importance of somebody with deeply analytical thinking, capable of advanced prompt magic, will NOT fade for quite some time. In time, coders will evolve into a kind of "translator" between AI and humans. Of course, only the absolute top league of us; average and below-average coders will just lose their jobs within a few years, that's for sure.

  • @dendy said:

    @cyberheater said:
    Google CEO says over 25% of new Google code is generated by AI

    https://arstechnica.com/ai/2024/10/google-ceo-says-over-25-of-new-google-code-is-generated-by-ai/

    And this is now. I wouldn’t want to be a student starting a degree in programming.

    I would say the same number (maybe even more) for my code in recent months. Claude Sonnet 3.5 is pretty amazing at generating code :smile: It saves me a tremendous amount of time (so I have more time to shitpost on social networks).

    And this is now. I wouldn’t want to be a student starting a degree in programming.

    Wrong angle of view. Yeah, traditional coding will slowly fade away over the next 5–10 years. But people with analytical thinking (which is the core of learning to be a good coder) will do a different kind of work with LLMs (and the other model types still to come). The importance of somebody with deeply analytical thinking, capable of advanced prompt magic, will NOT fade for quite some time. In time, coders will evolve into a kind of "translator" between AI and humans. Of course, only the absolute top league of us; average and below-average coders will just lose their jobs within a few years, that's for sure.

    Agree. It's a good time to focus on learning the architectural side of software engineering. Working with "agentic" AI with integrated tools like Cursor is a lot like pair programming with an entry level engineer. You have to know what should be done, and you have to know when the work is done correctly. You need to be in control of the architectural vision for the codebase. If you can do that, the % of code you have to hand-write will go down.

    I think this is what people interested in this technology are embracing right now. It's very powerful, and as long as we continue existing in the current paradigm, productivity improvements on this scale are going to be extremely important for staying employed.

  • @Simon said:
    "Generative AI will soon generate millions of tonnes of electronic waste":
    https://www.abc.net.au/news/science/2024-10-29/generative-ai-generating-millions-tonnes-electronic-waste-data/104514376

    "But it's not all bad news.

    The authors found extending the life span of existing computer infrastructure, reusing reusable parts and recycling valuable materials like copper and gold could reduce e-waste generation by up to 86 per cent."

    Nothin' like building in just a little wiggle room in a study eh?

  • @Hel said:
    That felt like watching a sales pitch to join some kind of cult.

    Sounded like hype from the beginning, but as soon as I heard the overtired slogan of “this is the worst AI will ever be”, the likelihood went up by a lot. For all we know, this is the best AI will ever be for a long, long time. Idk why there is this assumption that it’ll keep improving exponentially on a short timescale.

    AI will “master reality”? OK, I want whatever gummies the Jeff Bezos lookalike is eating.
