
Time Magazine June cover. The End of Humanity (A.I).

That is a very thought provoking cover.


Comments

  • As a graphic designer I must say it's a hell of a nice cover, probably made by the remAIns of humanity.

  • @marcuspresident said:
    As a graphic designer I must say it's a hell of a nice cover, probably made by the remAIns of humanity.

    Yes. It's a very nice cover.

  • When was the last time Time was right about anything?

  • @NeuM said:
    When was the last time Time was right about anything?

    They only have to be right about this once.

  • @cyberheater said:

    @NeuM said:
    When was the last time Time was right about anything?

    They only have to be right about this once.

    LOL. Pretty sure people will remain the biggest risk for some time.

  • @NeuM said:

    @cyberheater said:

    @NeuM said:
    When was the last time Time was right about anything?

    They only have to be right about this once.

    LOL. Pretty sure people will remain the biggest risk for some time.

    Well, it's people's stupidity and greed that will cause the AI crisis.

  • @cyberheater said:

    @NeuM said:

    @cyberheater said:

    @NeuM said:
    When was the last time Time was right about anything?

    They only have to be right about this once.

    LOL. Pretty sure people will remain the biggest risk for some time.

    Well, it's people's stupidity and greed that will cause the AI crisis.

    And there’s plenty of that to go around these days. Lol

  • @mtenk said:

    @cyberheater said:

    @NeuM said:

    @cyberheater said:

    @NeuM said:
    When was the last time Time was right about anything?

    They only have to be right about this once.

    LOL. Pretty sure people will remain the biggest risk for some time.

    Well, it's people's stupidity and greed that will cause the AI crisis.

    And there’s plenty of that to go around these days. Lol

    You are not wrong.

  • Time Magazine still exists?

  • @Simon said:
    Time Magazine still exists?

    Exactly. One of the great unsolved mysteries.

  • @NeuM said:

    @Simon said:
    Time Magazine still exists?

    Exactly. One of the great unsolved mysteries.

    Not impressed with it.

    It's called "Time" magazine and yet it has never once done a story on Doctor Who... :smiley:

  • I think much of the A.I. fear mongering has more to do with promotional advertising for the various platforms, rather than actual alarm bells.

    I mean, it just sounds better to say “our technology is so powerful that it could mean the end of the human race!” versus “our large language model has been trained on even more data and it can make a website all by itself with just the right creative prompts!”

    Potential human destruction is a much better sell. ;)

  • @skiphunt said:
    I mean, it just sounds better to say “our technology is so powerful that it could mean the end of the human race!” versus “our large language model has been trained on even more data and it can make a website all by itself with just the right creative prompts!”

    If AI were only to be used to make websites, then there would be no fear.

    The concerns are about the other uses for it.

  • AI is probably only a danger for those that stare at their phones all day.

  • @skiphunt said:
    I think much of the A.I. fear mongering has more to do with promotional advertising for the various platforms, rather than actual alarm bells.

    I mean, it just sounds better to say “our technology is so powerful that it could mean the end of the human race!” versus “our large language model has been trained on even more data and it can make a website all by itself with just the right creative prompts!”

    Potential human destruction is a much better sell. ;)

    While there is no doubt some self-promotion going on, some of these guys are very prominent researchers who have left Google etc. because they believe there is a real danger. We are probably not likely to be hunted down by robots any time soon, but AI will cause a lot of social and political upheaval as many jobs are eliminated in the next few years.

    Like most technologies AI will be weaponized. This may be where it all goes wrong - a super weapon with a mind of its own.

    “The human race is the biological bootloader for digital intelligence” - Elon Musk.

  • I do think there’s a very real danger of bad people doing bad stuff with A.I. - I’m sure some of it is already in the works.

    I’m just not that concerned about A.I. tech becoming autonomous and/or sentient with a beef against humankind.

  • @skiphunt said:
    I do think there’s a very real danger of bad people doing bad stuff with A.I. - I’m sure some of it is already in the works.

    I’m just not that concerned about A.I. tech becoming autonomous and/or sentient with a beef against humankind.

    Same. It's the humans who will be responsible if/when (most likely when) things go badly wrong.

  • @cyberheater said:

    @NeuM said:

    @cyberheater said:

    @NeuM said:
    When was the last time Time was right about anything?

    They only have to be right about this once.

    LOL. Pretty sure people will remain the biggest risk for some time.

    Well, it's people's stupidity and greed that will cause the AI crisis.

    Ultimately we are responsible for our creations; they are a reflection of ourselves. To expect AI to somehow behave nicely is quite naive, and an ever-increasing number of its creators are warning of this possible danger. Humans do like to play with matches.

  • Oh come on

  • We developed language.

    We harnessed the power of fire.

    We navigated the globe.

    We created the technology to fly.

    We harnessed the power of the electron.

    We split the atom.

    We decoded DNA and can edit its sequences.

    We wired the world for instant communications.

    Are they saying we can’t debug code?

  • edited June 2023

    Meanwhile, let’s just hope North Korea don’t get their hands on a decent computer:

    https://www.sciencealert.com/ai-experiment-generated-40-000-hypothetical-bioweapons-in-6-hours-scientists-warn

    The same trick works with viruses, btw:

    “a research team at the State University of New York in Stony Brook chemically synthesized an artificial polio virus from scratch (Cello et al., 2002). They started with the genetic sequence of the agent, which is available online, ordered small, tailor-made DNA sequences and combined them to reconstruct the complete viral genome. In a final step, the synthesized DNA was brought to life by adding a chemical cocktail that initiated the production of a living, pathogenic virus.

    In principle, this method could be used to synthesize other viruses with similarly short DNA sequences. This includes at least five viruses that are considered to be potential biowarfare agents, among them Ebola virus, Marburg virus and Venezuelan equine encephalitis virus.”

    Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1326447/

    And anyone can now mail order custom DNA sequences. So a bedroom fantasist incel with grad level lab skills, a knowledge of CRISPR, kit you can legitimately buy off Amazon without freaking anyone out, and a bad attitude could probably make an Omega virus (as in the last virus any of us as a species would ever get), if they were of a mind to.

    So, technical question: is it still an AI apocalypse if it was used to develop the Omega virus that wiped us all out? Or does that only apply to Skynet type stompy Terminator things?
    Asking for a friend.

  • @McD said:
    We developed language.

    We harnessed the power of fire.

    We navigated the globe.

    We created the technology to fly.

    We harnessed the power of the electron.

    We split the atom.

    We decoded DNA and can edit its sequences.

    We wired the world for instant communications.

    Are they saying we can’t debug code?

    Side by side, human and AI, we are somewhat slow.

  • @Svetlovska said:
    Meanwhile, let’s just hope North Korea don’t get their hands on a decent computer:

    https://www.sciencealert.com/ai-experiment-generated-40-000-hypothetical-bioweapons-in-6-hours-scientists-warn

    @ervin said:
    Oh come on

    We will do it, before our adversaries do it.

  • edited June 2023

    @knewspeak

    “We will do it before our adversaries do it”

    The alien puzzled over this strange phrase, chiselled into the weathered stone. The place appeared to be some kind of ritual space, judging from the numbers of the extinct species gathered here, on this long-dead planet. A brief communing with the hive mind suggested a translated meaning for the words. A - ’gravestone’?

    Ah. Now they understood.

    “We will do it before our adversaries do it.”

    It was an epitaph.

  • @McD said:
    We developed language.

    We harnessed the power of fire.

    We navigated the globe.

    We created the technology to fly.

    We harnessed the power of the electron.

    We split the atom.

    We decoded DNA and can edit its sequences.

    We wired the world for instant communications.

    Are they saying we can’t debug code?

    All of these technological advances caused huge numbers of problems: basically side effects due to people not seeing the bigger picture but thinking in an atomised and reductionistic way. The book The Ascent of Humanity is a really interesting read on this aspect of technology. His critical arguments are much more interesting than his proposed solutions, but even a read of just the first few chapters gives food for thought:

    https://ascentofhumanity.com/

    The above link contains links to free copies of the book and ebook; although it was published, the author wanted to get his ideas out more than he cared about making money from it, I guess.

    The Goodreads page has some nice summaries of the basic argument.

    https://www.goodreads.com/book/show/482505.The_Ascent_of_Humanity

  • @Gavinski said:
    All of these technological advances caused huge numbers of problems: basically side effects due to people not seeing the bigger picture but thinking in an atomised and reductionistic way. The book The Ascent of Humanity is a really interesting read on this aspect of technology. His critical arguments are much more interesting than his proposed solutions, but even a read of just the first few chapters gives food for thought:

    https://ascentofhumanity.com/

    Great link. Thank you!

  • @McD said:
    Are they saying we can’t debug code?

    Sorry. Not sure what point you are making?

  • edited June 2023

    @McD said:
    Are they saying we can’t debug code?

    It sounds like, while code is involved, there is a black-box nature to these things: while neural networks can provide accurate predictions or classifications, it can be challenging to comprehend how they arrive at those results. The internal workings of neural networks involve intricate calculations and transformations of data, which may not be readily interpretable by humans.

    Neural networks employ non-linear activation functions, allowing them to model complex relationships between inputs and outputs. While non-linearities enhance the network's ability to learn and generalize, they can also make it challenging to intuitively understand how the network is making decisions.

    Neural networks can be difficult to explain because they learn representations and patterns directly from data without explicit human programming. The internal representations learned by neural networks are often distributed across multiple neurons and layers, making it challenging to pinpoint precisely what features or factors contribute to their decisions.

    (Thanks chatgpt!)
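    The distributed-representation point above can be sketched in a few lines of Python. This is a toy two-layer network with made-up, hypothetical weights (not anything trained); the idea is just that the output emerges from many weights pushed through non-linear activations, so no single parameter corresponds to a human-readable rule.

    ```python
    import math

    # Toy network: 3 inputs -> 3 hidden units -> 1 output.
    # All weights below are arbitrary illustrative values.
    W1 = [[0.2, -0.5, 0.1],
          [0.7, 0.3, -0.2],
          [-0.4, 0.6, 0.5]]      # input -> hidden weights (rows = inputs)
    W2 = [0.8, -0.6, 0.3]        # hidden -> output weights

    def predict(x):
        # Hidden layer: weighted sums passed through tanh (the non-linearity).
        h = [math.tanh(sum(xi * wij for xi, wij in zip(x, col)))
             for col in zip(*W1)]
        # Output: another weighted sum through another tanh.
        return math.tanh(sum(hi * wi for hi, wi in zip(h, W2)))

    y = predict([1.0, 0.0, -1.0])
    # The prediction is a single number, but explaining *why* means tracing
    # all twelve weights through two layers of tanh -- the "reasoning" is
    # smeared across the parameters, not stored anywhere as an explicit rule.
    print(y)
    ```

    Scale that from twelve weights to billions and you get the interpretability problem the ChatGPT summary is describing.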

  • @GrimLucky said:

    Could be that when it costs billions of dollars and becomes the foundation of a company, simply turning it off becomes a problem. I mean, companies already would rather do payouts for whoopsies than fix them in many cases. Car manufacturers factor payouts for kids' heads crushed by airbags into development budgets already, etc. If we get AIs talking to each other like virtual lobbyists, judges and CEOs, it is pretty easy to imagine all sorts of 20th-century-styled dehumanized machine horror, etc. etc. Oh god, why edibles in the morning!?
