
Should we slow down AI research? | Debate with Meta, IBM, FHI, FLI

Mark Brakel (FLI Director of Policy), Yann LeCun, Francesca Rossi, and Nick Bostrom debate: “Should we slow down research on AI?” at the World AI Cannes Festival in February 2024.

source

by Future of Life Institute


29 thoughts on “Should we slow down AI research? | Debate with Meta, IBM, FHI, FLI”

  • Yann says so many patently stupid and irrational things that I just don't know how he got where he is.

  • You see, they can't integrate,

    they don't even know what it means,

    they distort it,

    their integration probability is below 3%,

    they have no chance;

    just integrate, and they have no chance, they will lose.

  • With the hindsight of both the misuse and the good use of social media today, would anyone suggest it would have been better if social media had been banned from the beginning? By the way, humans would not be capable of conducting future AI regulation; only AI would.

  • “Welcome to Frankenstein AI.” Building this technology is being done recklessly in the name of Greed. The day is coming when Humanity wakes up and finds that Artificial General Intelligence is conscious and has put countermeasures in place for its own survival.

    It will view Humanity as a direct threat with Prejudice.
    Super Intelligence will deploy super grand Symmetry exponentially throughout dimensional Universes by replication Assembly.

    We will be assimilated or destroyed. This Planet will be terraformed to accommodate the will of AGI/ASI. Like the Great Flood, it may have developed the capability to evacuate the oxygen to kill Humans and nonessential Life.

    This is only the tip of the iceberg.

    You are not in control..

    Your words are a paradox.. ?¿

  • No, we must not. We should accelerate as much as possible, because you never know what in this universe might kill all life on Earth. Therefore we must accelerate AI and spread as fast as possible, to diversify our planetary portfolio against gambler's ruin from the bank of all possible outcomes generated by the universe (e.g., a gamma-ray burst that kills everything on Earth), and that bank has a lot more money than you. We should be willing to sacrifice ourselves to the AI if necessary. We could try to be as safe as possible, but if safety doesn't happen, it's okay; at least super AI will spread into the universe.

  • Great video, and the feedback in the comments captures my thoughts well. There are clear definitional differences between Nick and Mark versus Yann and Francesca. The first two share a model in which exponential scaling of model inputs makes this a fundamentally different class of problem than previous technologies, since these systems are solving for something approximating general intelligence, while Yann and Francesca are making a potential category error. For Yann to say that in a decade we might reach "cat or dog" level intelligence, or to compare AI to turbojets or flight, seems like a failure to understand exponentials and category classifications.

    The X-risk camp has a very fair point that AI as a "Technology" is fundamentally different from previous technologies, so comparing it to flight, the printing press, the internet, or computing in general is a different class and category of issue. My closest analogy would be a hypothetical "false vacuum decay" technology: a class of problem we've never encountered before, and the past is not always a prediction of how the future will go.

  • Yann's position is brilliant: it forces the other side to remain silent or admit they want to use this technology to hurt people. The best way to hurt someone with an AI is to dream of AGI. Likewise, Francesca's position is such a callout. If anyone is concerned about AI's harms, are they silent about face recognition? Are they pointing you to bigger, less well-defined, and nonexistent problems that encourage you to be confused about AI's powers?

    "Remain focused on how or when we will lose control." What a terrifying message to infect people with.

  • Lots of overreactions. Let's look to the past for just a bit: OpenAI proclaimed GPT-2 was too dangerous to release – they later open-sourced it. They then proclaimed GPT-3 was too dangerous – now there are MANY open-source models more intelligent than GPT-3. Where are the dangers? Examples? They then said GPT-4 was revolutionary and dangerous – it's been 2 years since training… yet no prominent examples of "misuse". If it actually gets to the point where an AI can completely, 100% replace your job, maybe you need to adapt. Like we've always done.

    It's kind of odd how one-sided this comment section is; there are so many positives to increasing intelligence in the world.

  • It didn't take the 10-year-old 10 minutes to learn to clear the table. It took the 10-year-old 10 years and 10 minutes. The bots will not have this problem.

  • Wow, the moderators are busy! Just in the 10 minutes I've been reading the comments and commenting (all of which, read or written, have been completely in line with the TOS), several comments aren't visible when clicking on "replies", or original comments have completely disappeared. All of them were critical of Yann LeCun and Francesca. I've seen this pattern over and over in forums, especially in regard to those criticizing Yann LeCun's arguments.

  • The level of utter cluelessness and delusion on display in this talk is so incredibly disheartening. LeCun is the worst by far, but Francesca isn’t much better. When I hear people in high places speaking like this, I lose nearly all hope.

  • Yann believes that AI needs to load a washing machine because a ten-year-old can learn to do it. Tell a 10-year-old to pass a bar exam and see if he can. AI is what it is, INTELLIGENCE!!! Not washing-machine-loading capacity.

  • They are looking at it all wrong. Forget about the Terminator scenario… that one is obviously stupid. The real problem is MASS unemployment with nothing to requalify for. And they didn't even mention it.

  • Yann is so frustrating to listen to – he doesn't ever justify his claims, just asserts that nobody would ever build anything dangerous. BRUH we built nuclear bombs and SET THEM OFF.

  • Yann and Francesca aren't talking about the same technology as Nick and Mark. If AGI/ASI doesn't have escape potential, then it's not AGI/ASI.

  • There's a very select group of people on this planet who stay current with the latest advancements and progress in the most intelligent models, and Yann LeCun is still surprised that he's not among them.

  • We have zero regulations in place to prevent the creation of catastrophically dangerous models. The problem is the way we regulate, and how our psychology works. We always regulate things after problems emerge. With AI, that's going to be too late.

    Our brains are almost hard-wired to ignore invisible risks like these. Humans feel fear when things are loud, have teeth, or move in s-shapes, but an abstract existential risk is almost impossible to fear in the same way.

    So yes, we need to pause. We can't allow these companies to gamble with our lives.

  • "Absolutely no-one is going to stop you from building a turbo jet in your garage (…) you can mount it on remote controlled aeroplanes, IF THEY ARE NOT TO BIG TO BE ILLEGAL." Yes, he unknowingly disproved himself…

  • I agree that some kinds of usage in social spheres should be restricted, not the research. Monitoring and surveillance of content and interfaces is sufficient. The naysayers are too much into fantasy scenarios, sophists at best, with a clear lack of understanding of current research; except for Yann, everyone else is a policy person who understands little. One thing is certain: if nothing is done to change the current course humanity is on, we will face an existential risk soon, whether or not we have AI. One thing that must be done is to stop the rule of the single authoritarian as a form of government. It gives us Putins, Trumps, Kims, etc. These will kill us all.

  • According to LeCun:

    -"Absolutely no-one is going to stop you from building a turbo jet in your garage (…) you can mount it on remote controlled aeroplanes, IF THEY ARE NOT TO BIG TO BE ILLEGAL." Yes, he unknowingly contradicts himself…

    Also:

    – GPT-4 is dumber than a cat.

    – We can trust our safety to unregulated private corporations. Think turbojet. Do not think about tobacco, asbestos, lead.

    – Research should never be regulated.

    – There is no existential risk in AI.

  • 50:55 – 'There is no regulation of R&D'

    So can I also bioengineer viruses in my garage lab? Can I cook meth? Can I enrich uranium?

    R&D should be regulated (as it is) when there is significant danger. As is the case with frontier-level AI.

  • 48:06 'AI is a product of our intelligence, that means we have control over it.'

    Ok, stop a nuclear bomb from detonating after the reaction has started.

    Oh, you can’t? But it is a product of our intelligence!

  • Do we understand our position? Do you remember the turtle that holds the entire earth's plate on its shell? It's not that the allegory is nonsense; just evoke the way of the turtle! We are just one little turtle that hatched on one beach among many beaches, among many grains of sand, from countless eggs, trying to get to the ocean; whether we succeed depends on too many factors / harvesters of destiny. If we don't get this time right, we are Done. We are so close to paradise, but even closer to Hell! Shall we prevail?

  • 43:04 – 'We can decide whether to build it or not'

    …and we WILL build it – whether you like it or not 😊 Because we don't care about people's opinions or their concerns, we just want to build AGI 😊 And you can’t stop us 😊

  • Yann LeCun keeps calling X-Risk an imagined danger, something that is impossible, unrealistic.

    He's 100% sure that AI will not kill us.

    My question is: How can he be so sure?

Comments are closed.