AI Experts Urgently Call on Governments to Think About Maybe Doing Something

Everyone seems to recognize that artificial intelligence is a rapidly developing technology with the potential for immense harm if deployed without safeguards, but basically no one (except for the European Union, sort of) can agree on how to regulate it. So, instead of trying to chart a clear and narrow path for how AI will be allowed to operate, experts in the field have opted for a new approach: how about we just figure out which extreme examples we all think are bad and agree to ban those?

On Monday, a group of politicians, scientists, and academics took to the United Nations General Assembly to announce the Global Call for AI Red Lines, a plea for the governments of the world to come together and agree on the broadest of guardrails to prevent “universally unacceptable risks” that could result from the deployment of AI. The goal of the group is to get these red lines established by the end of 2026.

The proposal has amassed more than 200 signatures thus far from industry experts, political leaders, and Nobel Prize winners. The former President of Ireland, Mary Robinson, and the former President of Colombia, Juan Manuel Santos, are on board, as are authors Stephen Fry and Yuval Noah Harari. Geoffrey Hinton and Yoshua Bengio, two of the three men commonly referred to as the “Godfathers of AI” due to their foundational work in the space, also added their names to the list.

Now, what are those red lines? Well, that’s still up to governments to decide. The call doesn’t include specific policy prescriptions or recommendations, though it does offer a couple of examples of what a red line could look like. Prohibiting AI from launching nuclear weapons or from being used in mass surveillance would be potential red lines for AI uses, the group says, while prohibiting the creation of AI that cannot be terminated by human override would be a possible red line for AI behavior. But the group is very clear: these aren’t set in stone; they’re just examples, and governments can make their own rules.

The only thing the group offers concretely is that any global agreement should be built on three pillars: “a clear list of prohibitions; robust, auditable verification mechanisms; and the appointment of an independent body established by the Parties to oversee implementation.”

The details, though, are for governments to agree on. And that’s kinda the hard part. The call recommends that countries host summits and working groups to figure this all out, but there are surely many competing motives at play in those conversations.

The United States, for instance, has already committed to not allowing AI to control nuclear weapons (an agreement made under the Biden administration, so lord knows if that is still in play). But recent reports indicate that parts of the Trump administration’s intelligence community are already frustrated that some AI companies won’t let them use their tools for domestic surveillance. So would America get on board with such a proposal? Maybe we’ll find out by the end of 2026… if we make it that long.

Original Source: https://gizmodo.com/ai-experts-urgently-call-on-governments-to-think-about-maybe-doing-something-2000662325

Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.
