AI has surged to the top of the diplomatic agenda in the past couple of years.

And one of the leading topics of discussion among researchers, tech executives, and policymakers is how open-source models — which are free for anyone to use and modify — should be governed.

At the AI Action Summit in Paris earlier this year, Meta’s chief AI scientist, Yann LeCun, said he’d like to see a world in which “we’ll train our open-source platforms in a distributed fashion with data centers spread across the world.” Each data center would have access to its own data sources, which it could keep confidential, but “they will contribute to a common model that will essentially constitute a repository of all human knowledge,” he said.

That repository would be larger than anything a single entity, whether a country or a company, could handle on its own. India, for example, might be unwilling to hand a tech company a body of knowledge comprising all the languages and dialects spoken there. However, “they would be happy to contribute to training a big model, if they can, that is open source,” he said.

To achieve that vision, though, “countries have to be really careful with regulations and legislation,” he said, adding that countries should favor open source rather than impede it.

Even for closed systems, OpenAI CEO Sam Altman has said international regulation is critical.

“I think there will come a time in the not-so-distant future, like we’re not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm,” Altman said on the All-In podcast last year.

Altman believes those systems will have a “negative impact way beyond the realm of one country” and said he wants to see them regulated by “an international agency looking at the most powerful systems and ensuring reasonable safety testing.”


