
Renowned computer scientist Yoshua Bengio, a pioneer of artificial intelligence, has warned of the potential negative effects of the emerging technology on society and called for more research to mitigate its risks.
Bengio, a professor at the University of Montreal and director of the Montreal Institute for Learning Algorithms, has won numerous awards for his work in deep learning, a subset of AI that attempts to mimic human brain activity to learn to recognize complex patterns in data.
But he worries about the technology and warns that some people with “a lot of power” might even want to see humanity replaced by machines.
“It’s really important to look ahead to a future where we have machines that are as smart as us in many ways, and what would that mean for society,” Bengio told CNBC’s Tania Bryer at the One Young World summit in Montreal, a gathering of young leaders addressing the challenges facing the world today.
Machines could soon possess most human cognitive abilities, he said. Artificial general intelligence (AGI) is a type of AI technology that aims to match or exceed human intellect.
“Intelligence gives power. So who will control this power?” he said. “Having systems that know more than most people can be dangerous in the wrong hands and create more instability on a geopolitical level, for example, or on a terrorist level.”
According to Bengio, a limited number of organizations and governments will be able to afford to build powerful AI machines, and the bigger the systems, the smarter they become.
“These machines, you know, cost billions to build and train (and) very few organizations and very few countries will be able to do it. That’s already happening,” he said.
“There is going to be a concentration of power: economic power, which can be bad for markets; political power, which could be bad for democracy; and military power, which could be bad for the geopolitical stability of our planet. These are open questions that we must study carefully and begin to mitigate as soon as possible.”
We don’t have methods to ensure that these systems won’t harm people or backfire on people… We don’t know how to do that.
Yoshua Bengio
Director of the Montreal Institute for Learning Algorithms
Such systems could arrive within a few decades, he said. “But if it’s five years, we’re not ready… because we don’t have methods to ensure that these systems won’t harm people or backfire on people… We don’t know how to do that,” he added.
Some arguments suggest that the way AI machines are currently trained “would lead to systems that backfire on humans,” Bengio said.
“Besides, there are people who might want to abuse this power, and there are people who might be happy to see humanity replaced by machines. I mean, it’s a fringe, but these people can have a lot of power, and they can act on it unless we put the right safeguards in place now,” he said.
AI Guidance and Regulation
Bengio endorsed an open letter in June entitled “A Right to Warn About Advanced Artificial Intelligence.” It was signed by current and former employees of OpenAI, the company behind the viral AI chatbot ChatGPT.
The letter warned of “serious risks” linked to advances in AI and sought input from scientists, policymakers and the public on how to mitigate them. OpenAI has been the subject of growing safety concerns in recent months, with its “AGI Readiness” team disbanded in October.
“The first thing governments need to do is have regulations that require (companies) to register when they build these frontier systems — the biggest ones, which cost hundreds of millions of dollars to train,” Bengio told CNBC. “Governments should know where they are, you know, know the specifics of these systems.”
With AI evolving so quickly, governments need to “get a little creative” and develop legislation that can adapt to technological changes, Bengio said.
It is not too late to guide the evolution of societies and humanity in a positive and beneficial direction.
Yoshua Bengio
Director of the Montreal Institute for Learning Algorithms
Companies developing AI must also be responsible for their actions, according to the computer scientist.
“Liability is also another tool that can force (companies) to behave, because… if it’s their money that’s at stake, the fear of being sued is going to push them to do things that protect the public. If they know they can’t be sued — because right now it’s kind of a gray area — then they won’t necessarily behave well,” he said. “(The companies) are competing against each other and, you know, they think the first one to get to AGI is going to dominate. So it’s a race, and it’s a dangerous race.”
The process of legislation to secure AI will be similar to how rules have been developed for other technologies, such as planes or cars, Bengio said. “To realize the benefits of AI, we need to regulate. We need to put safeguards in place. We need to have democratic control over how the technology is developed,” he said.
Disinformation
The spread of disinformation, particularly around elections, is a growing concern as AI develops. In October, OpenAI said it disrupted “more than 20 deceptive operations and networks from around the world that have attempted to use our models.” These include social posts from fake accounts generated before elections in the United States and Rwanda.
“One of the biggest concerns in the short term, but one that will grow as we move toward better systems, is misinformation, disinformation, the ability of AI to influence policy and opinions,” Bengio said. “As we move forward, we will have machines that can generate more realistic images, more realistic voice imitations, more realistic videos,” he said.
This influence could extend to interactions with chatbots, Bengio said, referring to a study by Italian and Swiss researchers showing that OpenAI’s GPT-4 large language model can be more persuasive than a human at changing people’s minds. “This is just a scientific study, but you can imagine there are people reading this who want to do this to interfere with our democratic processes,” he said.
The “hardest question of all”
According to Bengio, the “hardest question of all” is: “If we create entities that are more intelligent than us and have their own goals, what does that mean for humanity? Are we in danger?”
“These are all very difficult and important questions, and we don’t have all the answers. We need a lot more research and precautions to mitigate potential risks,” Bengio said.
He urged people to act. “We have the power to act. It is not too late to steer the evolution of societies and humanity in a positive and beneficial direction,” he said. “But for that, we need enough people who understand both the benefits and the risks, and we need enough people to work on the solutions. And the solutions can be technological, they can be political, but we need enough effort in these directions right now,” Bengio said.
– CNBC’s Hayden Field and Sam Shead contributed to this report.
