An AI ‘godfather’ is raising a red flag over AI agents

  • Talk of AI agents is everywhere at Davos. AI pioneer Yoshua Bengio warned against them.
  • Bengio said agents with AGI-level power could lead to “catastrophic scenarios”.
  • Bengio is researching how to build non-agentic systems that could keep agents in check.

Artificial intelligence pioneer Yoshua Bengio has been at the World Economic Forum in Davos this week with a message: AI agents could end badly.

The topic of AI agents – artificial intelligence that can act independently of human input – has been one of the most popular at this year’s gathering in snowy Switzerland. The event has drawn a collection of AI pioneers and scholars to debate where AI goes next, how it should be governed, and when we might see signs of machines that can reason as well as humans – a milestone known as artificial general intelligence (AGI).

“All of the catastrophic scenarios with AGI or superintelligence happen if we have agents,” Bengio told BI in an interview. He said he believes it is possible to achieve AGI without building agentic systems.

“All of AI for science and medicine, all the things people care about, is not agentic,” Bengio said. “And we can continue to build more powerful systems that are non-agentic.”

Bengio, a Canadian research scientist whose early work on deep learning and neural networks laid the foundations for the modern AI boom, is considered one of the “godfathers of AI,” along with Geoffrey Hinton and Yann LeCun. Like Hinton, Bengio has warned of AI’s potential harms and called for collective action to mitigate the risks.

After two years of experimentation, businesses are recognizing tangible returns on their investments from AI agents, which could enter the workforce in a significant way as soon as this year. OpenAI, which does not have a presence in Davos this year, this week unveiled an AI agent that can browse the web for you and perform tasks such as booking restaurant reservations or adding groceries to your basket. Google has previewed a similar tool of its own.

The problem, as Bengio sees it, is that people will keep building agents no matter what, especially when competing companies and countries worry that others will get to agentic AI before they do.

“The good news is that if we build non-agentic systems, they can be used to control agentic systems,” he told BI.

One approach would be to build more sophisticated “monitors” that could do this, though it would require significant investment, Bengio said.

He also called for national regulation that would prevent companies from building agentic models without first proving that the systems will be safe.

“We can advance the science of safe and capable AI, but we need to acknowledge the risks, understand scientifically where they come from, and then make the technological investment to make it happen before it’s too late and we build things that can destroy us,” Bengio said.

‘I want to raise a red flag’

Before speaking with BI, Bengio appeared on a panel about AI safety alongside Google DeepMind CEO Demis Hassabis.

“I want to raise a red flag. This is the most dangerous path,” Bengio told the audience when asked about AI agents. He pointed to AI’s use in scientific discovery, such as DeepMind’s advances in protein folding, as an example of how the technology can still be profound without being agentic. Bengio said he believes it is possible to get to AGI without giving AI agency.

“It’s a bet, I agree,” he said, “but I think it’s a valuable bet.”

Hassabis agreed with Bengio that measures should be taken to mitigate the risks, such as cybersecurity protections or testing agents in simulation before releasing them. But this would only work if everyone agreed to build them the same way, he added.

“Unfortunately, I think there is an economic gradient, beyond the science and the work, that people want their systems to be agentic,” Hassabis said. “When you say ‘recommend a restaurant,’ why wouldn’t you want the next step, which is to book the table?”