OpenAI, the company behind the popular artificial intelligence (AI) chatbot ChatGPT, says it will form a new team to rein in and manage the risks of superintelligent AI systems.
In a July 5 announcement on its blog, OpenAI said the new team will be created to “steer and control AI systems much smarter than us.”
The company said in its statement that it believes superintelligence will be “the most impactful technology humanity has ever invented” and could help solve many problems, though it also carries risks.
“The vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

OpenAI said it believes superintelligence could arrive this decade.
OpenAI said it will dedicate 20% of the compute power it has secured to date to the effort and aims to build a “human-level” automated alignment researcher. This automated researcher would, in theory, help the team manage superintelligence safety and align it with “human intent.”
OpenAI named its chief scientist, Ilya Sutskever, and Jan Leike, the research lab’s head of alignment, as co-leaders of the effort. It also issued an open call for machine learning researchers and engineers to join the team.
OpenAI’s announcement comes as governments around the world consider measures to control the development, deployment and use of AI systems.
Regulators in the European Union are among the furthest along in implementing AI regulations. On June 14, the European Parliament passed its version of the EU AI Act, legislation that would require systems like ChatGPT to disclose all AI-generated content, among other measures.
The bill’s details still need to be negotiated in the Parliament before it takes effect. Nonetheless, it has sparked an outcry from AI developers over its potential to stifle innovation.
In May, OpenAI CEO Sam Altman traveled to Brussels to speak with EU regulators about the potential negative effects of over-regulation.
Lawmakers in the United States have introduced a National AI Commission Act, which would establish a commission to decide the nation’s approach to AI. U.S. regulators have also been outspoken about their desire to regulate the technology.
On June 30, Senator Michael Bennet drafted a letter to major tech companies, including OpenAI, urging them to label AI-generated content.