US AI Safety Body to Get Early Access to OpenAI's Next Model
OpenAI Has Previously Been Criticized for Prioritizing Profits Over Safety

OpenAI is "excited" to provide early access to its next artificial intelligence foundation model to a U.S. federal body that assesses the safety of the technology, CEO Sam Altman said on Thursday.
Altman's post on X, formerly Twitter, about the company's agreement with the U.S. AI Safety Institute to "push forward the science of AI evaluations" was sparse on details. The latest deal, along with a similar one with the United Kingdom, appears to be a response to criticism that the company has prioritized building powerful, profitable AI technology over safety.
The announcement comes on the heels of OpenAI's endorsement of a Senate bill dubbed the Future of Innovation Act, which would give the AI Safety Institute the authority to set standards and guidelines for AI models. "We want the U.S. AI Safety Institute to be the global leader in this emerging field, and we welcome its growing collaboration with its counterparts in other countries," said Anna Makanju, vice president of global affairs at OpenAI, in a Wednesday LinkedIn post.
The company reportedly expedited the release of its latest AI model powering ChatGPT to meet a May deadline despite employee concerns about insufficient safety testing, just months after pledging to the White House to rigorously safety-test new versions and ensure the technology could not be misused.
The criticism of OpenAI's approach to safety also comes in light of the company effectively disbanding its "superalignment" team, which was set up to prevent AI systems from going rogue. The team's leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, quit the company over disagreements with its approach to safety, as did policy researcher Gretchen Krueger. Sutskever and Leike worked on addressing the long-term safety risks facing the company and the technology, and in a social media post, Leike criticized OpenAI's lack of support for the superalignment team.
"Over the past years, safety culture and processes have taken a back seat to shiny products," Leike said at the time. Sutskever was among the board members who in November removed Sam Altman from OpenAI only to see him reinstated as CEO five days later. Krueger said she decided to resign a few hours before her other two colleagues did, as she shared their security concerns.
Last month, it came to light that the company had failed to disclose a data breach in which a hacker reportedly stole information about its new technologies by breaking into its internal messaging systems.
Altman said in his latest post that OpenAI will also eliminate whistleblower policy terms that allegedly required employees to obtain the company's consent before disclosing information to federal authorities. Whistleblowers from the company reportedly complained in a seven-page letter to the U.S. Securities and Exchange Commission last month, alleging that OpenAI unlawfully restricted employees from alerting regulators to the AI technology's potential risks to humanity.
"We want current and former employees to be able to raise concerns and feel comfortable doing so. This is crucial for any company, but for us especially and an important part of our safety plan," Altman said. The company in May voided the non-disparagement terms for current and former employees and provisions that gave OpenAI the right to cancel vested equity, an option that he said was "never used."
Altman added that OpenAI would dedicate 20% of its compute resources to safety research, a commitment the company previously made to the now-disbanded superalignment team and reportedly left unfulfilled.