OpenAI Staffers Demand Right to Warn the Public About AI Dangers



A group of former and current OpenAI staffers wants to ensure that employees can publicly disclose the potential dangers of artificial intelligence.

On Tuesday, the group of 13 workers published an open letter arguing that AI companies “have strong financial incentives to avoid effective oversight,” including oversight of whether their technologies might cause societal upheaval. “They currently have only weak obligations to share some of this information with governments, and none with civil society,” the staffers wrote. “We do not think they can all be relied upon to share it voluntarily.”

In response, the workers are calling on AI companies to commit to four principles designed to give their employees a way to notify the public about such dangers if they ever arise. Importantly, the first principle would require AI companies to refrain from using contractual agreements to punish employees for speaking out about AI risks.

Seven former OpenAI employees signed the letter, along with four anonymous current staffers. The two other signatories are a former Google DeepMind researcher and a current DeepMind staffer. In addition, AI pioneer Geoffrey Hinton endorsed the document.

The open letter comes a few weeks after OpenAI was criticized for forcing departing employees to sign NDAs that barred them from disparaging the company for life; workers who broke the agreement stood to lose their vested equity. OpenAI later said it ditched the policy, with CEO Sam Altman claiming: “I did not know this was happening and I should have.”

Ex-employees allege that OpenAI’s leadership was fully aware of the NDA policy. “It’s concerning that they engaged in these intimidation tactics for so long and only course-corrected under public pressure,” tweeted Daniel Kokotajlo, a former OpenAI employee who signed the open letter. “It’s also concerning that leaders who signed off on these policies claim they didn’t know about them.”

The open letter goes on to say it’s crucial that employees at today’s AI companies be able to warn the public about potential dangers, especially since “no effective government oversight of these corporations” is currently in place. To that end, the group is calling on AI companies to establish an anonymous process for workers to raise AI risk concerns with corporate boards, regulators, and independent organizations.


The two other principles demand that AI companies “support a culture of open criticism” and refrain from punishing employees who publicly share “risk-related confidential information after other processes have failed.”

OpenAI didn’t immediately respond to a request for comment. But last week, the company announced it was forming a new “Safety and Security Committee” to oversee future AI projects after the previous leaders of OpenAI’s long-term safety team resigned. Altman will lead the new committee alongside OpenAI Board Chairman Bret Taylor, co-creator of Google Maps, and Nicole Seligman, former general counsel of Sony. The committee will also take input from third-party experts, such as former NSA Cybersecurity Director Rob Joyce and former US Assistant Attorney General for National Security John Carlin.
