Ex-OpenAI exec calls out Sam Altman for choosing ‘shiny products’ over AI safety

  • A top OpenAI safety executive quit on Tuesday.

  • Jan Leike said he reached a “breaking point.”

  • Leike added that Sam Altman's company was prioritizing “shiny products” over safety.

A former top safety executive at OpenAI is laying it all out.

On Tuesday night, Jan Leike, a leader on the artificial intelligence company’s superalignment group, announced he was quitting with a blunt post on X: “I resigned.”

Now, three days later, Leike shared more about his exit — and said OpenAI isn’t taking safety seriously enough.

“Over the past years, safety culture and processes have taken a backseat to shiny products,” Leike wrote in a lengthy thread on X on Friday.

In his posts, Leike said he joined OpenAI because he thought it would be the best place in the world to research how to “steer and control” artificial general intelligence (AGI), the kind of AI that can think faster than a human.

“However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” Leike wrote.

The former OpenAI exec said the company should be keeping most of its attention on issues of “security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics.”

But Leike said his team — which was working on how to align AI systems with what’s best for humanity — was “sailing against the wind” at OpenAI.

“We are long overdue in getting incredibly serious about the implications of AGI,” he wrote, adding that, “OpenAI must become a safety-first AGI company.”

Leike capped off his thread with a note to OpenAI employees, encouraging them to shift the company’s safety culture.

“I am counting on you. The world is counting on you,” he said.

OpenAI CEO Sam Altman responded to Leike’s thread on X.

“I’m super appreciative of [Jan Leike]’s contributions to OpenAI’s alignment research and safety culture, and very sad to see him leave,” Altman said on X. “He’s right we have a lot more to do; we are committed to doing it. I’ll have a longer post in the next couple of days.”

He ended the message with an orange heart emoji.


Resignations at OpenAI

Both Leike and Ilya Sutskever, the other superalignment team leader, left OpenAI on Tuesday within hours of each other.

In a statement on X, Altman praised Sutskever as “easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend.”

“OpenAI would not be what it is without him,” Altman wrote. “Although he has something personally meaningful he is going to go work on, I am forever grateful for what he did here and committed to finishing the mission we started together.”

On Friday, Wired reported that OpenAI had disbanded the pair’s AI risk team. Researchers who were investigating the dangers of AI going rogue will now be absorbed into other parts of the company, according to Wired.

The AI company — which recently debuted a new large language model, GPT-4o — has been rocked by high-profile shakeups in the last few weeks.

In addition to Leike and Sutskever’s departures, Diane Yoon, vice president of people, and Chris Clark, the head of nonprofit and strategic initiatives, have left, according to The Information. And last week, BI reported that two other researchers working on safety quit the company.

One of those researchers later wrote that he had lost confidence that OpenAI would “behave responsibly around the time of AGI.”

Read the original article on Business Insider
