Why Was OpenAI’s Sam Altman Fired? These New Details Worry Me

For better or worse, the Sam Altman-led OpenAI is almost always in the headlines. Last year, Altman was fired from the company, only to be reinstated a few days later. Recently, there was quite the kerfuffle over the hot AI startup allegedly using actress Scarlett Johansson’s voice for the new conversational mode in GPT-4o without her consent.

While that controversy has still not subsided, OpenAI has taken the internet by storm for all the wrong reasons, all over again. Now, ex-OpenAI board members have brought to light the actual reasons behind Altman’s firing, hinting at why, in their view, he should have stayed fired.

From Non-Profit to For-Profit?

So, OpenAI started out as a non-profit body, with the vision of making AGI (Artificial General Intelligence) accessible and beneficial to humanity. While it eventually added a for-profit unit to secure the funding it required, it was the non-profit mission that dominated the company’s ethos.

However, under Altman’s leadership, the profit-making vision has started taking over instead. That’s what ex-board members Helen Toner and Tasha McCauley suggest. A new exclusive interview with Toner on The TED AI Show is making the rounds on the internet.

❗EXCLUSIVE: “We learned about ChatGPT on Twitter.” What REALLY happened at OpenAI? Former board member Helen Toner breaks her silence with shocking new details about Sam Altman’s firing. Hear the exclusive, untold story on The TED AI Show. Here’s just a sneak peek: pic.twitter.com/7hXHcZTP9e — Bilawal Sidhu (@bilawalsidhu), May 28, 2024

Toner says,

“When ChatGPT came out November 2022, the board was not informed in advance about that. We learned about ChatGPT on Twitter. Sam didn’t inform the board that he owned the OpenAI startup fund, even though he was constantly claiming to be an independent board member with no financial interest in the company.”

This hits like a truck, especially since ChatGPT was basically the inflection point of the AI chaos we’re seeing today. Such an important launch being hidden from the board members themselves is undeniably shady.

She further states that Altman fed the board “inaccurate information” on “multiple occasions”, masking the safety processes at work behind the company’s AI systems. As a result, the OpenAI board was completely oblivious to how well these safety processes even worked in the first place. You can listen to the complete podcast here.

No Safety for the AI Trigger

Building AI responsibly should always be one of the topmost priorities for these companies, especially since things can go “horribly wrong”. Ironically, this isn’t something I’m saying; it comes straight from Altman’s own mouth.

Most importantly, this surprisingly falls in line with Musk’s side of the story. Not too long ago, Elon Musk sued OpenAI, claiming that the company had abandoned its original mission and become profit-oriented.

Writing in The Economist, the ex-board members state their concern that Sam Altman’s return led to the departure of safety-focused talent, dealing a serious blow to OpenAI’s self-governance policies.

They also believe that there should be government intervention for AI to be built responsibly. Following the controversy, OpenAI recently formed a Safety and Security Committee, stating, “This new committee is responsible for making recommendations on critical safety and security decisions for all OpenAI projects; recommendations in 90 days.”

And, guess what? This vital committee includes Sam Altman too. While I don’t want to believe all the accusations, if they’re true, we’re in serious trouble. I don’t think any of us want Skynet to become a reality.

Besides, a week ago, Jan Leike, the co-head of Superalignment at OpenAI, resigned over safety concerns and has since joined Anthropic, a rival firm. However, he didn’t leave silently, dropping his side of the story in detail on his X handle.

Of all the things he said, “OpenAI must become a safety-first AGI company” was another hard pill to swallow, for it clearly implies that the company is currently not on the right trajectory.

To all OpenAI employees, I want to say: Learn to feel the AGI. Act with the gravitas appropriate for what you’re building. I believe you can “ship” the cultural change that’s needed. I am counting on you. The world is counting on you. :openai-heart: — Jan Leike (@janleike), May 17, 2024

He also emphasizes that we really need to buckle up and “figure out how to steer and control AI systems much smarter than us.” However, that’s not the only reason Leike left. He also wrote,

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.

A Toxic Exit for Employees

While Toner and the other ex-OpenAI folks have been publicly revealing shocking facts about the company lately, they also suggest that they “can’t say everything”.

Last week, a Vox report revealed how former OpenAI employees were forced to sign strict non-disclosure and non-disparagement agreements, a breach of which would cause them to lose all their vested equity in the company. We’re talking millions here, and I don’t think anyone would want to lose that.

Specifically, this agreement prevents former OpenAI employees from criticizing the company or even talking to the media about it. While Altman took to X to say that he didn’t know of this clause in OpenAI’s NDA, I don’t think anyone buys it.

Even if we take Altman at his word, it goes to show how disorganized a body as important as OpenAI is, which only lends further weight to all those accusations.

in regards to recent stuff about how openai handles equity: we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement). vested equity is vested equity, full stop. there was… — Sam Altman (@sama), May 18, 2024

Is the Future of AI in the Wrong Hands?

It’s sad that the very board members who once joined hands with the company’s vision are now against it. While that may or may not have anything to do with them leaving the board upon Altman’s return, if these accusations are to be believed, they’re quite frightening.

We have several movies and TV shows that showcase how AI can get out of hand. Moreover, it’s not just OpenAI chasing AGI. Industry giants like Google DeepMind and Microsoft are also injecting AI into almost all of their products and services. This year’s Google I/O even hilariously tallied the number of times “AI” was said throughout the event: over 120.

On-device AI is the next big step forward, and we’re already seeing early implementations of it with the Recall feature for the next-gen Copilot Plus PCs. That has raised a whole lot of privacy concerns too, since the feature actively takes screenshots of your screen to build a local vector index.
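For the curious, here’s a minimal, hypothetical sketch of what a Recall-style pipeline could look like: periodically capture the screen, extract the visible text, embed it, and store the embeddings in a local index for semantic search. The library choices (mss, pytesseract, sentence-transformers) are my own assumptions for illustration, not Microsoft’s actual implementation.

```python
# Hypothetical Recall-style pipeline: screenshot -> OCR -> embed -> local
# vector index. Illustrative only; NOT Microsoft's actual implementation.
import time
import numpy as np
from mss import mss                              # cross-platform screenshots
from PIL import Image
import pytesseract                               # OCR for on-screen text
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model
index = []  # (timestamp, text, embedding) tuples, kept entirely on-device

def capture_and_index():
    """Grab the primary monitor, OCR the visible text, and index it locally."""
    with mss() as sct:
        shot = sct.grab(sct.monitors[1])         # monitor 1 = primary display
    img = Image.frombytes("RGB", shot.size, shot.rgb)
    text = pytesseract.image_to_string(img).strip()
    if text:
        emb = model.encode(text, normalize_embeddings=True)
        index.append((time.time(), text, emb))

def search(query, k=3):
    """Return the k snapshots whose text is semantically closest to the query."""
    q = model.encode(query, normalize_embeddings=True)
    scored = sorted(index, key=lambda row: -float(np.dot(row[2], q)))
    return [(ts, txt[:80]) for ts, txt, _ in scored[:k]]

# Example usage: snapshot every 30 seconds, then search your screen history.
# while True:
#     capture_and_index()
#     time.sleep(30)
# print(search("that article about OpenAI"))
```

Even in this toy version, you can see why privacy folks are uneasy: everything that appears on your screen ends up in a searchable database, so the safeguards around where that index lives and who can read it matter enormously.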

In other words, AI is here to stay, whether you like it or not. What truly matters is how responsibly we develop and use it, ensuring that it serves us rather than governs us. So, is the future of AI in the wrong hands? Especially when AI labs are pulling out all the stops to give it more power and data, and, to remind you, AI is multimodal now.

What do you think of these new revelations? Do they keep you up at night like they did me? Let us know your opinion in the comments below.

Sagnik Das Gupta

Sagnik is a tech aficionado who can never say “no” to dipping his toes into unknown waters of tech or reviewing the latest gadgets. He is also a hardcore gamer, having played everything from Snake Xenzia to Dead Space Remake.
