13 former and current OpenAI employees, with endorsements from the "Godfathers of AI," outline 4 key measures to address AI risks
OpenAI employees are taking it upon themselves to regulate and address some of the risks arising from the rapid prevalence of AI.
What you need to know
In May, a handful of OpenAI employees departed from the company, including superalignment lead Jan Leike, after it unveiled its "magical" new flagship GPT-4o model at its Spring Update event.
Leike indicated his departure was fueled by constant disagreements over security, monitoring, and the prioritization of shiny products. Consequently, this opened a can of worms for the hot startup, with former OpenAI board members reporting incidents of psychological abuse involving CEO Sam Altman.
There are major concerns around generative AI, including the imminent end of humanity as the landscape progresses, coupled with reports of AI taking over our jobs and turning work into hobbies. Current and former employees at top AI companies, including OpenAI, Anthropic, and DeepMind, have penned a letter addressing some of the risks centered on the technology (via Business Insider).
The letter seeks protection for whistleblowers on issues that may pose imminent danger to humanity:
“We are current and former employees at frontier AI companies, and we believe in the potential of AI technology to deliver unprecedented benefits to humanity. We also understand the serious risks posed by these technologies. These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction. AI companies themselves have acknowledged these risks, as have governments across the world, and other AI experts.”
The letter has been signed by 13 employees from top AI firms and endorsed by the so-called "Godfathers of AI," Yoshua Bengio and Geoffrey Hinton. Daniel Kokotajlo, a former OpenAI employee, indicated that he left the company because he'd lost hope in its values, specifically its sense of responsibility while making AI advances:
“They and others have bought into the ‘move fast and break things’ approach and that is the opposite of what is needed for technology this powerful and this poorly understood.”
The AI evangelists highlight four core demands that could potentially address some of the issues and risks riddling the technology, asking that AI companies commit to the following:

- Not enter into or enforce agreements that prohibit criticism of the company over risk-related concerns, or retaliate by withholding vested economic benefits.
- Facilitate a verifiably anonymous process for employees to raise risk-related concerns to the company's board, regulators, and independent experts.
- Support a culture of open criticism, allowing employees to raise risk-related concerns publicly as long as trade secrets are protected.
- Not retaliate against employees who publicly share risk-related confidential information after other processes have failed.
This is in the wake of OpenAI reportedly forcing departing employees to sign NDAs, preventing them from criticizing the company or risk losing their vested equity. OpenAI CEO Sam Altman admitted he was embarrassed about the situation but indicated the company never clawed back anyone's vested equity.
While speaking to Business Insider, an OpenAI spokesman indicated that the debate around the technology is important and raises crucial points. As such, OpenAI will work closely with relevant entities to ensure it continues "providing the most capable and safest AI systems" and to bolster its scientific approach to addressing these risks.
Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You’ll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.