
Ex-OpenAI Worker Reveals How Company ‘Runs on Twitter Vibes’


A former OpenAI engineer has shared rare insights into the artificial intelligence company, describing how rapid expansion has created significant operational challenges while maintaining a fast-moving startup culture.

Calvin French-Owen, who worked on OpenAI’s Codex coding tool for one year before leaving three weeks ago, published a detailed blog post about his experiences at the company behind ChatGPT. French-Owen previously co-founded customer data startup Segment, which Twilio acquired for $3.2 billion in 2020.

OpenAI tripled in size from 1,000 to 3,000 employees during French-Owen’s time there, fuelled by ChatGPT’s success as the fastest-growing consumer product in history with over 500 million active users by March.

The explosive growth has caused widespread problems across the organisation. “Everything breaks when you scale that quickly: how to communicate as a company, the reporting structures, how to ship product, how to manage and organize people, the hiring processes,” French-Owen explained.

Despite becoming a large company, OpenAI retains a startup atmosphere with minimal red tape that allows employees to pursue ideas quickly. However, this approach has resulted in teams duplicating work and inconsistent code quality, as staff range from experienced Google engineers to newly graduated PhD students with limited practical coding skills.

French-Owen described the company’s main code repository as “a bit of a dumping ground” where programs frequently malfunction or run slowly. He noted that senior engineering managers are aware of these issues and are working to address them.

The company’s rapid development pace was evident when French-Owen’s 20-person team built and launched Codex in just seven weeks with almost no sleep. The product immediately gained users simply by appearing in ChatGPT’s interface, demonstrating the platform’s enormous influence.

OpenAI operates as what French-Owen called a “secretive fishbowl,” maintaining strict internal confidentiality to prevent information leaks while closely monitoring public discussions on social media. “A friend of mine joked, ‘this company runs on twitter vibes,’” he wrote.

Responding to common criticisms, French-Owen suggested the biggest misunderstanding about OpenAI involves its safety approach. While external critics worry about theoretical threats to humanity, he said the company focuses on practical safety concerns including hate speech, abuse, political manipulation, and self-harm prevention.

The company faces scrutiny from governments and competitors while serving hundreds of millions of users who depend on its technology for tasks ranging from medical advice to therapy sessions. “The stakes feel really high,” French-Owen concluded.

Written by
Sazid Kabir

