- I’ve got a new framework for you: for every project that matters to you, determine if time is on your side or against you.
- We discuss the mess that happened at OpenAI over the weekend, resulting in CEO Sam Altman being fired from his own company.
- Is there anything early-stage founders can learn from the coup at OpenAI regarding co-founder relationships and setting up company structure?
When building something like a foundational AI model, time is your foe. As time goes on, regulation tightens and competition intensifies. The industry gravitates toward a few dominant players thanks to economies of scale in this compute-intensive market, access to funding dwindles, and venture capitalists, together with cloud providers, increasingly back proven winners.
But when building a vertical business that uses AI, time is probably your friend. The reason is that as technology evolves, it becomes simpler and more efficient to develop new products.
In 1990, a group of top designers and developers, many from Apple, started a company called General Magic. They dreamed of a future where computers were easy-to-use, handheld devices that could connect wirelessly to online services. One of the bright minds there was Tony Fadell, who later helped make the iPod. But back in the 1990s, to make their dream a reality, they needed to invent a lot of basic hardware and software that didn’t exist yet and wouldn’t for another 10 to 20 years. They managed to make some of it and even released a product. Ultimately, they failed. In a way, they were trying to make something like the iPhone—but they were about fifteen years too early.
My previous startup didn’t succeed, partly for a similar reason. Back in 2015, we spent more than two years creating a chatbot, which turned out to be less effective than what you could easily build in under an hour now in 2023.
In 2023, entering the AI industry as an employee is challenging because time is your foe. The longer you wait, the more crowded the field gets. This mirrors my experience with product engineering. I began my professional career during the early days of the startup boom, after the rise of Facebook and Twitter but before the emergence of companies like Uber, Airbnb, Snap, Slack, Stripe, and others. It was easier to get in then. There was such high demand for engineers that tech companies would interview almost anyone with basic skills. However, more than a decade later, as my sister tries to break into the market as a junior product designer, the competition is far tougher, and the required skill level is much higher. It’s not enough to be lucky now; you have to be skilled, even great, to get an opportunity—even though you’re just starting out!
This is why I suggest a straightforward framework: for every project that matters to you, determine if time is on your side or against you, and make your moves based on that. Some folks always act urgently just for the sake of it, but as the examples above show, haste doesn’t always lead to success. On the other hand, being cautious and taking your time doesn’t always work either, especially when quick and decisive actions are needed.
Markets are changing more rapidly today than they did in the past. This swift transformation can be due to factors like new technologies becoming more affordable or significant shifts in human behavior. Investors are especially keen on these rapidly evolving markets because a quickly changing market presents larger opportunities for investment growth. That’s why many investors focus on the “Why now” section in the pitch decks of companies they’re considering investing in.
However, it’s also true that some human behaviors never change. Projects that assume they will often fail, particularly in consumer tech, an industry reliant on significant shifts in consumer habits. Most founders in consumer tech will tell you that while customer needs remain largely constant, the methods to meet these needs can change rapidly. The issue arises when you confuse a temporary solution for a fundamental need or think that time isn’t working against you. Misjudging these can lead to a swift exit from the market.
Perfect timing is tricky to master, but once you get it right, you can be unstoppable.
The coup at OpenAI
I’m usually cautious about discussing topics like this on Before Growth since our main focus is on startups still searching for product-market fit, and OpenAI is clearly beyond that stage. Generally, I bring up big companies only when they create tools that small startups can use. OpenAI’s API platform and models, including GPT-4, fit this category, having become a key tech layer many startups rely on.
On Friday, in an unexpected turn of events, OpenAI dismissed CEO Sam Altman. This decision prompted President Greg Brockman and three top scientists to resign. The move also caught Microsoft, a major investor and minority owner, off guard, reportedly angering CEO Satya Nadella. As the night progressed, there were reports suggesting that Chief Scientist and co-founder Ilya Sutskever, who is also on the board, might have orchestrated the ousting due to worries about the safety and pace of OpenAI’s technology deployment.
“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the board’s announcement reads, in part. “The board no longer has confidence in his ability to continue leading OpenAI.”
It seems that Sam was dismissed by the board without explanation to the rest of the company. The leadership team called for the board’s resignation, which didn’t happen, and Mira Murati, initially named interim CEO, was herself replaced.
The board approached Anthropic CEO Dario Amodei, ex-GitHub CEO Nat Friedman, and Scale AI CEO Alex Wang for the role, but they all declined. Emmett Shear, former Twitch CEO, was appointed as interim CEO.
Over 700 employees are now insisting on the board’s resignation, threatening to move to Microsoft otherwise. However, they remain open to staying at OpenAI under a competent board.
For early-stage founders, this situation presents a learning opportunity for two reasons:
- Founder dynamics
- Corporate structure and board alignment
The full details aren’t public yet, and there’s a lot of ongoing reporting and speculation. However, it’s increasingly looking like the core issue behind the upheaval at OpenAI is a disagreement among the founders about the company’s direction.
Actually, I began writing this article early Saturday but paused as the situation was still evolving. Now, there’s even talk about the board wanting to bring Altman back. Geez. (Hence the daily annotations.)
After weekend discussions about possibly reinstating Altman at OpenAI, Microsoft CEO Satya Nadella declared that Altman and Brockman would head Microsoft’s new cutting-edge AI research team. Altman will serve as the CEO of this new division.
OpenAI began as a non-profit research lab, with Altman, Brockman, Sutskever, and Elon Musk as co-founders. As CEO, Sam led a shift towards more commercial activities. This change intensified after Elon Musk was pushed out in 2018 by the rest of the team, and OpenAI adopted a capped-profit model in 2019. That same year, they received a $1 billion investment from Microsoft. In return, a fairly substantial limit was set on potential investor returns: a cap of 100 times the investment. This means Microsoft could anticipate a maximum return of $100 billion.
Altman played a key role in developing ChatGPT, now a standout in generative AI, at a time when many at the company were skeptical about its potential. The technology behind GPTs, known as the transformer, wasn’t new; Google researchers introduced it in 2017. Also, the idea of chatbots seemed unpromising, especially after their largely unsuccessful surge around 2015.
However, ChatGPT turned out to be a massive success, and since then, OpenAI has rapidly accelerated its development. They built upon ChatGPT with GPT-4, followed by Whisper, DALL-E 3, and other models in a series of quick advancements that amazed much of the tech industry.
Setting aside rumors and extreme speculation, such as the unlikely claim that OpenAI has developed AGI, I’ve been able to assemble three credible scenarios based on the available information:
- Sutskever may have been alarmed by how the commercialization efforts were undermining the original mission of the then non-profit company to ensure the safe and responsible deployment of AI. There’s no denying that OpenAI’s rapid pace of product releases hints at a potential cultural split within the company, possibly between the Product and Research teams, or more broadly, between those favoring rapid development and those advocating for a more cautious approach.
- Sam Altman hails from the startup world, while Ilya Sutskever has an academic background. I’ve observed another potential divide on Twitter this week, stemming from these differing perspectives. People in the startup sphere were thrilled about OpenAI’s recent developer day, where new products and APIs were unveiled for creating innovative downstream products. There’s excitement over developing thousands of bots, like a Steve Jobs-inspired chatbot that offers design feedback mimicking the Apple founder’s style. However, some dedicated researchers I follow on Twitter view this as, well, bullshit. Generative AI requires significant computing power, and they question the value of using it for toys like bots that teach creative writing. Sutskever might have been critical of the resources spent on such projects, seeing them as a deviation from the quest for AGI and a misuse of Azure credits.
- While leading OpenAI, Altman was also reportedly seeking to raise substantial funds from Middle Eastern sovereign wealth funds to start a company specializing in AI chips, aiming to rival NVIDIA. Additionally, he was engaging with Jony Ive and SoftBank chairman Masayoshi Son for a multibillion-dollar investment in this new venture focused on AI hardware. If these projects were separate from OpenAI and not subject to its non-profit governance structure, Sutskever and the board might have been concerned. They could have seen this as an accumulation of too much power in one person’s hands or as a means to bypass OpenAI’s safety protocols for AI, prioritizing speed and personal influence over responsible AI development.
I’m not sure which, if any, of these scenarios is accurate. They might all have some truth to them, or perhaps none are completely correct. My role isn’t to pass judgment, and frankly, the exact details may not be crucial for our discussion. What’s evident is that there seems to be a fundamental misalignment among the company’s founders, leading to the board becoming entangled in this dispute.
Altman’s transition to Microsoft is not yet finalized, and Ilya Sutskever’s recent flip to supporting Altman means two more board members would still need to reconsider their positions. Utter craziness.
Ilya expressed on Twitter his profound regret for being involved in the board's decisions. He clarified that he never meant to damage OpenAI, expressing his deep affection for their collective achievements. He committed to doing everything in his power to bring the company back together.
Might be too little, too late. I’m uncertain about what to believe at the moment. It could be buyer’s remorse, but it appears that Sutskever’s vote was pivotal, given the rest of the board’s firm stance. Had there been complete agreement among the co-founders, this likely wouldn’t have happened.
This is not unheard of. Starting and successfully running a business is challenging, with statistics not in your favor. About 20% of new businesses fail within the first two years, 45% within five years, and 65% don’t make it past ten years. These figures have remained fairly consistent since the 1990s. And according to Noam Wasserman, author of The Founder’s Dilemmas, 65% of startup failures are due to conflicts between founders. This highlights that for your venture to succeed, learning how to effectively collaborate and, crucially, how to handle disagreements with your business partners is essential.
I’ve also faced conflicts with co-founders in my own experiences. From what I’ve seen, these disputes often trace back to the early stages of a company’s life. Take OpenAI as an example: two of the three likely causes for their recent issues are tied to the division between their non-profit and for-profit segments, a decision made early in their journey. And remember, we’re talking about highly skilled and experienced founders here!
A typical situation among founders goes like this: Caught up in the excitement and potential, they overlook certain differences, bending their long-standing principles to make the project succeed. Once they secure funding and launch the product, things start to look up, so any underlying tension is swiftly brushed aside. Why risk a thriving venture over a few disagreements? However, as time passes, growth slows, challenges arise, fatigue sets in, or significant financial stakes come into play, and these once-minor disagreements transform into major conflicts.
Arguably, the ideal way to start a business might be to do it solo, if feasible, ensuring total alignment. Indeed, some, like Jeff Bezos, have successfully launched startups on their own. However, any founder will tell you that the early stages are incredibly demanding, with many tasks to manage. Having a co-founder to support you during tough times can be invaluable. Additionally, investors often feel more confident with multiple founders as it reduces the risk associated with relying on a single person. So, unless you're like Jeff Bezos—and if you are, congratulations and please share your contact details so that I can send you a wire to invest—the reality is you’ll need to navigate the complexities of working with co-founders, and it’s not always going to be smooth sailing.
OpenAI’s structure is… unique. Initially, it was founded as a genuine non-profit, aiming to safely advance AGI. Later, under the leadership of Sam Altman—who, it’s worth noting, also seems to be a firm believer in many of OpenAI’s non-commercial objectives—the organization transformed into a hybrid model. This shift allowed OpenAI to generate profits and provide returns to investors, even potentially considering an IPO in the future. However, investors were cautioned that OpenAI might never be profitable, as its primary focus isn't on profit generation. They were advised to view their investments more as philanthropic contributions than as typical investments expecting financial returns.
Even OpenAI’s major investors don't have board representation. Sam Altman himself held no equity in the company, a scenario that’s quite rare in our industry. The idea behind this setup was to attract investors who aligned with the company’s unique vision. By accepting this unusual governance model, it was believed that these investors would help OpenAI maintain its focus on its core mission.
Again, there are two perspectives to consider regarding this structure:
- From a startup-focused viewpoint: Common advice from investors to startups is to pinpoint a single key area of disruption and limit innovation to it, while keeping other aspects conventional. For instance, if you’re a real estate startup, focus solely on revolutionizing real estate, rather than also trying to innovate in corporate structure or HR practices. Disrupting one area is challenging enough, and standardization elsewhere can actually be beneficial, particularly for young companies. Imagine creating a unique career progression system for your employees. If any problems arise with this system, your investors or industry experts might struggle to assist, as it’s an uncommon setup. It could end up being a distraction.
- From a nerd-oriented perspective: Unique companies like OpenAI aren’t built by traditional methods. People driven solely by financial gain and operating within a standard Delaware C Corporation structure likely couldn’t have developed something like ChatGPT. For innovators, the mission is crucial. It’s probably fair to say that OpenAI would have found it much harder to attract top AI researchers from a place like Google, too, if it operated just like Google. Its mission-focused structure, though potentially volatile to investors, was appealing to the builders deeply committed to the cause—often the most talented individuals in their fields. If you’re hired to develop AGI, truly believe in its feasibility, and think that p(doom) is non-zero, then working under a non-profit board that can apply brakes when needed seems like a sensible choice.
There's also the aspect of soft power to think about. Although OpenAI’s structure was intentionally designed to differ from that of Big Tech, it seems this distinction was more theoretical than practical. Despite what charters and agreements might suggest, their real-world impact was limited. Theoretically, the board had the authority to dismiss Altman, but in reality, he was the figure that both investors and employees had invested their trust and resources in when they committed to OpenAI.
For founders, there’s a key takeaway as well. My suggestion is to stick to the basics if you can. Let’s say you’re a standout compared to the average founder—perhaps you’ve already successfully exited a major startup—so attracting top talent is easier for you than for a newcomer. However, if you’re not in that position, or even if you are a high-achiever but aim to recruit the kind of elite talent that giants like Google, Microsoft, and Facebook fight for, you might need to make some concessions. Presenting a distinctive and compelling vision could be the key to winning them over.
Just make sure not to blow up your own company.
💵 Consider showing your love in the tip jar if you enjoy my newsletter.
🎙️ Book a 1:1 call with me. Are you considering a startup idea? Do you want to deep dive into the topics we discuss here? Let’s talk!
Did a friend forward this to you? Subscribe now to get the latest updates delivered straight to your inbox every week.
Ready to catch the next wave first?
If you’re serious about startups and want to get shit done, you’re in the right place. Subscribe for hand-picked startup intel that’ll put you ahead of the curve, straight from one founder to another.