Here’s another productive fiction I swear by: No work is ever wasted. For those who missed my earlier article, “productive fictions” are beliefs or moral frameworks that might not withstand intense scrutiny, but we embrace them because they drive positive outcomes.
I almost folded this idea into my last post, but it’s so critical for early-stage entrepreneurs that it deserves its own spotlight. When you’re just starting out, you’re essentially swimming in a sea of potentially wasted effort. Sure, work that never leaves your desk doesn’t cost much, but what about when you’ve launched, poured your heart into it, and still crashed and burned? The fallout isn’t just emotional; it can be financial, too.
As for me, navigating my previous startup was like walking a jagged trail. Though we moved forward, we never seemed to hit our stride; it was as if we had to wrest every success from an unforgiving world. Our early years were a cycle of iterations with no revenue until we finally developed a prototype that gained traction. Despite securing funding, we paid ourselves meager salaries, choosing to reinvest everything back into the business. Meanwhile, as remote work reshaped the tech landscape, the paychecks of my friends in more stable jobs skyrocketed; some earned ten times what I made. For nearly seven years, I barely managed to take two consecutive days off. Adding to the stress, a politics-related delay in our next round of government funding for our AI research left us financially strapped but committed to our team. Despite our best efforts, we ultimately couldn’t make it work.
It’s easy to slide into the mindset that all that hard work was for nothing and that, to save face, it’s better not to try again. This kind of thinking is a trap—rooted in binary, defeatist attitudes that unfairly project past failures onto future possibilities. After wrestling with it myself, I’ve chosen not to believe it. Here’s why:
No work is ever wasted because:
- You accumulate invaluable knowledge that sticks with you, even if the current project tanks.
- First-hand experience informs your future decisions, offering insights you wouldn’t have otherwise.
- New connections can emerge from any endeavor, and you never know which relationship might flourish down the line.
- True builders—those who’ve been in the trenches—respect the hustle and will remember you as one of their own, not as a failure.
- There’s a cumulative effect to improving your knowledge, skills, and experience that can give you a leg up on those just starting out.
- Previous efforts often spark future innovations: successful pivots or entirely new projects can arise from gaps your past ventures exposed. For instance, if your gaming company fails, you might realize the industry lacks efficient ways to prototype creative assets, and that insight could become your next venture.
The upside to adopting this productive fiction is monumental, while the downside is minimal. What’s the worst-case scenario for believing that no work is ever wasted? Even if you don’t strike gold, you’ll amass a wealth of knowledge that makes you an attractive candidate in the job market. Maybe it’s not your ultimate dream, but it’s not a bad place to land, is it?
In my case, I got a role at Plane, a Y Combinator-backed startup that’s further along than my own startup ever got—but not too far. I joined intentionally to learn how to scale a business beyond what I achieved before. My founder experience was a plus for them; they value that mindset in their team.
Not that bad, after all.
Sequoia revisits their Gen AI thesis
Sequoia has followed up on its one-year-old hypothesis about the game-changing potential of generative AI, dubbing the new analysis Act Two.
The firm’s primary insight? While generative AI has no shortage of use cases or customer interest, it’s struggling to maintain user retention and daily engagement. In one-month mobile app retention, AI-centric apps lag behind established companies. And in daily active users as a share of monthly active users, generative AI apps show a median ratio of just 14%, well below the 60–65% seen in top consumer companies and WhatsApp’s 85%. (The exception lies in the “AI Companionship” category, represented by apps like Character.)
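To make the stickiness metric concrete, here is a small sketch of the DAU/MAU ratio the report leans on. The numbers are hypothetical, chosen only to mirror the percentages cited above; they are not Sequoia’s raw data.

```python
# The DAU/MAU ratio measures what fraction of a product's monthly users
# show up on a typical day, i.e. how habitual the product is.

def dau_mau_ratio(daily_active: int, monthly_active: int) -> float:
    return daily_active / monthly_active

# A hypothetical AI app at the reported ~14% median, vs. a WhatsApp-like ~85%:
ai_app = dau_mau_ratio(14_000, 100_000)
messaging = dau_mau_ratio(85_000, 100_000)
print(f"AI app: {ai_app:.0%}, messaging app: {messaging:.0%}")
```

The same monthly audience can thus hide wildly different habits, which is exactly the gap the report highlights.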
In essence, the real challenge for generative AI isn’t creating demand; it’s proving sustained value and converting users into daily ones. This aligns with what I wrote in Generative Product-Market Fit a few months ago: AI products are nowhere near a consensus on product-market fit, with stable, novel value propositions or business models. They’re still mostly experiments.
Generative AI market overview
Sequoia’s report includes a market map for generative AI, and interestingly, it’s organized by use case rather than by the type of AI model. This decision underscores the market’s shift from a technology-first approach to one focused on practical applications and value. It also highlights the growing trend of multimodal applications in the field. Take a look.
But like many reports from leading venture capital firms, this one also serves as a rallying cry. These firms are aware of their substantial sway in the industry, and by spotlighting this trend, they’re signaling where entrepreneurs should focus if they aim to secure funding.
Chain-of-thought, tree-of-thought, and reflexion
The report discusses advanced reasoning approaches like chain-of-thought, tree-of-thought, and reflexion. These methods enhance a model’s capacity for deeper, more nuanced reasoning, bridging the divide between what users expect and what the models can actually do.
Chain-of-thought prompting asks the language model to articulate its reasoning before answering. By showing the model a handful of examples where the reasoning is spelled out step by step, it learns to outline its own reasoning when responding to new prompts. (We can also simply ask it to explain its reasoning, drawing on similar patterns it encountered during training.) This tends to produce more accurate answers. We examined this methodology in Smart Reactivity.
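The few-shot variant described above can be sketched as plain prompt assembly. This is a minimal illustration, not any specific library’s API: the exemplars and the `build_cot_prompt` helper are made up for the example.

```python
# Few-shot chain-of-thought prompting: prepend worked examples whose
# reasoning is written out step by step, so the model imitates that
# style when it answers the new question at the end.

EXEMPLARS = [
    {
        "question": "A pen costs $2. How much do 4 pens cost?",
        "reasoning": "Each pen costs $2, so 4 pens cost 4 * 2 = $8.",
        "answer": "$8",
    },
    {
        "question": "There are 3 red and 5 blue marbles. How many in total?",
        "reasoning": "3 red plus 5 blue makes 3 + 5 = 8 marbles.",
        "answer": "8",
    },
]

def build_cot_prompt(question: str, exemplars=EXEMPLARS) -> str:
    """Assemble a prompt whose example answers show their reasoning."""
    parts = []
    for ex in exemplars:
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: Let's think step by step. {ex['reasoning']} "
            f"The answer is {ex['answer']}."
        )
    # The trailing cue nudges the model to continue in the same style.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

print(build_cot_prompt("A train travels 60 km/h for 2 hours. How far does it go?"))
```

The resulting string would then be sent to the model of your choice; the point is only that the reasoning pattern lives in the prompt, not in the model’s weights.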
Reflexion, on the other hand, is designed to curb hallucination in current generative models. It employs a feedback loop that corrects errors autonomously, creating a “model-in-the-loop” framework as opposed to the traditional “human-in-the-loop” one. In essence, one language model reviews and refines the output of another. This is a technique I touched on in Self-Reviewing Agents, though at the time I didn’t know it already had the catchy name “reflexion.” I’ll gladly update my mental model.