
Is ChatGPT the New Alexa?

Did any custom GPTs get traction, or is that playing out like Alexa skills?

🦉
Before Growth is a weekly newsletter about startups and their builders before product–market fit, by 3x founder and programmer Kamil Nicieja.
  • OpenAI suggests that AGI is close, but then they launch something like the GPT Store. You’d think if they were really close to making AGI, they’d do something bigger or more important with it, right?
  • Staying on the topic, another new platform: Apple Vision Pro and its use cases.

🙏 My work is reader–supported. You can get a membership here!

📣 Before Growth has grown through word of mouth. Want to help? Share it on Twitter here, Facebook here, or LinkedIn here.

📚 My new ebook Generative AI in Product Design offers case studies on AI and just enough theory for you to build your next app with gen AI. Get your copy here!


Do you use custom GPTs?

A few months back, OpenAI introduced the ability to customize ChatGPT with specific instructions, additional knowledge, and various skills. These custom GPTs can assist in learning board game rules, teaching math to children, or designing stickers. Following this, OpenAI launched the GPT Store, making it accessible to ChatGPT Plus, Team, and Enterprise users. This store offers a selection of popular and helpful GPTs.

I haven't talked much about the store yet, but I did have some thoughts on GPTs themselves at their launch:

  • Some successful bots on ChatGPT’s platform were attracting up to 8,000 users. Creators also benefited from SEO, since OpenAI’s public catalog ranks high on Google.
  • Some users felt the new features weren’t very useful, believing they could create similar prompts themselves. This mirrored early views on Dropbox, where tech-savvy users felt they could replicate its service. In my opinion, the challenge lay in making GPTs’ advanced features accessible to those with less technical expertise.
  • I wasn’t certain whether GPTs were apps, chatbots, or autonomous agents. The concept may have evolved from plugins—but the original plugins weren’t highly successful.
  • Some started using custom GPTs to integrate company documents, showing potential as knowledge bases.
  • GPTs might be evolving toward Character AI, focusing on artificial personas. Whether they can become platforms for autonomous agents is less certain, though Actions already let GPTs interact with the real world through APIs, which could eventually turn them into platforms that perform tasks on their own.

Did any of this happen?

OpenAI reports that users have created more than 3 million custom versions of ChatGPT. However, I haven’t come across any that have gone viral—say, taking over Twitter in a single night. It seems that these customizations are primarily used for internal workflows—which is exactly how I use this feature myself. Let me show you.

I've developed three GPTs for my personal use: Summarize, Rewrite, and Density.

  • The first two aren't overly complicated. Summarize does just that—it summarizes articles into bullet points for busy, intelligent readers. I use it to assist in drafting Bits for this newsletter.
  • Rewrite was also straightforward to create: it rewrites text to sound as if it were written by a native English speaker. I draft all my articles by hand, but editing takes up a significant amount of time because English is not my first language. It’s not that my English skills are lacking, but for some reason, when I edit on my own, I spend hours tweaking and adjusting, never quite satisfied with the outcome. Rewrite solves this.
  • Density is the most intricate of the three. It implements chain of density, a summarization technique developed by the Salesforce AI team. Given how many people use LLMs for summarization, the method stands out for its strong performance in human preference studies. Remarkably, it works with stock GPT-4, no fine-tuning required, which underscores how much room there still is for discovering effective prompting strategies. I turn to it when the basic Summarize doesn’t work very well.
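
For the curious, the chain-of-density loop can be sketched in a few lines. The prompt wording below is my loose paraphrase of the Salesforce team’s approach, and `complete` is a stand-in for whatever LLM client you use; both are assumptions on my part, not the actual GPT’s internals.

```python
# Sketch of a chain-of-density loop. `complete` is any callable that takes
# a prompt string and returns the model's reply; the prompt text here is a
# paraphrase, not the exact wording from the paper.

def chain_of_density(article: str, complete, steps: int = 5) -> list[str]:
    """Return a list of increasingly entity-dense summaries of `article`."""
    summaries = [complete(
        f"Write a short (~80 word) summary of this article:\n\n{article}"
    )]
    for _ in range(steps - 1):
        summaries.append(complete(
            "Pick 1-3 informative entities from the article that are missing "
            "from the previous summary, then rewrite the summary to include "
            "them without making it any longer.\n\n"
            f"Article:\n{article}\n\nPrevious summary:\n{summaries[-1]}"
        ))
    return summaries
```

Each pass asks the model to fold in missing entities while holding length constant, which is what makes the later summaries denser than a one-shot prompt.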

But they’re not apps, chatbots, or autonomous agents as I anticipated. They are shortcuts. That’s precisely how I created them for my use—I integrated them into my custom instructions:

Treat “/rewrite” as a shortcut for “Rewrite as a native speaker would:”

Treat “/summarize” as a shortcut for “Summarize the following article using bullet points. Keep in mind I have limited time and need a concise, intelligent overview.”

Now, I don’t even have to type the command; I can simply select a custom GPT from the sidebar or, if I'm already in a conversation with ChatGPT, summon any specific GPT using @, similar to mentioning someone in a group chat. This feature is cool and useful since custom instructions are capped at 1500 characters—yet this approach isn’t exactly revolutionary. A similar point came up in my article about AI-powered hardware when I compared it to the previous generation of devices such as smart speakers:

I brought up Alexa for a reason. Although I’m keen to try them, I haven’t yet experienced Meta’s smart glasses firsthand, so my thoughts are speculative. I suspect that even with the integration of a multi–modal large language model, this product may face challenges similar to those encountered by Amazon.

I own an Echo smart speaker and mainly use it for basic tasks like setting alarms, reminders, playing music, and checking the weather—nothing transformative. This limited scope of use is one reason why Alexa hasn’t established a sustainable business model, incurring an annual loss of about $10 billion. It was only with the advent of ChatGPT that a mass–market product of this genre truly took off, rapidly becoming the fastest-growing consumer app ever.

This raises an intriguing question: Will smart glasses follow the trajectory of Alexa or that of ChatGPT?

It appears that even ChatGPT’s platform struggles to match the success of the base product. The product remains highly useful, but the platform doesn’t seem as appealing—not just to me, but likely to the broader audience as well.

I’ve discussed ChatGPT with my friends who use it for various purposes—some for coding as technical users, and others for more casual tasks. None of them use custom GPTs, likely because they don’t deal with highly repetitive tasks often enough to feel the need—and see the benefit. For instance, if you’re a programmer, you don’t really need a specialized GPT; chatting with the base model or using your text editor’s Copilot does the job well enough. (And if you’re a casual user, you’ll use ChatGPT to help you draft emails or do homework for you, which the base model also does great.)

This leads me to believe that custom GPTs may carve out a niche in the enterprise market. Picture a typical company where every team has highly repetitive workflows or tasks they’re looking to automate. These could be shared internally, making them accessible to all employees. Some of these GPTs might also function as knowledge bases. For example, the HR department could upload frequently asked questions about company policies to the platform. This seems like a practical application. While not groundbreaking, it’s a solid product that OpenAI could successfully offer to many companies.

However, regarding consumer-oriented apps, I’m not as convinced.

  • Low customer awareness remains a challenge. ChatGPT, being a general tool, and GPT-4, currently the top model globally, are so effective—even GPT-3.5 handles simple tasks well—that many individuals don’t see a need for a custom GPT. This presents a conflict of interest for OpenAI: maintaining the quality of the base model is crucial to keep users engaged.
  • The ability to market effectively is constrained. Text isn’t an effective user interface for sales, so e-commerce sectors are unlikely to see significant benefits from adopting the GPT Store. I speak from experience—I’ve given it a shot. Not with ChatGPT, but I once tried to sell real estate through the Messenger platform. It didn’t work, because chat doesn’t offer a better UI for browsing inventory.
  • The limited ability to deep-link presents a significant hurdle. Everything written in ChatGPT stays in ChatGPT, while developers want to leverage platforms for user acquisition and then guide users toward their own apps. This introduces another conflict of interest, as OpenAI would rather retain engagement within its own ecosystem. And unlike Apple, which doesn’t make all the apps for iOS, OpenAI’s main product can already do most of what third-party GPTs do!
  • The absence of analytics is another notable limitation. For example, a significant area poised for development is the attribution of media, specifically crediting the underlying content that fuels AI queries. This involves determining how revenue should be allocated among publishers. However, we have yet to reach this level. In fact, GPT Store apps feature hardly any analytics!
🤔
If you’ve got any questions about this week’s essay, feel free to respond to this email or post a comment once you’ve upgraded your subscription.

Another new platform: Apple Vision Pro

While the majority of reviews are positive, people are still exploring and trying to understand the most effective applications of spatial computing.

I think Apple isn’t viewing this as a new platform or a step toward the metaverse, but rather as an incredibly advanced type of monitor. I get it: plenty of professions obsess over monitors—graphic designers, programmers with three or four displays at work, and I myself spent a lot on my gaming monitor… yeah, yeah, I know. If Vision Pro is comfortable and has good battery life, then instead of buying four monitors I could just sit down, put it on, and have an entire wall as a monitor for my MacBook. That’s why I believe Apple invested in top-class lenses that eliminate the pixelated look of cheaper VR devices—which also explains the high price.

For me, this makes sense—when I work remotely, I have my entire setup at home. But when I travel, say to London, I’m stuck working on a small laptop. With this device, it's like carrying an infinite number of monitors with me. Maybe the high price isn’t a huge barrier at the moment, considering the market and applications they’re targeting?


Related posts

Intelligence as an API

AI models are getting hard to beat when it comes to getting simple answers right at scale.


Pairing with AI

Raise your floor with large language models.


The Economics of LLMs

How to avoid bankruptcy scaling up large language models
