
Smart Reactivity

With the rise of AI, apps are set to become more reactive without much user input.

🦉 Before Growth is a weekly newsletter about startups and their builders before product-market fit, by 3x founder and programmer Kamil Nicieja.

There’s a limit to what you can achieve using just tags, keywords, likes, votes, and other “simple” metadata.

The first wave of metadata-driven products emerged with the advent of social networks. Platforms encouraged users to “like” various items and operated on the naive assumption that if individuals within your network appreciated something, you would likely enjoy it as well. And so we got Digg, Facebook, Twitter, YouTube, and many, many more…

The second generation of smarter reactivity leveraged classification algorithms. Consider TikTok, for example. It cleverly employs AI to determine the content you engage with, then curates more of what might appeal to you, bypassing your social connections. Just watch the stuff you like—we’ll figure out the rest on our own. While this was groundbreaking at a large scale, it’s only scratching the surface of what’s next.

Enter large language models.

The upcoming wave will pivot from mere classification to reasoning and cognition. Even though LLMs sometimes err and glitch, they exhibit a semblance of reasoning in many straightforward scenarios. The debate on whether this mirrors human-level thought is ongoing, but for many applications, even current capabilities suffice. Let me illustrate with a personal example.

As a proof of concept, I built Changepack, an open-source changelog tool integrated with ChatGPT. Changepack syncs with your GitHub activity, streamlining progress tracking. Every month, Changepack selects the most noteworthy updates and drafts a release note for you to review and share.

The selection process runs on ChatGPT. In essence, I ask the model to sift through recent changes, pick the most relevant ones, and justify its choices. That lets me compose a release note draft without any human input: the AI examines the content itself and draws conclusions for me, sidestepping the need for behavior-based metadata.

The process involves two steps. First, the AI assesses each change:

As an AI language model, your assignment is to evaluate an outstanding task related to a product called [name]. The task’s description is written in technical jargon and is geared toward the organization’s in-house teams.

[Introducing the product here…]

[Introducing the target audience's description of the product here…]

Your task has two aspects:

1. Assess the task, underlining parts that could be unclear to the target audience. Propose changes to enhance readability and improve understanding, while maintaining a professional yet accessible tone. Identify any mentions of specific staff members or proprietary tools including but not limited to feature management platforms like LaunchDarkly, customer messaging tools like Intercom, user behavior analytics platforms like FullStory, project management software like Jira, and others.

2. Craft a clear and succinct summary of this task. This summary should not exceed 600 characters and should not include any reference to specific staff members or proprietary tools such as LaunchDarkly, Intercom, FullStory, Jira, and the like. The objective is to convey the essential alterations and updates to the product. Remove all URLs, regardless of whether they are in HTML or Markdown format.

Now, please evaluate and summarize the following task…
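A minimal sketch of how this first prompt might be assembled in code. The function and template names here are my own, the instructions are condensed from the prompt above, and none of this is Changepack’s actual implementation:

```python
# Hypothetical step-1 template, condensed from the prompt above.
EVALUATION_PROMPT = """\
As an AI language model, your assignment is to evaluate an outstanding \
task related to a product called {name}.

{product_description}

{audience_description}

1. Assess the task, flagging parts that could be unclear to the target \
audience, and identify mentions of staff members or proprietary tools.
2. Craft a summary of no more than 600 characters, free of staff names, \
proprietary tools, and URLs.

Now, please evaluate and summarize the following task:

{task}
"""

def build_evaluation_prompt(name, product_description,
                            audience_description, task):
    """Fill in the step-1 template for a single change pulled from GitHub."""
    return EVALUATION_PROMPT.format(
        name=name,
        product_description=product_description,
        audience_description=audience_description,
        task=task,
    )
```

The resulting string would then be sent to the model as a single chat-completion request, once per change.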

Following the cleanup phase, in which the AI summarizes all tasks, we instruct it to select the most critical ones:

Based on the following updates provided, identify and summarize the most impactful changes for [name]’s customers. As you select each update, please provide a brief rationale for its inclusion. It's crucial that you do not reveal names of any specific users, clients, accounts, or organizations.

Typically, we would consider various metadata indicators, such as keywords, the number of lines of code altered, the number of contributors to the feature, or the volume of added comments, to gauge the significance of a change. However, we take a different approach in this case. We direct the AI to assess the changes solely from the perspective of the target audience. This approach compels the AI to evaluate the importance of these features based on customer perception. Next, we instruct it to prioritize the list according to what would matter most to the customer.
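Putting the two steps together, the whole pipeline can be sketched as a chain of two model calls. Here `llm` stands in for any chat-completion call (the OpenAI API, for instance) reduced to a plain prompt-to-string callable; the function names and the abbreviated step-1 prompt are assumptions for illustration, not Changepack’s actual code:

```python
# Hypothetical step-2 prompt, taken from the selection instruction above.
SELECTION_PROMPT = (
    "Based on the following updates provided, identify and summarize the "
    "most impactful changes for {name}'s customers. As you select each "
    "update, please provide a brief rationale for its inclusion. It's "
    "crucial that you do not reveal names of any specific users, clients, "
    "accounts, or organizations.\n\n"
    "Updates:\n{updates}"
)

def draft_release_notes(llm, name, changes):
    """Two-step chain: summarize each change, then rank by customer impact."""
    # Step 1: clean up and summarize every raw change for the audience.
    summaries = [
        llm(f"Evaluate and summarize this task for {name}:\n{change}")
        for change in changes
    ]
    # Step 2: ask the model to pick the most impactful updates and say why.
    updates = "\n".join(f"- {summary}" for summary in summaries)
    return llm(SELECTION_PROMPT.format(name=name, updates=updates))
```

Because the model is injected as a callable, the chain can be exercised with a stub during development and swapped for a real API client in production.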

I anticipate many apps will tread this path in the coming years, as we push the boundaries beyond routine tasks.


New business remains slow for early-stage startups

ChartMogul’s SaaS Growth Report reveals a troubling trend. While companies traditionally grow their Annual Recurring Revenue through new business, the last 10 quarters show a significant shift toward revenue from expansions. Meaning: companies are increasingly growing revenue from existing customers, through upselling, cross-selling, or renewals with better terms, rather than by acquiring new ones.


Related posts

Chaining Prompts

Chains go beyond a single LLM call and involve sequences of calls.

Intelligence as an API

AI models are getting hard to beat when it comes to getting simple answers right at scale.

Pairing with AI

Raise your floor with large language models.