
Chaining Prompts

Chains go beyond a single LLM call and involve sequences of calls.


I have written about prompting multiple times now.

For example, in Smart Reactivity, we talked about a prompt that guides the AI to evaluate changes from the target audience’s perspective in order to write a summary. This method forces the model to assess the significance of those changes based on customer perception. Then, we instruct it to prioritize the list according to what matters most to the customer. This strategy allows apps to become more responsive with minimal user input.

Smart Reactivity
With the rise of AI, apps are set to become more reactive without much user input.

However, if we revisit these posts now, we’ll see that the inputs and outputs we discussed were relatively brief. Even if the prompts were complex, they consisted of a few paragraphs, and the model’s responses were similar. But what if we wanted the AI to write an entire book, for example? When I typed this prompt into ChatGPT, I got just nine paragraphs of something the model considers a “book,” which it obviously is not.

So the question is, is there any way to guide the model into writing a book, even though it can’t do it in a single pass?

It turns out the answer is yes. If you, as a prompt engineer, know how to write a book, you can hand-hold the model step by step. First, you ask it to generate a few topics and choose the best one. Next, you have it write an elevator pitch for the book to determine the approach to the topic. Then, you instruct it to create an outline based on this pitch. For each chapter from the outline, you have it develop a plan. Finally, for each bullet point, it writes a few paragraphs. The model might not be able to hold everything it wrote in memory, so to reduce the length of the prompt, each time you prompt it to write a new chapter, you provide the outlines and a summary of what it previously wrote. And yes, it writes the summaries on its own, too.
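To make the shape of this chain concrete, here is a minimal sketch in Python. It assumes the OpenAI Python SDK purely for illustration; any chat-completion API would do, and the prompts and the `gpt-4o-mini` model name are assumptions, not a prescription.

```python
from openai import OpenAI  # assumption: OpenAI Python SDK; any chat API works here

client = OpenAI()

def ask(prompt: str) -> str:
    """A single LLM call; every step of the chain goes through this helper."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def write_book(theme: str) -> str:
    # Each call's output becomes part of the next call's input.
    topic = ask(f"Suggest five book topics about {theme}, pick the best one, and return only it.")
    pitch = ask(f"Write an elevator pitch for a book on this topic:\n{topic}")
    outline = ask(f"Create a chapter-by-chapter outline based on this pitch:\n{pitch}")

    chapters, summary = [], ""
    for chapter_title in outline.splitlines():
        if not chapter_title.strip():
            continue
        # Keep prompts short: pass the outline and a running summary, not the full text so far.
        plan = ask(
            f"Outline:\n{outline}\n\nSummary of previous chapters:\n{summary}\n\n"
            f"Write a bullet-point plan for this chapter: {chapter_title}"
        )
        chapter = ask(f"Expand each bullet point into a few paragraphs:\n{plan}")
        chapters.append(chapter)
        summary = ask(f"Summarize briefly:\n{summary}\n\n{chapter}")
    return "\n\n".join(chapters)
```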

As a result, at every step, it will generate a few paragraphs, then use these paragraphs to generate more paragraphs, and so on. The outcome might not be easy to digest, since LLMs are not highly creative, but you just guided the model through an extremely complex process.

The technique we just used is called chaining: generating the desired output by running a series of prompts in sequence. Each prompt addresses a subtask defined by the prompt engineer, and each one feeds into the next, continuing down the chain of tasks until we arrive at the final result.
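Stripped to its essence, a chain is just a loop in which each prompt template is filled with the previous step’s output. Here is a minimal sketch, reusing the `ask()` helper from the book example above; the template texts are made up for illustration.

```python
def run_chain(ask, templates: list[str], first_input: str) -> str:
    """Run prompt templates in sequence; each one is filled with the previous output."""
    result = first_input
    for template in templates:
        result = ask(template.format(previous=result))
    return result

# Usage: each template covers one subtask defined by the prompt engineer.
steps = [
    "Summarize this article:\n{previous}",
    "List social media post ideas based on this summary:\n{previous}",
]
# final = run_chain(ask, steps, article_text)
```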

Time for a practical example.

No Work Is Ever Wasted
What if you’ve launched your app, poured your heart into it, and still crashed and burned?

In No Work Is Ever Wasted, we discussed chain-of-density, a prompt that the Salesforce AI team rolled out as a fresh approach for LLM-based text summarization. We’ll take the result that prompt generated from one of my articles and build upon it by chaining it with a couple of other prompts.

Your article talks about “productive fictions” like 17th-century philosopher Blaise Pascal’s Wager and societal values like meritocracy, illustrating moral stances, human traits, and the impact of market demand. It underscores the pragmatic optimism and societal equilibrium emanating from these unverified but beneficial beliefs, which serve as practical guides, balancing life and societal constructs.
Productive Fictions
Why not categorize your beliefs into those that are productive and those that are not?

Let's chain the first of those prompts onto this summary.

Model input:
Take the last summary you wrote and prepare a list of ideas for different social media sites based on it. Each idea should be tailored to the type of content that works best on each platform.
Model output:
Based on the summary of the blog post, here are tailored ideas for different social media platforms:

Twitter: Create a thread explaining "productive fictions" with examples like Pascal's Wager and meritocracy, and how these unverified beliefs positively impact societal values and behaviors. Include engaging questions to encourage discussions.

Instagram: Post a carousel of visually appealing graphics that illustrate the concepts of Pascal's Wager, meritocracy, and their benefits. Use concise text overlays and captions to explain how these ideas contribute to societal balance and pragmatic optimism.

TikTok: Make a short video explaining the concept of "productive fictions" using simple terms and relatable examples. Use visual aids, animations, and on-screen text to make the content engaging. End with a question prompting viewers to share their own examples of beneficial beliefs.

(More ideas follow…)

You may have noticed that depending on the platform, the AI suggested different content forms. On Twitter, it’s text; on Instagram, it’s images; on TikTok, it’s videos. This is another reason why chaining can be extremely useful. In this example, we can take the ideas generated by the LLM and feed them to a visual model or an audio model.
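As a sketch of that handoff, the snippet below passes one of the generated ideas straight to an image model. The `generate_image()` wrapper is hypothetical, and `ask()` is the single-call helper from the earlier sketch.

```python
def generate_image(prompt: str) -> bytes:
    """Hypothetical wrapper around whichever image-generation API you use."""
    raise NotImplementedError("wire this up to your image provider")

# The text model's output becomes the input of a non-text model further down the chain.
instagram_idea = ask(
    "Take the last summary you wrote and describe a single Instagram carousel slide "
    "that illustrates it, phrased as a prompt for an image model."
)
carousel_slide = generate_image(instagram_idea)
```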

For now, though, let’s stay with a single, text-only modality. We’ll feed the model its own instructions to write a Twitter thread based on the summary it wrote.

Model input:
Create a thread explaining "productive fictions" with examples like Pascal's Wager and meritocracy, and how these unverified beliefs positively impact societal values and behaviors. Include engaging questions to encourage discussions.
Model output:
1/10 🌟 Ever heard of "productive fictions"? These are beliefs that, while unverified, bring positive impacts to our lives and society. Let's dive into some examples and see how they work!

2/10 📜 One classic example is Blaise Pascal's Wager. Pascal argued that it's rational to believe in God, even without proof, because the potential benefits (eternal happiness) outweigh the potential costs.

(More tweets follow…)

Great! We could use this simple chain, for example, in an app for content creators that automates the busywork of self-promotion. As the app designer, you set up the process step by step, writing the prompts in a chainable way and guiding the model much as you would instruct a human. The AI then takes care of producing the results.
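Put together, the whole chain from this post fits in a few lines. As before, `ask()` stands for a single LLM call, and the prompts are only a sketch of how such a content-promotion app might wire things up.

```python
def promote(article_text: str) -> dict[str, str]:
    """Chain used above: summary -> platform ideas -> one idea executed as a thread."""
    summary = ask(f"Summarize this article:\n{article_text}")
    ideas = ask(
        "Prepare a list of content ideas for different social media sites based on this "
        f"summary, each tailored to the platform:\n{summary}"
    )
    # Feed one of the model's own ideas back to it as the next instruction.
    twitter_idea = ask(f"From these ideas, return only the Twitter idea, verbatim:\n{ideas}")
    thread = ask(f"{twitter_idea}\n\nBase it on this summary:\n{summary}")
    return {"summary": summary, "ideas": ideas, "thread": thread}
```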

