
Intelligence as an API

AI models are becoming hard to beat at getting simple answers right at scale.

🦉 Before Growth is a newsletter about startups before product–market fit by 3x founder and programmer Kamil Nicieja.

A year and a half later, after the hype subsided just a bit, I think of large language models as on-demand intelligence accessible through APIs. Whenever my software needs to use fuzzy logic or analyze unstructured data, I can just send an HTTP request and get the insights I need.

Let’s consider a real-world scenario. Imagine an applicant tracking system that filters candidates using specific keywords. Each resume must be reviewed by a recruiter to see if it matches their criteria. For example, they might be searching for a candidate with seven years of experience in a particular technology.

The top companies often get hundreds—if not thousands—of applications, resulting in a lot of repetitive manual work. Moreover, recruiters are usually not experts in technology. To compensate, they come up with their own heuristics, which might lead to false positives or false negatives.

With large language models, rather than depending on these filters, we can tap into AI’s intelligence by making an API call. (Shoutout to Patrick McKenzie on Twitter for the inspiration behind this prompt.)

Suppose you’re looking for someone with at least seven years of Python experience. A developer’s resume indicates they’ve been using Django since version 1.8. Explain why you’d decide to include or exclude this candidate.

…and get a response:

Include them. Django 1.8’s release in 2015 implies over 7 years of Python experience.

I wasn’t sure myself since I’m a Ruby developer. But after looking into it, I confirmed this is the right decision. Great!
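To make the "API call" part concrete, here's a minimal sketch of that screening request in Python. It assumes the openai client library and uses a placeholder model name; the specifics matter less than the fact that the whole "expert review" fits in a single request.

```python
# Minimal sketch of "intelligence as an API": one request to an LLM
# to screen a resume against a job requirement.
# Assumes the `openai` Python client; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screen_candidate(requirement: str, resume_fact: str) -> str:
    """Ask the model whether to include or exclude a candidate, with reasoning."""
    prompt = (
        f"Suppose you're looking for someone with {requirement}. "
        f"A developer's resume indicates {resume_fact}. "
        "Explain why you'd decide to include or exclude this candidate."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


print(screen_candidate(
    "at least seven years of Python experience",
    "they've been using Django since version 1.8",
))
```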

Building an applicant tracking system like that without AI would require engineers to either integrate a complex expert system directly into their product, or allow recruiters to set up a rigid job application workflow on their own. But LLMs can easily read the job listing and the resume by using their extensive knowledge—on demand.

So here’s the best way to understand how these models are useful today, despite not yet being smarter than humans: if you assign them specific, concrete tasks, they can help you achieve good results efficiently. Turns out, my perspective on AI systems hasn’t changed much over the past decade: they can deliver what a thousand interns would, but much faster and at a lower cost.

The major difference this time is that the tech stack has become much simpler to use due to commoditization. What used to be accessible only to Big Tech has now become available to everyone.


