
Uneven Distribution, Again

Master the art of prompt engineering. It’s an open door to generative AI, which is still an industry in its infancy.

🦉
Before Growth is a weekly column about startups and their builders prior to product-market fit.

When it comes to technology like large language models and text-to-image models, we’re still in the early days. The majority of people haven’t yet adopted these tools for their work, hobbies, or day-to-day lives.

Some have experimented with early versions, such as GPT-3, and found them lacking, abandoning ship before GPT-4, with its significantly improved capabilities, came into wider use. Even those who have found some applications often don’t venture beyond their starting point, opting to use these models for basic outputs.

Just recently, I helped a friend who runs a one-person business employ ChatGPT Plus as a virtual marketing consultant. We used the tool to generate content and strategic ideas for the company. Yet, like all things, GPT-4 has its limitations. It took many guiding questions and iterative refinements to produce truly valuable output. If you just skim the surface, it will do so, too.

And while I believe the term “prompt engineering” is overhyped, I do think a particular mindset is necessary when working with neural models: you have to guide and steer them. It’s an iterative refinement process, and anything less will yield rather superficial results—at least as things stand now.
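To make that concrete, here’s a rough sketch of what “guide and steer” looks like in code, using the OpenAI Python client. The model name, prompts, and follow-up critiques are only placeholders; the point is that you keep the conversation going instead of accepting the first draft.

```python
from openai import OpenAI  # assumes the official OpenAI Python client, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Start with a broad ask, then steer with follow-up critiques instead of
# stopping at the first answer. All prompts here are illustrative.
messages = [
    {"role": "system", "content": "You are a marketing consultant for a one-person business."},
    {"role": "user", "content": "Draft three taglines for a freelance bookkeeping service."},
]

follow_ups = [
    "Too generic. Emphasize saving the owner time, not accounting jargon.",
    "Better. Now make each tagline under eight words and more conversational.",
]

for critique in [None] + follow_ups:
    if critique:
        messages.append({"role": "user", "content": critique})
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    print(answer, "\n---")
```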

So, if you’re anxiously searching for an avenue to break into the AI field, concerned you’re already behind due to the progress of others, here’s a tip: learn to prompt well and then help somebody who can’t.


The wiz

Here you can find a related, brief, yet fairly typical tale from a Reddit user who took a data analyst position in a company. They’ve been using ChatGPT to increase their productivity, but feel a bit shy about admitting this to their superiors. This story resonates with me. While large corporations may face difficulties in implementing AI from the top down, employees are able to adapt and use it from the bottom up quite readily. However, without any direct financial motivation, they might not necessarily share these productivity gains with the rest of the organization.

Llama 2

Meta announced Llama 2. Currently, it’s the best open-source model available. The largest version closely mirrors the performance of GPT-3.5 on reasoning tasks, but there’s a substantial gap when it comes to coding benchmarks. It’s competitive with, or even better than, PaLM-540B on most metrics, but still trails behind GPT-4 and PaLM-2-L.

It’s freely available for commercial use up to a staggering 700 million monthly active users—that’s free until your app is seven times larger than ChatGPT at its peak. Plus, it can be run on your laptop. While I appreciate what Meta is doing with generative AI, it seems like everyone wants a slice of this industry. Word is that Apple will soon be joining the race too.
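If you want to try the “run it on your laptop” part, here’s a minimal sketch using Hugging Face’s transformers library. It assumes you’ve been granted access to the gated meta-llama repo and have the memory to spare; a quantized llama.cpp build is the more laptop-friendly route, but the idea is the same.

```python
# Minimal sketch: running Llama 2 locally with Hugging Face transformers.
# Assumes access to the gated meta-llama repo (huggingface-cli login) and
# enough RAM for the 7B chat model.
from transformers import AutoTokenizer, pipeline

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline("text-generation", model=model_id, tokenizer=tokenizer)

# Llama 2 chat models expect the [INST] ... [/INST] prompt format.
prompt = "[INST] Explain product-market fit in two sentences. [/INST]"
output = generator(prompt, max_new_tokens=128, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```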

Turtles all the way down

Air is a conversational AI that can conduct sales and customer service calls lasting anywhere from 5 to 40 minutes, and the results aren’t that far off from human interactions.

It seems increasingly likely that our personal AI assistants will be able to handle phone and email conversations with company-employed AI for sales and support. This aligns with a prediction I made in 2016, when I foresaw chatbots executing what I termed “self-defining interactions” by around 2020.

A bot that understands grammar and intent can automatically extract all the information it needs from a response generated by another bot. Take a look at the conversation between two AIs, Clara and Amy, personal assistants that schedule meetings. Two users wanted to meet for coffee, so they decided to let their AIs do the heavy lifting—and it worked.
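Here’s a rough sketch of what that extraction step can look like today: one assistant takes the other’s free-text reply and asks a model to pull out the proposed slots as structured data. The incoming message, the JSON schema, and the model name are all illustrative, and it reuses the same OpenAI client as above.

```python
import json

from openai import OpenAI

client = OpenAI()

# Free-text reply from the other assistant; the contents are made up.
incoming = "Amy here. Thursday at 3pm works for my side, or Friday morning before 10."

# Ask the model to return only the structured details the scheduler needs.
extraction_prompt = (
    "Extract the proposed meeting slots from this message as a JSON list of "
    "objects with keys 'day' and 'time'. Reply with JSON only.\n\n" + incoming
)
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": extraction_prompt}],
)

# A real assistant would validate the output and retry if it isn't valid JSON.
slots = json.loads(reply.choices[0].message.content)
print(slots)  # e.g. [{"day": "Thursday", "time": "15:00"}, ...]
```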

While that term seems clunky to me now and the prediction was three years off, it’s clear that this is the direction things are moving.

Is GPT-4 getting worse over time?

Researchers have assessed ChatGPT’s performance over time and discovered significant differences in its responses to identical questions when comparing the June versions of GPT-4 and GPT-3.5 to the March releases. They argue that the newer iterations have deteriorated on certain tasks. OpenAI, on the other hand, counters this claim, suggesting that as users engage with the model more intensively, they begin to notice issues that were previously overlooked. Several other experts have also expressed skepticism. In the meantime, OpenAI has unveiled custom instructions for ChatGPT, along with an Android app.
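The study’s methodology is easy to approximate at a small scale: pin a handful of prompts, run them against dated model snapshots, and diff the answers. Here’s a minimal sketch along those lines, assuming the OpenAI client and the 0314 and 0613 snapshot names that were available at the time; the prompts and the eyeball comparison are illustrative.

```python
from openai import OpenAI

client = OpenAI()

# Dated snapshots from 2023; swap in whatever snapshots you want to compare.
snapshots = ["gpt-4-0314", "gpt-4-0613"]
prompts = [
    "Is 17077 a prime number? Answer yes or no, then explain briefly.",
    "Write a Python one-liner that reverses a string.",
]

for prompt in prompts:
    print(f"\n=== {prompt}")
    for snapshot in snapshots:
        reply = client.chat.completions.create(
            model=snapshot,
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # reduce sampling noise so differences reflect the model
        )
        print(f"[{snapshot}] {reply.choices[0].message.content[:200]}")
```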

Was this newsletter created by AI? There’s no way to tell.

If you input the US Constitution into a tool engineered to identify text generated by AI models like ChatGPT, the tool will indicate that the document was very likely written by an AI. Teachers who have been employing educational techniques established over the last hundred years are scrambling to preserve traditional methods. Personally, I find this to be a futile endeavor. In my opinion, essays should be evaluated based on the quality of the underlying thought, rather than mere grammatical accuracy and the ability to recite simple facts.

Accelerationism in control

Global GDP has grown by around 50% since 2010. Meanwhile, Americans have consistently increased their wealth since 2008, yet Europeans have seen theirs decline. Could it be because Europeans often discuss the concept of degrowth while Americans continue to innovate, introducing concepts like effective accelerationism? It’s hardly surprising that techies in Silicon Valley were quick to embrace e/acc, given that the San Francisco Bay Area continues to dominate as the world’s premier investment hub, far outstripping any other region.

🙌
Thanks for being a part of Before Growth! The newsletter has grown mainly through word of mouth. Would you mind sharing it with a couple of friends who might like it too?

Kamil Nicieja

I guess you can call me an optimist. I build products for fun—I’m a startup founder, author, and software engineer who’s worked with tech companies around the world.


Related posts

Chaining Prompts
Chains go beyond a single LLM call and involve sequences of calls.

Intelligence as an API
AI models are getting hard to beat when it comes to getting simple answers right at scale.

Pairing with AI
Raise your floor with large language models.