
Building GPT-based AI agents today, and their implications for tomorrow

ChatGPT and Stable Diffusion are currently entering the mainstream, but people haven't yet fully grasped the massive technological shift that is just getting started. This post explores the current state of the world, and implications over the next few years.

Intro to GPT

Everyone is having fun figuring out how to trick ChatGPT into doing things its creators didn't intend, such as impersonating Hitler. While funny, this is ultimately not super interesting.

When the first major AI art system (DALL-E 2) was released, it was also locked down, but within months, open source alternatives (Stable Diffusion & friends) appeared that were essentially unlocked. Text systems like ChatGPT will go the same way.

Folks are also pointing out cases where ChatGPT fails. While also funny, this is not super interesting either once one understands how GPT-style systems work. They are, in broad strokes, systems that take some text as input and output the most likely next text.

These systems are just saying the first thing that seems like a reasonable continuation of the existing text. Because they train on essentially the entire text of the internet, this works great for "linear mode thinking" type problems.

Linear mode thinking reliably fails for classes of problems where the first intuition is likely wrong. Take the word problem "A bat and ball cost $1.10. The bat costs $1 more than the ball. How much does the ball cost?". Most humans, and GPT, intuitively answer "$0.10".

[Image: ChatGPT incorrectly answering a tricky math question]

A human, upon intuiting "$0.10", may enter "recursive mode thinking" and re-reason through the problem with their hypothesis. If the bat costs $1 more than the ball, and the ball is $0.10, the bat must be $1.10. But then the total cost would be $1.20, which is wrong.

The human then thinks more carefully about the problem and how their intuition failed them, and produces the correct answer: the bat is $1.05 and the ball is $0.05. By prompting GPT to enter a recursive mode of thought instead of a linear one, it can do the same thing:

[Image: ChatGPT correctly answering a tricky math question]
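For the record, the recursive pass is one line of algebra. With b as the ball's price:

$$ b + (b + 1.00) = 1.10 \implies 2b = 0.10 \implies b = 0.05 $$

so the ball is $0.05 and the bat is $1.05.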

Part of the problem is that the training data (the internet) tends to contain text in a format in which the answer comes first, and the reasoning (if any) comes second. This is not how humans actually think, though — it's just how they write.

GPT is trying to write like its training data. But it's really just a "simple" text completion system. It doesn't have the benefit of thinking offline, trying many solutions, organizing results, and producing well-composed writing last. It just writes.

Predictably, this leads to it painting itself into corners, particularly for nuanced, complex, or unintuitive queries. When pressed, it will often double down on its incorrect reasoning, because again, that is the behavior most often observed in its training data.

GPT is as good as it is because it's likely doing some implicit lookahead. The neural net is 96 layers deep, and during training it likely meta-learned some internal step-by-step reasoning, so it "thinks about" the problem a bit before predicting the next word.

This is analogous to AlphaZero's evaluation function. AlphaZero plays Go with a recursive search algorithm guided by a linear neural net. But the neural net alone plays at a pro human level. It likely meta-learned a small amount of unrolled search internally.

Just as AlphaZero operates best wrapped in a search algorithm, GPT is best used as a linear-mode processor called from a recursive-mode wrapper script. E.g. have GPT write a story, critique that story, then rewrite the story according to the critique. Repeat in a loop.

[Images: a bland film premise, a critique of that premise, and a somewhat improved film premise]

Some good initial work in this vein can be found in Re3: Generating Longer Stories With Recursive Reprompting and Revision, but it's still early days.
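To make this concrete, here is a minimal sketch of that critique-and-rewrite loop. It assumes the GPT-3-era openai Python package (pre-1.0 SDK); the model name, sampling settings, and prompt wording are placeholder choices of mine, not anything prescribed by Re3:

```python
import openai  # GPT-3-era (pre-1.0) SDK assumed

def llm(prompt: str) -> str:
    # Model and sampling parameters are illustrative placeholders.
    resp = openai.Completion.create(
        model="text-davinci-003", prompt=prompt, max_tokens=400, temperature=0.7
    )
    return resp["choices"][0]["text"].strip()

def refine(task: str, rounds: int = 3) -> str:
    """Linear-mode GPT driven by a recursive-mode wrapper:
    draft, critique the draft, rewrite against the critique, repeat."""
    draft = llm(f"{task}\n\nDraft:")
    for _ in range(rounds):
        critique = llm(f"Task: {task}\n\nDraft:\n{draft}\n\nCritique this draft:")
        draft = llm(
            f"Task: {task}\n\nDraft:\n{draft}\n\nCritique:\n{critique}\n\n"
            "Rewrite the draft to address the critique:"
        )
    return draft

print(refine("Write a one-paragraph premise for a heist film set on the moon"))
```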

Most of the tech community has not realized what is possible with sufficiently clever application of this simple technique. Using existing tools, it is possible to build a rudimentary but complete cognitive architecture for an artificial agent.

Building an AI agent today

To start, embed your agent in a simple command line sandbox, where it has access to commands that can be used to accomplish whatever task you have in mind.

Give GPT a prompt including the agent's goal, the last N lines of terminal output, the list of commands it has access to, and any knowledge relevant to the current situation (more on this later). Prompt it to propose several candidate commands to run next.
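A sketch of that prompt assembly, with the caveat that the field names and layout here are my own guesses rather than a known-good template:

```python
def build_agent_prompt(goal, terminal_tail, commands, facts, n_candidates=3):
    """Assemble the action-proposal prompt: goal, available commands,
    retrieved knowledge, recent terminal output, and the ask itself."""
    return "\n".join([
        f"Goal: {goal}",
        "Available commands:",
        *[f"  - {c}" for c in commands],
        "Knowledge relevant to the current situation:",
        *[f"  - {f}" for f in facts],
        "Recent terminal output:",
        terminal_tail,
        f"Propose {n_candidates} candidate commands to run next, one per line:",
    ])
```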

Have another prompt critique the candidate commands, filtering out the ones that probably won't work, etc. Recursively propose, critique, and edit until the system settles on the most likely best command to run.

Have a prompt produce a set of expectations for what the command will do when executed. Recursively edit/improve these expectations to arrive at a best prediction for the result of the command.

Run the command. Give the output to a prompt that checks if the expectations were met. If they weren't, use the recursive reasoning method to come up with a hypothesis on why they failed. Add this hypothesis to the knowledge store and loop back to generate the next action.
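Put together, one cycle of the loop might look like the sketch below. It reuses llm() and build_agent_prompt() from the earlier sketches; store.query() and store.add() refer to the embedding knowledge store sketched in the next section; the MET/FAILED convention and all prompt wording are invented for illustration:

```python
import subprocess

COMMANDS = ["ls", "cat <file>", "python <script>"]  # whatever your sandbox allows

def agent_step(goal: str, history: str, store) -> str:
    """One propose -> critique -> predict -> execute -> check cycle."""
    facts = store.query(goal + "\n" + history)
    candidates = llm(build_agent_prompt(goal, history, COMMANDS, facts))

    # Recursively critique the candidates and settle on the most promising one.
    best = llm(f"Goal: {goal}\nCandidate commands:\n{candidates}\n"
               "Critique each candidate and output only the single best command:")

    # Commit to expectations before acting, so failure is detectable afterwards.
    expected = llm(f"Describe specifically what should happen when we run: {best}")

    # Execute inside the sandbox (shell=True is tolerable only because it IS one).
    result = subprocess.run(best, shell=True, capture_output=True, text=True)
    output = result.stdout + result.stderr

    verdict = llm(f"Expected:\n{expected}\n\nActual output:\n{output}\n\n"
                  "Answer MET or FAILED, then explain briefly:")
    if verdict.startswith("FAILED"):
        hypothesis = llm(f"Running `{best}` produced:\n{output}\nWe expected:\n"
                         f"{expected}\nHypothesize in one sentence why it failed:")
        store.add(hypothesis)  # long-term memory: learn from the failure
    return history + f"\n$ {best}\n{output}"
```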

How to implement the knowledge store? Embeddings. An embedding algorithm takes text and encodes it into a high-dimensional vector space, where semantically similar texts map to nearby points.

So, to generate knowledge relevant to the current situation, use a prompt that takes the current state and outputs a list of facts that would be useful to know when making the next decision. Recursively critique/improve the list.

Then create embeddings for the list of facts that would be useful to know, and query your embedding store using whatever nearest-neighbor algorithm you like. Your action-proposing prompt is now able to "recall" knowledge relevant to the current situation.
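A toy version of that store, assuming the GPT-3-era embeddings endpoint (the model name is my assumption) and brute-force cosine similarity, which is fine for a few thousand facts; swap in a proper nearest-neighbor index when it grows:

```python
import numpy as np
import openai  # GPT-3-era (pre-1.0) SDK assumed, as above

def embed(text: str) -> np.ndarray:
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=[text])
    return np.array(resp["data"][0]["embedding"])

class KnowledgeStore:
    """Long-term memory: facts plus their embeddings, queried by similarity."""
    def __init__(self):
        self.facts: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, fact: str) -> None:
        self.facts.append(fact)
        self.vectors.append(embed(fact))

    def query(self, text: str, k: int = 5) -> list[str]:
        if not self.facts:
            return []
        q = embed(text)
        sims = [v @ q / (np.linalg.norm(v) * np.linalg.norm(q))
                for v in self.vectors]
        return [self.facts[i] for i in np.argsort(sims)[::-1][:k]]
```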

Implementing a knowledge store is critical for sophisticated agents, because GPT-like systems have a context window limited to only a few thousand words. That represents "short term" memory, e.g. the last N lines of terminal output. The knowledge store is long term memory.

In a background thread, periodically sweep the knowledge store and use prompts to generate new knowledge based on existing knowledge, merge two similar knowledge points into one, or prune stale/unused knowledge if space is a concern. (This sounds a bit like REM sleep...)
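A sketch of one such sweep, here just the merge step. The similarity threshold and the merge prompt are guesses, and it reuses llm() and the KnowledgeStore from the earlier sketches:

```python
import itertools
import numpy as np

def consolidate(store, threshold: float = 0.92) -> None:
    """Merge pairs of near-duplicate facts into single combined facts."""
    merged: set[int] = set()
    for i, j in itertools.combinations(range(len(store.facts)), 2):
        if i in merged or j in merged:
            continue
        a, b = store.vectors[i], store.vectors[j]
        if a @ b / (np.linalg.norm(a) * np.linalg.norm(b)) > threshold:
            combined = llm("Combine these into one fact, losing no information:\n"
                           f"1. {store.facts[i]}\n2. {store.facts[j]}\nCombined:")
            store.add(combined)
            merged.update((i, j))
    for i in sorted(merged, reverse=True):  # prune the originals that got merged
        del store.facts[i], store.vectors[i]
```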

Here is a full diagram of the system I've described:

[Image: a diagram describing the full cognitive architecture]

It's possible to implement this system today with the GPT-3 API. It's pretty expensive to run, though: each full cycle can cost on the order of $1 for complex workflows. This will be more feasible when Stability AI & friends release an open implementation.

Recursive self improvement

Once you dial in the prompts in your system, you can reduce costs a fair amount by using OpenAI's fine-tuning API, which essentially creates a specialized version of the neural network with the prompt baked in. This also tends to improve performance in general.

Fine-tuning gives a way to recursively improve the "intuition" of the system. When the agent successfully reaches its goal, add the inputs/outputs involved in the task to a set of known-good training data to fine-tune on. Use it to fine-tune the next generation of the agent.
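Concretely, for the GPT-3 API this means accumulating prompt/completion pairs in JSONL and handing the file to the fine-tuning endpoint. A sketch; the filename and what counts as a "successful" episode are up to you:

```python
import json

def save_known_good(episodes, path="known_good.jsonl"):
    """Append (prompt, completion) pairs from successful runs in the JSONL
    format the GPT-3 fine-tuning API expects."""
    with open(path, "a") as f:
        for prompt, completion in episodes:
            f.write(json.dumps({"prompt": prompt, "completion": completion}) + "\n")

# Then fine-tune the next generation of the agent (GPT-3-era CLI):
#   openai api fine_tunes.create -t known_good.jsonl -m davinci
```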

With each cycle, the agent will get better at its job. Its linear mode first guesses will be correct more often, requiring fewer recursive improvement cycles. It will succeed more often and in better ways. This produces better data to recursively train the next generation.

This is all very similar to the core idea of the AlphaZero / MuZero whitepapers, and is a broadly applicable strategy.

What can we do with it?

So we have a simple but working architecture for a general-purpose agent that can learn to accomplish goals in a command-line type of environment. It has a long-term memory system based on embeddings that allows it to accumulate knowledge over time. What could we build?

AI executive assistant

From a short-term, societal point of view, I believe the most immediately impactful application will be automated personal executive assistants. Siri and Alexa are nowhere close to the utility of a general-purpose assistant.

The implications of this will be a bigger delta to society than the arrival of the smartphone. Let's paint a picture.

Ads will die

By 2030, you will have an agent running on your phone that can read your social media accounts for you and show you a single, unified view of everything. If you want, it will filter out the ads and posts it knows you won't find interesting.

The ad ecosystem will not survive contact with agents that browse the internet indistinguishably from a human. Your AI can watch the ads, pull in the real content, and then show that to you without the ads. Only ads that are also high quality content will be viable.

In the short term, your agent will probably be text-based (living in a sort of command line environment), and will call out to more specialized programs for things like image analysis, speech synthesis, etc. Long term, the models will just be multi-modal.

The first web-browsing agents, which will become available over the next year or two, will be predominantly text-based, i.e. they will read the raw HTML. In the long term, they will use browsers and view websites like a human does. A self-driving web browser.

Managing your social life

You'll wake up and say "Agent, what's up today?". Your agent will respond in whatever voice you've given it, "Schedule is wide open today, but your anniversary is next week. Want me to get a dinner reservation?". You'll say "Yeah let's do a nice Italian place downtown".

Your agent will browse the internet, read reviews, look at photos, pick the one you'd like the most, call the restaurant and make the reservation for you. And, ironically, the entity on the other end of the phone line will probably be an AI agent too.

You will be able to tell your agent "Read the last few years of my DMs and make a list of my closest 20 friends in the city. Make sure I get coffee with all of them at least once every few months". Your agent will make the list, and contact your friends' agents to set dates.

Delegating your social life to an AI? Sound dystopian? You're just offloading the secretarial work. By removing the friction of planning things, your social life will become richer and, from your point of view, even more spontaneous.

Obviate (some) social media

Above, I said internet ads are doomed. In fact, all "keep up with your friends" social media is doomed. Keep-up-with-friends apps like Facebook exist because they reduce the problem from O(N) to O(1). I don't need to send a picture of my dog to everyone; I just upload it to FB.

When everyone has a personal AI agent on their phone, the O(N) problem is simply not a big deal, because the AI agent can do all the work for me. I tell it what to do in O(1) and I don't care that it takes the agent O(N) amount of work — I'm not the one doing the work.

Rather than log onto Facebook to see all my friends' recent vacation pics and life updates, my AI agent can just message their AI agent at whatever cadence and ask it to send the data directly from their phone to mine.

My agent will collect all my friends' updates and photos directly and present them to me in my own personal UI. No clutter, no ads, no algorithmic feed that is out of my control.

Social media that is more of a publishing platform than a keep-up-with-friends app will survive. You could imagine Twitter as a p2p gossip protocol, but gossip is an O(N²) problem. Your FB friends list² is still small: 200 friends squared is only 40,000 agent-to-agent exchanges. Twitter², with hundreds of millions of users, is not small.

Save education

While I think personal executive assistants will be a huge social change for me and other adults, another major short-term change will be the revolutionizing of primary education.

Everyone is currently worried about cheating. ChatGPT can do most writing/knowledge homework assignments. It can write decent essays. It knows Wikipedia. It's kind of bad at math (often a recursive-mode endeavor).

So it can help students cheat on their homework, fine. But it can also tutor. AI tutor agents will be available to every student. They will have infinite patience, and will be able to custom tailor their teaching techniques on a per-student basis.

As discussed above, ChatGPT produces confident bullshit too. The AI tutors of the future are not going to be raw ChatGPT. None of these applications will be. They will be agents that use large language models as a foundational processing unit, not the full application.

Kids are curious by nature, but this curiosity is too often stifled in a traditional classroom setting. Personalized AI tutors will enable widespread childhood learning and flourishing that tends to be reserved for a small minority of kids with private tutors today.

Some speculation

Let's consider some further-out speculation. I'm less confident in these predictions than in those above, which I am almost certain will happen in the 2020s.

Language vs Knowledge

We will figure out how to build a language model that is not also a knowledge model. A language model doesn't need to know the history of Polk County, Florida to parse English. Separating the two will make models smaller and more accurate.

It will make the model smaller by not wasting space encoding information the particular use case will never need. It will increase accuracy because you'll be able to plug in a knowledge module that contains only true facts. That mitigates a lot of the confident-bullshitting problem.

Decoupling the language model from the knowledge model will also make it easier to attach provenance to knowledge. You will be able to ask the AI for a citation, and it will give you a link to the Wikipedia article it learned the fact from.
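Until such models exist, you can approximate the split today with retrieval: keep the facts (with sources) outside the model, and let the language model do only the parsing and phrasing. A sketch, assuming a variant of the earlier KnowledgeStore that keeps a (text, source_url) pair per fact:

```python
def answer_with_citation(question: str, store) -> str:
    """Language model for phrasing; external knowledge module for facts."""
    facts = store.query(question, k=3)  # assumed to return (text, url) pairs
    context = "\n".join(f"- {text} (source: {url})" for text, url in facts)
    return llm("Answer using ONLY the facts below, and cite the source URL "
               f"of each fact you use.\n\nFacts:\n{context}\n\n"
               f"Question: {question}\nAnswer:")
```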

Language primitive

Just as convolutional neural networks are built on a primitive well suited to vision, analysis of fully connected LLMs will find many near-copies of a primitive well suited to language. I predict it will be a multi-layer structure on the order of 1k-10k neurons.

If we do find such a structure, I predict it will have something to do with being a general "metaphor function". All language is essentially metaphor. When you teach a child new words, you do it with metaphor. When you learn new skills, you transfer-learn with metaphor.

And if we do find such a structure, I predict we will be able to copy and paste it in a structured way. Just as CNNs have convolutional layers, future language models will have metaphor layers made up of a matrix of these structures.

This sounds vaguely similar to Jeff Hawkins' idea of creating artificial cortical columns as a basis for AI. Read more in A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex.

AI generated content issues

AI-generated content will become a significant addiction problem, on par with hard drugs. TikTok is already incredibly addictive to a large part of the population. What happens when the content is custom-created on a per-user basis to maximize engagement?

AI content might obliterate a shared cultural media narrative. Pop culture references, like quoting The Office or Seinfeld, are part of the cultural fabric. What happens when everyone mainly consumes media that was custom generated for them by AI?

Some job losses, and entrepreneurial gains

More than 50% of jobs existing today that can be described as "a human acting as a natural language interface to a computer" will not exist by 2030. Many call center jobs, data entry, some receptionists, lots of gig illustrators and writers, entry level coders, etc.

By 2030, there will exist profitable software startups whose codebase is 100% written by an AI. You say "Build a social networking site for dogs" and it grinds out a git repo. This will unleash an unprecedented wave of entrepreneurship.

The code will be written by agents as described above, not Copilot V2, which is just GPT for code. This will be further enabled as cloud services provide higher level primitives for expressing infrastructure plumbing, such as serverless cloud compute.

No robots

Without an advancement of the same magnitude as the transformer architecture, IRL robots will significantly lag behind fully virtual agents. Self-driving will be basically solved by 2030, but robot butlers won't be. The training data and loss functions aren't close.