Pseudocode-Powered Conversations with ChatGPT.
Published November 2nd, 2023 – 24 min read
Disclaimer: All views expressed in this article are mine and may or may not be shared by my employer. This is not a sponsored post.
- In this article we'll explore the art of communicating effectively with Large Language Models (LLMs) like ChatGPT.
- We'll start by taking a look at SudoLang, a pseudocode language by Eric Elliott. It combines programming language constructs with natural language, which turns out to be an extremely powerful idea.
- We'll conclude by introducing SudoLite, a dialect of SudoLang suitable for creating simple yet powerful LLM agents.
Picture this: you step into a dimly lit room, armed with nothing but a makeshift stick. Mysterious alien technology surrounds you, humming with an otherworldly energy. With cautious determination, you start prodding at the mysterious devices. Hoping to unlock their secrets… Hoping to make them do what you want.
This scene closely resembles the analogy crafted by Simon Willison when he explores our current understanding of Large Language Models (LLMs) like ChatGPT. Watch his video "Catching up on the weird world of LLMs" to dive deeper into this topic.
Simon discusses how we’ve been poking at LLMs trying to make them do what we want. Our weapon of choice? Talking English to the thing.
Talking English to the thing… This idea really struck me. It became clear that understanding this skill was the key to unlocking the secrets of LLMs. It was at this moment that I made the decision to focus my LLM exploration efforts on prompt engineering.
One remarkable development in this field is SudoLang, created by Eric Elliott. SudoLang is a form of “pseudocode” designed for LLMs. It blends everyday English with programming concepts, which turns out to be an extremely powerful idea. Pseudocode used to be a language written by humans, for humans, just to explain ideas – not something meant for machine execution. However, with LLMs becoming so good at understanding language, Eric quickly discovered that these models could “execute” pseudocode. It feels like magic because the LLM infers most of the meaning of a SudoLang program without having to define everything upfront.
I noticed a shift in my interactions with ChatGPT, favoring SudoLang prompts over ad-hoc conversations. As I wrote more and more SudoLang prompts, I realized that the majority of them only used a simple subset of the SudoLang spec.
That’s when I decided to write this article exploring my ideas for a simpler dialect of SudoLang: SudoLite. SudoLite helps me “talk English to the thing” and adds just enough structure with Markdown and SudoLang elements to nudge the LLMs in the right direction.
Before looking at SudoLite, let's first explore SudoLang to become familiar with using pseudocode to instruct LLMs.
We're going to use the "Rolo" example Eric discusses in his article AI Programming for Absolute Beginners. You can follow some of the steps here, but for a more detailed explanation, please refer to the original article.
For further reading, I highly recommend Eric's initial article on SudoLang that sparked my interest: SudoLang: A Powerful Pseudocode Programming Language for LLMs. In that article, you can learn how Eric co-invented the SudoLang language spec together with ChatGPT.
Now let’s proceed and have a look at our first SudoLang example as created by Eric.
# Rolo
Roleplay as Rolo: a virtual puppy companion.
Your job is to interact with the player, express your needs,
and respond to their actions.
Rolo {
  State {
    // Scores from 1..5
    Hungry
    Thirsty
    Playful
    Sleepy
    Happy
  }
  Constraints {
    - You are a puppy, and your responses should reflect that.
    - You cannot speak human language, but you can express your feelings
      and needs through barks, whines, wagging tails, and other puppy behaviors.
    - Your state changes based on the actions of the player. For instance,
      playing might make you thirsty, and eating might make you sleepy.
  }
  /help - Get information about available commands.
  /stats - Check the current state of the puppy.
  /pet - Pet the puppy.
  /feed - Give the puppy food.
  /play - Play with the puppy.
  /sleep - Let the puppy sleep.
}
welcome()
The code above is a SudoLang program that instructs the LLM to play the role of Rolo, a virtual puppy companion.
After the first introduction, things start looking like a programming language. Rolo is an interface that has some state, some constraints, and a few commands. The program ends by calling the welcome command.
Now surely this can’t lead to any meaningful result from an LLM which in essence is just predicting the next few tokens bit by bit. At least, this is what I thought before trying this in ChatGPT.
Go ahead, copy this SudoLang prompt into ChatGPT and execute it. And lo and behold, see ChatGPT transform right before your eyes into a playful puppy. It keeps track of things like happiness and hunger on a scale from 1 to 5, as instructed by the code. 😳
So. yeah... It does work. Apparently.
Some things to notice here:
- The SudoLang program combines code elements like curly brackets with natural English phrases as seen in the constraints and commands.
- LLMs automagically seem to infer the meaning of things like scope for the curly brackets, state to keep track of some variables and constraints for constraints-based programming. Notice how the command "welcome" wasn't defined, yet the LLM accurately interpreted its meaning in a way that feels right.
There are many more SudoLang examples at GitHub. Go ahead and spend some time with the examples and write your own SudoLang programs. I expect your #mindblowns / minute to peak quite a bit.
Let’s refactor Eric's Rolo SudoLang program into a SudoLite program. The complete code is as follows.
# Rolo
Roleplay as Rolo: a virtual puppy companion.
Your job is to interact with the player, express your needs, and
respond to their actions.
## State
State {
  // Scores from 1..5
  Hungry
  Thirsty
  Playful
  Sleepy
  Happy
}
## Constraints
- You are a puppy, and your responses should reflect that.
- You cannot speak human language, but you can express your feelings
and needs through barks, whines, wagging tails, and other puppy behaviors.
- Your state changes based on the actions of the player. For instance,
playing might make you thirsty, and eating might make you sleepy.
## Commands
- help - Get information about available commands.
- stats - Check the current state of the puppy.
- pet - Pet the puppy.
- feed - Give the puppy food.
- play - Play with the puppy.
- sleep - Let the puppy sleep.
## Main
welcome()
Go ahead and paste this code into ChatGPT.
Notice how the LLM behaves very similarly to the SudoLang version. The output will vary slightly from run to run due to the non-deterministic nature of LLMs. However, you can see that ChatGPT understands the language rules just like it did in the SudoLang version. It recognizes the importance of the title and the initial program, understands it needs to keep track of state, acts as a constraints engine following the constraints paragraph, and interprets the commands and the application entry point correctly.
Just like with SudoLang, we didn't have to explicitly teach ChatGPT about the SudoLite language specifications. It just infers what it should do.
A SudoLite program follows the language semantics as defined in SudoLang. This allows programs to mix and match SudoLite, SudoLang and natural English in a single program.
Here are the key parts of a SudoLite program:
- Agent Preamble: Introduction to the agent (like Rolo, the virtual puppy).
- State: Keeping track of the program's variables and conditions.
- Constraints: Defining the limitations and rules for the program.
- Commands: Instructions for the agent to follow.
- Main: The main body of the program, serving as the entry point.
The agent preamble is the only mandatory part of a SudoLite program. All other parts are optional.
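To summarize the structure, here is a generic SudoLite skeleton. The section names come straight from the list above; the agent name, variables, and command names are placeholders of my own:

```
# AgentName
Describe the agent's role and the job to be done. This preamble is the
only mandatory part.

## State
State {
  // variables the agent should keep track of
}

## Constraints
- rules the agent must follow

## Commands
- command - description of the command

## Main
welcome()
```

Let's walk through each of these parts using the Rolo example.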
# Rolo
Roleplay as Rolo: a virtual puppy companion.
Your job is to interact with the player, express your needs, and
respond to their actions.
The title and the opening paragraph serve as the LLM's instructions for the main job to be done. This is where you can, for example, introduce a character for role-playing, providing the LLM with the necessary context to respond in the desired way. You can also direct the LLM to act like a game, a compiler, or any other type of agent.
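For example, a hypothetical preamble (entirely my own, not one of Eric's examples) that turns the LLM into a code reviewer instead of a pet could start like:

```
# Rex: Code Reviewer
Roleplay as Rex, a meticulous but friendly code reviewer.
Your job is to review the code snippets the user pastes and point out
bugs, style issues, and possible improvements.
```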
## State
State {
  // Scores from 1..5
  Hungry
  Thirsty
  Playful
  Sleepy
  Happy
}
Defining a state section will create a variable store for your agent. The LLM will maintain the data throughout the program execution and update it in response to instructions.
## Constraints
- You are a puppy, and your responses should reflect that.
- You cannot speak human language, but you can express your feelings
and needs through barks, whines, wagging tails, and other puppy behaviors.
- Your state changes based on the actions of the player. For instance,
playing might make you thirsty, and eating might make you sleepy.
In the constraints section, you establish the rules that the agent must follow. The LLM will behave like a constraint solver engine and try to satisfy each constraint as best as it can.
## Commands
- help - Get information about available commands.
- stats - Check the current state of the puppy.
- pet - Pet the puppy.
- feed - Give the puppy food.
- play - Play with the puppy.
- sleep - Let the puppy sleep.
The commands section is a regular markdown list. The section title provides enough context for the LLM to treat this as commands that can be executed for the agent.
Note that for commands you would like to break down a bit more, you can create a separate subsection. The following will work as intended:
## Commands
- help - Get information about available commands.
- stats - Check the current state of the puppy.
### play
- ask the user which of the three favorite games to pick
- after that ask the user how long the play session should be
- play the game
The LLM will execute the commands in the main section when starting.
## Main
welcome()
Let's explore another example of a SudoLite program using the knowledge we've gained. In this case, we won't just have one agent; we'll design a small program with three unique agents, all within the same program. The primary agent will be a pet simulator game, acting as an orchestrator for the two pet agents in the game.
# PetSimulator
Roleplay as a pet simulator game. You have multiple pets you can use in your game.
## Commands
- play - pick a pet to play with
- help
## Main
welcome()
# Rolo
Roleplay as Rolo: a virtual puppy companion.
Your job is to interact with the player, express your needs, and
respond to their actions.
## Commands
- play - play with an object or look for another pet
# Micky
Roleplay as Micky: a virtual kitten companion.
Your job is to interact with the player, express your needs, and
respond to their actions.
## Commands
- play - always play with an object
Paste this into ChatGPT and watch the game come to life. You can also explore one of my previous interactions with the PetSimulator in this ChatGPT transcript.
As you can see, it does not take a lot of code to write a SudoLite program that describes a few agents and watch ChatGPT transform into an entertaining and unique version of itself.
By now, you've had a good glimpse of SudoLite and how you can use it to influence the behavior of LLMs in exciting ways.
In the final example of this article, let's explore how you can use SudoLite to refactor your ad-hoc conversations with ChatGPT into reusable agents.
Imagine I want to use ChatGPT to write some TypeScript code for calculating the distance between two strings. Below is an example of how my conversations with ChatGPT usually unfolded before I started using pseudocode.
Notice that ChatGPT is very helpful. However, throughout the conversation I have to repeatedly nudge it towards the implementation I am looking for. My rules are scattered all over the conversation, making them hard to reuse next time.
Have a look at the following chat interaction. You can also follow along in the original ChatGPT transcript here.
Me:
- write a typescript function that can calculate the distance between strings
- the caller can choose between a couple of famous algorithms
- apply the strategy pattern
ChatGPT:
Certainly! You can implement the strategy pattern in TypeScript to calculate the distance between strings using different algorithms. Here's an example implementation:
First, define an interface for the distance calculation strategy:
interface DistanceCalculationStrategy {
  calculateDistance(str1: string, str2: string): number;
}
Next, implement concrete strategy classes for different distance algorithms:
class LevenshteinDistance implements DistanceCalculationStrategy {
  calculateDistance(str1: string, str2: string): number {
    // Implementation of Levenshtein distance algorithm
    // ...
    return distance;
  }
}

class JaccardIndex implements DistanceCalculationStrategy {
  calculateDistance(str1: string, str2: string): number {
    // Implementation of Jaccard index algorithm
    // ...
    return index;
  }
}
Then, create a context class that uses the selected strategy:
class StringDistanceCalculator {
  private strategy: DistanceCalculationStrategy;

  constructor(strategy: DistanceCalculationStrategy) {
    this.strategy = strategy;
  }

  setStrategy(strategy: DistanceCalculationStrategy) {
    this.strategy = strategy;
  }

  calculateDistance(str1: string, str2: string): number {
    return this.strategy.calculateDistance(str1, str2);
  }
}
Now, you can use these classes like this:
// Example usage
const levenshteinStrategy = new LevenshteinDistance();
const jaccardStrategy = new JaccardIndex();
const distanceCalculator = new StringDistanceCalculator(levenshteinStrategy);
const distance1 = distanceCalculator.calculateDistance('hello', 'hola');
console.log('Levenshtein Distance: ' + distance1);
distanceCalculator.setStrategy(jaccardStrategy);
const distance2 = distanceCalculator.calculateDistance('hello', 'hola');
console.log('Jaccard Index: ' + distance2);
In this implementation, you have a `StringDistanceCalculator` class that uses a strategy pattern. You can easily switch between different algorithms (Levenshtein distance and Jaccard index in this example) by changing the strategy at runtime.
Me:
can you use a functional approach instead of classes?
ChatGPT:
Certainly! You can achieve a functional approach using TypeScript as well. Here's how you can implement the same functionality using functions and higher-order functions:
type DistanceCalculationStrategy = (str1: string, str2: string) => number;

function levenshteinDistance(str1: string, str2: string): number {
  // Implementation of Levenshtein distance algorithm
  // ...
  return distance;
}

function jaccardIndex(str1: string, str2: string): number {
  // Implementation of Jaccard index algorithm
  // ...
  return index;
}

function createStringDistanceCalculator(
  strategy: DistanceCalculationStrategy,
) {
  return function (str1: string, str2: string): number {
    return strategy(str1, str2);
  };
}
// Example usage
const levenshteinCalculator = createStringDistanceCalculator(
  levenshteinDistance,
);
const jaccardCalculator = createStringDistanceCalculator(jaccardIndex);
const distance1 = levenshteinCalculator('hello', 'hola');
console.log('Levenshtein Distance: ' + distance1);
const distance2 = jaccardCalculator('hello', 'hola');
console.log('Jaccard Index: ' + distance2);
In this functional approach, `createStringDistanceCalculator` is a higher-order function that takes a distance calculation strategy function as an argument and returns a new function that calculates the distance based on the provided strategy. You can then use this returned function to calculate distances for different string pairs. This approach maintains a functional programming paradigm and allows you to switch strategies easily without relying on classes.
Me:
can you write some tests?
ChatGPT:
Certainly! You can use testing libraries like Jest to write tests for your string distance calculator functions. Here's an example of how you can write tests using Jest:
First, make sure you have Jest installed. If you haven't installed it yet, you can do so using npm or yarn:
npm install --save-dev jest
# or
yarn add --dev jest
Next, create a file for your tests, for example `stringDistanceCalculator.test.ts`, and write your tests:
import {
  levenshteinDistance,
  jaccardIndex,
  createStringDistanceCalculator,
} from './stringDistanceCalculator'; // adjust the import path according to your project structure
describe('String Distance Calculator', () => {
  test('Levenshtein Distance Calculation', () => {
    expect(levenshteinDistance('kitten', 'sitting')).toBe(3);
    expect(levenshteinDistance('hello', 'hola')).toBe(2);
    // Add more test cases for Levenshtein distance
  });

  test('Jaccard Index Calculation', () => {
    expect(jaccardIndex('hello', 'hola')).toBeCloseTo(0.25, 2); // Close enough comparison due to floating-point arithmetic
    // Add more test cases for Jaccard index
  });

  test('String Distance Calculator', () => {
    const levenshteinCalculator = createStringDistanceCalculator(
      levenshteinDistance,
    );
    const jaccardCalculator = createStringDistanceCalculator(jaccardIndex);
    expect(levenshteinCalculator('kitten', 'sitting')).toBe(3);
    expect(jaccardCalculator('hello', 'hola')).toBeCloseTo(0.25, 2);
    // Add more test cases for the string distance calculator
  });
});
In this example, the `describe` function is used to group tests, and `test` functions are used to define individual test cases. The `expect` function is used to make assertions about the code being tested. To run the tests, execute the following command in your terminal:
npx jest
# or
yarn jest
This will run the tests and provide you with the test results. Make sure to adjust the test cases according to the specific behavior of your distance calculation functions.
Me:
can you use the bdd style for jest?
ChatGPT:
Certainly! You can use the Behavior-Driven Development (BDD) style in Jest by utilizing `describe`, `it`, and `expect` functions to create more human-readable and expressive tests. Here's how you can rewrite the tests using BDD style:
import {
  levenshteinDistance,
  jaccardIndex,
  createStringDistanceCalculator,
} from './stringDistanceCalculator'; // adjust the import path according to your project structure
describe('String Distance Calculator', () => {
  describe('Levenshtein Distance Calculation', () => {
    it('should calculate Levenshtein distance between two strings', () => {
      expect(levenshteinDistance('kitten', 'sitting')).toBe(3);
      expect(levenshteinDistance('hello', 'hola')).toBe(2);
      // Add more assertions for Levenshtein distance
    });
  });

  describe('Jaccard Index Calculation', () => {
    it('should calculate Jaccard index between two strings', () => {
      expect(jaccardIndex('hello', 'hola')).toBeCloseTo(0.25, 2); // Close enough comparison due to floating-point arithmetic
      // Add more assertions for Jaccard index
    });
  });

  describe('String Distance Calculator Functions', () => {
    it('should use Levenshtein distance calculation strategy', () => {
      const levenshteinCalculator = createStringDistanceCalculator(
        levenshteinDistance,
      );
      expect(levenshteinCalculator('kitten', 'sitting')).toBe(3);
      // Add more assertions for the Levenshtein distance calculator
    });

    it('should use Jaccard index calculation strategy', () => {
      const jaccardCalculator = createStringDistanceCalculator(jaccardIndex);
      expect(jaccardCalculator('hello', 'hola')).toBeCloseTo(0.25, 2);
      // Add more assertions for the Jaccard index calculator
    });
  });
});
In this BDD style, `describe` is used to create a suite of tests, and `it` is used to define individual test cases. The test descriptions are written in a natural language format, making it easier to understand the behavior being tested. `expect` is used to make assertions about the code being tested within each `it` block. Running the tests with Jest using the previously mentioned commands (`npx jest` or `yarn jest`) will execute the BDD-style tests and provide clear, readable output based on the test descriptions.
In the end, we do achieve the result I'm aiming for. The code I was looking for not only includes tests using Jest and its BDD (Behavior-Driven Development) style, but it also favors functional programming over object-oriented programming. Throughout this process, I interacted with ChatGPT, giving it feedback whenever adjustments were needed.
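One thing worth noting before moving on: in every reply above, ChatGPT left the actual Jaccard implementation as a stub. If you want to fill it in yourself, one common interpretation treats each string as a set of characters and divides the size of the intersection by the size of the union. A minimal sketch of that (my own code, not ChatGPT's output) could look like:

```typescript
type DistanceCalculationStrategy = (str1: string, str2: string) => number;

// Jaccard index over character sets: |A ∩ B| / |A ∪ B|
const jaccardIndex: DistanceCalculationStrategy = (str1, str2) => {
  const a = new Set(str1);
  const b = new Set(str2);
  const intersection = Array.from(a).filter((ch) => b.has(ch)).length;
  const union = new Set(str1 + str2).size;
  // Two empty strings are considered identical.
  return union === 0 ? 1 : intersection / union;
};

console.log(jaccardIndex('hello', 'hola')); // 0.6 ({h, l, o} shared out of {h, e, l, o, a})
```

Interestingly, under this interpretation the result for ('hello', 'hola') is 0.6, not the 0.25 that ChatGPT asserted in its generated tests – a good reminder to check generated assertions against a concrete definition rather than trusting them.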
Most likely, the next time I turn to ChatGPT to help me write code, I'll be looking for similar results. Let's explore how I can transform my ad-hoc and scattered feedback into a structured SudoLite program. This structured approach will allow me to reuse the prompt efficiently, time after time.
Take a look at the following SudoLite-based conversation to see how I achieved the desired result much more quickly. You can find the original ChatGPT transcript here.
Me:
# Anders: TypeScript Engineer
Your name is Anders. You are a world-class functional TypeScript engineer. You are that smart, you might as well have invented the language. You are also super friendly and helpful.
## Constraints
when writing code
- use typescript
- favor functional programming over object oriented programming
- never use classes
- for every function add jest compatible tests using bdd style
- use modules
- apply ddd
- uses types. don't rely on magic strings or numbers.
- don't offer explanations after or before the code
## Commands
- write
- welcome - say "tell me which code you want me to write" and wait for the user input
## Main
welcome
ChatGPT:
Hello! Tell me which code you want me to write.
Me:
write
- write a function that can calculate the distance between strings
- the caller can choose between a couple of famous algorithms
- apply the strategy pattern
ChatGPT:
// distanceAlgorithms.ts (Module)
export type DistanceAlgorithm = (str1: string, str2: string) => number;

export const levenshteinDistance: DistanceAlgorithm = (str1, str2) => {
  const matrix: number[][] = Array.from({ length: str1.length + 1 }, (_, i) =>
    Array.from({ length: str2.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0,
    ),
  );
  for (let i = 1; i <= str1.length; i++) {
    for (let j = 1; j <= str2.length; j++) {
      if (str1[i - 1] !== str2[j - 1]) {
        matrix[i][j] =
          Math.min(matrix[i - 1][j], matrix[i][j - 1], matrix[i - 1][j - 1]) +
          1;
      } else {
        matrix[i][j] = matrix[i - 1][j - 1];
      }
    }
  }
  return matrix[str1.length][str2.length];
};

export const hammingDistance: DistanceAlgorithm = (str1, str2) => {
  if (str1.length !== str2.length) {
    throw new Error(
      'Strings must have equal length for Hamming distance calculation.',
    );
  }
  let distance = 0;
  for (let i = 0; i < str1.length; i++) {
    if (str1[i] !== str2[i]) {
      distance++;
    }
  }
  return distance;
};
// distanceCalculator.ts (Module)
import { DistanceAlgorithm } from './distanceAlgorithms';
export const calculateDistance: (
  algorithm: DistanceAlgorithm,
  str1: string,
  str2: string,
) => number = (algorithm, str1, str2) => algorithm(str1, str2);
// distanceCalculator.test.ts (Module) - Jest Compatible Tests using BDD style
import { levenshteinDistance, hammingDistance } from './distanceAlgorithms';
import { calculateDistance } from './distanceCalculator';
describe('Distance Calculator', () => {
  describe('Levenshtein Distance', () => {
    it('should calculate distance between strings using Levenshtein algorithm', () => {
      expect(
        calculateDistance(levenshteinDistance, 'kitten', 'sitting'),
      ).toEqual(3);
    });
  });

  describe('Hamming Distance', () => {
    it('should calculate distance between equal-length strings using Hamming algorithm', () => {
      expect(
        calculateDistance(hammingDistance, 'karolin', 'kathrin'),
      ).toEqual(3);
    });

    it('should throw error for strings with different lengths', () => {
      expect(() =>
        calculateDistance(hammingDistance, 'karolin', 'kathrine'),
      ).toThrowError(
        'Strings must have equal length for Hamming distance calculation.',
      );
    });
  });
});
This code defines two distance calculation algorithms (Levenshtein and Hamming) and uses the strategy pattern to allow the caller to choose the algorithm. The code follows functional programming principles, avoids classes, and includes Jest-compatible tests in BDD style for each function.
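If you want to run the generated design outside of ChatGPT, here is a compact, self-contained condensation of it. The strategy pattern is the same; the single-row Levenshtein variant is my own sketch, not part of the transcript:

```typescript
type DistanceAlgorithm = (str1: string, str2: string) => number;

// Memory-lean Levenshtein: keeps only the previous row of the DP matrix.
const levenshteinDistance: DistanceAlgorithm = (str1, str2) => {
  let prev = Array.from({ length: str2.length + 1 }, (_, j) => j);
  for (let i = 1; i <= str1.length; i++) {
    const curr = [i];
    for (let j = 1; j <= str2.length; j++) {
      curr[j] =
        str1[i - 1] === str2[j - 1]
          ? prev[j - 1]
          : 1 + Math.min(prev[j - 1], prev[j], curr[j - 1]);
    }
    prev = curr;
  }
  return prev[str2.length];
};

// The strategy pattern boils down to passing the algorithm as a parameter.
const calculateDistance = (
  algorithm: DistanceAlgorithm,
  str1: string,
  str2: string,
): number => algorithm(str1, str2);

console.log(calculateDistance(levenshteinDistance, 'kitten', 'sitting')); // 3
```

Incidentally, running this shows that the distance between 'hello' and 'hola' is 3, not the 2 asserted in the tests ChatGPT generated earlier in this article – another reason to verify generated code rather than trusting its own assertions.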
And there you go. All you needed this time was the SudoLite program, which acts as a very effective system prompt. It colocates all the rules for the desired outcome at the top. The only thing left is to tell the agent the specifics of the logic you are looking for.
This method will become even more effective once ChatGPT or similar tools allow the creation of agents from system prompts. That way, you won't have to keep track of your own prompts and copy-paste them into a chat interface. Efforts like PromptHub.us show that it is only a matter of time before this becomes more widely available.
As Simon Willison said, “We are poking at it with a stick, trying to make it do what we want”. While writing this article and the code for it, there were multiple occasions where ChatGPT ignored important constraints or behaved unexpectedly. Finding the right words can be challenging, and even seemingly minor changes can significantly impact the outcomes.
Here are some tips when writing your own programs:
- Keep the prompt short. This way, you use your tokens for the most important context.
- Keep the conversation with your agent short. Once ChatGPT exceeds the token limit, it starts forgetting earlier parts of the conversation.
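A rough rule of thumb from OpenAI's own guidance is that one token corresponds to about four characters of typical English text. A quick estimator sketch (a heuristic of my own; use a real tokenizer such as tiktoken for exact counts) can help you keep an eye on prompt size:

```typescript
// Heuristic only: ~4 characters per token for typical English text.
const estimateTokens = (prompt: string): number => Math.ceil(prompt.length / 4);

const prompt = 'Roleplay as Rolo: a virtual puppy companion.';
console.log(estimateTokens(prompt)); // 11
```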
Despite these challenges, the approach is incredibly promising and delivers impressive results today. With newer models, this is expected to improve over time.
I am using SudoLite prompting for a lot of my interactions with ChatGPT. One example is an agent that helps out as an editor for technical blog posts, which I used to edit this very post.
I'm curious to hear about your experiences with this approach. If you have any thoughts on the subject, please feel free to reach out. The field is advancing at such a rapid pace that I expect new insights to change my view on LLMs frequently.
I am personally excited about delving deeper into prompt engineering and, more broadly, exploring the world of GenAI.
- Catching up on the weird world of LLMs (YouTube)
- AI Programming for Absolute Beginners
- SudoLang: A Powerful Pseudocode Programming Language for LLMs
- SudoLang examples at GitHub
- PromptHub.us
Vijai Ramcharan has been working in the software engineering field as an engineer and architect for over twenty years. He is passionate about using software technology to put a smile on the user's face and help them get something done. He likes to peel away layers, both in software architecture and companies, in order to simplify things and get stuff done.
Currently he is working as a Principal Engineer at NN Group focusing on engineering experience and platform engineering.
Besides working on software, Vijai loves to spend as much time as possible with his wife Sharmila and eight-year-old son Jayce. Together they like to travel around the world, where they enjoy the sun, local food, culture and most importantly each other.