
Text-based adventure game

Cloudflare Workers
Workers AI
Astro
TypeScript

An adventure game running on Cloudflare Workers, with stories generated by Workers AI.

Cloudflare recently announced Workers AI, an API that allows users to run machine learning models on the edge. I decided to try it out by building a text-based adventure game, where the story is generated by Workers AI.

The idea

You may remember the ComputerCraft mod for Minecraft, which had a built-in text-based adventure game. My intention was to create something similar - a small game that shows off the capabilities of the platform.

The game

The story is generated by Llama 2, a text generation model available on Workers AI.

The challenges

Stories were boring and repetitive. I solved this by seeding the context with a random number, which seems to have helped a lot - stories are now a lot more interesting and varied.
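Here's roughly how that seeding works - a reconstruction of the prime helper that the endpoint further down calls. The prompt wording is made up, not the game's actual prompt:

import { Ai } from '@cloudflare/ai'

// Sketch of a `prime`-style helper: it starts a fresh conversation and bakes a
// random number into the system prompt so each playthrough goes somewhere new.
// (Illustrative reconstruction - the real prompt wording differs.)
async function prime(_ai: Ai) {
  const seed = Math.floor(Math.random() * 1_000_000)

  return [
    {
      role: 'system',
      content:
        `You are the narrator of a text-based adventure game. ` +
        `Use the number ${seed} as inspiration for the setting, characters and plot. ` +
        `Describe the opening scene, then wait for the player's next action.`,
    },
  ]
}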

Here’s an example story:

(Screenshot: an example story generated by the game)

Text-based games make ugly websites. Thanks to the NES.css library, I was able to improve the looks to the point where the game is actually pleasant to look at.

How it works

Every time the frontend needs more of the story, it sends a request to the Pages Functions deployment. The entire conversation history is sent with every request, so no database is currently needed.
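On the client side that's just a fetch call. Here's a simplified sketch - the /api/story path and the surrounding state handling are assumptions, not the actual component code:

// Simplified sketch of the client side: send the whole conversation so far,
// get back the updated conversation including the model's next reply.
// (The /api/story path is an assumption.)
type Message = { role: 'system' | 'user' | 'assistant'; content: string }

async function advanceStory(history: Message[], playerAction?: string): Promise<Message[]> {
  const messages = playerAction
    ? [...history, { role: 'user' as const, content: playerAction }]
    : history

  const response = await fetch('/api/story', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(messages),
  })

  // The endpoint responds with the full, updated message list
  return response.json()
}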

This is the code for the Astro API endpoint:

import type { APIRoute } from 'astro'
import { Ai } from '@cloudflare/ai'

export const POST: APIRoute = async ({ request, locals }) => {
  // Astro's dev server has no Cloudflare runtime, so return an empty story
  if (!locals.runtime) {
    return Response.json([])
  }

  const ai = new Ai(locals.runtime.env.AI)

  // Transform input into the format the model expects;
  // an empty history means a new game, so prime the conversation first
  const input = await request.json()
  const messages = input.length === 0 ? await prime(ai) : input

  // Run the inference
  const { response } = await ai.run('@cf/meta/llama-2-7b-chat-int8', { messages })

  // Trim any "Great! Let's begin." preamble from the model's response
  return Response.json([
    ...messages,
    { role: 'assistant', content: response.replace(/^Great[!,] .*?\n/s, '').trim() },
  ])
}

The Function then sends the request to a GPU running somewhere nearby, which runs the LLM.

And that’s actually very simple! It took me about 10 minutes to get the first response from Llama¹. Cloudflare has tutorials on how to use the Llama 2 model, so here’s a link to their guide, just in case something changes (it’s in beta, after all).

Future improvements

I plan to iterate on the game, adding more features and improving the story generation.

Currently, the game is limited by the length of the model’s input - the entire history has to fit into the context window. Vectorize to the rescue!
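I haven’t implemented this yet, but the rough plan is to embed past story fragments, store them in a Vectorize index, and pull back only the relevant ones when building the prompt. A minimal sketch of what that lookup could look like - the function name, the index parameter and the embedding model choice are all assumptions:

import { Ai } from '@cloudflare/ai'

// Sketch of the idea (not built yet): store past story fragments as embeddings in
// Vectorize and, for each new player action, look up only the most similar ones to
// splice back into the prompt instead of resending the entire history.
async function findRelevantFragments(ai: Ai, index: VectorizeIndex, playerAction: string) {
  // Embed the player's latest action with a Workers AI embedding model
  const { data } = await ai.run('@cf/baai/bge-base-en-v1.5', { text: [playerAction] })

  // Ask the index for the closest stored fragments; each match can then be
  // mapped back to the piece of story it was created from
  return index.query(data[0], { topK: 3 })
}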

The UI is also imperfect, and while I’m no designer, there is definitely some low-hanging fruit to be picked.

The interaction with the model is also not perfect. The heuristics that decide when to display certain parts of the interface can definitely be improved.

Libraries used

NES.css, Astro, and the @cloudflare/ai client.

Footnotes

  1. Pages Functions don’t currently have a remote development mode, and Workers AI doesn’t support local mode yet, so I had to deploy the project and test in production. Workers AI is in beta, so that’s fine, especially as deployments take no more than 20 seconds from the start of the build to “Deployment complete”.