ClientAI Tutorial: Building an AI Dungeon Master¶
In this tutorial, we'll walk through the process of creating an AI-powered Dungeon Master using the ClientAI package. We'll explain each concept in detail and build our game step-by-step, providing context for every decision we make, both technical and gameplay-related.
Table of Contents¶
- Introduction
- Setting Up the Project
- Creating the Project Structure
- Creating the Game Structure
- Integrating Multiple AI Providers
- Developing the Enhanced AI Dungeon Master
- Main Script that Runs the Game
- Running the Game
- Conclusion and Further Improvements
1. Introduction¶
ClientAI is a Python package that provides a unified interface for interacting with multiple AI providers. In this tutorial, we'll use ClientAI to create an AI Dungeon Master that can generate story elements, NPC dialogues, and dynamic environments using different AI models.
Our AI Dungeon Master will be a text-based role-playing game (RPG) where the game's content is dynamically generated by AI. This approach allows for infinite replayability and unique experiences for each player.
We'll focus on explaining both technical decisions (such as class structures and AI interactions) and gameplay decisions (like character creation and game mechanics).
The final result is available in this GitHub repo.
2. Setting Up the Project¶
First, let's set up our project and install the necessary dependencies.
- Create a new directory for your project:
- Install ClientAI and its dependencies:
If you want to use Poetry, you may skip this part.
This command installs ClientAI with support for all providers. If you only need specific providers, you can install them individually (e.g., pip install clientai[openai] for just OpenAI support).
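The install command for all-provider support is presumably along these lines (check the ClientAI docs for the exact extras names):

```shell
# Install ClientAI with support for all providers
pip install "clientai[all]"
```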
- Install additional dependencies:
If you want to use Poetry, you may also skip this part.
We'll need some additional packages for our project:
- requests: for making HTTP requests to check if the local AI servers are running.
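The corresponding install command would be:

```shell
# HTTP client used to poll the local Ollama server
pip install requests
```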
Install Ollama:
Ollama is a local AI model server that we'll use to run the Llama 3 model. Follow these steps to install Ollama:
- For macOS or Linux:
- For Windows: Download the installer from the Ollama GitHub releases page and follow the installation instructions.
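On Linux, Ollama documents a one-line install script (on macOS, downloading the app from ollama.com is the usual route); as a sketch:

```shell
# Ollama's documented install script for Linux
curl -fsSL https://ollama.com/install.sh | sh
```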
- Pull the Llama 3 model from Ollama:
After installing Ollama, you need to download the Llama 3 model. Run the following command:
This command will download and set up the Llama 3 model for use with Ollama. The download might take some time depending on your internet connection.
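The pull command matching the "llama3" model name used later in our code is:

```shell
# Download the Llama 3 model for local use with Ollama
ollama pull llama3
```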
These imports will be used throughout our project:
- random: for generating random numbers and making random choices.
- subprocess: for starting and managing subprocesses like local AI servers.
- time: for adding delays and managing timeouts.
- requests: for making HTTP requests to check server availability.
- logging: for logging information and errors.
- ClientAI: the main class from the ClientAI package that we'll use to interact with AI providers.
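Collected as an import block, this looks like:

```python
# Standard library imports
import logging
import random
import subprocess
import time

# Third-party imports
import requests
from clientai import ClientAI
```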
2.5 Creating the Project Structure¶
Before we dive into the code, let's set up a proper project structure. This will help us organize our code and make it easier to maintain and expand in the future.
- Create the following directory structure:
clientai_dungeon_master/
├── pyproject.toml
├── README.md
├── .gitignore
├── .env
└── ai_dungeon_master/
├── __init__.py
├── main.py
├── game/
│ ├── __init__.py
│ ├── character.py
│ ├── game_state.py
│ └── dungeon_master.py
├── ai/
│ ├── __init__.py
│ ├── ai_providers.py
│ └── ollama_server.py
└── utils/
├── __init__.py
└── text_utils.py
- Create a pyproject.toml file in the root directory with the following content (if you're using pip directly, you may skip this part):
[tool.poetry]
name = "clientai-dungeon-master"
version = "0.1.0"
description = "An AI-powered dungeon master for text-based RPG adventures"
authors = ["Your Name <your.email@example.com>"]
readme = "README.md"
packages = [{include = "clientai_dungeon_master"}]
[tool.poetry.dependencies]
python = "^3.11"
clientai = "^0.1.2"
requests = "^2.32.3"
python-decouple = "^3.8"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
and run
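To install the declared dependencies with Poetry:

```shell
# Resolve and install everything listed in pyproject.toml
poetry install
```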
- Create a .gitignore file in the root directory with the following content:
# Python
__pycache__/
*.py[cod]
*.pyo
*.pyd
.Python
env/
venv/
ENV/
# Poetry
.venv/
dist/
# Environment variables
.env
# IDEs
.vscode/
.idea/
# Logs
*.log
# OS generated files
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
- Create a .env file in the root directory to store your API keys:
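The .env file's contents match the variable names read with config() later in the tutorial:

```env
OPENAI_API_KEY=your_openai_api_key_here
REPLICATE_API_KEY=your_replicate_api_key_here
```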
Remember to replace your_openai_api_key_here and your_replicate_api_key_here with your actual API keys.
- Move the relevant code into the appropriate files based on the new structure.
This structure separates concerns, making the code more modular and easier to maintain. It also sets up the project for potential future expansion, such as adding more game features or integrating additional AI providers.
3. Creating the Game Structure¶
Before integrating AI, we'll create the basic structure of our game. This includes classes to represent the character, game state, and AI providers.
Character Class¶
The Character class represents the player's character in the game. It stores essential character information like name, race, class, background story, and stats.
class Character:
def __init__(self, name: str, race: str, class_type: str, background: str, stats: dict):
self.name = name
self.race = race
self.class_type = class_type
self.background = background
self.stats = stats
def __str__(self):
return f"Name: {self.name}, Race: {self.race}, Class: {self.class_type}, Background: {self.background}, Stats: {self.stats}"
Here we define a character with attributes like a name, race, class, background and stats (like Strength, Intelligence, Wisdom). This is really simple, but will be enough to customize what happens in the story.
We'll also define the __str__ method to be able to print the character's details easily.
GameState Class¶
The GameState class keeps track of the game's current state, including the character's status, location, inventory, health, experience, and quests.
from typing import Optional
from .character import Character
class GameState:
def __init__(self, character: Character):
self.character = character
self.location = "entrance"
self.inventory = []
self.health = 100
self.experience = 0
self.quests = []
def update(self, location: Optional[str] = None, item: Optional[str] = None, health_change: int = 0, exp_gain: int = 0, quest: Optional[str] = None):
if location:
self.location = location
if item:
self.inventory.append(item)
self.health = max(0, min(100, self.health + health_change))
self.experience += exp_gain
if quest:
self.quests.append(quest)
def __str__(self):
return f"{str(self.character)}\nLocation: {self.location}, Health: {self.health}, XP: {self.experience}, Inventory: {', '.join(self.inventory)}, Quests: {', '.join(self.quests)}"
We track the game state explicitly to keep the experience consistent; we can't expect the LLM to maintain it reliably on its own. Instead, we pass the game state to the model as a guide when generating content.
The update method makes it easy to modify the game state: health is clamped between 0 and 100, and the inventory and quest lists add more depth to the game.
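To make the clamping behavior concrete, here's a quick standalone check (class bodies copied from above, trimmed to what the example needs):

```python
from typing import Optional

class Character:
    # Minimal stand-in for the Character class defined earlier.
    def __init__(self, name: str, race: str, class_type: str, background: str, stats: dict):
        self.name = name
        self.race = race
        self.class_type = class_type
        self.background = background
        self.stats = stats

class GameState:
    # Same update logic as the GameState class above.
    def __init__(self, character: Character):
        self.character = character
        self.location = "entrance"
        self.inventory = []
        self.health = 100
        self.experience = 0
        self.quests = []

    def update(self, location: Optional[str] = None, item: Optional[str] = None,
               health_change: int = 0, exp_gain: int = 0, quest: Optional[str] = None):
        if location:
            self.location = location
        if item:
            self.inventory.append(item)
        # Health is clamped to the 0-100 range
        self.health = max(0, min(100, self.health + health_change))
        self.experience += exp_gain
        if quest:
            self.quests.append(quest)

hero = Character("Arin", "Elf", "Mage", "A quiet scholar.", {"Strength": 8})
state = GameState(hero)
# A -120 hit can't push health below 0
state.update(location="tavern", item="healing potion", health_change=-120, exp_gain=5)
print(state.location, state.health, state.inventory)  # → tavern 0 ['healing potion']
```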
4. Integrating Multiple AI Providers¶
We'll use ClientAI to create a class that manages interactions with different AI providers. This abstraction allows us to switch between providers seamlessly.
AIProviders Class¶
from typing import List
from clientai import ClientAI
class AIProviders:
def __init__(self):
self.openai = ClientAI('openai', api_key=openai_token)
self.replicate = ClientAI('replicate', api_key=replicate_token)
self.ollama = ClientAI('ollama', host="http://localhost:11434")
def chat(
self,
messages: List[dict],
provider: str = 'openai',
openai_model="gpt-4o-mini",
replicate_model="meta/meta-llama-3-8b-instruct",
ollama_model="llama3",
):
if provider == 'openai':
return self.openai.chat(messages, model=openai_model, stream=True)
elif provider == 'replicate':
return self.replicate.chat(messages, model=replicate_model, stream=True)
elif provider == 'ollama':
return self.ollama.chat(messages, model=ollama_model, stream=True)
else:
raise ValueError(f"Unknown provider: {provider}")
We create instances of ClientAI for each provider with the necessary API keys or host information, then abstract the chat method to allow for easy switching between AI providers.
We use ClientAI to work with multiple models from different providers, since we want to find the best model for each task while balancing performance and cost.
Managing API Keys with python-decouple and a .env File¶
To securely handle your API keys without exposing them in your codebase, you can use the python-decouple package and store your keys in a .env file. This approach keeps sensitive information out of your code and version control.
- Install python-decouple (you may skip this if you used Poetry):
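With pip, that would be:

```shell
# Reads settings from a .env file without hard-coding secrets
pip install python-decouple
```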
- Create a .env File: In your project's root directory, make sure the .env file contains your API keys:
Replace your_openai_api_key_here and your_replicate_api_key_here with your actual API keys.
- Ensure .env is added to .gitignore: To prevent the .env file from being tracked by version control, ensure it is in your .gitignore file:
This ensures your API keys remain private and aren't pushed to repositories like GitHub.
- Access the API Keys in Your Code: Import config from decouple and retrieve the API keys:
from decouple import config
openai_token = config('OPENAI_API_KEY')
replicate_token = config('REPLICATE_API_KEY')
Now, you can use these variables when initializing your AI providers.
- Update the AIProviders Class:
ai_dungeon_master/ai/ai_providers.py
from typing import List
from clientai import ClientAI
from decouple import config

openai_token = config('OPENAI_API_KEY')
replicate_token = config('REPLICATE_API_KEY')

class AIProviders:
    def __init__(self):
        self.openai = ClientAI('openai', api_key=openai_token)
        self.replicate = ClientAI('replicate', api_key=replicate_token)
        self.ollama = ClientAI('ollama', host="http://localhost:11434")
    ...
Managing AI Servers¶
We need to ensure that local AI servers (like Ollama) are running before the game starts, so let's define a function to start the Ollama server.
import subprocess
import time
import requests
import logging
logging.basicConfig(level=logging.INFO)
def start_ollama_server(timeout: int = 30, check_interval: float = 1.0):
"""
Start the Ollama server and wait for it to be ready.
"""
logging.info("Starting Ollama server...")
try:
process = subprocess.Popen(
['ollama', 'serve'],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
text=True
)
except subprocess.SubprocessError as e:
logging.error(f"Failed to start Ollama process: {e}")
raise
start_time = time.time()
while time.time() - start_time < timeout:
try:
response = requests.get('http://localhost:11434', timeout=5)
if response.status_code == 200:
logging.info("Ollama server is ready.")
return process
except requests.ConnectionError:
pass
except requests.RequestException as e:
logging.error(f"Unexpected error when checking Ollama server: {e}")
process.terminate()
raise
if process.poll() is not None:
stdout, stderr = process.communicate()
logging.error(f"Ollama process terminated unexpectedly. stdout: {stdout}, stderr: {stderr}")
raise subprocess.SubprocessError("Ollama process terminated unexpectedly")
time.sleep(check_interval)
process.terminate()
raise TimeoutError(f"Ollama server did not start within {timeout} seconds")
By managing the server startup within the code, we reduce the setup burden on the player.
5. Developing the Enhanced AI Dungeon Master¶
Now we'll develop the main class that controls the game logic and interactions with AI models.
EnhancedAIDungeonMaster Class¶
from typing import Tuple, List
import random
import time
from ai.ai_providers import AIProviders
from utils.text_utils import print_separator
from game.character import Character
from game.game_state import GameState
class EnhancedAIDungeonMaster:
def __init__(self):
self.ai = AIProviders()
self.conversation_history = []
self.game_state = None
# Methods will be added here...
Creating the Character¶
We need a method to create the player's character. We'll use AI to do this automatically for us:
class EnhancedAIDungeonMaster:
...
def create_character(self):
print("Let's create your character!")
name = input("What is your character's name? ")
# We start by defining a prompt
character_prompt = f"""
Create a character for a fantasy RPG with the following details:
Name: {name}
Please provide:
1. A suitable race (e.g., Human, Elf, Dwarf, etc.)
2. A class (e.g., Warrior, Mage, Rogue, etc.)
3. A brief background story (2-3 sentences)
4. Basic stats (Strength, Dexterity, Constitution, Intelligence, Wisdom, Charisma) on a scale of 1-20
Format the response as follows:
Race: [race]
Class: [class]
Background: [background story]
Stats:
- Strength: [value]
- Dexterity: [value]
- Constitution: [value]
- Intelligence: [value]
- Wisdom: [value]
- Charisma: [value]
"""
# And we add this prompt to our chat history
self.add_to_history("user", character_prompt)
character_info = self.print_stream(self.ai.chat(self.conversation_history, provider='openai'))
# Parse the character info
lines = character_info.strip().split('\n')
race = class_type = background = ""
stats = {}
for line in lines:
if line.startswith("Race:"):
race = line.split(": ", 1)[1].strip()
elif line.startswith("Class:"):
class_type = line.split(": ", 1)[1].strip()
elif line.startswith("Background:"):
background = line.split(": ", 1)[1].strip()
elif ":" in line and not line.startswith("Stats:"):
key, value = line.split(":", 1)
key = key.strip("- ")
try:
stats[key] = int(value.strip())
except ValueError:
stats[key] = random.randint(1, 20)
# Just in case, let's ensure the player has stats:
# if any stat is missing, assign a random value
for stat in ["Strength", "Dexterity", "Constitution", "Intelligence", "Wisdom", "Charisma"]:
if stat not in stats:
stats[stat] = random.randint(1, 20)
# And let's also ensure other required attributes are assigned
# If race, class, or background is empty, assign default values
race = race or "Human"
class_type = class_type or "Adventurer"
background = background or "A mysterious traveler with an unknown past."
return Character(name, race, class_type, background, stats)
We'll use GPT-4o mini to generate the initial details we need (race, class, background, and stats), then extract that information from the generated content while handling errors.
Note that since we leave this generation to the LLM, the character's name will influence the attributes. If you need more consistently random generation, generate the stats in Python code and just pass them to the prompt.
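The parsing step can be exercised on its own. Here's the same parsing logic as create_character, extracted into a function and run against a hypothetical model response in the requested format:

```python
import random

def parse_character_info(character_info: str):
    # Same line-by-line parsing as in create_character above.
    lines = character_info.strip().split('\n')
    race = class_type = background = ""
    stats = {}
    for line in lines:
        if line.startswith("Race:"):
            race = line.split(": ", 1)[1].strip()
        elif line.startswith("Class:"):
            class_type = line.split(": ", 1)[1].strip()
        elif line.startswith("Background:"):
            background = line.split(": ", 1)[1].strip()
        elif ":" in line and not line.startswith("Stats:"):
            key, value = line.split(":", 1)
            key = key.strip("- ")
            try:
                stats[key] = int(value.strip())
            except ValueError:
                # Fall back to a random value if the model returned non-numeric text
                stats[key] = random.randint(1, 20)
    return race, class_type, background, stats

# A hypothetical response in the format the prompt asks for:
sample = """Race: Elf
Class: Rogue
Background: Raised among thieves in the city of Eldoria.
Stats:
- Strength: 10
- Dexterity: 18
- Constitution: 12
- Intelligence: 14
- Wisdom: 11
- Charisma: 13"""

race, class_type, background, stats = parse_character_info(sample)
print(race, class_type, stats["Dexterity"])  # → Elf Rogue 18
```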
Maintaining Conversation History¶
To provide context to the AI, we maintain a conversation history.
class EnhancedAIDungeonMaster:
...
def add_to_history(self, role: str, content: str):
if not self.conversation_history or self.conversation_history[-1]['content'] != content:
self.conversation_history.append({"role": role, "content": content})
if len(self.conversation_history) > 10:
self.conversation_history = self.conversation_history[-10:]
Here we ensure we don't add the same message twice, and we limit the conversation history to the last 10 messages to prevent exceeding token limits.
Generating the Environment¶
Next, let's create detailed environments to enhance the immersion.
class EnhancedAIDungeonMaster:
...
def generate_environment(self):
if not hasattr(self, 'current_environment'):
prompt = f"""
The character {self.game_state.character.name} is a {self.game_state.character.race} {self.game_state.character.class_type}
currently in the {self.game_state.location}.
Describe the current environment in detail, focusing on:
1. The physical setting and atmosphere
2. Any notable NPCs present
3. Interesting objects or features
Do not create a new character or change any existing character details.
Do not include any actions or dialogue for {self.game_state.character.name}.
End your description with one of these tags if appropriate:
[INTERACT_OPPORTUNITY] - if there's a chance for the player to interact with someone or something
[QUEST_OPPORTUNITY] - if there's a potential quest or mission available
"""
self.add_to_history("user", prompt)
self.current_environment = self.ai.chat(self.conversation_history, provider='openai')
return self.current_environment
Here we instruct the AI to provide specific details, and we use tags to mark opportunities. We'll parse the INTERACT_OPPORTUNITY and QUEST_OPPORTUNITY tags later to trigger other actions.
We'll also store the environment description to avoid regenerating it unnecessarily.
Handling Player Actions¶
Now let's process the player's actions and generate outcomes. We'll run this one locally with Ollama.
class EnhancedAIDungeonMaster:
...
def handle_player_action(self, action):
prompt = f"""
The player ({self.game_state.character.name}, a {self.game_state.character.race} {self.game_state.character.class_type})
attempts to {action} in {self.game_state.location}.
Describe the immediate result of this action, focusing on the environment and NPCs' reactions.
Do not generate any further actions or dialogue for {self.game_state.character.name}.
If the player is trying to interact with an NPC, end your response with [NPC_INTERACTION: <npc_name>].
"""
self.add_to_history("user", prompt)
return self.ai.chat(self.conversation_history, provider='ollama')
Here we pass what the player wants to do to the AI and generate the outcome of the player's action. We also use a tag here for NPC interactions, so we can process those in a different way.
Generating NPC Dialogue¶
Next, let's create a function to generate dialogue with an NPC. We'll use Replicate with Llama 3 8B for this.
class EnhancedAIDungeonMaster:
...
def generate_npc_dialogue(self, npc_name: str, player_input: str):
prompt = f"""
The player ({self.game_state.character.name}) said to {npc_name}: "{player_input}"
Generate a single, natural response from {npc_name}, addressing the player's input directly.
If the player is asking about items for sale, list 2-3 specific items with brief descriptions and prices.
Do not include any actions or responses from the player character.
Keep the response concise and relevant to the player's input.
Do not include any formatting tags, headers, or quotation marks in your response.
Respond as if you are {npc_name} speaking directly to the player.
"""
self.add_to_history("user", prompt)
return self.ai.chat(self.conversation_history, provider='replicate')
Note that in the prompt we ensure the AI provides responses that are in character and appropriate, so we can pass this directly to the player.
Handling Conversations¶
We manage conversations with NPCs in a separate method. We start with a conversation loop, to allow the player to have a back-and-forth dialogue with an NPC, and we reset the conversation history to focus the AI on the dialogue.
class EnhancedAIDungeonMaster:
...
def handle_conversation(self, npc_name):
print(f"\nYou are now in conversation with {npc_name}.")
self.conversation_history = [
{"role": "system", "content": f"You are {npc_name}, speaking directly to the player. Respond naturally and in character."}
]
while True:
player_input = input(f"\nWhat do you say to {npc_name}? (or type 'end conversation' to stop): ")
if player_input.lower() == "end conversation":
print(f"\nYou end your conversation with {npc_name}.")
break
print(f"\n{npc_name}:")
self.print_stream(self.generate_npc_dialogue(npc_name, player_input))
We also add the possibility for the player to end the conversation at any time.
Updating the Game State¶
We update the game state based on the outcomes provided by the AI.
class EnhancedAIDungeonMaster:
...
def update_game_state(self, outcome):
if "found" in outcome.lower():
item = outcome.split("found")[1].split(".")[0].strip()
self.game_state.update(item=item)
if "new area" in outcome.lower():
new_location = outcome.split("new area")[1].split(".")[0].strip()
self.game_state.update(location=new_location)
if "damage" in outcome.lower():
self.game_state.update(health_change=-10)
if "healed" in outcome.lower():
self.game_state.update(health_change=10)
if "quest" in outcome.lower():
quest = outcome.split("quest")[1].split(".")[0].strip()
self.game_state.update(quest=quest)
self.game_state.update(exp_gain=5)
This is a simple approach: we just look for keywords in the AI's response to determine what changes to make. It isn't the most robust method, but it's easy to implement and lets the game respond to the player's actions, making the experience feel more dynamic.
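To see what this keyword scan produces, here's a hypothetical helper mirroring update_game_state's logic, returning the changes as a dict instead of applying them to a GameState:

```python
def scan_outcome(outcome: str) -> dict:
    # Mirrors the keyword checks in update_game_state above.
    changes = {"health_change": 0, "exp_gain": 5}  # every action grants 5 XP
    lower = outcome.lower()
    if "found" in lower:
        # Text between "found" and the next period becomes the item name
        changes["item"] = outcome.split("found")[1].split(".")[0].strip()
    if "damage" in lower:
        changes["health_change"] -= 10
    if "healed" in lower:
        changes["health_change"] += 10
    return changes

changes = scan_outcome("You found a rusty key. A trap springs, and you take damage.")
print(changes)  # → {'health_change': -10, 'exp_gain': 5, 'item': 'a rusty key'}
```

This also illustrates the approach's fragility: the extracted item is "a rusty key" with its article attached, and a sentence like "no treasure was found here" would trigger a spurious item.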
Processing Story Elements¶
Let's process the AI-generated story to extract content and any special flags.
class EnhancedAIDungeonMaster:
...
def process_story(self, story_generator) -> Tuple[str, List[str]]:
story = self.print_stream(story_generator, print_output=True)
story_lines = story.split('\n')
flags = []
for line in reversed(story_lines):
if line.strip().startswith('[') and line.strip().endswith(']'):
flags.append(line.strip('[').strip(']'))
story_lines.remove(line)
else:
break
story_content = '\n'.join(story_lines).strip()
if any(flag.startswith("NPC_INTERACTION:") for flag in flags):
npc_name = next(flag.split(':')[1].strip() for flag in flags if flag.startswith("NPC_INTERACTION:"))
return story_content, npc_name
else:
return story_content, flags
This is where we separate the special tags we defined earlier from the story content, ensuring the player sees a coherent story without the tags.
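The tag extraction can be sketched on a plain string (iterating over a copy of the line list, to avoid mutating the list being iterated):

```python
from typing import List, Tuple

def split_story_and_flags(story: str) -> Tuple[str, List[str]]:
    # Same trailing-tag extraction as process_story, on a plain string.
    story_lines = story.split('\n')
    flags = []
    for line in reversed(story_lines[:]):
        if line.strip().startswith('[') and line.strip().endswith(']'):
            flags.append(line.strip('[').strip(']'))
            story_lines.remove(line)
        else:
            break  # stop at the first non-tag line from the bottom
    return '\n'.join(story_lines).strip(), flags

story = ("The blacksmith eyes you warily.\n"
         "His forge glows behind him.\n"
         "[NPC_INTERACTION: Blacksmith]")
content, flags = split_story_and_flags(story)
print(flags)  # → ['NPC_INTERACTION: Blacksmith']
```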
Printing Streamed Content¶
We also don't want to wait until the whole content is generated to print, so let's define a function to display the AI's response in real-time, simulating typing.
class EnhancedAIDungeonMaster:
...
def print_stream(self, stream, print_output=True) -> str:
full_text = ""
for chunk in stream:
if print_output:
print(chunk, end='', flush=True)
full_text += chunk
time.sleep(0.03)
if print_output:
print()
return full_text
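Since the stream is just an iterable of text chunks, the method is easy to try standalone; any list of strings works as a stand-in for an AI response stream (here the typing delay is skipped when output is suppressed):

```python
import time

def print_stream(stream, print_output: bool = True) -> str:
    # Same accumulation logic as the method above, as a standalone function.
    full_text = ""
    for chunk in stream:
        if print_output:
            print(chunk, end='', flush=True)
            time.sleep(0.03)  # simulate typing
        full_text += chunk
    if print_output:
        print()
    return full_text

result = print_stream(["You enter ", "the tavern."], print_output=False)
print(result)  # → You enter the tavern.
```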
Main Game Loop¶
Finally, we bring everything together in the play_game method.
class EnhancedAIDungeonMaster:
...
def play_game(self):
print("Welcome to the Dungeon!")
character = self.create_character()
self.game_state = GameState(character)
print("\nYour adventure begins...")
while True:
print_separator()
environment_description, env_flags = self.process_story(self.generate_environment())
if "INTERACT_OPPORTUNITY" in env_flags:
print("\nThere seems to be an opportunity to interact.")
if "QUEST_OPPORTUNITY" in env_flags:
print("\nThere might be a quest available.")
action = input("\nWhat do you do? ")
if action.lower() == "quit":
break
print("\nOutcome:")
outcome, npc_interaction = self.process_story(self.handle_player_action(action))
self.update_game_state(outcome)
if npc_interaction:
self.handle_conversation(npc_interaction)
print_separator()
print(f"Current state: {str(self.game_state)}")
if self.game_state.health <= 0:
print("Game Over! Your health reached 0.")
break
if hasattr(self, 'current_environment'):
del self.current_environment
The game loop continuously processes player actions and updates the game state. New environments are generated to keep the game dynamic, and the player can quit whenever they want.
Plus, the game is over if health reaches zero.
Helper Methods¶
Let's also create some helpers for an improved user experience: a separator to make content easier to read, and a print_slowly function to simulate streamed output for important messages.
import time
def print_separator():
    print("\n" + "=" * 50 + "\n")
def print_slowly(text, delay=0.03):
for char in text:
print(char, end='', flush=True)
time.sleep(delay)
print()
6. Main Script that Runs the Game¶
In our main script, we initialize and start the game.
from game.dungeon_master import EnhancedAIDungeonMaster
from utils.text_utils import print_slowly
from ai.ollama_server import start_ollama_server
def main():
print_slowly("Welcome to the AI Dungeon Master!")
print_slowly("Prepare for an adventure guided by multiple AI models.")
print_slowly("Type 'quit' at any time to exit the game.")
print()
# Start the Ollama server before the game begins
ollama_process = start_ollama_server()
game = EnhancedAIDungeonMaster()
game.play_game()
print_slowly("Thank you for playing AI Dungeon Master!")
# Terminate the Ollama server when the game ends
if ollama_process:
ollama_process.terminate()
if __name__ == "__main__":
main()
7. Running the Game¶
- Ensure you're in the root directory of the project.
- Run the game using Poetry, or directly if you used pip:
This will execute the main.py file, which contains the game initialization and main loop.
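Given the project structure from section 2.5, the commands would look something like this (the path is an assumption based on that layout):

```shell
# With Poetry:
poetry run python ai_dungeon_master/main.py

# Or directly, if you installed the dependencies with pip:
python ai_dungeon_master/main.py
```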
8. Conclusion and Further Improvements¶
Congratulations! You've now created an AI Dungeon Master using the ClientAI package. This project demonstrates how to integrate multiple AI providers and manage game logic to create a dynamic and engaging text-based RPG.
Potential Improvements:¶
- Error Handling: Implement try-except blocks to handle exceptions and improve robustness.
- Saving and Loading: Add functionality to save and load game states.
- Combat System: Develop a combat system that uses character stats and AI to determine outcomes.
- Quest Management: Create a more complex quest system with objectives and rewards.
- Multiplayer: Explore options for multiplayer interactions.
- User Interface: Develop a GUI for a more user-friendly experience.
- AI Fine-Tuning: Customize AI models for more consistent and relevant responses.
By implementing these improvements, you can further enhance the gameplay experience and create an even more immersive and engaging AI-driven RPG.