This package contains art, text, or software code produced using generative AI.
RPGX AI Assistant for Foundry VTT
An Add-on Module for Foundry Virtual Tabletop
Author: RPGX Studios
Foundry Compatibility: Version 13+
Last Updated: 11/3/2025 - v1.0.0 Core Release
The RPGX AI Assistant brings a fully local, customizable AI into Foundry Virtual Tabletop. It connects directly to your locally hosted Ollama instance and any compatible large language model (LLM) — allowing Game Masters and players to integrate intelligent conversation, lore recall, or creative narration directly into their tabletop sessions.
This module is designed to work independently, requiring only Foundry VTT and Ollama.
For users who want to extend the Assistant with a world-specific knowledge base, the optional RPGX AI Librarian premium module adds RAG (Retrieval-Augmented Generation) support for deeper context and memory.

Overview
The RPGX AI Assistant provides seamless AI integration for Foundry, offering a responsive in-game assistant that can help with story generation, NPC dialogue, rules clarifications, and scene narration — all through a private, locally hosted model.
With your preferred LLM running via Ollama (such as Qwen 2.5, Llama 3, Mistral, or others), this module lets you:
- 💬 Chat directly with AI inside Foundry’s chat window
- 🧠 Generate story ideas, lore, NPC dialogue, and rules clarifications
- ⚙️ Customize your AI’s tone, temperature, and token limits
- 🔒 Run 100% locally — no API keys, no cloud services, no data leaks
- 🪶 Optionally connect to RPGX AI Librarian for persistent, world-specific memory
Core Features
Local AI Integration via Ollama
- Connects directly to your local Ollama instance (http://127.0.0.1:11434).
- Supports any installed model (Qwen, Llama, Mistral, Phi, etc.).
- No API keys or external servers required.
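As a rough illustration of what "connects directly to Ollama" means under the hood, the sketch below builds a request against Ollama's documented `POST /api/generate` endpoint. The function names (`buildGeneratePayload`, `askOllama`) are illustrative, not the module's actual internals.

```javascript
// Sketch of the kind of request a Foundry module could send to a local
// Ollama server. Requires Node 18+ (or a browser) for the global fetch.

function buildGeneratePayload(model, prompt, { temperature = 0.7, maxTokens = 512 } = {}) {
  return {
    model,
    prompt,
    stream: false, // ask for one complete JSON reply instead of a token stream
    options: { temperature, num_predict: maxTokens },
  };
}

async function askOllama(baseUrl, model, prompt, opts) {
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGeneratePayload(model, prompt, opts)),
  });
  if (!res.ok) throw new Error(`Ollama returned HTTP ${res.status}`);
  return (await res.json()).response; // the generated text
}
```

With `stream: false`, Ollama returns a single JSON object whose `response` field holds the generated text, which keeps the example simple; a production module would more likely consume the default streaming output token by token.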
Foundry Chat Integration
- Use simple chat commands (/rpgx, /w rpgx) to communicate with your chosen model.
- Responses appear natively in the Foundry chat log with Markdown formatting.
- GM-only mode available for private AI queries and notes.
Customizable Model Settings
- Set temperature, max tokens, timeout duration, and system prompt.
- Tune your AI for creativity, precision, or balance based on your game style.
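Settings like these are typically exposed through Foundry's `game.settings.register` API. The sketch below shows the general shape; the module id ("rpgx-ai-assistant"), setting keys, and defaults are assumptions for illustration, not the module's real identifiers.

```javascript
// Hypothetical module id for this example only.
const MODULE_ID = "rpgx-ai-assistant";

// Register the Assistant's configurable options. `settings` is Foundry's
// game.settings object (passed in here so the sketch is testable).
function registerAssistantSettings(settings) {
  settings.register(MODULE_ID, "baseUrl", {
    name: "Ollama Base URL", scope: "world", config: true,
    type: String, default: "http://127.0.0.1:11434",
  });
  settings.register(MODULE_ID, "model", {
    name: "Default Model Name", scope: "world", config: true,
    type: String, default: "qwen2.5:14b",
  });
  settings.register(MODULE_ID, "temperature", {
    name: "Temperature", scope: "world", config: true,
    type: Number, default: 0.7,
  });
  settings.register(MODULE_ID, "maxTokens", {
    name: "Max Tokens", scope: "world", config: true,
    type: Number, default: 512,
  });
}
```

Using `scope: "world"` makes the values GM-controlled and shared by everyone in the world, which fits settings like the model name; per-user preferences would use `scope: "client"` instead.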

Optional World Knowledge Support
- Compatible with RPGX AI Librarian for enhanced RAG functionality.
- When paired, the Assistant can access your ingested journals and notes for lore-aware responses.
- Runs perfectly fine without the Librarian — this is an optional premium add-on.
Setup & Usage
- Install the module via Foundry’s Module Management:
  - Extract the .zip to C:\Users\<User>\AppData\Local\FoundryVTT\Data\modules, or
  - Use the Foundry installer from the Modules section in the app.
- Ensure Ollama is running locally — for example (PowerShell):
  ollama serve
  ollama pull qwen2.5:14b
- Enable CORS (Cross-Origin Resource Sharing) if running Ollama and Foundry from different servers. The Ollama server and Foundry VTT must be allowed to communicate across different local ports or hosts.
- Enable RPGX AI Assistant in your Foundry world.
- In Game Settings → RPGX AI Assistant, configure:
  - Ollama Base URL (default: http://127.0.0.1:11434)
  - Default Model Name (e.g., qwen2.5:14b)
  - Temperature, Max Tokens, and Timeout
- Open chat and type commands like:
  - /rpgx Generate a random tavern name and description.
  - /rpgx Summarize last session’s key moments.
- (Optional) Enable RPGX AI Librarian for connected RAG knowledge queries.
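For the CORS step above: Ollama reads its allowed origins from the OLLAMA_ORIGINS environment variable, which must be set in the same shell before `ollama serve` is started. The origin below is a placeholder for wherever your Foundry server is actually reachable.

```shell
# Allow the Foundry origin to call the Ollama API across ports/hosts.
# "http://your-foundry-host:30000" is a placeholder, not a real address.
export OLLAMA_ORIGINS="http://your-foundry-host:30000"
# If Ollama and Foundry run on different machines, Ollama also needs to
# listen on all interfaces, then be started in this same shell:
# export OLLAMA_HOST=0.0.0.0
# ollama serve
echo "$OLLAMA_ORIGINS"
```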
Example Prompts
- “Describe a mysterious artifact discovered in the ruins beneath Elturel.”
- “Write NPC dialogue for a suspicious merchant with a hidden agenda.”
- “Summarize what happened in the last three sessions.”
- “Create a D&D-style encounter in a fog-covered valley.”
Troubleshooting & FAQ
Q: The chat command doesn’t respond.
A: Make sure Ollama is running and reachable at the configured address.
Q: The model runs too slow or times out.
A: Reduce max tokens or use a smaller LLM (e.g., Qwen 7B instead of 14B).
Q: Do I need the Librarian module?
A: No — it’s completely optional. The Assistant runs standalone using your chosen model.
Q: Can I connect to an online API model?
A: The Assistant is optimized for Ollama, but technically compatible with any local REST-based LLM endpoint.
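On the timeout question above: one common way a module enforces a configurable timeout on a slow model is to race the request against a timer. `withTimeout` is a generic illustrative helper, not a documented function of this module.

```javascript
// Reject a pending promise (e.g. a model request) if it takes longer
// than `ms` milliseconds; otherwise pass its result through unchanged.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms} ms`)), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```

A module would wrap its model call like `withTimeout(askModel(prompt), timeoutMs)` so a stalled request surfaces as an error in chat instead of hanging silently.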
Future Roadmap
- 🧠 Editable AI “personality” prompt fields in Foundry settings.
- 💾 Chat history memory (session-based contextual recall).
- ⚙️ Model profile saving and quick switching.
- 🔄 Improved RAG query handling when paired with the Librarian module.
- 🧩 Optional GPT/Claude connector (for users with cloud keys).
Supported Game Systems
- Works with any Foundry system (D&D 5e, Pathfinder, Starfinder, Cyberpunk RED, etc.).
- System-agnostic — uses Foundry’s core chat API for communication.
License
Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License
Changelog (v1.0.0 Core Release)
- ✅ Standalone Ollama integration
- ✅ Full Foundry chat command support
- ✅ Custom model & prompt configuration
- ✅ Optional RAG integration via RPGX AI Librarian
- 🚧 Personality profiles and session memory in progress