
RPGX AI Assistant

An Add-on Module for Foundry Virtual Tabletop

Author: x8xid82
Foundry Versions: 11.0+ (Verified 13.350)
Last Updated: 2 weeks, 4 days ago

This package contains art, text, or software code produced using generative AI.

RPGX AI Assistant for Foundry VTT


Author: RPGX Studios
Foundry Compatibility: Version 13+
Last Updated: 11/3/2025 - v1.0.0 Core Release

The RPGX AI Assistant brings a fully local, customizable AI into Foundry Virtual Tabletop. It connects directly to your locally hosted Ollama instance and any compatible large language model (LLM) — allowing Game Masters and players to integrate intelligent conversation, lore recall, or creative narration directly into their tabletop sessions.

This module is designed to work independently, requiring only Foundry VTT and Ollama.


For users who want to extend the Assistant with a world-specific knowledge base, the optional RPGX AI Librarian premium module adds RAG (Retrieval-Augmented Generation) support for deeper context and memory.

Overview

The RPGX AI Assistant provides seamless AI integration for Foundry, offering a responsive in-game assistant that can help with story generation, NPC dialogue, rules clarifications, and scene narration, all through a private, locally hosted model.

With your preferred LLM running via Ollama (such as Qwen 2.5, Llama 3, Mistral, or others), this module brings those capabilities directly into your game sessions.

Core Features

  • Local AI Integration via Ollama

  • Foundry Chat Integration

  • Customizable Model Settings

  • Optional World Knowledge Support

Setup & Usage

  1. Install the module via Foundry’s Module Management.

    1. Extract the .zip to C:\Users\<User>\AppData\Local\FoundryVTT\Data\modules

    2. Or use Foundry's built-in installer from the Add-on Modules section of the setup screen.

  2. Ensure Ollama is running locally — for example:

    PowerShell:
      ollama serve
      ollama pull qwen2.5:14b

  3. Enable CORS (Cross-Origin Resource Sharing) if Ollama and Foundry run on different hosts or ports. The Ollama server and Foundry VTT must be allowed to communicate across those origins; Ollama reads its allowed origins from the OLLAMA_ORIGINS environment variable, so set it (for example, to your Foundry host's origin) before starting ollama serve.

  4. Enable RPGX AI Assistant in your Foundry world.

  5. In Game Settings → RPGX AI Assistant, configure:

    • Ollama Base URL (default: http://127.0.0.1:11434)

    • Default Model Name (e.g., qwen2.5:14b)

    • Temperature, Max Tokens, and Timeout

  6. Open chat and type commands like:

    • /rpgx Generate a random tavern name and description.

    • /rpgx Summarize last session’s key moments.

  7. (Optional) Enable RPGX AI Librarian for connected RAG knowledge queries.
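For reference, the settings in step 5 correspond to fields in Ollama's documented /api/generate request body (Ollama's name for the max-token option is num_predict). The sketch below, using only the Python standard library, shows that mapping; it is illustrative and not the module's actual code:

```python
import json

# Illustrative mapping of the module settings onto Ollama's /api/generate
# request body. The exact payload RPGX AI Assistant builds internally may
# differ; the field names here come from Ollama's own API documentation.
OLLAMA_BASE_URL = "http://127.0.0.1:11434"   # Ollama Base URL setting
DEFAULT_MODEL = "qwen2.5:14b"                # Default Model Name setting

def build_generate_payload(prompt, model=DEFAULT_MODEL,
                           temperature=0.7, max_tokens=512):
    """Assemble a JSON body for POST {OLLAMA_BASE_URL}/api/generate."""
    return {
        "model": model,
        "prompt": prompt,
        "stream": False,  # request one complete response, not a stream
        "options": {
            "temperature": temperature,   # Temperature setting
            "num_predict": max_tokens,    # Max Tokens setting, in Ollama terms
        },
    }

body = json.dumps(build_generate_payload(
    "Generate a random tavern name and description."))
```

The Timeout setting applies to the HTTP request itself rather than to the payload, which is why it has no field here.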

Example Prompts

Troubleshooting & FAQ

Q: The chat command doesn’t respond.
A: Make sure Ollama is running and reachable at the configured address.
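One way to check reachability outside Foundry is to probe the configured base URL directly; a running Ollama server answers a plain GET on it. This standard-library sketch is illustrative and not part of the module:

```python
import urllib.request
import urllib.error

# Probe the configured Ollama address. Any HTTP response (even an error
# status) means a server is listening; a connection failure means Ollama
# is not running or not reachable from this host.
def ollama_reachable(base_url="http://127.0.0.1:11434", timeout=2.0):
    try:
        with urllib.request.urlopen(base_url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # the server responded, just with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused, timeout, unknown host, etc.
```

If this returns False for your configured URL, fix the Ollama side before troubleshooting the module.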

Q: The model runs too slow or times out.
A: Reduce max tokens or use a smaller LLM (e.g., Qwen 7B instead of 14B).

Q: Do I need the Librarian module?
A: No — it’s completely optional. The Assistant runs standalone using your chosen model.

Q: Can I connect to an online API model?
A: The Assistant is optimized for Ollama, but technically compatible with any local REST-based LLM endpoint.
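To illustrate that answer: such an endpoint is simply a JSON POST target. The sketch below builds a request with Python's standard library; the /api/generate path and body fields assume an Ollama-style API, and another REST server would use its own path and schema:

```python
import json
import urllib.request

# Build (but do not send) a request to a local Ollama-style LLM endpoint.
# Swap the path and body schema to match whatever server you point at.
def make_llm_request(base_url, model, prompt):
    body = json.dumps({"model": model, "prompt": prompt,
                       "stream": False}).encode("utf-8")
    return urllib.request.Request(
        base_url.rstrip("/") + "/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = make_llm_request("http://127.0.0.1:11434", "qwen2.5:14b",
                       "Summarize last session's key moments.")
# Sending it is a single call: urllib.request.urlopen(req, timeout=60)
```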

Future Roadmap

Supported Game Systems

License

Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License

Changelog (v1.0.0 Core Release)

Categories

Available Versions

  1. Version v1.50
     Released: 2 weeks, 4 days ago
     Foundry Version: 11.0+ (Verified 13.350)