## LLM Persona Triage

- Allows automated responses to posts using AI personas
- Configurable to respond as regular posts or whispers
- Adds context-aware formatting for topics and private messages
- Provides special handling for topic metadata (title, category, tags)

## LLM Tool Triage

- Enables custom AI tools to process and respond to posts
- Tools can analyze post content and invoke personas when needed
- Zero-parameter tools can be used for automated workflows
- Not enabled in production yet

## Implementation Details

- Added new scriptable registration in the discourse_automation/ directory
- Created core implementation in lib/automation/ modules
- Enhanced PromptMessagesBuilder with topic-style formatting
- Added helper methods for persona and tool selection in the UI
- Extended AI Bot functionality to support whisper responses
- Added rate limiting to prevent abuse

## Other Changes

- Added comprehensive test coverage for both automation types
- Enhanced tool runner with LLM integration capabilities
- Improved error handling and logging

This feature allows forum admins to configure AI personas to automatically respond to posts based on custom criteria, and to leverage AI tools for more complex triage workflows. Tool Triage is disabled in production while we finalize the details of the new scripting capabilities.
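For a rough sense of how such an automation is wired up, here is an illustrative Ruby sketch of a persona-triage style registration. The general DSL shape (field/triggerables/script) follows the Discourse Automation plugin, but the specific field names and the `AiPersonaResponder` helper are hypothetical stand-ins, not this plugin's actual code.

```ruby
# Illustrative sketch only: the real registration lives under discourse_automation/
# in this plugin, and the exact field and handler names may differ.
DiscourseAutomation::Scriptable.add("llm_persona_triage_example") do
  # Admin-configurable inputs for the automation
  field :persona, component: :choices, required: true  # which AI persona should reply
  field :whisper, component: :boolean                   # reply as a whisper instead of a regular post

  # Run whenever a post is created or edited
  triggerables %i[post_created_edited]

  script do |context, fields|
    post = context["post"]
    next if post.blank?

    # Hypothetical helper: hand the post to the selected persona and let it reply,
    # optionally as a whisper visible only to staff.
    AiPersonaResponder.respond(
      post: post,
      persona_id: fields.dig("persona", "value"),
      whisper: fields.dig("whisper", "value"),
    )
  end
end
```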
Repository contents:

- .github/workflows
- admin/assets/javascripts/discourse
- app
- assets
- config
- db
- discourse_automation
- evals
- lib
- public/ai-share
- spec
- svg-icons
- test/javascripts
- tokenizers
- .discourse-compatibility
- .gitignore
- .npmrc
- .prettierignore
- .prettierrc.cjs
- .rubocop.yml
- .streerc
- .template-lintrc.cjs
- Gemfile
- Gemfile.lock
- LICENSE
- README.md
- about.json
- eslint.config.mjs
- package.json
- plugin.rb
- pnpm-lock.yaml
- translator.yml
# Discourse AI Plugin

## Plugin Summary
For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco
## Evals

The `evals` directory contains AI evals for the Discourse AI plugin.
You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
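As a sketch of what a local override might contain (the keys below are assumptions for illustration; mirror the structure of the shipped `config/eval-llms.yml` rather than copying these names):

```yaml
# config/eval-llms.local.yml -- illustrative only; follow the structure of
# config/eval-llms.yml rather than these assumed keys.
my-local-llm:
  display_name: "My local LLM"
  provider: open_ai                               # assumed provider identifier
  url: "http://localhost:11434/v1/chat/completions"
  api_key_env: LOCAL_LLM_API_KEY                  # assumed: env var holding the key
  max_prompt_tokens: 32000
```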
To run them use:

```bash
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```
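For example, assuming an eval named `example_eval` exists (use the list flags to see the real names), a typical invocation might look like:

```bash
# List the available evals and models (flags taken from the help output above)
./run -l
./run --list-models

# Run one eval against one model; "example_eval" and "gpt-4o" are placeholders
./run -e example_eval -m gpt-4o
```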
To run evals you will need to configure API keys in your environment:
```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```