This commit introduces a new Forum Researcher persona specialized in deep forum content analysis, along with comprehensive improvements to our AI infrastructure.

Key additions:
- New Forum Researcher persona with advanced filtering and analysis capabilities
- Robust filtering system supporting tags, categories, dates, users, and keywords
- LLM formatter to efficiently process and chunk research results

Infrastructure improvements:
- Implemented CancelManager class to centrally manage AI completion cancellations (see the sketch below)
- Replaced callback-based cancellation with a more robust pattern
- Added systematic cancellation monitoring with callbacks

Other improvements:
- Added configurable default_enabled flag to control which personas are enabled by default
- Updated translation strings for the new researcher functionality
- Added comprehensive specs for the new components
- Renames Researcher -> Web Researcher

This change makes our AI platform more stable while adding powerful research capabilities that can analyze forum trends and surface relevant content.
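To make the cancellation change concrete, here is a minimal Ruby sketch of a centralized cancellation manager with monitoring callbacks. The class name comes from the commit message, but every method shown is an illustrative assumption, not the plugin's actual API.

```ruby
# Illustrative sketch only: CancelManager is named in the commit message,
# but these methods are assumptions, not the plugin's real interface.
class CancelManager
  def initialize
    @cancelled = false
    @callbacks = []   # cancellation monitors
    @mutex = Mutex.new
  end

  # Register a monitor that is notified when cancellation happens.
  def add_callback(&block)
    @mutex.synchronize { @callbacks << block }
  end

  # Long-running completions poll this instead of holding their own
  # cancellation callback.
  def cancelled?
    @mutex.synchronize { @cancelled }
  end

  # Cancel all in-flight work tied to this manager, notifying each
  # registered monitor exactly once.
  def cancel!
    callbacks = @mutex.synchronize do
      return if @cancelled
      @cancelled = true
      @callbacks.dup
    end
    callbacks.each(&:call)
  end
end

# Hypothetical usage
manager = CancelManager.new
manager.add_callback { puts "completion cancelled" }
manager.cancel!
puts manager.cancelled?   # => true
```

The point of the pattern is that many completions can share one manager, so cancelling a conversation tears everything down in one place instead of threading callbacks through each call site.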
README.md
Discourse AI Plugin
Plugin Summary
For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco
Evals
The evals directory contains AI evals for the Discourse AI plugin.
You may create a local config by copying config/eval-llms.yml to config/eval-llms.local.yml and modifying the values.
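For example, from the plugin root:

cp config/eval-llms.yml config/eval-llms.local.yml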
To run them use:

cd evals
./run --help
Usage: evals/run [options]
-e, --eval NAME Name of the evaluation to run
--list-models List models
-m, --model NAME Model to evaluate (will eval all models if not specified)
-l, --list List evals
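For example, a typical session might list what is available and then run a single eval against one model. The eval and model names below are placeholders, not guaranteed entries in your config:

./run -l
./run --list-models
./run -e example_eval -m gpt-4o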
To run evals you will need to configure API keys in your environment:
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key