# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
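A minimal sketch of creating the local override described above (which keys you then edit inside the file depends on the providers you use):

```shell
# Copy the checked-in defaults to a local config, then edit its values
cp config/eval-llms.yml config/eval-llms.local.yml
```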

To run them use:

```
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```

To run evals you will need to configure API keys in your environment:

```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
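Putting the pieces together, a hypothetical session might look like the sketch below. The flags come from the usage text above; the eval and model names are placeholders, not real identifiers from the plugin:

```shell
# Placeholder keys -- substitute your own
export OPENAI_API_KEY=your_openai_api_key
export ANTHROPIC_API_KEY=your_anthropic_api_key
export GEMINI_API_KEY=your_gemini_api_key

cd evals
./run --list                    # list the available evals
./run --list-models             # list the configured models
./run -e my_eval -m my_model    # run one eval against one model (placeholder names)
./run -e my_eval                # omit -m to eval against all configured models
```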