# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You can create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.

To run them, use:

```shell
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```

To run evals you will need to configure API keys in your environment:

```shell
export OPENAI_API_KEY=your_openai_api_key
export ANTHROPIC_API_KEY=your_anthropic_api_key
export GEMINI_API_KEY=your_gemini_api_key
```
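With keys in place, a typical session first lists what is available and then runs a single eval against a single model. The eval and model names below are illustrative only; use `./run -l` and `./run --list-models` to see what your config actually provides:

```shell
cd evals
./run -l                            # list available evals
./run --list-models                 # list models from your eval-llms config
./run -e summarization -m gpt-4o    # hypothetical names: run one eval against one model
```

Omitting `-m` runs the chosen eval against every configured model.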