# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

### Evals

The directory `evals` contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
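For example, assuming you are running from the plugin root:

```bash
cp config/eval-llms.yml config/eval-llms.local.yml
```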

To run them use:

```bash
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```

To run evals you will need to configure API keys in your environment:

```bash
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
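As a rough end-to-end sketch (the eval and model names below are placeholders; use `-l` and `--list-models` to see what is actually available in your config):

```bash
cd evals
export OPENAI_API_KEY=your_openai_api_key

# List the available evals and models, then run one eval against one model.
./run -l
./run --list-models
./run -e example_eval -m gpt-4o   # placeholder eval and model names
```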