Roman Rizzi 6059b6e111
FIX: Customizable max_output_tokens for AI triage. (#1510)
This script enforced a hard limit of 700 output tokens, which is not enough for thinking models, which can exhaust that budget quickly.

A temporary fix could be bumping the limit, but there is no guarantee we won't hit it again, and it's hard to find a single value that fits every scenario. Another option could be removing it and relying on the LLM config's `max_output_token`, but then, if you want to assign different limits to different rules, you are forced to duplicate the config each time.

Considering all this, we are adding a dedicated field for this in the triage script, giving you an easy way to tweak the limit to your needs. If left empty, no limit is applied.
2025-07-21 15:36:39 -03:00
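The behavior the commit describes can be sketched as follows; the method name and its argument are illustrative, not the plugin's actual API:

```ruby
# Resolve the per-rule limit from the triage script's field.
# A blank field means "no limit applied" (return nil), per the
# commit message; otherwise the field's numeric value wins.
def resolve_max_output_tokens(field_value)
  value = field_value.to_s.strip
  return nil if value.empty? # empty field => no cap from the rule
  value.to_i
end
```

With a sketch like this, a rule whose field is blank falls through to whatever limit (if any) the underlying LLM call would otherwise use, while each rule can still set its own cap without duplicating the LLM config.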


# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.

To run them use:

```shell
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```

To run evals you will need to configure API keys in your environment:

```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
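Note that a plain `VAR=value` assignment only applies to a single command it prefixes; to make the keys visible to every command in your session (including the evals runner), export them. The values below are placeholders:

```shell
# Replace the placeholder values with your real API keys before running.
export OPENAI_API_KEY=your_openai_api_key
export ANTHROPIC_API_KEY=your_anthropic_api_key
export GEMINI_API_KEY=your_gemini_api_key
```

You can also put these `export` lines in a file you `source` before running, to avoid retyping them each session.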