### 🔍 Overview
This update makes a couple of enhancements to the LLM configuration screen. It renames the field for the number of prompt tokens to "Context window", since the previous name was confusing to users, and adds a new optional field, "Max output tokens".
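If it helps to visualize the schema side of the second change, a migration could look roughly like the sketch below. This is a guess at the implementation, not the actual change set: the `llm_models` table name, the column name, and the Rails version are all assumptions.

```ruby
# frozen_string_literal: true

# Hypothetical sketch: persist the new optional "Max output tokens"
# setting as a nullable integer column. Table and column names are
# assumptions; the real schema may differ.
class AddMaxOutputTokensToLlmModels < ActiveRecord::Migration[7.1]
  def change
    add_column :llm_models, :max_output_tokens, :integer, null: true
  end
end
```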
# Discourse AI Plugin

## Plugin Summary
For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco
## Evals
The `evals` directory contains AI evals for the Discourse AI plugin.

You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
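For example, from the plugin root:

```sh
cp config/eval-llms.yml config/eval-llms.local.yml
```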
To run them, use:

```sh
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```
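For example, to list the available evals:

```sh
./run -l
```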
To run evals you will need to configure API keys in your environment:

```sh
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
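With a key exported, a single eval can then be run against one model using the `-e` and `-m` flags shown in the usage output above. The eval and model names below are placeholders, not names taken from the repository:

```sh
export OPENAI_API_KEY=your_openai_api_key
./run -e example_eval -m gpt-4o  # placeholder eval and model names
```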