# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The directory `evals` contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
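A local config might look like the sketch below. The field names are assumptions based on typical entries in this kind of file; treat the shipped `config/eval-llms.yml` as the authoritative schema, and the model entry here as a placeholder:

```yaml
# eval-llms.local.yml -- illustrative only; copy real keys from config/eval-llms.yml
llms:
  gpt-4o:
    display_name: GPT-4o
    name: gpt-4o
    provider: open_ai
    url: https://api.openai.com/v1/chat/completions
    api_key_env: OPENAI_API_KEY
    tokenizer: DiscourseAi::Tokenizer::OpenAiTokenizer
    max_prompt_tokens: 131072
```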

To run them use:

```bash
cd evals
./run --help
```

```text
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```
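For example, to see what is available and then run a single eval against a single model (the eval and model names below are placeholders; use `--list` and `--list-models` to find the real ones in your checkout):

```bash
cd evals
./run --list          # list the available evals
./run --list-models   # list the configured models
./run -e my_eval -m gpt-4o
```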

To run evals you will need to configure API keys in your environment:

```bash
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
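You can export these in your shell profile or set them inline for a single invocation; presumably only the key for the provider of the model under evaluation needs to be set (hypothetical eval name again):

```bash
OPENAI_API_KEY=sk-... ./run -e my_eval -m gpt-4o
```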