Discourse AI Plugin

Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

Evals

The evals directory contains AI evals for the Discourse AI plugin. You can create a local config by copying config/eval-llms.yml to config/eval-llms.local.yml and modifying the values.
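For example, from the repository root:

cp config/eval-llms.yml config/eval-llms.local.yml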

To run them, use:

cd evals
./run --help

Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
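A typical session might look like the following; the eval and model names here are placeholders, so use --list and --list-models to discover the real ones:

./run --list                     # list available evals
./run --list-models              # list configured models
./run -e my_eval -m my_model     # run one eval against one model (placeholder names)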

To run evals, you will need to configure API keys in your environment:

OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
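These can be exported once per shell session before running the evals; the key values and eval name below are placeholders:

export OPENAI_API_KEY=your_openai_api_key
export ANTHROPIC_API_KEY=your_anthropic_api_key
export GEMINI_API_KEY=your_gemini_api_key
./run -e my_eval     # placeholder eval name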