Previously we attempted to add a diff streaming animation to the AI composer helper (https://github.com/discourse/discourse-ai/pull/1332), but it caused issues that persisted despite attempted fixes (https://github.com/discourse/discourse-ai/pull/1338), so we temporarily suppressed the diff animation (https://github.com/discourse/discourse-ai/pull/1341).

This update makes a second attempt at implementing the diff streaming animation. Instead of writing a custom diff algorithm, we use the third-party library [`jsDiff`](https://github.com/kpdecker/jsdiff), which was added to core in https://github.com/discourse/discourse/pull/32833. While streaming, the diff animation often breaks on partially streamed markdown links and images, so we use the `markdown-it` parser to detect those cases and hold the animation until the markup is complete.

Co-authored-by: Sam Saffron <sam.saffron@gmail.com>
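A minimal sketch of that approach (not the plugin's actual code), assuming the `diff` (jsDiff) and `markdown-it` npm packages; the helper names and the end-of-stream heuristic are illustrative assumptions:

```javascript
import { diffWords } from "diff";
import MarkdownIt from "markdown-it";

const md = new MarkdownIt();

// True when the streamed text ends inside link/image syntax that markdown-it
// cannot yet parse into a complete token, e.g. "see [docs](https://exa".
// (Hypothetical helper; the plugin's real detection may differ.)
function endsInUnfinishedLink(text) {
  const tail = text.split("\n\n").pop(); // only the last block can be mid-link
  const children = md.parseInline(tail, {})[0]?.children ?? [];
  const last = children[children.length - 1];
  // A finished link/image parses into dedicated tokens; an unfinished one is
  // left behind as a trailing text token still holding the raw "[...](" syntax.
  return last?.type === "text" && /!?\[[^\]]*$|\]\([^)\s]*$/.test(last.content);
}

// Diff two consecutive snapshots of the streamed text, holding the animation
// while the stream is mid-link so partial URLs never reach the rendered diff.
function diffForAnimation(previous, next) {
  if (endsInUnfinishedLink(next)) {
    return null; // wait for more of the stream before animating
  }
  return diffWords(previous, next);
}
```

Holding the diff until the link or image is fully streamed trades a brief pause in the animation for stable output.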
# Discourse AI Plugin

## Plugin Summary
For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco
## Evals
The `evals` directory contains AI evals for the Discourse AI plugin.
You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
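For example:

```
cp config/eval-llms.yml config/eval-llms.local.yml
```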
To run them, use:

```
cd evals
./run --help
```
```
Usage: evals/run [options]
    -e, --eval NAME       Name of the evaluation to run
        --list-models     List models
    -m, --model NAME      Model to evaluate (will eval all models if not specified)
    -l, --list            List evals
```
To run evals, you will need to configure API keys in your environment:

```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
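For example, to list the available evals and then run one against a single model (the eval and model names below are placeholders):

```
./run --list
OPENAI_API_KEY=your_openai_api_key ./run -e <eval-name> -m <model-name>
```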