# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.

To run them:

```
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```
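For example, to evaluate a single model against a single eval, first list what is available, then pass the names to `--eval` and `--model`. The eval and model names below are hypothetical; use the actual names reported by `--list` and `--list-models` from your config:

```shell
# From the evals directory: list the evals and models defined in your config
./run --list
./run --list-models

# Run one eval against one model (names here are illustrative, not real entries)
./run --eval simple_summarization --model gpt-4o
```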

To run evals you will need to configure API keys in your environment:

```
export OPENAI_API_KEY=your_openai_api_key
export ANTHROPIC_API_KEY=your_anthropic_api_key
export GEMINI_API_KEY=your_gemini_api_key
```