# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The directory `evals` contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.
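
As a rough sketch of what a local override might contain — the keys below are assumptions modeled on a typical LLM config entry, not a guaranteed schema; follow the structure of the shipped `config/eval-llms.yml`:

```yaml
# config/eval-llms.local.yml -- hypothetical example; copy the structure
# of config/eval-llms.yml and change only the values you need.
llms:
  my-model:                       # placeholder model key
    display_name: My Model        # label used in listings
    name: my-model                # identifier sent to the provider
    provider: open_ai             # assumed provider value
    api_key_env: OPENAI_API_KEY   # env var that holds the key
```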

To run them use:

```bash
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```
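
For example, to see what is available and then run a single eval against one model — the eval and model names here are placeholders, not names shipped with the plugin:

```bash
./run --list          # list the available evals
./run --list-models   # list the configured models

# "my-eval" and "my-model" are placeholders; substitute real names
# from the listings above.
./run -e my-eval -m my-model
```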

To run evals you will need to configure API keys in your environment:

```bash
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```
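
These can be exported in your shell profile, or supplied inline for a single run using standard shell syntax (the key value and names shown are placeholders):

```bash
# set the key only for this one invocation
OPENAI_API_KEY=your_openai_api_key ./run -e my-eval -m my-model
```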