Roman Rizzi d72ad84f8f
FIX: Retry parsing escaped inner JSON to handle control chars. (#1357)
The structured output JSON comes embedded inside the API response, which is itself JSON. When we parse the response to process it, any escaped control characters inside the structured output string are unescaped into literal characters, leaving the inner JSON invalid and breaking the parser. This change adds a retry mechanism: if parsing the inner JSON fails, we re-escape the string and parse it again, working around the issue instead of failing on the malformed input.

For example:

```
  original = '{ "a": "{\\"key\\":\\"value with \\n newline\\"}" }'
  JSON.parse(original) => { "a" => "{\"key\":\"value with \n newline\"}" }
  # The inner JSON string now contains a literal newline, so parsing it fails.
```
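The retry described above can be sketched roughly as follows. This is a minimal illustration, not the plugin's actual implementation: `parse_with_retry` and the `\uXXXX` re-escaping strategy are assumptions.

```ruby
require "json"

# Hypothetical sketch of the retry: if parsing fails (e.g. because the outer
# parse turned "\n" escapes into literal newlines), re-escape raw control
# characters as JSON \uXXXX sequences and parse once more.
def parse_with_retry(raw)
  JSON.parse(raw)
rescue JSON::ParserError
  escaped = raw.gsub(/[\x00-\x1f]/) { |c| format("\\u%04x", c.ord) }
  JSON.parse(escaped)
end

inner = "{\"key\":\"value with \n newline\"}" # literal newline: invalid JSON
parse_with_retry(inner) # => { "key" => "value with \n newline" }
```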
2025-05-21 11:25:59 -03:00

# Discourse AI Plugin

## Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

## Evals

The `evals` directory contains AI evals for the Discourse AI plugin. You may create a local config by copying `config/eval-llms.yml` to `config/eval-llms.local.yml` and modifying the values.

To run them, use:

```
cd evals
./run --help
```

```
Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals
```

To run evals you will need to configure API keys in your environment:

```
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
```