Sam e15984029d
FEATURE: allow tools to amend personas (#1250)
Add API methods to AI tools for reading and updating personas, enabling
more flexible AI workflows. This allows custom tools to:

- Fetch persona information through discourse.getPersona()
- Update personas with modified settings via discourse.updatePersona()
- Update a fetched persona directly via persona.update()

These APIs enable new use cases like "trainable" moderation bots, where
users with appropriate permissions can set and refine moderation rules
through direct chat interactions, without needing admin panel access.
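As a sketch of how a custom tool might use these APIs, here is a hypothetical "trainable" moderation tool. The `discourse` object is normally supplied by the tool runtime; it is stubbed below so the example is self-contained, and the persona name, fields, and `invoke` signature are illustrative assumptions rather than the plugin's exact contract.

```javascript
// Minimal sketch of a "trainable" moderation tool (illustrative only).
// The real tool runtime injects `discourse`; this stub stands in for it
// so the example runs on its own.
const store = { moderation_bot: { system_prompt: "Flag spam and abuse." } };

const discourse = {
  // Fetch a persona by name (stubbed: reads from the in-memory store).
  getPersona(name) {
    return { ...store[name] };
  },
  // Persist updated persona fields (stubbed: writes back to the store).
  updatePersona(name, fields) {
    Object.assign(store[name], fields);
  },
};

// Hypothetical tool entry point: appends a user-supplied rule to the
// persona's instructions, so moderators can refine rules from chat.
function invoke(params) {
  const persona = discourse.getPersona("moderation_bot");
  const updated = persona.system_prompt + "\n- " + params.new_rule;
  discourse.updatePersona("moderation_bot", { system_prompt: updated });
  return "Rule added.";
}
```

In the actual plugin, permission checks would gate who may call such a tool, as the commit message notes.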

Also adds a special API scope that allows people to lean on the API
for similar actions.

Additionally adds a rather powerful hidden feature that allows custom tools
to inject content into the context unconditionally; it can be used for memory and similar features.
2025-04-09 15:48:25 +10:00
.github/workflows Initial commit 2023-02-17 11:33:47 -03:00
admin/assets/javascripts/discourse Revert "DEV: Convert tool editor to form kit (#1135)" (#1201) 2025-03-18 18:07:04 +11:00
app FEATURE: Personas powered summaries. (#1232) 2025-04-02 12:54:47 -03:00
assets FIX: search discovery quirks (#1249) 2025-04-07 12:52:23 -07:00
config FEATURE: allow tools to amend personas (#1250) 2025-04-09 15:48:25 +10:00
db FIX: Restore gists previous group access behavior. (#1247) 2025-04-07 12:04:30 -03:00
discourse_automation FEATURE: allow to send LLM reports to groups (#1246) 2025-04-07 15:31:30 +10:00
evals DEV: Support multiple tests per eval and followups per test (#1199) 2025-03-18 11:42:05 +08:00
lib FEATURE: allow tools to amend personas (#1250) 2025-04-09 15:48:25 +10:00
public/ai-share UX: improve artifact styling add direct share link (#930) 2024-11-20 13:13:03 +11:00
spec FEATURE: allow tools to amend personas (#1250) 2025-04-09 15:48:25 +10:00
svg-icons REFACTOR: update embeddings to formkit (#1188) 2025-03-13 11:27:38 -04:00
test/javascripts DEV: Streaming animation API for components (#1224) 2025-03-27 08:06:33 -07:00
tokenizers FEATURE: Gemini Tokenizer (#1088) 2025-01-23 18:20:35 -03:00
.discourse-compatibility DEV: supports for form-kit changes (#1203) 2025-03-19 15:01:14 +01:00
.gitignore FEATURE: allow tools to amend personas (#1250) 2025-04-09 15:48:25 +10:00
.npmrc DEV: Switch to use pnpm (#833) 2024-10-14 13:37:20 +02:00
.prettierignore FEATURE: UI to update ai personas on admin page (#290) 2023-11-21 16:56:43 +11:00
.prettierrc.cjs DEV: Update linting configs (#280) 2023-11-03 11:30:09 +00:00
.rubocop.yml DEV: Expose AI spam scanning metrics (#1077) 2025-01-27 11:57:01 +08:00
.streerc DEV: Update linting configs (#280) 2023-11-03 11:30:09 +00:00
.template-lintrc.cjs DEV: Update linting (#326) 2023-11-29 23:01:48 +01:00
Gemfile DEV: Update linting configs (#280) 2023-11-03 11:30:09 +00:00
Gemfile.lock DEV: Update linting (#1194) 2025-03-17 15:14:53 +11:00
LICENSE DEV: Update license (#1147) 2025-02-24 11:20:06 +08:00
README.md DEV: Extract configs to a yml file and allow local config (#1142) 2025-02-24 16:22:19 +11:00
about.json DEV: GH CI needs discourse-solved (#1220) 2025-03-26 10:12:55 -03:00
eslint.config.mjs DEV: Update eslint config (#917) 2024-11-19 11:57:40 +01:00
package.json DEV: Update linting (#1194) 2025-03-17 15:14:53 +11:00
plugin.rb FEATURE: allow tools to amend personas (#1250) 2025-04-09 15:48:25 +10:00
pnpm-lock.yaml DEV: Update linting (#1194) 2025-03-17 15:14:53 +11:00
stylelint.config.mjs DEV: Update linting (#1191) 2025-03-13 13:25:38 +00:00
translator.yml UX: Display the indexing progress for RAG uploads (#557) 2024-04-09 11:03:07 -03:00

README.md

Discourse AI Plugin

Plugin Summary

For more information, please see: https://meta.discourse.org/t/discourse-ai/259214?u=falco

Evals

The directory evals contains AI evals for the Discourse AI plugin. You may create a local config by copying config/eval-llms.yml to config/eval-llms.local.yml and modifying the values.
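The local-config step described above amounts to:

```shell
cp config/eval-llms.yml config/eval-llms.local.yml
# then edit config/eval-llms.local.yml with your own values
```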

To run them use:

cd evals
./run --help

Usage: evals/run [options]
    -e, --eval NAME                  Name of the evaluation to run
        --list-models                List models
    -m, --model NAME                 Model to evaluate (will eval all models if not specified)
    -l, --list                       List evals

To run evals you will need to configure API keys in your environment:

OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
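Putting it together, running a single eval against a single model might look like the following (the eval and model names here are placeholders; use `./run -l` and `./run --list-models` to see the real ones):

```shell
cd evals
OPENAI_API_KEY=your_openai_api_key ./run -e some-eval -m some-model
```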