We were logging persona triage as "bot" in the logs, which caused confusion around real-world usage.
This amends it so we log usage as "automation - AUTOMATION NAME".
It also adds a new persona triage mode where nothing is posted on the topic, allowing people to perform all triage actions using tools.
Additionally, it introduces new APIs to create chat messages from tools, which can be useful in certain moderation scenarios.
* remove TODO code
Co-authored-by: Natalie Tay <natalie.tay@gmail.com>
- Fix search API to only include column_names when present, making the API less confusing
- Ensure correct LLM is used in PMs by tracking and preferring the last bot user
- Fix persona_id conversion from string to integer in custom fields
- Add missing test for PM triage with no replies - ensure we don't try to auto-title the topic
- Ensure bot users are properly added to PMs
- Make title setting optional when replying to posts
- Add ability to control stream_reply behavior
These changes improve reliability and fix edge cases in bot interactions, particularly in private messages with multiple LLMs and when triaging posts using personas.
This PR enhances the LLM triage automation with several important improvements:
- Add ability to use AI personas for automated replies instead of canned replies
- Add support for whisper responses
- Refactor LLM persona reply functionality into a reusable method
- Add new settings to configure response behavior in automations
- Improve error handling and logging
- Fix handling of personal messages in the triage flow
- Add comprehensive test coverage for new features
- Make personas configurable with more flexible requirements
This allows for more dynamic and context-aware responses in automated workflows, with better control over visibility and attribution.
## LLM Persona Triage
- Allows automated responses to posts using AI personas
- Configurable to respond as regular posts or whispers
- Adds context-aware formatting for topics and private messages
- Provides special handling for topic metadata (title, category, tags)
## LLM Tool Triage
- Enables custom AI tools to process and respond to posts
- Tools can analyze post content and invoke personas when needed
- Zero-parameter tools can be used for automated workflows
- Not enabled in production yet
## Implementation Details
- Added new scriptable registrations in the discourse_automation/ directory
- Created core implementation in lib/automation/ modules
- Enhanced PromptMessagesBuilder with topic-style formatting
- Added helper methods for persona and tool selection in UI
- Extended AI Bot functionality to support whisper responses
- Added rate limiting to prevent abuse
## Other Changes
- Added comprehensive test coverage for both automation types
- Enhanced tool runner with LLM integration capabilities
- Improved error handling and logging
This feature allows forum admins to configure AI personas to automatically respond to posts based on custom criteria and leverage AI tools for more complex triage workflows.
Tool Triage has been disabled in production while we finalize details of new scripting capabilities.
Adds support for "thinking tokens", a feature that exposes the model's reasoning process before providing the final response. Key improvements include:
- Add a new Thinking class to handle thinking content from LLMs
- Modify endpoints (Claude, AWS Bedrock) to handle thinking output
- Update AI bot to display thinking in a collapsible details section (sketched after this list)
- Fix SEARCH/REPLACE blocks to support empty replacement strings, plus general improvements to artifact editing
- Allow configurable temperature in triage and report automations
- Various bug fixes and improvements to diff parsing
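As a rough illustration of the collapsible display mentioned above, here is a minimal sketch; the helper name and markup are assumptions, not the bot's actual rendering code:

```ruby
# Sketch only: wrap the model's thinking output in a collapsible
# <details> block ahead of the final reply.
def render_with_thinking(thinking, reply)
  <<~MD
    <details>
      <summary>Thinking</summary>

    #{thinking}

    </details>

    #{reply}
  MD
end

puts render_with_thinking("The user asked about X, so...", "Here is the answer.")
```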
A new feature_context JSON column was added to ai_api_audit_logs. This allows us to store rich JSON context on any LLM request made. The new field now stores the automation id and name.
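For illustration only, the kind of payload this column might now hold for an automation-triggered request (field names here are assumptions beyond the automation id and name mentioned above):

```ruby
# Hypothetical feature_context payload stored on ai_api_audit_logs
# for a request made by an automation.
feature_context = { automation_id: 42, automation_name: "llm_triage" }
```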
Additionally, llm_triage can now specify a maximum number of tokens, which means you can limit the cost of llm triage by scanning only the first N tokens of a post.
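A minimal sketch of that cost cap, assuming a simple truncation step before the prompt is built; the whitespace split stands in for whatever tokenizer the plugin actually uses:

```ruby
# Sketch: cap triage input at max_tokens "tokens" (words here, purely
# for illustration) before sending it to the model.
def truncate_for_triage(post_text, max_tokens)
  post_text.split.first(max_tokens).join(" ")
end

puts truncate_for_triage("a very long post " * 1000, 100)
```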
* DEV: Remove old code now that features rely on LlmModels.
* Hide old settings and migrate persona llm overrides
* Remove shadowing special URL + seeding code. Use srv:// prefix instead.
Introduce a Discourse Automation based periodical report. Depends on Discourse Automation. The report works best with very large context language models such as GPT-4 Turbo and Claude 2.
- Introduces final_insts to the generic LLM format; for Claude to work best, it is better to guide the last assistant message (we should add this to other spots as well) - see the sketch after this list
- Adds GPT-4 Turbo support to the generic LLM interface
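A hypothetical example of a prompt using final_insts; only final_insts itself is named above, the other keys are illustrative rather than the format's confirmed schema:

```ruby
# Hypothetical prompt hash for the generic LLM format: final_insts
# guides the last assistant message, which helps Claude in particular.
prompt = {
  insts: "You are a forum activity report generator.",
  input: "<activity data for the reporting period>",
  final_insts: "Reply with the report only, no preamble.",
}
```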
llm_triage was meant to support Claude 2 in triage; this implements that support.
OpenAI rate limits frequently, so this introduces exponential backoff (3 attempts, sleeping 3, 9, then 27 seconds). It also reduces the temperature of the classifiers so they behave consistently.
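A minimal sketch of that backoff, assuming a generic rate-limit error; all names are illustrative rather than the plugin's internals:

```ruby
# Sketch: retry a rate-limited call up to 3 times, sleeping
# 3, 9, then 27 seconds (3**attempt) between attempts.
class RateLimited < StandardError; end

def with_backoff(max_retries: 3)
  attempt = 0
  begin
    yield
  rescue RateLimited
    attempt += 1
    raise if attempt > max_retries
    sleep(3**attempt)
    retry
  end
end

# usage: with_backoff { client.complete(prompt) }
```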
The new automation rule can be used to perform LLM-based classification and categorization of topics. You specify a system prompt (which takes %%POST%% as an input); if it returns a particular piece of text, we apply rules such as tagging, hiding, replying, or categorizing. This can be used as a spam filter, an "oops, you are in the wrong place" filter, and so on.
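A sketch of that flow under stated assumptions; the method shape and the stubbed LLM call are illustrative, not the automation's real code:

```ruby
# Illustrative triage flow: substitute the post into the system prompt,
# ask the model, and apply actions when the trigger text comes back.
def llm_complete(prompt)
  "SPAM" # stub; the real rule calls the configured model
end

def triage(post_raw, system_prompt:, search_for_text:)
  prompt = system_prompt.sub("%%POST%%", post_raw)
  reply = llm_complete(prompt)
  yield if reply.strip.downcase.include?(search_for_text.downcase)
end

triage(
  "Buy cheap watches at example.com!",
  system_prompt: "Reply with SPAM if this post is spam: %%POST%%",
  search_for_text: "spam",
) { puts "applying actions: tag / hide / reply / categorize" }
```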
Co-authored-by: Joffrey JAFFEUX <j.jaffeux@gmail.com>