Commit Graph

605 Commits

Author SHA1 Message Date
Sam 2c609e165b
FEATURE: Add user location info to spam scanner context (#1076)
This adds registration IP, last known IP, and email address to the scanning context.

This gives the spam scanner another hint about possibly malicious users.

For example, a user registered in India but replying from Australia, or an
email address that is clearly a throwaway.
2025-01-21 17:51:21 +11:00
Roman Rizzi 46fcdb6ba5
FIX: Make summaries backfill job more resilient. (#1071)
To quickly select backfill candidates without comparing SHAs, we compare the last summarized post to the topic's highest_post_number. However, hiding or deleting a post and adding a small action will update this column, causing the job to stall and re-generate the same summary repeatedly until someone posts a regular reply. On top of this, the comparison doesn't always hold for topics summarized via `best_replies`, as the last reply isn't necessarily included.

Since this is not evident at first glance and each summarization strategy picks its targets differently, I'm opting to simplify the backfill logic and how we track potential candidates.

The first step is dropping `content_range`, which serves no purpose; it's there because summary caching was originally supposed to work differently. I'm replacing it with a column called `highest_target_number`, which tracks `highest_post_number` for topics and could track other things, like a channel's `message_count`, in the future.

Now that we have this column, when selecting potential backfill candidates we check if the summary is truly outdated by comparing the SHAs; if it's not, we just update the column and move on.
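A minimal sketch of the resulting candidate check (column and helper names are illustrative, not the plugin's exact API):

    def outdated?(summary, topic)
      return false if summary.highest_target_number == topic.highest_post_number

      if summary.original_content_sha == content_sha(topic) # hypothetical helper
        # Content unchanged (e.g. a post was hidden); resync the column and move on.
        summary.update!(highest_target_number: topic.highest_post_number)
        return false
      end

      true
    end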
2025-01-16 09:42:53 -03:00
Rafael dos Santos Silva f9aa2de413
FIX: AWS Bedrock non-streaming calls response log (#1072) 2025-01-15 18:51:25 -03:00
Sam 81b952d56e
FIX: only hide posts detected explicitly as spam (#1070)
When enabling the spam scanner there may be old, unscanned posts.
This creates a risky situation where the scanner operates on
legitimate posts and produces false positives.

To keep this a lot safer we no longer try to hide a spammer's
older posts.
2025-01-15 16:50:41 +11:00
Natalie Tay c881f8b361
DEV: Add rake task to send topics or posts to spam scanner (#1059) 2025-01-15 11:48:57 +08:00
Roman Rizzi 65bbcd71fc
DEV: Embedding tables' model_id has to be a bigint (#1058)
* DEV: Embedding tables' model_id has to be a bigint

* Drop old search_bit indexes

* copy rag fragment embeddings created during deploy window
2025-01-14 10:53:06 -03:00
Sam d07cf51653
FEATURE: llm quotas (#1047)
Adds a comprehensive quota management system for LLM models that allows:

- Setting per-group (applied per user in the group) token and usage limits with configurable durations
- Tracking and enforcing token/usage limits across user groups
- Quota reset periods (hourly, daily, weekly, or custom)
- Admin UI for managing quotas with real-time updates

This system provides granular control over LLM API usage by allowing admins
to define limits on both total tokens and number of requests per group.
Supports multiple concurrent quotas per model and automatically handles
quota resets.
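A rough sketch of the per-user-in-group check described above (model and column names are illustrative, not the plugin's exact schema):

    class LlmQuota < ActiveRecord::Base
      # true when the user has exhausted tokens or requests for this period
      def exceeded?(user)
        usage = LlmQuotaUsage.find_or_create_by!(user: user, llm_quota: self)
        usage.reset! if usage.started_at < duration_seconds.seconds.ago
        (max_tokens && usage.total_tokens >= max_tokens) ||
          (max_usages && usage.usages >= max_usages)
      end
    end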


Co-authored-by: Keegan George <kgeorge13@gmail.com>
2025-01-14 15:54:09 +11:00
Sam 20612fde52
FEATURE: add the ability to disable streaming on an Open AI LLM
Disabling streaming is required for models such as o1 that do not have streaming
enabled yet.

It is good to have this feature in case various APIs decide not to support streaming endpoints, so Discourse AI can continue to work just as it did before.

Also: fixes an issue where sharing artifacts would miss the viewport, leading to tiny artifacts on mobile.
2025-01-13 17:01:01 +11:00
Keegan George b24669c810
DEV: Add structure for errors in spam (#1054)
This update adds some structure for handling errors in the spam config while also handling a specific error related to the spam scanning user not being an admin account.
2025-01-09 09:17:06 -08:00
Sam 11d0f60f1e
FEATURE: smart date support for AI helper (#1044)
* FEATURE: smart date support for AI helper

This feature allows conversion of human typed in dates and times
to smart "Discourse" timezone friendly dates.

* fix specs and lint

* lint

* address feedback

* add specs
2024-12-31 08:04:25 +11:00
Sam f9f89adac5
FIX: keep track of silence reason when spam detection flags user (#1046)
Previously the reason was blank when silencing a user.
2024-12-27 17:47:16 +11:00
Keegan George b480f13a0f
FIX: Prevent LLM enumerator from erroring when spam enabled (#1045)
This PR fixes an issue where LLM enumerator would error out when `SiteSetting.ai_spam_detection = true` but there was no `AiModerationSetting.spam` present.

Typically, we add an `LlmDependencyValidator` for the setting itself; however, since Spam is unique in that it has its model set in `AiModerationSetting` instead of a `SiteSetting`, we'll add a simple check here to prevent erroring out.
2024-12-27 09:12:29 +11:00
Sam 47ecf86aa1
FIX: embedding validation (#1043) 2024-12-24 09:37:23 +11:00
Rafael dos Santos Silva 792df58fbc
FIX: AI Helper category / tag suggestion when user does not have muted categories (#1042) 2024-12-23 15:58:26 -03:00
Roman Rizzi ceac6e5efb
FIX: Embeddings validator test needs to use the new Vector class. (#1041) 2024-12-23 14:19:22 -03:00
Keegan George bdb8f1d5e0
FIX: Custom prefix causing allowed seeded LLMs not to be shown (#1039)
* FIX: Custom prefix causing allowed seeded LLMs not to be shown

* DEV: update spec

* not a `_map` setting, so it should be a string, not an array
2024-12-23 16:42:26 +11:00
Rafael dos Santos Silva 7607477ff9
FIX: Cloudflare Workers AI embeddings (#1037)
Regressed on 534b0df
2024-12-20 17:45:27 -03:00
Keegan George 059b3fabb8
DEV: Unreachable LLM error shouldn't prevent setting (#1036)
Previously, the behaviour for model settings was that when you try to set a model, it runs a test and returns an error if the test fails. The error then prevents you from setting the site setting.

This caused issues when we try to automate things. This PR updates the flow so that the test runs and discreetly logs any failure, but doesn't prevent the setting from being set. Instead we rely on "run test" in the LLM config, along with ProblemChecks, to catch issues.
2024-12-20 11:52:11 -08:00
Sam 6a7a45fd4f
FIX: properly spin down unused streamer threads (#1035)
Previous version was prone to the bug:

https://github.com/ruby-concurrency/concurrent-ruby/issues/1075

This is particularly bad because we could have a DB connection
attached to the thread that we never clean up, so after N hours
this could start exhibiting weird connection issues.
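A hedged sketch of the safer pattern (pool sizing and the unit of work are illustrative): make sure each pooled task releases its ActiveRecord connection before its thread goes idle or is reaped:

    pool = Concurrent::ThreadPoolExecutor.new(min_threads: 0, max_threads: 4, idletime: 30)

    pool.post do
      stream_reply # hypothetical unit of work
    ensure
      # always hand the DB connection back, even on error
      ActiveRecord::Base.connection_pool.release_connection
    end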
2024-12-20 12:09:42 +11:00
Sam fae2d5ff2c
FEATURE: link correctly to filters to assist in debugging spam (#1031)
- Add spam_score_type to AiSpamSerializer for better integration with reviewables.
- Introduce a custom filter for detecting AI spam false negatives in moderation workflows.
- Refactor spam report generation to improve identification of false negatives.
- Add tests to verify the custom filter and its behavior.
- Introduce links for all spam counts in report
2024-12-17 11:02:18 +11:00
Mark VanLandingham 24b107881a
FEATURE: Unavailable state for semantic search when sort is not Relevant (#1030)
This commit adds an "unavailable" state for the AI semantic search toggle. Currently the AI toggle disappears when sorting by anything but Relevance, which makes the UI confusing for users looking for AI results. This should help!
2024-12-16 14:30:11 -06:00
Roman Rizzi 534b0df391
REFACTOR: Separation of concerns for embedding generation. (#1027)
In a previous refactor, we moved the responsibility of querying and storing embeddings into the `Schema` class. Now, it's time for embedding generation.

The motivation behind these changes is to isolate vector characteristics in simple objects to later replace them with a DB-backed version, similar to what we did with LLM configs.
2024-12-16 09:55:39 -03:00
Roman Rizzi eae527f99d
REFACTOR: A Simpler way of interacting with embeddings tables. (#1023)
* REFACTOR: A Simpler way of interacting with embeddings' tables.

This change adds a new abstraction called `Schema`, which acts as a repository that supports the same DB features `VectorRepresentation::Base` has, except that it removes the need to have duplicated methods per embeddings table.

It is also a bit more flexible when performing a similarity search because you can pass it a block that gives you access to the builder, allowing you to add multiple joins/where conditions.
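Illustrative usage of that block form, assuming a mini_sql-style builder (method and class names may differ from the plugin's implementation):

    schema = DiscourseAi::Embeddings::Schema.for(Topic)

    matches = schema.similarity_search(query_embedding, limit: 10) do |builder|
      builder.join("JOIN topics t ON t.id = topic_id")
      builder.where("t.archetype = ?", Archetype.default)
    end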
2024-12-13 10:15:21 -03:00
Roman Rizzi 97ec2c5ff4
FEATURE: Show gists everywhere except suggested/related (#995) 2024-12-12 12:29:35 -03:00
Sam 47c1ea337e
FIX: allow scanning of trashed posts and deleted users for test (#1024)
When a post is trashed we should still be allowed to scan it;
note that post.topic will be nil for a trashed topic, even when only the post is trashed.
2024-12-12 10:26:05 +11:00
Sam 47f5da7e42
FEATURE: Add AI-powered spam detection for new user posts (#1004)
This introduces a comprehensive spam detection system that uses LLM models
to automatically identify and flag potential spam posts. The system is
designed to be both powerful and configurable while preventing false positives.

Key Features:
* Automatically scans first 3 posts from new users (TL0/TL1)
* Creates dedicated AI flagging user to distinguish from system flags
* Tracks false positives/negatives for quality monitoring
* Supports custom instructions to fine-tune detection
* Includes test interface for trying detection on any post

Technical Implementation:
* New database tables:
  - ai_spam_logs: Stores scan history and results
  - ai_moderation_settings: Stores LLM config and custom instructions
* Rate limiting and safeguards (see the sketch after these lists):
  - Minimum 10-minute delay between rescans
  - Only scans significant edits (>10 char difference)
  - Maximum 3 scans per post
  - 24-hour maximum age for scannable posts
* Admin UI features:
  - Real-time testing capabilities
  - 7-day statistics dashboard
  - Configurable LLM model selection
  - Custom instruction support

Security and Performance:
* Respects trust levels - only scans TL0/TL1 users
* Skips private messages entirely
* Stops scanning users after 3 successful public posts
* Includes comprehensive test coverage
* Maintains audit log of all scan attempts
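
A condensed sketch of those guards (method and column names are illustrative; `log` is a hypothetical per-post scan record):

    def should_scan?(post, log)
      return false if post.user.trust_level > TrustLevel[1] # only TL0/TL1
      return false if post.topic.private_message?           # skip PMs entirely
      return false if post.created_at < 24.hours.ago        # too old to scan
      return false if log.scan_count >= 3                   # max 3 scans per post
      return false if log.last_scanned_at && log.last_scanned_at > 10.minutes.ago
      # rescan only on significant edits (>10 char difference)
      log.scan_count.zero? || (post.raw.length - log.scanned_length).abs > 10
    end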


---------

Co-authored-by: Keegan George <kgeorge13@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>
2024-12-12 09:17:25 +11:00
Martin Brennan ae80494448
UX: Improve rough edges of AI usage page (#1014)
* UX: Improve rough edges of AI usage page

* Ensure all text uses I18n
* Change from <button> usage to <DButton>
* Use <AdminConfigAreaCard> in place of custom card styles
* Format numbers nicely using our number format helper,
  show full values on hover using title attr
* Ensure 0 is always shown for counters, instead of being blank

* FEATURE: Load usage data after page load

Use ConditionalLoadingSpinner to hide load of usage
data, this prevents us hanging on page load with a white
screen.

* UX: Split users table, and add empty placeholders and page subheader

* DEV: Test fix
2024-12-12 08:55:24 +11:00
Keegan George a4440c507b
UX: Make sentiment trends more readable (#1018)
Instead of a stacked chart showing a separate series for positive and negative, this PR introduces a simplification to the overall sentiment dashboard. It compresses the sentiment into a single series: the difference between `positive - negative`. This should make the data easier to scan for trends.
2024-12-11 09:13:18 -08:00
Roman Rizzi 5fc7a730ef
FIX: Triage rule should append selected tags instead of replacing them (#1022) 2024-12-11 11:19:44 -03:00
Roman Rizzi 6da35d8e66
FIX: Gemini inference client was missing #instance (#1019) 2024-12-10 15:42:31 -03:00
Keegan George 700e9de073
Revert "UX: Make sentiment trends more readable in time series data (#1013)" (#1016)
This reverts commit 375dd702b2.
2024-12-10 08:15:27 -08:00
Keegan George 375dd702b2
UX: Make sentiment trends more readable in time series data (#1013)
Instead of a stacked chart showing a separate series for positive and negative, this PR introduces a simplification to the overall sentiment dashboard. It compresses the sentiment into a single series: the difference between `positive - negative`. This should make the data easier to scan for trends.
2024-12-10 07:22:41 -08:00
Sam 7ca21cc329
FEATURE: first class support for OpenRouter (#1011)
* FEATURE: first class support for OpenRouter

This new implementation supports picking quantization and provider pref

Also:

- Improve logging for summary generation
- Improve error message when contacting LLMs fails

* Better support for full screen artifacts on iPad

Support back button to close full screen
2024-12-10 05:59:19 +11:00
Roman Rizzi 085dde7042
FEATURE: Select stop sequences from triage script (#1010) 2024-12-06 11:13:47 -03:00
Roman Rizzi 7ebbcd2de3
FIX: Make sure prompt uploads get included in the prompt when triaging (#1008) 2024-12-05 21:04:35 -03:00
Sam a55216773a
FEATURE: Amazon Nova support via bedrock (#997)
Refactor dialect selection and add Nova API support

    Change dialect selection to use llm_model object instead of just provider name
    Add support for Amazon Bedrock's Nova API with native tools
    Implement Nova-specific message processing and formatting
    Update specs for Nova and AWS Bedrock endpoints
    Enhance AWS Bedrock support to handle Nova models
    Fix Gemini beta API detection logic
2024-12-06 07:45:58 +11:00
Rafael dos Santos Silva 5ba274f7a5
FIX: Change how we filter date for emotion on /filter (#1006) 2024-12-05 14:10:12 -03:00
Roman Rizzi b32b1cf241
FIX: Add a digest check to avoid repeteadly generating embeddings (bulk) (#1001) 2024-12-04 17:47:28 -03:00
Rafael dos Santos Silva 938d4c018c
FIX: More resilient sentiment backfill query (#998) 2024-12-04 12:10:31 -03:00
Roman Rizzi ce6a2eca21
FEATURE: Backfill posts sentiment. (#982)
* FEATURE: Backfill posts sentiment.

It adds a scheduled job to backfill posts' sentiment, similar to our existing rake task, but with two settings to control the batch size and posts' max-age.

* Make sure model_name order is consistent.
2024-12-03 10:27:03 -03:00
Rafael dos Santos Silva e3f5e86dc5
FIX: AI Automation scripts were broken when using seeded models (#991) 2024-12-02 19:07:05 -03:00
Keegan George fc88bb08ab
FIX: Tag suggester is suggesting already assigned tags (#990)
This PR fixes an issue where the tag suggester for the edit topic title area was suggesting tags that are already assigned to a post. It also increases the number of suggested tags to 7 so that a decent number of tags are still suggested when some tags are already assigned.
2024-12-03 07:25:04 +11:00
Sam 117c06220e
FEATURE: allow artifacts to be updated (#980)
Add support for versioned artifacts with improved diff handling

* Add versioned artifacts support allowing artifacts to be updated and tracked
  - New `ai_artifact_versions` table to store version history
  - Support for updating artifacts through a new `UpdateArtifact` tool
  - Add version-aware artifact rendering in posts
  - Include change descriptions for version tracking

* Enhance artifact rendering and security
  - Add support for module-type scripts and external JS dependencies
  - Expand CSP to allow trusted CDN sources (unpkg, cdnjs, jsdelivr, googleapis)
  - Improve JavaScript handling in artifacts

* Implement robust diff handling system (this is dormant but ready to use once LLMs catch up)
  - Add new DiffUtils module for applying changes to artifacts
  - Support for unified diff format with multiple hunks
  - Intelligent handling of whitespace and line endings
  - Comprehensive error handling for diff operations

* Update routes and UI components
  - Add versioned artifact routes
  - Update markdown processing for versioned artifacts

Also

- Tweaks summary prompt
- Improves upload support in custom tool to also provide urls
2024-12-03 07:23:31 +11:00
Rafael dos Santos Silva 3828370679
DEV: Cleanup deprecations (#952) 2024-12-02 14:18:03 -03:00
Roman Rizzi 0abd4b1244
FIX: Sentiment classification results needs to be transformed before saving (#983) 2024-11-29 17:31:56 -03:00
Sam 0cb2c413ba
FEATURE: exclude muted categories from category suggester (#979)
The logic here is that users do not particularly care about
topics in muted categories, so we can exclude those categories
from tag and category suggestions.
2024-11-29 12:17:28 +11:00
Rafael dos Santos Silva 80adefa1c1
FIX: Move count logic to the end for tag suggestions (#978) 2024-11-28 16:27:38 -03:00
Sam bc0657f478
FEATURE: AI Usage page (#964)
- Added a new admin interface to track AI usage metrics, including tokens, features, and models.
- Introduced a new route `/admin/plugins/discourse-ai/ai-usage` and supporting API endpoint in `AiUsageController`.
- Implemented `AiUsageSerializer` for structuring AI usage data.
- Integrated CSS stylings for charts and tables under `stylesheets/modules/llms/common/usage.scss`.
- Enhanced backend with `AiApiAuditLog` model changes: added `cached_tokens` column (implemented with OpenAI for now) with relevant DB migration and indexing.
- Created `Report` module for efficient aggregation and filtering of AI usage metrics.
- Updated AI Bot title generation logic to log correctly to user vs bot
- Extended test coverage for the new tracking features, ensuring data consistency and access controls.
2024-11-29 06:26:48 +11:00
Roman Rizzi c980c34d77
REFACTOR: Simplify sentiment classification (#977)
This change adds a simpler class for sentiment classification, replacing the soon-to-be removed `Classificator` hierarchy. Additionally, it adds a method for classifying concurrently, speeding up the backfill rake task.
2024-11-28 15:38:23 -03:00
Rafael dos Santos Silva 4980a4b2f7
FIX: Multiple concurrent summaries could result in pg index errors (#973) 2024-11-28 11:53:04 -03:00
Keegan George f1c7ee8624
DEV: Better control what prompts can appear in post/composer (#969)
This PR updates the logic for the location map so it permits only the desired prompts through to the composer/post menu. Anything else won't be shown by default.

This PR also adds relevant tests to prevent regression.
2024-11-27 16:14:21 -08:00
Keegan George dabef02919
DEV: Prevent `detect_text_locale` from appearing in menus (#967)
### 🔍 Overview
With the recent changes to allow DiscourseAi in the translator plugin, `detect_text_locale` was needed as a CompletionPrompt. However, it is leaking into composer/post helper menus. This PR ensures we don't show it in those menus.
2024-11-28 09:27:08 +11:00
Keegan George 6b7d7c1179
REFACTOR: Helper suggestions (#914)
This PR adds some updates to the Helper suggestions to improve its functionality and modernize some of the codebase.
2024-11-27 12:21:03 -08:00
Roman Rizzi 251628bfa1
FIX: Shutdown embeddings thread pool after processing (#961) 2024-11-26 18:12:03 -03:00
Roman Rizzi ef07fcb308
FIX: Skip records without content to classify (#960) 2024-11-26 15:54:20 -03:00
Roman Rizzi ddf2bf7034
DEV: Backfill embeddings concurrently. (#941)
We are adding a new method for generating and storing embeddings in bulk, which relies on `Concurrent::Promises::Future`. Generating an embedding consists of three steps:

1. Prepare the text.
2. Make an HTTP call to retrieve the vector.
3. Save it to the DB.

Each one is independently executed on whatever thread the pool gives us.

We are bringing in a custom thread pool instead of the global executor since we want control over how many threads we spawn, to limit concurrency. We also avoid firing thousands of HTTP requests when working with large batches.
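
A simplified sketch of the bulk flow described above, using `Concurrent::Promises` with a bounded custom pool (the `vector` helper methods are illustrative, not the plugin's exact API):

    pool = Concurrent::FixedThreadPool.new(8)

    futures = records.map do |record|
      Concurrent::Promises
        .future_on(pool, record) { |r| vector.prepare_text(r) }
        .then_on(pool) { |text| vector.request_embedding(text) } # HTTP call
        .then_on(pool) { |embedding| vector.store(record, embedding) }
    end

    Concurrent::Promises.zip(*futures).value! # wait for the whole batch
    pool.shutdown
    pool.wait_for_termination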
2024-11-26 14:12:32 -03:00
Rafael dos Santos Silva 23193ee6f2
FEATURE: Calculate gists from non hot topics too (#958)
Also renames some settings to remove 'hot' references.
2024-11-26 13:44:12 -03:00
Sam 616b990894
FEATURE: LLM mentions and auto silence (#949)
* FEATURE: allow mentioning an LLM mid conversation to switch

This is an edge-case feature that allows you to start a conversation
in a PM with LLM1 and then use LLM2 to evaluate or continue
the conversation.

* FEATURE: allow auto silencing of spam accounts

The new rule can also allow for silencing an account automatically.

This can prevent spammers from creating additional posts.
2024-11-26 07:19:56 +11:00
Roman Rizzi 79021252e9
REFACTOR: Tidy-up embedding endpoints config. (#937)
Two changes worth mentioning:

- `#instance` returns a fully configured embedding endpoint, ready to use.
- All endpoints respond to the same method and have the same signature: `perform!(text)`.

This makes it easier to reuse them when generating embeddings in bulk.
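
Illustrative usage of that interface (the module path and class name here are assumptions, not the plugin's exact namespace):

    endpoint = DiscourseAi::Embeddings::Endpoints::OpenAi.instance
    embedding = endpoint.perform!("The quick brown fox") # => Array of floats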
2024-11-25 13:12:43 -03:00
Roman Rizzi 95762723de
PERF: Preload only gists when including summaries in topic list (#948)
* PERF: Preload only gists when including summaries in topic list

* Add unique index on summaries and dedup existing records

* Make hot topics batch size setting hidden
2024-11-25 12:24:02 -03:00
Rafael dos Santos Silva 5fb1177f7b
FEATURE: Refinements to Emotion in dashboard (#947)
* FEATURE: Refinements to Emotion in dashboard

- Added descriptions to individual reports
- Made reports work with data older than 60 days
2024-11-25 11:31:51 -03:00
David Taylor ea24328415
FIX: Correctly register 'info' icon (#950)
This icon is not part of the core set. Previously, d-ai was relying on other installed plugins (e.g. data-explorer) to include the icon
2024-11-25 10:20:17 -03:00
Roman Rizzi e54f2da1a5
FIX: Unnecessary complex preloading accidentally filters some topics. (#945)
The `topic_query_create_list_topics` modifier we append was always meant to avoid an N+1 situation when serializing gists. However, I tried to be too smart and only preload these, which resulted in some topics with *only* regular summaries getting removed from the list. This issue became apparent now that we are adding gists to other lists besides hot.

Let's simplify the preloading, which still solves the N+1 issue, and let the serializer get the needed summary.
2024-11-22 12:07:27 -03:00
Joffrey JAFFEUX 2cc8115b48
FIX: disables temporarily ai_summaries filtering (#943) 2024-11-22 08:34:54 +01:00
Rafael dos Santos Silva 2c78961bed
FIX: Properly capture period for /filter emotion ordering (#940) 2024-11-21 16:54:30 -03:00
Rafael dos Santos Silva 8e00e036e1
FEATURE: Make emotion /filter ordering match the dashboard table (#939)
* FEATURE: Make emotion /filter ordering match the dashboard table

This change makes the /filter endpoint use the same criteria we use
in the dashboard table for emotion, so it is not confusing for users.
It means that only posts made in the period with the emotion shall be
shown in the /filter, and the order is simply a count of posts that
match the emotion in the period.

It also uses a trick to extract the filter period, and apply it to
the CTE clause that calculates post emotion count on the period, making
it a bit more efficient. Downside is that /filter filters are evaluated
from left to right, so it will only get the speed-up if the emotion
order is last. As we do this on the dashboard table, it should cover
most uses of the ordering, kicking the need for materialized views
down the road.
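
Roughly the query shape this describes (illustrative table and column names, not the plugin's actual SQL): count matching posts inside the CTE, scoped to the extracted period, so the outer ORDER BY stays cheap.

    DB.query(<<~SQL, emotion: emotion, since: period_start)
      WITH emotion_counts AS (
        SELECT p.topic_id, COUNT(*) AS posts_with_emotion
        FROM posts p
        JOIN classification_results c ON c.target_id = p.id
        WHERE c.classification_type = :emotion AND p.created_at >= :since
        GROUP BY p.topic_id
      )
      SELECT t.id
      FROM topics t
      JOIN emotion_counts e ON e.topic_id = t.id
      ORDER BY e.posts_with_emotion DESC
    SQL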

* Remove zero score in filter

* add table tooltip

* lint
2024-11-21 15:18:31 -03:00
Sam d56ed53eb1
FIX: cancel functionality regressed (#938)
The cancel message was not propagating correctly to the HTTP call, making it impossible to cancel completions.

This is now fully tested as well.
2024-11-21 17:51:45 +11:00
Sam 52c644798d
DEV: improve artifact presentation (#932)
1. Keep the source in a "details" block after rendering so it does
not overwhelm users

2. Ensure artifacts are never indexed by robots

3. Cache break our CSS that changed recently
2024-11-20 18:53:19 +11:00
Sam a0aec48606
FIX: gists are not html safe (#931)
Also allow "Everyone" in ai_hot_topic_gists_allowed_groups
2024-11-20 10:54:49 +11:00
Rafael dos Santos Silva 2c8d81827f
FIX: Misc fixes for sentiment in the admin dashboard (#928)
* FIX: Misc fixes for sentiment in the admin dashboard

- Fixes missing filters for the main graph

- Fixes previous 30 days trend in emotion table

Also moves links to individual cells in emotion table, so admins can
drill down to the specific time period on their reports.

* lints
2024-11-19 19:16:21 -03:00
Roman Rizzi 530a795d43
FIX: Instruct AR that we want to use ai_summaries for filtering. (#927)
We use `includes` instead of `joins` because we want to eager-load summaries, avoiding an extra query when summarizing. However, Rails will complain unless you explicitly inform it that you plan to use the association inside a `WHERE` clause.
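A minimal sketch of the fix (association and column names are illustrative): eager-load the summaries and tell ActiveRecord the association appears in the WHERE clause via `references`:

    Topic
      .includes(:ai_summaries)
      .references(:ai_summaries)
      .where(ai_summaries: { summary_type: "gist" })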
2024-11-19 17:32:13 -03:00
Roman Rizzi dcde94a393
FIX: Reduce scope of topic gists inclusion. (#925)
The topic query is used differently, and we can't assume the modifier will always receive an AR relation. Let's scope it to `Discourse#filters` instead of most lists.
2024-11-19 15:37:19 -03:00
Roman Rizzi 3c91f374ac
FIX: Skip gists from PM topic lists (#923) 2024-11-19 12:51:19 -03:00
Roman Rizzi fb80d776d8
FEATURE: Enable gists on all topic lists (#922) 2024-11-19 11:04:34 -03:00
Rafael dos Santos Silva 48d08dedd4
FEATURE: Emotion activity metrics table (#916) 2024-11-19 10:01:10 -03:00
Sam 755b63f31f
FEATURE: Add support for Mistral models (#919)
Adds support for mistral models (pixtral and mistral large now have presets)

Also corrects token accounting in AWS bedrock models
2024-11-19 17:28:09 +11:00
Sam 0d7f353284
FEATURE: AI artifacts (#898)
This is a significant PR that introduces AI Artifacts functionality to the discourse-ai plugin along with several other improvements. Here are the key changes:

1. AI Artifacts System:
   - Adds a new `AiArtifact` model and database migration
   - Allows creation of web artifacts with HTML, CSS, and JavaScript content
   - Introduces security settings (`strict`, `lax`, `disabled`) for controlling artifact execution
   - Implements artifact rendering in iframes with sandbox protection
   - New `CreateArtifact` tool for AI to generate interactive content

2. Tool System Improvements:
   - Adds support for partial tool calls, allowing incremental updates during generation
   - Better handling of tool call states and progress tracking
   - Improved XML tool processing with CDATA support
   - Fixes for tool parameter handling and duplicate invocations

3. LLM Provider Updates:
   - Updates for Anthropic Claude models with correct token limits
   - Adds support for native/XML tool modes in Gemini integration
   - Adds new model configurations including Llama 3.1 models
   - Improvements to streaming response handling

4. UI Enhancements:
   - New artifact viewer component with expand/collapse functionality
   - Security controls for artifact execution (click-to-run in strict mode)
   - Improved dialog and response handling
   - Better error management for tool execution

5. Security Improvements:
   - Sandbox controls for artifact execution
   - Public/private artifact sharing controls
   - Security settings to control artifact behavior
   - CSP and frame-options handling for artifacts

6. Technical Improvements:
   - Better post streaming implementation
   - Improved error handling in completions
   - Better memory management for partial tool calls
   - Enhanced testing coverage

7. Configuration:
   - New site settings for artifact security
   - Extended LLM model configurations
   - Additional tool configuration options

This PR significantly enhances the plugin's capabilities for generating and displaying interactive content while maintaining security and providing flexible configuration options for administrators.
2024-11-19 09:22:39 +11:00
Rafael dos Santos Silva 4fb686a548
FIX: Move emotion /filter logic into a CTE to keep cardinality sane (#915) 2024-11-14 17:16:48 -03:00
Rafael dos Santos Silva 5026ab52d0
FEATURE: Order by emotion on /filter (#913) 2024-11-14 12:45:40 -03:00
Sam 823e8ef490
FEATURE: partial tool call support for OpenAI and Anthropic (#908)
Implements streaming tool calls for Anthropic and OpenAI.

When calling:

llm.generate(..., partial_tool_calls: true) do ...
Partials may contain ToolCall instances with `partial: true`. These tool calls are partially populated, with their JSON parsed incrementally.

So for example when performing a search you may get:

ToolCall(..., {search: "hello" })
ToolCall(..., {search: "hello world" })

The library used to parse json is:

https://github.com/dgraham/json-stream

We use a fork because we need access to the internal buffer.

This prepares internals to perform partial tool calls, but does not implement it yet.
2024-11-14 06:58:24 +11:00
Sam 9551b1a4d1
FIX: do not strip empty string during stream processing (#911)
Fixes an issue where the OpenAI provider was eating newlines and spaces.
2024-11-13 07:12:00 +11:00
Rafael dos Santos Silva aef9a03d4c
FEATURE: Truncate AI Captions to a reasonable max size (#907) 2024-11-12 15:52:46 -03:00
Sam e817b7dc11
FEATURE: improve tool support (#904)
This re-implements tool support in DiscourseAi::Completions::Llm #generate

Previously tool support was always returned via XML and it would be the responsibility of the caller to parse XML

New implementation has the endpoints return ToolCall objects.

Additionally this simplifies the Llm endpoint interface and gives it more clarity. Llms must implement

decode, decode_chunk (for streaming)

It is the implementer's responsibility to figure out how to decode chunks; the base class no longer implements this. To make this easy, we ship a flexible JSON decoder which is easy to wire up.
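
A hedged sketch of that contract (class and helper names are illustrative, not the plugin's exact API):

    class MyEndpoint < DiscourseAi::Completions::Endpoints::Base
      # Decode a full (non-streamed) response body into text or ToolCall objects.
      def decode(response_body)
        json = JSON.parse(response_body, symbolize_names: true)
        extract_completion(json) # hypothetical helper
      end

      # Decode a raw streaming chunk; the JSON decoder buffers partial
      # payloads until complete values can be emitted.
      def decode_chunk(chunk)
        @decoder ||= JsonStreamDecoder.new # name illustrative
        (@decoder << chunk).map { |parsed| extract_completion(parsed) }
      end
    end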

Also (new)

    Better debugging for PMs, we now have a next / previous button to see all the Llm messages associated with a PM
    Token accounting is fixed for vllm (we were not correctly counting tokens)
2024-11-12 08:14:30 +11:00
Keegan George 644141ff08
FIX: Regenerate summary button still shows cached summary (#903)
This PR fixes an issue where clicking to regenerate a summary was still showing the cached summary. To resolve this we call resetSummary() to reset all the summarization-related properties before creating a new request.
2024-11-07 16:01:18 -08:00
Roman Rizzi 9505a8976c
FEATURE: Automatically backfill regular summaries. (#892)
This change introduces a job to summarize topics and cache the results automatically. We provide a setting to control how many topics we'll backfill per hour and what the topic's minimum word count is to qualify.

We'll prioritize topics without summary over outdated ones.
2024-11-04 17:48:11 -03:00
Sam 98022d7d96
FEATURE: support custom instructions for persona streaming (#890)
This allows us to inject information into the system prompt
which can help shape replies without repeating over and over
in messages.
2024-11-05 07:43:26 +11:00
Rafael dos Santos Silva 772ee934ab
Migrate sentiment to a TEI backend (#886) 2024-11-04 09:14:34 -03:00
Sam bffe9dfa07
FIX: we must properly encode objects prior to escaping (#891)
(in the case of arrays, escapeHTML will not work)
2024-11-04 16:16:25 +11:00
Sam c352054d4e
FIX: encode parameters returned from LLMs correctly (#889)
Fixes encoding of params on LLM function calls.

Previously we would improperly return results if a function parameter returned an HTML tag.

Additionally adds some missing HTTP verbs to tool calls.
2024-11-04 10:07:17 +11:00
Roman Rizzi 7e3a543f6f
FEATURE: Double gist length to 40 words (#888) 2024-11-01 13:09:03 -03:00
Roman Rizzi e8f0633141
DEV: Extend truncation to all summarizable content (#884) 2024-10-31 12:17:42 -03:00
Roman Rizzi e8eed710e0
FIX: Truncate OP for gists to help the model focus on the latest posts (#883) 2024-10-31 10:54:56 -03:00
Sam 34a59b623e
FIX: ensure replies are never double streamed (#879)
The custom field "discourse_ai_bypass_ai_reply" was added so
we can signal the post created hook to bypass replying even
if it thinks it should.

Otherwise there are cases where we double answer user questions
leading to much confusion.

This also slightly refactors code, making the controller smaller.
2024-10-30 20:24:39 +11:00
Sam be0b78cacd
FEATURE: new endpoint for directly accessing a persona (#876)
The new `/admin/plugins/discourse-ai/ai-personas/stream-reply.json` was added.

This endpoint streams data direct from a persona and can be used
to access a persona from remote systems leaving a paper trail in
PMs about the conversation that happened

This endpoint is only accessible to admins.

---------

Co-authored-by: Gabriel Grubba <70247653+Grubba27@users.noreply.github.com>
Co-authored-by: Keegan George <kgeorge13@gmail.com>
2024-10-30 10:28:20 +11:00
Roman Rizzi dd404c924a
DEV: Use different feature_names for summarization strategies (#875) 2024-10-29 08:45:14 -03:00
Rafael dos Santos Silva 8ded4b2e58
FIX: Use present? instead of invalid exists? (#869) 2024-10-25 13:04:42 -03:00
Roman Rizzi a2b1ea3c63
FEATURE: Fast-track gist regeneration when a hot topic gets a new post (#860)
* FEATURE: Fast-track gist regeneration when a hot topic gets a new post

* DEV: Introduce an upsert-like summarize

* FIX: Only enqueue fast-track gist for hot hot hot topics

---------

Co-authored-by: Rafael Silva <xfalcox@gmail.com>
2024-10-25 12:38:49 -03:00
Rafael dos Santos Silva 33da27e231
FIX: Change hot gist prompt to avoid title repeating #859 (#859)
Co-authored-by: Roman Rizzi <rizziromanalejandro@gmail.com>
2024-10-25 12:12:33 -03:00
Roman Rizzi ec97996905
FIX/REFACTOR: FoldContent revamp (#866)
* FIX/REFACTOR: FoldContent revamp

We hit a snag with our hot topic gist strategy: the regex we used to split the content didn't work, so we couldn't send the original post separately. This was important for letting the model focus on what's new in the topic.

The algorithm doesn’t give us full control over how prompts are written, and figuring out how to format the content isn't straightforward. This means we're having to use more complicated workarounds, like regex.

To tackle this, I'm suggesting we simplify the approach a bit. Let's focus on summarizing as much as we can upfront, then gradually add new content until there's nothing left to summarize.

Also, the "extend" part is mostly for models with small context windows, which shouldn't pose a problem 99% of the time with the content volume we're dealing with.

* Fix fold docs

* Use #shift instead of #pop to get the first elem, not the last
2024-10-25 11:51:17 -03:00
Sam 12869f2146
FIX: testing tool was not showing rag results (#867)
This changeset contains 4 fixes:

1. We were allowing tests to run on unsaved tools. This is problematic
because uploads are not yet associated or indexed, leading to confusing
results. We now only show the test button when the tool is saved.

2. We were not properly scoping RAG document fragments. This meant that
personas and AI tools could get results from other, unrelated tools,
only to be filtered out later.

3. index.search showed options as "optional" but the implementation
required the second option.

4. When testing tools, searching through document fragments was not
working at all because we did not properly load the tool.
2024-10-25 16:01:25 +11:00