Commit Graph

112 Commits

Author SHA1 Message Date
Penar Musaraj 059da39a4a DEV: output raw response in dev mode
Makes it easier to debug issues, for example if rate limits are hit.
2025-05-13 16:54:10 -04:00
Sam 2a62658248
FEATURE: support configurable thinking tokens for Gemini (#1322) 2025-05-08 07:39:50 +10:00
Roman Rizzi 5bc9fdc06b
FIX: Return structured output on non-streaming mode (#1318) 2025-05-06 15:34:30 -03:00
Roman Rizzi c0a2d4c935
DEV: Use structured responses for summaries (#1252)
* DEV: Use structured responses for summaries

* Fix system specs

* Make response_format a first class citizen and update endpoints to support it (see the payload sketch after this list)

* Response format can be specified in the persona

* lint

* switch to jsonb and make column nullable

* Reify structured output chunks. Move JSON parsing to the depths of Completion

* Switch to JsonStreamingTracker for partial JSON parsing
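For reference, a minimal sketch of one shape a first-class response_format can translate to on an OpenAI-style endpoint; the json_schema envelope follows OpenAI's documented structured-output format, while the summary schema itself is illustrative and not taken from this change.

```ruby
# Illustrative only: a response_format a summarization persona could request.
# The envelope mirrors OpenAI's structured-output API; the schema is made up.
response_format = {
  type: "json_schema",
  json_schema: {
    name: "topic_summary",
    strict: true,
    schema: {
      type: "object",
      properties: { summary: { type: "string" } },
      required: ["summary"],
      additionalProperties: false
    }
  }
}
```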
2025-05-06 10:09:39 -03:00
Rafael dos Santos Silva 4eac377987
DEV: Zero delays on fake endpoint used in tests (#1311) 2025-05-05 17:47:32 -03:00
Sam 1dde82eb58
FEATURE: allow specifying tool use none in completion prompt
This PR adds support for disabling further tool calls by setting tool_choice to :none across all supported LLM providers:

- OpenAI: Uses "none" tool_choice parameter
- Anthropic: Uses {type: "none"} and adds a prefill message to prevent confusion
- Gemini: Sets function_calling_config mode to "NONE"
- AWS Bedrock: Doesn't natively support tool disabling, so adds a prefill message

Previously we disabled tool calls by simply removing tool definitions, but this caused errors with some providers. This implementation uses the method each provider supports, with a prefill fallback for Bedrock.
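A rough sketch of the per-provider payloads listed above; the wrapper method and hash nesting are illustrative, only the field names and values come from this description.

```ruby
# Sketch: mapping tool_choice :none onto each provider's request payload.
def none_tool_choice_params(provider)
  case provider
  when :open_ai
    { tool_choice: "none" }
  when :anthropic
    { tool_choice: { type: "none" } } # plus a prefill message to avoid confusion
  when :google_gemini
    { tool_config: { function_calling_config: { mode: "NONE" } } }
  when :aws_bedrock
    {} # no native way to disable tools; rely on a prefill message instead
  end
end
```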

Co-authored-by: Natalie Tay <natalie.tay@gmail.com>

* remove stray puts

* cleaner chain breaker for last tool call (works in thinking)

remove unused code

* improve test

---------

Co-authored-by: Natalie Tay <natalie.tay@gmail.com>
2025-03-25 08:06:43 +11:00
Sam 8f4cd2fcbd
FEATURE: allow disabling of top_p and temp for thinking models (#1184)
Thinking models such as Claude 3.7 Thinking and o1 / o3 do not
support top_p or temp.

Previously you would have to carefully remove them from everywhere.
By making this a provider param, we now support blanket removal
without forcing people to update automation rules or personas.
2025-03-11 16:54:02 +11:00
Sam 01893bb6ed
FEATURE: Add persona-based replies and whisper support to LLM triage (#1170)
This PR enhances the LLM triage automation with several important improvements:

- Add ability to use AI personas for automated replies instead of canned replies
- Add support for whisper responses
- Refactor LLM persona reply functionality into a reusable method
- Add new settings to configure response behavior in automations
- Improve error handling and logging
- Fix handling of personal messages in the triage flow
- Add comprehensive test coverage for new features
- Make personas configurable with more flexible requirements

This allows for more dynamic and context-aware responses in automated workflows, with better control over visibility and attribution.
2025-03-06 17:18:15 +11:00
Sam f6eedf3e0b
FEATURE: implement thinking token support (#1155)
adds support for "thinking tokens" - a feature that exposes the model's reasoning process before providing the final response. Key improvements include:

- Add a new Thinking class to handle thinking content from LLMs
- Modify endpoints (Claude, AWS Bedrock) to handle thinking output
- Update AI bot to display thinking in collapsible details section
- Fix SEARCH/REPLACE blocks to support empty replacement strings and general improvements to artifact editing
- Allow configurable temperature in triage and report automations
- Various bug fixes and improvements to diff parsing
2025-03-04 12:22:30 +11:00
Sam fe19133dd4
FEATURE: full support for Sonnet 3.7 (#1151)
* FEATURE: full support for Sonnet 3.7

- Adds support for Sonnet 3.7 with reasoning on bedrock and anthropic
- Fixes regression where provider params were not populated

Note: reasoning tokens are hardcoded to a minimum of 100 and a maximum of 65536
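A hedged sketch of what this can look like on the Anthropic messages API; the `thinking` envelope follows Anthropic's documented extended-thinking format, the clamp mirrors the range noted above, and the rest is illustrative.

```ruby
# Illustrative request body for Claude 3.7 Sonnet with extended thinking enabled.
reasoning_tokens = 8_192
budget = reasoning_tokens.clamp(100, 65_536) # range mentioned above

payload = {
  model: "claude-3-7-sonnet-20250219",
  max_tokens: budget + 1_024, # must be larger than the thinking budget
  thinking: { type: "enabled", budget_tokens: budget },
  messages: [{ role: "user", content: "Outline a migration plan in three steps." }]
}
```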

* FIX: OpenAI non-reasoning models need to use the deprecated max_tokens
2025-02-25 17:32:12 +11:00
Sam 12f00a62d2
FIX: use max_completion_tokens for open ai models (#1134)
max_tokens is now deprecated per API
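A minimal sketch of the swap on a chat-completion payload; the parameter name follows OpenAI's API, the rest is illustrative.

```ruby
payload = {
  model: "o1",
  messages: [{ role: "user", content: "Summarise this topic." }],
  max_completion_tokens: 2_048 # previously sent as max_tokens, now deprecated
}
```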
2025-02-19 15:49:15 +11:00
Sam 5e80f93e4c
FEATURE: PDF support for rag pipeline (#1118)
This PR introduces several enhancements and refactorings to the AI Persona and RAG (Retrieval-Augmented Generation) functionalities within the discourse-ai plugin. Here's a breakdown of the changes:

**1. LLM Model Association for RAG and Personas:**

-   **New Database Columns:** Adds `rag_llm_model_id` to both `ai_personas` and `ai_tools` tables. This allows specifying a dedicated LLM for RAG indexing, separate from the persona's primary LLM.  Adds `default_llm_id` and `question_consolidator_llm_id` to `ai_personas`.
-   **Migration:**  Includes a migration (`20250210032345_migrate_persona_to_llm_model_id.rb`) to populate the new `default_llm_id` and `question_consolidator_llm_id` columns in `ai_personas` based on the existing `default_llm` and `question_consolidator_llm` string columns, and a post migration to remove the latter.
-   **Model Changes:**  The `AiPersona` and `AiTool` models now `belong_to` an `LlmModel` via `rag_llm_model_id`. The `LlmModel.proxy` method now accepts an `LlmModel` instance instead of just an identifier.  `AiPersona` now has `default_llm_id` and `question_consolidator_llm_id` attributes.
-   **UI Updates:**  The AI Persona and AI Tool editors in the admin panel now allow selecting an LLM for RAG indexing (if PDF/image support is enabled).  The RAG options component displays an LLM selector.
-   **Serialization:** The serializers (`AiCustomToolSerializer`, `AiCustomToolListSerializer`, `LocalizedAiPersonaSerializer`) have been updated to include the new `rag_llm_model_id`, `default_llm_id` and `question_consolidator_llm_id` attributes.

**2. PDF and Image Support for RAG:**

-   **Site Setting:** Introduces a new hidden site setting, `ai_rag_pdf_images_enabled`, to control whether PDF and image files can be indexed for RAG. This defaults to `false`.
-   **File Upload Validation:** The `RagDocumentFragmentsController` now checks the `ai_rag_pdf_images_enabled` setting and allows PDF, PNG, JPG, and JPEG files if enabled.  Error handling is included for cases where PDF/image indexing is attempted with the setting disabled.
-   **PDF Processing:** Adds a new utility class, `DiscourseAi::Utils::PdfToImages`, which uses ImageMagick (`magick`) to convert PDF pages into individual PNG images (a minimal conversion sketch follows this list). A maximum PDF size and conversion timeout are enforced.
-   **Image Processing:** A new utility class, `DiscourseAi::Utils::ImageToText`, is included to handle OCR for the images and PDFs.
-   **RAG Digestion Job:** The `DigestRagUpload` job now handles PDF and image uploads. It uses `PdfToImages` and `ImageToText` to extract text and create document fragments.
-   **UI Updates:**  The RAG uploader component now accepts PDF and image file types if `ai_rag_pdf_images_enabled` is true. The UI text is adjusted to indicate supported file types.
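A minimal sketch of the PDF-to-PNG step, assuming the stock ImageMagick 7 CLI; the real `PdfToImages` utility additionally enforces the maximum PDF size and conversion timeout mentioned above.

```ruby
require "fileutils"

# Convert each page of a PDF into a PNG using ImageMagick's `magick` binary.
def pdf_to_pngs(pdf_path, out_dir, density: 150)
  FileUtils.mkdir_p(out_dir)
  pattern = File.join(out_dir, "page-%04d.png")
  ok = system("magick", "-density", density.to_s, pdf_path, pattern)
  raise "PDF conversion failed for #{pdf_path}" unless ok
  Dir[File.join(out_dir, "page-*.png")].sort
end
```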

**3. Refactoring and Improvements:**

-   **LLM Enumeration:** The `DiscourseAi::Configuration::LlmEnumerator` now provides a `values_for_serialization` method, which returns a simplified array of LLM data (id, name, vision_enabled) suitable for use in serializers. This avoids exposing unnecessary details to the frontend.
-   **AI Helper:** The `AiHelper::Assistant` now takes optional `helper_llm` and `image_caption_llm` parameters in its constructor, allowing for greater flexibility.
-   **Bot and Persona Updates:** Several updates were made across the codebase, changing the string-based LLM association to the new model-based one.
-   **Audit Logs:** The `DiscourseAi::Completions::Endpoints::Base` now formats raw request payloads as pretty JSON for easier auditing.
- **Eval Script:** An evaluation script is included.

**4. Testing:**

-    The PR introduces a new eval system for LLMs; this allows us to test how functionality works across various LLM providers. It lives in `/evals`.
2025-02-14 12:15:07 +11:00
Sam a7d032fa28
DEV: artifact system update (#1096)
### Why

This pull request fundamentally restructures how AI bots create and update web artifacts to address critical limitations in the previous approach:

1.  **Improved Artifact Context for LLMs**: Previously, artifact creation and update tools included the *entire* artifact source code directly in the tool arguments. This overloaded the Language Model (LLM) with raw code, making it difficult for the LLM to maintain a clear understanding of the artifact's current state when applying changes. The LLM would struggle to differentiate between the base artifact and the requested modifications, leading to confusion and less effective updates.
2.  **Reduced Token Usage and History Bloat**: Including the full artifact source code in every tool interaction was extremely token-inefficient.  As conversations progressed, this redundant code in the history consumed a significant number of tokens unnecessarily. This not only increased costs but also diluted the context for the LLM with less relevant historical information.
3.  **Enabling Updates for Large Artifacts**: The lack of a practical diff or targeted update mechanism made it nearly impossible to efficiently update larger web artifacts.  Sending the entire source code for every minor change was both computationally expensive and prone to errors, effectively blocking the use of AI bots for meaningful modifications of complex artifacts.

**This pull request addresses these core issues by**:

*   Introducing methods for the AI bot to explicitly *read* and understand the current state of an artifact.
*   Implementing efficient update strategies that send *targeted* changes rather than the entire artifact source code.
*   Providing options to control the level of artifact context included in LLM prompts, optimizing token usage.

### What

The main changes implemented in this PR to resolve the above issues are:

1.  **`Read Artifact` Tool for Contextual Awareness**:
    - A new `read_artifact` tool is introduced, enabling AI bots to fetch and process the current content of a web artifact from a given URL (local or external).
    - This provides the LLM with a clear and up-to-date representation of the artifact's HTML, CSS, and JavaScript, improving its understanding of the base to be modified.
    - By cloning local artifacts, it allows the bot to work with a fresh copy, further enhancing context and control.

2.  **Refactored `Update Artifact` Tool with Efficient Strategies**:
    - The `update_artifact` tool is redesigned to employ more efficient update strategies, minimizing token usage and improving update precision:
        - **`diff` strategy**:  Utilizes a search-and-replace diff algorithm to apply only the necessary, targeted changes to the artifact's code (an illustrative application is sketched after this list). This significantly reduces the amount of code sent to the LLM and focuses its attention on the specific modifications.
        - **`full` strategy**:  Provides the option to replace the entire content sections (HTML, CSS, JavaScript) when a complete rewrite is required.
    - Tool options enhance the control over the update process:
        - `editor_llm`:  Allows selection of a specific LLM for artifact updates, potentially optimizing for code editing tasks.
        - `update_algorithm`: Enables choosing between `diff` and `full` update strategies based on the nature of the required changes.
        - `do_not_echo_artifact`:  Defaults to true, and by *not* echoing the artifact in prompts, it further reduces token consumption in scenarios where the LLM might not need the full artifact context for every update step (though effectiveness might be slightly reduced in certain update scenarios).
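To make the `diff` strategy concrete, here is an illustrative search-and-replace edit and a naive way to apply it; the exact block syntax used by the tool is not shown in this description and may differ.

```ruby
# Illustrative search/replace edit: only the targeted lines travel to/from the LLM.
edit = {
  search:  "  <h1>Hello</h1>",
  replace: "  <h1>Hello, Discourse!</h1>"
}

def apply_search_replace(source, edit)
  raise "search block not found" unless source.include?(edit[:search])
  source.sub(edit[:search], edit[:replace])
end

html = "<body>\n  <h1>Hello</h1>\n</body>"
apply_search_replace(html, edit)
# => "<body>\n  <h1>Hello, Discourse!</h1>\n</body>"
```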

3.  **System and General Persona Tool Option Visibility and Customization**:
    - Tool options, including those for system personas, are made visible and editable in the admin UI. This allows administrators to fine-tune the behavior of all personas and their tools, including setting specific LLMs or update algorithms. This was previously limited or hidden for system personas.

4.  **Centralized and Improved Content Security Policy (CSP) Management**:
    - The CSP for AI artifacts is consolidated and made more maintainable through the `ALLOWED_CDN_SOURCES` constant. This improves code organization and future updates to the allowed CDN list, while maintaining the existing security posture.

5.  **Codebase Improvements**:
    - Refactoring of diff utilities, introduction of strategy classes, enhanced error handling, new locales, and comprehensive testing all contribute to a more robust, efficient, and maintainable artifact management system.

By addressing the issues of LLM context confusion, token inefficiency, and the limitations of updating large artifacts, this pull request significantly improves the practicality and effectiveness of AI bots in managing web artifacts within Discourse.
2025-02-04 16:27:27 +11:00
Sam cf86d274a0
FEATURE: improve o3-mini support (#1106)
* DEV: raise timeout for reasoning LLMs

* FIX: use id to identify llms, not model_name

model_name is not unique; in the case of reasoning models
you may configure the same LLM multiple times using different
reasoning levels.
2025-02-03 08:45:56 +11:00
Sam 381a2715c8
FEATURE: o3-mini support (#1105)
1. Adds o3-mini presets
2. Adds support for reasoning effort
3. properly use "developer" messages for reasoning models
2025-02-01 14:08:34 +11:00
Sam 8bf350206e
FEATURE: track duration of AI calls (#1082)
* FEATURE: track duration of AI calls

* annotate
2025-01-23 11:32:12 +11:00
Rafael dos Santos Silva f9aa2de413
FIX: AWS Bedrock non-streaming calls response log (#1072) 2025-01-15 18:51:25 -03:00
Sam d07cf51653
FEATURE: llm quotas (#1047)
Adds a comprehensive quota management system for LLM models that allows:

- Setting per-group (applied per user in the group) token and usage limits with configurable durations
- Tracking and enforcing token/usage limits across user groups
- Quota reset periods (hourly, daily, weekly, or custom)
-  Admin UI for managing quotas with real-time updates

This system provides granular control over LLM API usage by allowing admins
to define limits on both total tokens and number of requests per group.
Supports multiple concurrent quotas per model and automatically handles
quota resets.
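A minimal sketch (attribute names assumed, not the actual schema) of how such a per-group quota could be represented and checked:

```ruby
Quota = Struct.new(:group_id, :llm_model_id, :max_tokens, :max_usages, :duration_seconds,
                   keyword_init: true)

def quota_exceeded?(quota, used_tokens:, used_requests:)
  (quota.max_tokens && used_tokens >= quota.max_tokens) ||
    (quota.max_usages && used_requests >= quota.max_usages) || false
end

daily = Quota.new(group_id: 10, llm_model_id: 3, max_tokens: 200_000,
                  max_usages: 500, duration_seconds: 86_400)
quota_exceeded?(daily, used_tokens: 250_000, used_requests: 120) # => true
```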


Co-authored-by: Keegan George <kgeorge13@gmail.com>
2025-01-14 15:54:09 +11:00
Sam 20612fde52
FEATURE: add the ability to disable streaming on an Open AI LLM
Disabling streaming is required for models such as o1 that do not have streaming
enabled yet

It is good to carry this feature in case various APIs decide not to support streaming endpoints, so Discourse AI can continue to work just as it did before.

Also: fixes an issue where sharing artifacts would miss the viewport, leading to tiny artifacts on mobile
2025-01-13 17:01:01 +11:00
Sam 7ca21cc329
FEATURE: first class support for OpenRouter (#1011)
* FEATURE: first class support for OpenRouter

This new implementation supports picking quantization and provider preference

Also:

- Improve logging for summary generation
- Improve error message when contacting LLMs fails

* Better support for full screen artifacts on iPad

Support back button to close full screen
2024-12-10 05:59:19 +11:00
Roman Rizzi 085dde7042
FEATURE: Select stop sequences from triage script (#1010) 2024-12-06 11:13:47 -03:00
Sam a55216773a
FEATURE: Amazon Nova support via bedrock (#997)
Refactor dialect selection and add Nova API support

    Change dialect selection to use llm_model object instead of just provider name
    Add support for Amazon Bedrock's Nova API with native tools
    Implement Nova-specific message processing and formatting
    Update specs for Nova and AWS Bedrock endpoints
    Enhance AWS Bedrock support to handle Nova models
    Fix Gemini beta API detection logic
2024-12-06 07:45:58 +11:00
Sam bc0657f478
FEATURE: AI Usage page (#964)
- Added a new admin interface to track AI usage metrics, including tokens, features, and models.
- Introduced a new route `/admin/plugins/discourse-ai/ai-usage` and supporting API endpoint in `AiUsageController`.
- Implemented `AiUsageSerializer` for structuring AI usage data.
- Integrated CSS stylings for charts and tables under `stylesheets/modules/llms/common/usage.scss`.
- Enhanced backend with `AiApiAuditLog` model changes: added `cached_tokens` column  (implemented with OpenAI for now) with relevant DB migration and indexing.
- Created `Report` module for efficient aggregation and filtering of AI usage metrics.
- Updated AI Bot title generation logic to log correctly to user vs bot
- Extended test coverage for the new tracking features, ensuring data consistency and access controls.
2024-11-29 06:26:48 +11:00
Sam d56ed53eb1
FIX: cancel functionality regressed (#938)
The cancel messaging was not propagating correctly to the HTTP call, making it impossible to cancel completions.

This is now fully tested as well.
2024-11-21 17:51:45 +11:00
Sam 755b63f31f
FEATURE: Add support for Mistral models (#919)
Adds support for Mistral models (Pixtral and Mistral Large now have presets)

Also corrects token accounting in AWS Bedrock models
2024-11-19 17:28:09 +11:00
Sam 0d7f353284
FEATURE: AI artifacts (#898)
This is a significant PR that introduces AI Artifacts functionality to the discourse-ai plugin along with several other improvements. Here are the key changes:

1. AI Artifacts System:
   - Adds a new `AiArtifact` model and database migration
   - Allows creation of web artifacts with HTML, CSS, and JavaScript content
   - Introduces security settings (`strict`, `lax`, `disabled`) for controlling artifact execution
   - Implements artifact rendering in iframes with sandbox protection
   - New `CreateArtifact` tool for AI to generate interactive content

2. Tool System Improvements:
   - Adds support for partial tool calls, allowing incremental updates during generation
   - Better handling of tool call states and progress tracking
   - Improved XML tool processing with CDATA support
   - Fixes for tool parameter handling and duplicate invocations

3. LLM Provider Updates:
   - Updates for Anthropic Claude models with correct token limits
   - Adds support for native/XML tool modes in Gemini integration
   - Adds new model configurations including Llama 3.1 models
   - Improvements to streaming response handling

4. UI Enhancements:
   - New artifact viewer component with expand/collapse functionality
   - Security controls for artifact execution (click-to-run in strict mode)
   - Improved dialog and response handling
   - Better error management for tool execution

5. Security Improvements:
   - Sandbox controls for artifact execution
   - Public/private artifact sharing controls
   - Security settings to control artifact behavior
   - CSP and frame-options handling for artifacts

6. Technical Improvements:
   - Better post streaming implementation
   - Improved error handling in completions
   - Better memory management for partial tool calls
   - Enhanced testing coverage

7. Configuration:
   - New site settings for artifact security
   - Extended LLM model configurations
   - Additional tool configuration options

This PR significantly enhances the plugin's capabilities for generating and displaying interactive content while maintaining security and providing flexible configuration options for administrators.
2024-11-19 09:22:39 +11:00
Sam 823e8ef490
FEATURE: partial tool call support for OpenAI and Anthropic (#908)
Implement streaming tool calls for Anthropic and OpenAI.

When calling:

llm.generate(..., partial_tool_calls: true) do ...
Partials may contain ToolCall instances with partial: true. These tool calls are partially populated, with the JSON parsed incrementally as it streams.

So for example when performing a search you may get:

ToolCall(..., {search: "hello" })
ToolCall(..., {search: "hello world" })
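Putting this together, a hedged sketch of consuming partials; the class namespace, the block argument, and the ToolCall accessors (`partial`, `parameters`) are assumptions beyond what is shown above.

```ruby
llm.generate(prompt, partial_tool_calls: true) do |partial|
  if partial.is_a?(DiscourseAi::Completions::ToolCall) && partial.partial
    # parameters fill in as the streamed JSON is parsed,
    # e.g. { search: "hello" } then { search: "hello world" }
    print "\rsearching: #{partial.parameters[:search]}"
  end
end
```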

The library used to parse json is:

https://github.com/dgraham/json-stream

We use a fork because we need access to the internal buffer.

This prepares internals to perform partial tool calls, but does not implement it yet.
2024-11-14 06:58:24 +11:00
Sam e817b7dc11
FEATURE: improve tool support (#904)
This re-implements tool support in DiscourseAi::Completions::Llm #generate

Previously tool support was always returned via XML and it would be the responsibility of the caller to parse XML

New implementation has the endpoints return ToolCall objects.

Additionally this simplifies the Llm endpoint interface and gives it more clarity. Llms must implement

decode, decode_chunk (for streaming)

It is the implementer's responsibility to figure out how to decode chunks; the base class no longer implements this. To make this easy we ship a flexible JSON decoder that is easy to wire up.
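A hedged sketch of that contract; the class name, parsing details, and helper are illustrative, and only the `decode` / `decode_chunk` method names come from this description.

```ruby
require "json"

class MyProviderEndpoint < DiscourseAi::Completions::Endpoints::Base
  # Full (non-streaming) response body -> completion text.
  def decode(response_body)
    JSON.parse(response_body, symbolize_names: true).dig(:choices, 0, :message, :content)
  end

  # Streamed chunk -> deltas; buffering and parsing are up to the endpoint,
  # typically wired up through the flexible JSON decoder the plugin ships.
  def decode_chunk(chunk)
    @buffer = (@buffer || +"") << chunk
    extract_deltas(@buffer) # hypothetical helper
  end
end
```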

Also (new)

    Better debugging for PMs: we now have next / previous buttons to see all the LLM messages associated with a PM
    Token accounting is fixed for vllm (we were not correctly counting tokens)
2024-11-12 08:14:30 +11:00
Sam 98022d7d96
FEATURE: support custom instructions for persona streaming (#890)
This allows us to inject information into the system prompt
which can help shape replies without repeating over and over
in messages.
2024-11-05 07:43:26 +11:00
Sam bffe9dfa07
FIX: we must properly encode objects prior to escaping (#891)
In the case of arrays, escapeHTML will not work on its own.
2024-11-04 16:16:25 +11:00
Sam c352054d4e
FIX: encode parameters returned from LLMs correctly (#889)
Fixes encoding of params on LLM function calls.

Previously we would improperly return results if a function parameter returned an HTML tag.

Additionally adds some missing HTTP verbs to tool calls.
2024-11-04 10:07:17 +11:00
Sam be0b78cacd
FEATURE: new endpoint for directly accessing a persona (#876)
The new `/admin/plugins/discourse-ai/ai-personas/stream-reply.json` was added.

This endpoint streams data directly from a persona and can be used
to access a persona from remote systems, leaving a paper trail of
the conversation in PMs.

This endpoint is only accessible to admins.
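A hedged sketch of calling it from a remote system; the endpoint path is as given above, the `Api-Key` / `Api-Username` headers are standard Discourse API auth, and the request parameters are assumptions.

```ruby
require "net/http"
require "json"

uri = URI("https://forum.example.com/admin/plugins/discourse-ai/ai-personas/stream-reply.json")
req = Net::HTTP::Post.new(uri, "Content-Type" => "application/json",
                               "Api-Key" => ENV["DISCOURSE_API_KEY"],
                               "Api-Username" => "system")
req.body = { persona_name: "Researcher", query: "Summarise today's flag queue" }.to_json # params assumed

Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(req) { |res| res.read_body { |chunk| print chunk } } # streamed reply
end
```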

---------

Co-authored-by: Gabriel Grubba <70247653+Grubba27@users.noreply.github.com>
Co-authored-by: Keegan George <kgeorge13@gmail.com>
2024-10-30 10:28:20 +11:00
Sam 4923837165
FIX: Llm selector / forced tools / search tool (#862)
* FIX: Llm selector / forced tools / search tool


This fixes a few issues:

1. When search was not finding any semantic results we would break the tool
2. Gemini / Anthropic models did not previously implement forced tools, despite it being an API option
3. Mechanics around displaying the LLM selector were not right. If you disabled the LLM selector server side, persona PMs did not work correctly.
4. Disabling native tools for Anthropic models was moved out of a site setting. This deliberately does not migrate because the feature is rarely needed now; people who had it set probably did not need it.
5. Updates anthropic model names to latest release

* linting

* fix a couple of tests I missed

* clean up conditional
2024-10-25 06:24:53 +11:00
Rafael dos Santos Silva 3022d34613
FEATURE: Support srv records for OpenAI compatible LLMs (#865) 2024-10-24 15:47:12 -03:00
Sam 059d3b6fd2
FEATURE: better logging for automation reports (#853)
A new feature_context json column was added to ai_api_audit_logs

This allows us to store rich JSON context on any LLM request made.

This new field now stores automation id and name.
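Illustrative only: the kind of JSON the new column might hold for an automation-triggered request (the exact keys are an assumption based on the description above).

```ruby
feature_context = {
  "automation_id" => 42,
  "automation_name" => "llm_triage: flag suspicious first posts"
}
```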

Additionally, llm_triage can now specify a maximum number of tokens.

This means you can limit the cost of llm_triage by scanning only the
first N tokens of a post.
2024-10-23 16:49:56 +11:00
Hoa Nguyen 94010a5f78
FEATURE: Tools for models from Ollama provider (#819)
Adds support for Ollama function calling
2024-10-11 07:25:53 +11:00
Sam 545500b329
FEATURE: allows forced LLM tool use (#818)
* FEATURE: allows forced LLM tool use

Sometimes we need to force LLMs to use tools; for example, in RAG-like
use cases we may want to force an unconditional search.

The new framework allows your backend to force tool usage.

Front end commit to follow

* UI for forcing tools now works, but it does not react right

* fix bugs

* fix tests, this is now ready for review
2024-10-05 09:46:57 +10:00
Hoa Nguyen 2063b3854f
FEATURE: Add Ollama provider (#812)
This allows our users to add the Ollama provider and use it to serve our AI bot (completion/dialect).

In this PR, we introduce:

    DiscourseAi::Completions::Dialects::Ollama which would help us translate by utilizing Completions::Endpoint::Ollama
    Correct extract_completion_from and partials_from in Endpoints::Ollama

Also

    Add tests for Endpoints::Ollama
    Introduce ollama_model fabricator
2024-10-01 10:45:03 +10:00
Sam 4b21eb7974
FEATURE: basic support for GPT-o models (#804)
Caveats

- No streaming, by design
- No tool support (including no XML tools)
- No vision

OpenAI will revamp the model and more of these features may
become available.

This solution is a bit hacky for now
2024-09-17 09:41:00 +10:00
Sam 5b9add0ac8
FEATURE: add a SambaNova LLM provider (#797)
Note: at the moment the context window is quite small; it is
mainly useful as a helper backend or HyDE generator
2024-09-12 11:28:08 +10:00
Sam cf1e15ef12
FIX: gemini 0801 tool calls (#748)
Gemini experimental model requires tool_config.

Previously defaults would apply.

This corrects prompts containing multiple tools on gemini.
2024-08-12 16:10:16 +10:00
Roman Rizzi 20efc9285e
FIX: Correctly save provider-specific params for new models. (#744)
Creating a new model, either manually or from presets, doesn't initialize the `provider_params` object, meaning its custom params won't persist.

Additionally, this change adds some validations for Bedrock params, which are mandatory, and a clear message when a completion fails because we cannot build the URL.
2024-08-07 16:08:56 -03:00
Sam 948cf893a9
FIX: Add tool support to open ai compatible dialect and vllm (#734)
* FIX: Add tool support to open ai compatible dialect and vllm

Automatic tools are in progress in vllm; see https://github.com/vllm-project/vllm/pull/5649

Even when they are supported, initial support will be uneven; only some models have native tool support,
notably Mistral, which has some special tokens for tool support.

After the above PR lands in vllm we will still need to swap to
XML based tools on models without native tool support.

* fix specs
2024-08-02 09:52:33 -03:00
Roman Rizzi bed044448c
DEV: Remove old code now that features rely on LlmModels. (#729)
* DEV: Remove old code now that features rely on LlmModels.

* Hide old settings and migrate persona llm overrides

* Remove shadowing special URL + seeding code. Use srv:// prefix instead.
2024-07-30 13:44:57 -03:00
Roman Rizzi 5c196bca89
FEATURE: Track if a model can do vision in the llm_models table (#725)
* FEATURE: Track if a model can do vision in the llm_models table

* Data migration
2024-07-24 16:29:47 -03:00
Keegan George 1b0ba9197c
DEV: Add summarization logic from core (#658) 2024-07-02 08:51:59 -07:00
Roman Rizzi f622e2644f
FEATURE: Store provider-specific parameters. (#686)
Previously, we stored request parameters like the OpenAI organization and Bedrock's access key and region as site settings. This change stores them in the `llm_models` table instead, letting us drop more settings while also becoming more flexible.
2024-06-25 08:26:30 +10:00
Sam 1d5fa0ce6c
FIX: when creating an llm we were not creating user (#685)
This meant that if you toggled the AI user early, it surprisingly did
not work.

Also removes safety settings from Gemini; they are overly cautious
2024-06-24 09:59:42 +10:00
Sam e04a7be122
FEATURE: LLM presets for model creation (#681)
* FEATURE: LLM presets for model creation

Prior to this, users needed to look up complicated settings
when setting up models.

This introduces an extensible preset system with Google/OpenAI/Anthropic
presets.

This will cover the most common LLMs; we can always add more as
we go.

Additionally:

- Proper support for Anthropic Claude Sonnet 3.5
- Stop blurring api keys when navigating away - this made it very complex to reuse keys
2024-06-21 17:32:15 +10:00
Rafael dos Santos Silva 714caf34fe
FEATURE: Support for Claude 3.5 Sonnet via AWS Bedrock (#680) 2024-06-20 17:51:46 -03:00