* DEV: refactor bot internals
This introduces a proper object for bot context. It makes it simpler to improve
context management as we go, because we now have a well-defined object to work with.
It also starts the refactoring that allows a single message to carry
multiple uploads throughout.
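As a rough sketch, the context object is just a value container the bot passes around while composing a reply; the class and attribute names below are illustrative, not the plugin's actual API:

```ruby
# Illustrative sketch only: a plain value object carrying the state the bot
# needs while composing a reply, so it can be passed around instead of a pile
# of loose arguments. Names are hypothetical, not the real plugin classes.
module DiscourseAi
  module AiBot
    class BotContext
      attr_reader :post, :user, :participants, :custom_instructions
      attr_accessor :uploads

      def initialize(post:, user:, participants: [], uploads: [], custom_instructions: nil)
        @post = post
        @user = user
        @participants = participants
        @uploads = uploads
        @custom_instructions = custom_instructions
      end

      # A single message may now reference several uploads, so the context
      # accumulates them as the reply is built up.
      def add_upload(upload)
        @uploads << upload
      end
    end
  end
end
```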
* transplant method to message builder
* chipping away at inline uploads
* image support is improved but not fully fixed yet; partially working in Anthropic, with quite a few dialects still to go
* OpenAI and Claude are now working
* Gemini is now working as well
* fix nova
* more dialects...
* fix ollama
* fix specs
* artifact update fixed
* more tests
* spam scanner
* pass more specs
* bunch of specs improved
* more bug fixes.
* all the rest of the tests are working
* improve tests coverage and ensure custom tools are aware of new context object
* tests are working, but we need more tests
* resolve merge conflict
* new preamble and expanded specs on ai tool
* remove concept of "standalone tools"
This is no longer needed: we can set custom raw, and tool details are injected into tool calls.
Currently, core does not support re-flagging something that is already flagged as spam.
Long term we may want to support this, but in the meantime we should not be
silencing/hiding users if PostActionCreator fails
when flagging things as spam.
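In terms of core APIs, the intended behaviour is roughly the following; `PostActionCreator` is Discourse core, while the hide/silence helpers stand in for the scanner's own follow-up actions:

```ruby
# Sketch only: gate the punitive actions on the flag actually being created.
# Re-flagging a post that is already flagged as spam makes the creator fail,
# and in that case we should not silence or hide anything.
result = PostActionCreator.new(flagging_user, post, PostActionType.types[:spam]).perform

if result.success?
  hide_post(post)            # hypothetical helper
  silence_spammer(post.user) # hypothetical helper
end
```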
---------
Co-authored-by: Ted Johansson <drenmi@gmail.com>
This adds registration IP, last known IP, and email information to the scanning context.
This provides the spam scanner with additional hints about possibly malicious users:
for example, an account registered in India but replying from Australia, or an
email address that is clearly a throwaway.
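Concretely, this amounts to handing the scanner a few extra user fields alongside the post; the hash below is an illustrative shape, not the exact payload:

```ruby
# Sketch: enrich the spam-scanning context with identity/location hints.
# Discourse keeps both the signup IP and the last seen IP on the user record;
# the email lets the model spot obvious throwaway domains. The method name
# and hash keys are illustrative.
def scanning_context_for(post)
  user = post.user

  {
    post_raw: post.raw,
    registration_ip: user.registration_ip_address&.to_s,
    last_known_ip: user.ip_address&.to_s,
    email: user.email,
  }
end
```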
When the spam scanner is enabled there may be old, unscanned posts.
This can create a risky situation where the scanner operates
on legitimate posts and produces false positives.
To keep this much safer, we no longer try to hide older posts by
suspected spammers.
This introduces a comprehensive spam detection system that uses LLMs
to automatically identify and flag potential spam posts. The system is
designed to be both powerful and configurable while preventing false positives.
Key Features:
* Automatically scans the first 3 posts from new users (TL0/TL1)
* Creates dedicated AI flagging user to distinguish from system flags
* Tracks false positives/negatives for quality monitoring
* Supports custom instructions to fine-tune detection
* Includes test interface for trying detection on any post
Technical Implementation:
* New database tables:
- ai_spam_logs: Stores scan history and results
- ai_moderation_settings: Stores LLM config and custom instructions
* Rate limiting and safeguards (see the sketch after this list):
- Minimum 10-minute delay between rescans
- Only scans significant edits (>10 char difference)
- Maximum 3 scans per post
- 24-hour maximum age for scannable posts
* Admin UI features:
- Real-time testing capabilities
- 7-day statistics dashboard
- Configurable LLM model selection
- Custom instruction support
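The safeguards above boil down to a handful of checks before a (re)scan runs. A rough sketch, with hypothetical names and a length delta standing in for the real edit-size check:

```ruby
# Sketch of the rescan safeguards; the constants mirror the limits listed
# above, but method, attribute, and constant names are hypothetical.
MAX_SCANS_PER_POST = 3
MAX_AGE_FOR_SCAN = 24.hours
MIN_RESCAN_DELAY = 10.minutes
MIN_EDIT_DELTA = 10

def should_scan?(post, previous_log: nil, old_raw: nil)
  # Posts older than 24 hours are never scanned.
  return false if post.created_at < MAX_AGE_FOR_SCAN.ago

  if previous_log
    # Cap total scans per post and enforce a cooldown between rescans.
    return false if previous_log.scan_count >= MAX_SCANS_PER_POST
    return false if previous_log.updated_at > MIN_RESCAN_DELAY.ago
  end

  # Only rescan on a significant edit (approximated here by a length delta).
  if old_raw && (post.raw.length - old_raw.length).abs <= MIN_EDIT_DELTA
    return false
  end

  true
end
```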
Security and Performance:
* Respects trust levels - only scans TL0/TL1 users (see the eligibility sketch below)
* Skips private messages entirely
* Stops scanning users after 3 successful public posts
* Includes comprehensive test coverage
* Maintains audit log of all scan attempts
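Put together, the eligibility rules look roughly like this; a sketch only, with the post-count cutoff approximating "successful public posts":

```ruby
# Sketch of the eligibility checks described above. TrustLevel, Topic and
# UserStat are core Discourse objects; the method itself and the use of
# post_count as the "3 public posts" cutoff are illustrative.
def scannable?(post)
  user = post.user

  # Private messages are never scanned.
  return false if post.topic&.private_message?

  # Only new users (TL0/TL1) are scanned.
  return false if user.trust_level > TrustLevel[1]

  # Once a user has a few public posts that were not flagged, stop scanning.
  user.user_stat&.post_count.to_i <= 3
end
```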
---------
Co-authored-by: Keegan George <kgeorge13@gmail.com>
Co-authored-by: Martin Brennan <martin@discourse.org>