# Conversation Component Conformance Tests
This directory contains conformance tests for all conversation components, including all langchaingo-based models.
## Available Components
- `echo` - Simple echo component for testing (no configuration needed)
- `openai` - OpenAI GPT models (also supports Azure OpenAI)
- `anthropic` - Anthropic Claude models
- `googleai` - Google Gemini models
- `mistral` - Mistral AI models
- `huggingface` - HuggingFace models (using the OpenAI compatibility layer)
- `ollama` - Local Ollama models
- `bedrock` - AWS Bedrock models
## Running the Tests
To run the conformance tests:
```bash
# Run all conformance tests (skips components without API keys)
go test -tags conftests ./tests/conformance -run TestConversationConformance -timeout 30s

# Run a specific component test
go test -tags conftests ./tests/conformance -run TestConversationConformance/openai -timeout 30s

# Run with verbose output
go test -tags conftests ./tests/conformance -run TestConversationConformance -v -timeout 30s
```
## Environment Variables
The tests will automatically skip components for which the required environment variables are not set. You can either set environment variables directly or use a .env file.
### Using a .env file (Recommended)
1. Copy the template file:

   ```bash
   cp env.template .env
   ```

2. Edit the `.env` file and add your API keys.
3. Run the tests - they will automatically load the environment variables from the `.env` file.
### Setting Environment Variables Directly
Alternatively, you can set the following environment variables to run the respective tests:
#### OpenAI

```bash
export OPENAI_API_KEY="your_openai_api_key"
```

Get your API key from: https://platform.openai.com/api-keys
#### Azure OpenAI

```bash
export AZURE_OPENAI_API_KEY="your_azure_openai_api_key"
export AZURE_OPENAI_ENDPOINT="your_azure_openai_endpoint_here"
export AZURE_OPENAI_API_VERSION="your_azure_openai_api_version_here"
```

Get your configuration values from: https://ai.azure.com/
#### Anthropic

```bash
export ANTHROPIC_API_KEY="your_anthropic_api_key"
```

Get your API key from: https://console.anthropic.com/
#### Google AI

```bash
export GOOGLE_AI_API_KEY="your_google_ai_api_key"
```

Get your API key from: https://aistudio.google.com/app/apikey
#### Mistral

```bash
export MISTRAL_API_KEY="your_mistral_api_key"
```

Get your API key from: https://console.mistral.ai/
#### HuggingFace

```bash
export HUGGINGFACE_API_KEY="your_huggingface_api_key"
```

Get your API key from: https://huggingface.co/settings/tokens
#### AWS Bedrock

```bash
export AWS_ACCESS_KEY_ID="your_aws_access_key"
export AWS_SECRET_ACCESS_KEY="your_aws_secret_key"
export AWS_REGION="us-east-1"  # Optional, defaults to us-east-1
```

Get your credentials from the AWS Console.
#### Ollama

```bash
export OLLAMA_ENABLED="1"
```

Requires a local Ollama server running with the `llama3.2:latest` model available.
## Test Configuration
Each component has its own configuration file in this directory:
- `echo/echo.yml` - Echo component configuration
- `openai/openai.yml` - OpenAI configuration with the gpt-4o-mini model
- `anthropic/anthropic.yml` - Anthropic configuration with Claude 3 Haiku
- `googleai/googleai.yml` - Google AI configuration with Gemini 1.5 Flash
- `mistral/mistral.yml` - Mistral configuration with open-mistral-7b
- `huggingface/huggingface.yml` - HuggingFace configuration (uses the OpenAI compatibility layer)
- `ollama/ollama.yml` - Ollama configuration with llama3.2:latest
- `bedrock/bedrock.yml` - AWS Bedrock configuration with Claude 3 Haiku
The configurations use cost-effective models where possible to minimize testing costs.
## HuggingFace OpenAI Compatibility Layer
The HuggingFace component uses a workaround due to issues with the native HuggingFace implementation in langchaingo. Instead of using the HuggingFace SDK directly, it uses the OpenAI SDK with HuggingFace's OpenAI-compatible API endpoints.
How it works:

- **Model Selection**: Any HuggingFace model can be used by specifying its full name (e.g., `deepseek-ai/DeepSeek-R1-Distill-Qwen-32B`)
- **Dynamic Endpoints**: The endpoint URL is automatically generated from the model name using the template `https://router.huggingface.co/hf-inference/models/{{model}}/v1`
- **Custom Endpoints**: You can override the generated URL by specifying a custom `endpoint` parameter
- **Authentication**: Uses the same HuggingFace API key authentication
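The endpoint templating is simple string substitution; here is a minimal sketch, where `endpointFor` is an illustrative function name rather than the component's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// endpointFor expands the HuggingFace router template with the given
// model name, producing the OpenAI-compatible base URL.
func endpointFor(model string) string {
	const tmpl = "https://router.huggingface.co/hf-inference/models/{{model}}/v1"
	return strings.ReplaceAll(tmpl, "{{model}}", model)
}

func main() {
	fmt.Println(endpointFor("microsoft/DialoGPT-medium"))
	// → https://router.huggingface.co/hf-inference/models/microsoft/DialoGPT-medium/v1
}
```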
Example Configuration:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: huggingface
spec:
  type: conversation.huggingface
  version: v1
  metadata:
    - name: key
      value: "hf_your_api_key_here"
    - name: model
      value: "microsoft/DialoGPT-medium"
    # Optional: custom endpoint (auto-generated if not specified)
    - name: endpoint
      value: "https://router.huggingface.co/hf-inference/models/microsoft/DialoGPT-medium/v1"
```
This approach provides better reliability and compatibility while maintaining access to the full range of HuggingFace models.
## Notes
- The tests will automatically skip components when required environment variables are not set
- Cost-effective models are used by default to minimize API costs
- HuggingFace uses the OpenAI compatibility layer as a workaround due to langchaingo API issues
- Ollama requires a local server and must be explicitly enabled
- The openai component is tested against both OpenAI and Azure OpenAI
- All tests cover component initialization and basic conversation functionality