mirror of https://github.com/dapr/dapr-agents.git

floki --> dapr_agents changes

Signed-off-by: yaron2 <schneider.yaron@live.com>

parent 37a48c2700
commit dd27e2e4b0
@@ -2,8 +2,7 @@
-[](https://pypi.python.org/pypi/floki-ai)
+[](https://pypi.org/project/floki-ai/)
-[](https://github.com/Cyb3rWard0g/floki)
-[](https://github.com/Cyb3rWard0g/floki/blob/main/LICENSE)
+[](https://github.com/dapr-sandbox/dapr-agents)
 
 
@@ -11,7 +10,7 @@
 Dapr Agents is an open-source framework for researchers and developers to experiment with LLM-based autonomous agents. It provides tools to create, orchestrate, and manage agents while seamlessly connecting to LLM inference APIs. Built on [Dapr](https://docs.dapr.io/), Dapr Agents leverages a unified programming model that simplifies microservices and supports both deterministic workflows and event-driven interactions. Using Dapr’s Virtual Actor pattern, Dapr Agents enables agents to function as independent, self-contained units that process messages sequentially, eliminating concurrency concerns while seamlessly integrating into larger workflows. It also facilitates agent collaboration through Dapr’s Pub/Sub integration, where agents communicate via a shared message bus, simplifying the design of workflows where tasks are distributed efficiently, and agents work together to achieve shared goals. By bringing together these features, Dapr Agents provides a powerful way to explore agentic workflows and the components that enable multi-agent systems to collaborate and scale, all powered by Dapr.

-## Documentation (WIP 🚧): https://cyb3rward0g.github.io/floki/
+## Documentation (WIP 🚧): https://github.com/dapr-sandbox/dapr-agents

 ## Why Dapr 🎩?
@@ -1,6 +1,6 @@
 # Agents

-Agents in `Floki` are autonomous systems powered by Large Language Models (LLMs), designed to execute tasks, reason through problems, and collaborate within workflows. Acting as intelligent building blocks, agents seamlessly combine LLM-driven reasoning with tool integration, memory, and collaboration features to enable scalable, agentic systems.
+Agents in `Dapr Agents` are autonomous systems powered by Large Language Models (LLMs), designed to execute tasks, reason through problems, and collaborate within workflows. Acting as intelligent building blocks, agents seamlessly combine LLM-driven reasoning with tool integration, memory, and collaboration features to enable scalable, agentic systems.

 
@@ -1,6 +1,6 @@
 # Arxiv Fetcher

-The Arxiv Fetcher module in `Floki` provides a powerful interface to interact with the [arXiv API](https://info.arxiv.org/help/api/index.html). It is designed to help users programmatically search for, retrieve, and download scientific papers from arXiv. With advanced querying capabilities, metadata extraction, and support for downloading PDF files, the Arxiv Fetcher is ideal for researchers, developers, and teams working with academic literature.
+The Arxiv Fetcher module in `Dapr Agents` provides a powerful interface to interact with the [arXiv API](https://info.arxiv.org/help/api/index.html). It is designed to help users programmatically search for, retrieve, and download scientific papers from arXiv. With advanced querying capabilities, metadata extraction, and support for downloading PDF files, the Arxiv Fetcher is ideal for researchers, developers, and teams working with academic literature.

 ## Why Use the Arxiv Fetcher?
@@ -27,7 +27,7 @@ pip install arxiv
 Set up the `ArxivFetcher` to begin interacting with the arXiv API.

 ```python
-from floki.document import ArxivFetcher
+from dapr_agents.document import ArxivFetcher

 # Initialize the fetcher
 fetcher = ArxivFetcher()
@@ -130,11 +130,11 @@ for paper in download_results:

 ### Step 5: Extract and Process PDF Content

-Use `PyPDFReader` from `Floki` to extract content from downloaded PDFs. Each page is treated as a separate Document object with metadata.
+Use `PyPDFReader` from `Dapr Agents` to extract content from downloaded PDFs. Each page is treated as a separate Document object with metadata.

 ```python
 from pathlib import Path
-from floki.document import PyPDFReader
+from dapr_agents.document import PyPDFReader

 reader = PyPDFReader()
 docs_read = []
@@ -1,6 +1,6 @@
 # Text Splitter

-The Text Splitter module is a foundational tool in `Floki` designed to preprocess documents for use in [Retrieval-Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) workflows and other `in-context learning` applications. Its primary purpose is to break large documents into smaller, meaningful chunks that can be embedded, indexed, and efficiently retrieved based on user queries.
+The Text Splitter module is a foundational tool in `Dapr Agents` designed to preprocess documents for use in [Retrieval-Augmented Generation (RAG)](https://en.wikipedia.org/wiki/Retrieval-augmented_generation) workflows and other `in-context learning` applications. Its primary purpose is to break large documents into smaller, meaningful chunks that can be embedded, indexed, and efficiently retrieved based on user queries.

 By focusing on manageable chunk sizes and preserving contextual integrity through overlaps, the Text Splitter ensures documents are processed in a way that supports downstream tasks like question answering, summarization, and document retrieval.
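The chunk-size-plus-overlap behavior described above can be sketched in plain Python. This is a hypothetical illustration of the idea only, not the library's actual `TextSplitter` implementation:

```python
def split_text(text: str, chunk_size: int = 1024, chunk_overlap: int = 200) -> list[str]:
    """Break text into fixed-size chunks; consecutive chunks share an overlap
    so that context at chunk boundaries is preserved."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    # Stop once the remaining tail is fully covered by the previous chunk.
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]
```

With `chunk_size=4` and `chunk_overlap=2`, the string `abcdefghij` yields `abcd`, `cdef`, `efgh`, `ghij`: each chunk repeats the last two characters of its predecessor.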
@@ -26,7 +26,7 @@ The Text Splitter supports multiple strategies to handle different types of documents.
 Example:

 ```python
-from floki.document.splitter.text import TextSplitter
+from dapr_agents.document.splitter.text import TextSplitter

 # Character-based splitter (default)
 splitter = TextSplitter(chunk_size=1024, chunk_overlap=200)
@@ -41,7 +41,7 @@ splitter = TextSplitter(chunk_size=1024, chunk_overlap=200)

 ```python
 import tiktoken
-from floki.document.splitter.text import TextSplitter
+from dapr_agents.document.splitter.text import TextSplitter

 enc = tiktoken.get_encoding("cl100k_base")

@@ -101,7 +101,7 @@ pip install pypdf
 Then, initialize the reader to load the PDF file.

 ```python
-from floki.document.reader.pdf.pypdf import PyPDFReader
+from dapr_agents.document.reader.pdf.pypdf import PyPDFReader

 reader = PyPDFReader()
 documents = reader.load(local_pdf_path)
@@ -14,13 +14,13 @@ pip install floki-ai
 ### Remotely from GitHub

 ```bash
-pip install git+https://github.com/Cyb3rWard0g/floki.git
+pip install git+https://github.com/dapr-sandbox/dapr-agents.git
 ```

 ### From source with `poetry`:

 ```bash
-git clone https://github.com/Cyb3rWard0g/floki
+git clone https://github.com/dapr-sandbox/dapr-agents

 cd floki
@@ -3,14 +3,14 @@
 !!! info
     This quickstart requires `Dapr CLI` and `Docker`. You must have your [local Dapr environment set up](../installation.md).

-Event-Driven Agentic Workflows in `Floki` take advantage of an event-driven system using pub/sub messaging and a shared message bus. Agents operate as autonomous entities that respond to events dynamically, enabling real-time interactions and collaboration. These workflows are highly adaptable, allowing agents to communicate, share tasks, and reason through events triggered by their environment. This approach is best suited for decentralized systems requiring dynamic agent collaboration across distributed applications.
+Event-Driven Agentic Workflows in `Dapr Agents` take advantage of an event-driven system using pub/sub messaging and a shared message bus. Agents operate as autonomous entities that respond to events dynamically, enabling real-time interactions and collaboration. These workflows are highly adaptable, allowing agents to communicate, share tasks, and reason through events triggered by their environment. This approach is best suited for decentralized systems requiring dynamic agent collaboration across distributed applications.

 !!! tip
-    We will demonstrate this concept using the [Multi-Agent Workflow Guide](https://github.com/Cyb3rWard0g/floki/tree/main/cookbook/workflows/multi_agent_lotr) from our Cookbook, which outlines a step-by-step guide to implementing a basic agentic workflow.
+    We will demonstrate this concept using the [Multi-Agent Workflow Guide](https://github.com/dapr-sandbox/dapr-agents/tree/main/cookbook/workflows/multi_agent_lotr) from our Cookbook, which outlines a step-by-step guide to implementing a basic agentic workflow.

 ## Agents as Services

-In `Floki`, agents can be exposed as services, making them reusable, modular, and easy to integrate into event-driven workflows. Each agent runs as a microservice, wrapped in a [Dapr-enabled FastAPI server](https://docs.dapr.io/developing-applications/sdks/python/python-sdk-extensions/python-fastapi/). This design allows agents to operate independently while communicating through [Dapr’s pub/sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/) messaging and interacting with state stores or other services.
+In `Dapr Agents`, agents can be exposed as services, making them reusable, modular, and easy to integrate into event-driven workflows. Each agent runs as a microservice, wrapped in a [Dapr-enabled FastAPI server](https://docs.dapr.io/developing-applications/sdks/python/python-sdk-extensions/python-fastapi/). This design allows agents to operate independently while communicating through [Dapr’s pub/sub](https://docs.dapr.io/developing-applications/building-blocks/pubsub/pubsub-overview/) messaging and interacting with state stores or other services.

 The way to structure such a project is straightforward. We organize our services into a directory that contains individual folders for each agent, along with a `components/` directory for Dapr configurations. Each agent service includes its own `app.py` file, where the FastAPI server and the agent logic are defined.
@@ -42,7 +42,7 @@ services/ # Directory for agent services
 Create the `app.py` script and provide the following information.

 ```python
-from floki import Agent, AgentService
+from dapr_agents import Agent, AgentService
 from dotenv import load_dotenv
 import asyncio
 import logging
@@ -99,7 +99,7 @@ Types of Agentic Workflows:
 Next, we’ll define a `RoundRobin Agentic Workflow Service` to demonstrate how this concept can be implemented.

 ```python
-from floki import RoundRobinWorkflowService
+from dapr_agents import RoundRobinWorkflowService
 from dotenv import load_dotenv
 import asyncio
 import logging
@@ -132,7 +132,7 @@ Unlike `Agents as Services`, the `Agentic Workflow Service` does not require an

 * **Max Iterations**: Defines the maximum number of iterations the workflow will perform, ensuring controlled task execution and preventing infinite loops.
 * **Workflow State Store Name**: Specifies the state store used to persist the workflow’s state, allowing for reliable recovery and tracking of workflow progress.
-* **LLM Inference Client**: Although an individual agent is not required, the LLM-based Agentic Workflow Service depends on an LLM Inference Client. By default, it uses the [OpenAIChatClient()](https://github.com/Cyb3rWard0g/floki/blob/main/src/floki/llm/openai/chat.py) from the Floki library.
+* **LLM Inference Client**: Although an individual agent is not required, the LLM-based Agentic Workflow Service depends on an LLM Inference Client. By default, it uses the [OpenAIChatClient()](https://github.com/dapr-sandbox/dapr-agents/blob/main/src/dapr-agents/llm/openai/chat.py) from the Floki library.

 These differences reflect the distinct purpose of the Agentic Workflow Service, which acts as a centralized orchestrator rather than an individual agent service. The inclusion of the LLM Inference Client in the LLM-based workflows allows the orchestrator to leverage natural language processing for intelligent task routing and decision-making.
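The round-robin orchestration with a max-iterations guard described above can be illustrated with a small, library-free sketch. The "agents" here are plain callables standing in for real LLM-backed agents, so this is a conceptual model rather than the actual `RoundRobinWorkflowService`:

```python
from itertools import cycle

def run_round_robin(agents, task, max_iterations):
    """Hand the evolving message to each agent in turn (round-robin),
    stopping after max_iterations to prevent infinite loops."""
    order = cycle(agents)
    message = task
    for _ in range(max_iterations):
        message = next(order)(message)
    return message

# Toy "agents": real ones would call an LLM and publish over pub/sub.
shout = lambda m: m.upper()
exclaim = lambda m: m + "!"
result = run_round_robin([shout, exclaim], "speak friend", max_iterations=2)
```

After two iterations, `result` holds the message as transformed by both agents in order; raising `max_iterations` simply lets the cycle wrap around again.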
@@ -274,7 +274,7 @@ In this example:

 ## Customizing the Workflow

-The default setup uses the [workflow-roundrobin service](https://github.com/Cyb3rWard0g/floki/blob/main/cookbook/workflows/multi_agent_lotr/services/workflow-roundrobin/app.py), which processes agent tasks in a `round-robin` order. However, you can easily switch to a different workflow type by updating the `dapr.yaml` file.
+The default setup uses the [workflow-roundrobin service](https://github.com/dapr-sandbox/dapr-agents/blob/main/cookbook/workflows/multi_agent_lotr/services/workflow-roundrobin/app.py), which processes agent tasks in a `round-robin` order. However, you can easily switch to a different workflow type by updating the `dapr.yaml` file.

 ### Available Workflow Options
@@ -1,9 +1,9 @@
 # LLM-based AI Agents

-In the `Floki` framework, agents are autonomous systems powered by large language models (LLMs) that serve as their reasoning engine. These agents use the LLM’s parametric knowledge to process information, reason in natural language, and interact dynamically with their environment by leveraging tools. Tools allow the agents to perform real-world tasks, gather new information, and adapt their reasoning based on feedback.
+In the `Dapr Agents` framework, agents are autonomous systems powered by large language models (LLMs) that serve as their reasoning engine. These agents use the LLM’s parametric knowledge to process information, reason in natural language, and interact dynamically with their environment by leveraging tools. Tools allow the agents to perform real-world tasks, gather new information, and adapt their reasoning based on feedback.

 !!! info
-    By default, `Floki` sets the agentic pattern for the `Agent` class to `toolcalling` mode, enabling AI agents to interact dynamically with external tools using [OpenAI's Function Calling](https://platform.openai.com/docs/guides/function-calling?ref=blog.openthreatresearch.com).
+    By default, `Dapr Agents` sets the agentic pattern for the `Agent` class to `toolcalling` mode, enabling AI agents to interact dynamically with external tools using [OpenAI's Function Calling](https://platform.openai.com/docs/guides/function-calling?ref=blog.openthreatresearch.com).

 `Tool Calling` empowers agents to identify the right tools for a task, format the necessary arguments, and execute the tools independently. The results are then passed back to the LLM for further processing, enabling seamless and adaptive agent workflows.
@@ -25,12 +25,12 @@ load_dotenv() # take environment variables from .env.

 ## Create a Basic Agent

-In `Floki`, tools bridge basic Python functions and `OpenAI's Function Calling` format, enabling seamless interaction between agents and external tasks. You can use `Pydantic` models to define the schema for tool arguments, ensuring structured input and validation.
+In `Dapr Agents`, tools bridge basic Python functions and `OpenAI's Function Calling` format, enabling seamless interaction between agents and external tasks. You can use `Pydantic` models to define the schema for tool arguments, ensuring structured input and validation.

 By annotating functions with `@tool` and specifying the argument schema, you transform them into `Agent tools` that can be invoked dynamically during workflows. This approach makes your tools compatible with LLM-driven decision-making and execution.

 ```python
-from floki import tool
+from dapr_agents import tool
 from pydantic import BaseModel, Field

 class GetWeatherSchema(BaseModel):
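Conceptually, the `@tool` decorator registers a function under a name so the agent can invoke it once the LLM selects it and supplies arguments. A minimal, dependency-free sketch of that registration-and-dispatch loop (hypothetical, not the actual `dapr_agents` internals):

```python
TOOLS = {}

def tool(fn):
    """Register a plain function as an invokable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(location: str) -> str:
    # A real tool would call a weather API; this is a stub.
    return f"{location}: 65F"

def call_tool(name: str, arguments: dict):
    """Dispatch an LLM-selected tool call: look up the tool and apply its arguments."""
    return TOOLS[name](**arguments)
```

The LLM's function-call response (a tool name plus JSON arguments) maps directly onto `call_tool`, and the returned string is what gets fed back to the model.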
@@ -57,7 +57,7 @@ tools = [get_weather,jump]
 Next, create your Agent by specifying key attributes such as `name`, `role`, `goal`, and `instructions`, while assigning the `tools` defined earlier. This setup equips your agent with a clear purpose and the ability to interact dynamically with its environment.

 ```python
-from floki import Agent
+from dapr_agents import Agent

 AIAgent = Agent(
     name = "Stevie",
@@ -102,14 +102,14 @@ dapr run --app-id originalwf --dapr-grpc-port 50001 --resources-path components/

 ## Dapr Workflow -> Floki Workflows

-With `Floki`, the goal was to simplify workflows while adding flexibility and powerful integrations. I wanted to create a way to track the workflow state, including input, output, and status, while also streamlining monitoring. To achieve this, I built additional `workflow` and `activity` wrappers. The workflow wrapper stays mostly the same as Dapr's original, but the activity wrapper has been extended into a `task wrapper`. This change allows tasks to integrate seamlessly with LLM-based prompts and other advanced capabilities.
+With `Dapr Agents`, the goal was to simplify workflows while adding flexibility and powerful integrations. I wanted to create a way to track the workflow state, including input, output, and status, while also streamlining monitoring. To achieve this, I built additional `workflow` and `activity` wrappers. The workflow wrapper stays mostly the same as Dapr's original, but the activity wrapper has been extended into a `task wrapper`. This change allows tasks to integrate seamlessly with LLM-based prompts and other advanced capabilities.

 !!! info
     The same example as before can be written in the following way. While the difference might not be immediately noticeable, this is a straightforward example of task chaining using Python functions. Create a file named `wf_taskchain_floki_activity.py`.

 ```python
-from floki import WorkflowApp
-from floki.types import DaprWorkflowContext
+from dapr_agents import WorkflowApp
+from dapr_agents.types import DaprWorkflowContext

 wfapp = WorkflowApp()
@@ -168,4 +168,4 @@ If we inspect the `Workflow State` in the state store, you would see something like this:
 }
 ```

-`Floki` processes the workflow execution and even extracts the final output.
+`Dapr Agents` processes the workflow execution and even extracts the final output.
@@ -1,9 +1,9 @@
 # LLM Inference Client

-In `Floki`, the LLM Inference Client is responsible for interacting with language models. It serves as the interface through which the agent communicates with the LLM, generating responses based on the input provided.
+In `Dapr Agents`, the LLM Inference Client is responsible for interacting with language models. It serves as the interface through which the agent communicates with the LLM, generating responses based on the input provided.

 !!! info
-    By default, `Floki` uses the `OpenAIChatClient` to interact with the OpenAI Chat endpoint. By default, the `OpenAIChatClient` uses the `gpt-4o` model
+    By default, `Dapr Agents` uses the `OpenAIChatClient` to interact with the OpenAI Chat endpoint. The `OpenAIChatClient` itself defaults to the `gpt-4o` model.

 ## Set Environment Variables
@@ -26,7 +26,7 @@ load_dotenv() # take environment variables from .env.
 By default, you can easily initialize the `OpenAIChatClient` without additional configuration. It uses the `OpenAI API` key from your environment variables.

 ```python
-from floki import OpenAIChatClient
+from dapr_agents import OpenAIChatClient

 llm = OpenAIChatClient()
@@ -44,7 +44,7 @@ ChatCompletion(choices=[Choice(finish_reason='stop', index=0, message=MessageCon
 Once again, initialize `OpenAIChatClient`.

 ```python
-from floki import OpenAIChatClient
+from dapr_agents import OpenAIChatClient

 llmClient = OpenAIChatClient()
 ```
@@ -63,7 +63,7 @@ class dog(BaseModel):
 Finally, you can pass the response model to the LLM Client call.

 ```python
-from floki.types import UserMessage
+from dapr_agents.types import UserMessage

 response = llmClient.generate(
     messages=[UserMessage("One famous dog in history.")],
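The response-model idea — validating the LLM's reply against a declared schema — can be sketched with the standard library alone. This is a hypothetical stand-in for the Pydantic-based flow above, with `Dog` and `parse_structured` invented for illustration:

```python
import json
from dataclasses import dataclass

@dataclass
class Dog:
    name: str
    breed: str

def parse_structured(raw: str) -> Dog:
    """Parse an LLM's JSON reply into the expected response model;
    raises if the JSON is malformed or fields don't match."""
    return Dog(**json.loads(raw))
```

The benefit is the same as with a Pydantic response model: downstream code works with typed fields instead of free-form text.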
@@ -3,9 +3,9 @@
 !!! info
     This quickstart requires `Dapr CLI` and `Docker`. You must have your [local Dapr environment set up](../installation.md).

-In `Floki`, LLM-based Task Workflows allow developers to design step-by-step workflows where LLMs provide reasoning and decision-making at defined stages. These workflows are deterministic and structured, enabling the execution of tasks in a specific order, often defined by Python functions. This approach does not rely on event-driven systems or pub/sub messaging but focuses on defining and orchestrating tasks with the help of LLM reasoning when necessary. Ideal for scenarios that require a predefined flow of tasks enhanced by language model insights.
+In `Dapr Agents`, LLM-based Task Workflows allow developers to design step-by-step workflows where LLMs provide reasoning and decision-making at defined stages. These workflows are deterministic and structured, enabling the execution of tasks in a specific order, often defined by Python functions. This approach does not rely on event-driven systems or pub/sub messaging but focuses on defining and orchestrating tasks with the help of LLM reasoning when necessary. Ideal for scenarios that require a predefined flow of tasks enhanced by language model insights.

-Now that we have a better understanding of `Dapr` and `Floki` workflows, let’s explore how to use Dapr activities or Floki tasks to call LLM Inference APIs, such as the [OpenAI Text Generation endpoint](https://platform.openai.com/docs/guides/text-generation), with models like `gpt-4o`.
+Now that we have a better understanding of `Dapr` and `Dapr Agents` workflows, let’s explore how to use Dapr activities or Floki tasks to call LLM Inference APIs, such as the [OpenAI Text Generation endpoint](https://platform.openai.com/docs/guides/text-generation), with models like `gpt-4o`.

 ## Dapr Workflows & LLM Inference APIs
@@ -102,13 +102,13 @@ dapr run --app-id originalllmwf --dapr-grpc-port 50001 --resources-path components/

 ## Floki LLM-based Tasks

-Now, let’s get to the exciting part! `Tasks` in `Floki` build on the concept of `activities` and bring additional flexibility. Using Python function signatures, you can define tasks with ease. The `task decorator` allows you to provide a `description` parameter, which acts as a prompt for the default LLM inference client in `Floki` (`OpenAIChatClient` by default).
+Now, let’s get to the exciting part! `Tasks` in `Dapr Agents` build on the concept of `activities` and bring additional flexibility. Using Python function signatures, you can define tasks with ease. The `task decorator` allows you to provide a `description` parameter, which acts as a prompt for the default LLM inference client in `Dapr Agents` (`OpenAIChatClient` by default).

 You can also use function arguments to pass variables to the prompt, letting you dynamically format the prompt before it’s sent to the text generation endpoint. This makes it simple to implement workflows that follow the [Dapr Task chaining pattern](https://docs.dapr.io/developing-applications/building-blocks/workflow/workflow-patterns/#task-chaining), just like in the earlier example, but with even more flexibility.

 ```python
-from floki import WorkflowApp
-from floki.types import DaprWorkflowContext
+from dapr_agents import WorkflowApp
+from dapr_agents.types import DaprWorkflowContext
 from dotenv import load_dotenv

 # Load environment variables
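The description-as-prompt mechanic can be approximated in a few lines: the decorator captures a template, and the function's keyword arguments are substituted into it before the prompt would be sent to the LLM. A hypothetical sketch, not the real `task` decorator:

```python
def task(description: str):
    """Decorator sketch: turn a function into a task whose description,
    formatted with the call's arguments, becomes the LLM prompt."""
    def wrap(fn):
        def inner(**kwargs):
            prompt = description.format(**kwargs)
            return prompt  # a real task would send this prompt to the LLM client
        return inner
    return wrap

@task(description="Pick a random character from {movie}")
def get_character(movie: str) -> str:
    ...
```

Calling `get_character(movie="The Hobbit")` renders the template with the argument, which is exactly how task arguments flow into the prompt before inference.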
@@ -20,6 +20,6 @@ Floki provides a unified framework for designing, deploying, and orchestrating L

 ## Why the Name Floki?

-The name `Floki` is inspired by both history and fiction. Historically, [Floki Vilgerðarson](https://en.wikipedia.org/wiki/Hrafna-Fl%C3%B3ki_Vilger%C3%B0arson) is known in Norse sagas as the first Norseman to journey to Iceland, embodying a spirit of discovery. In the [Vikings series](https://en.wikipedia.org/wiki/Vikings_(2013_TV_series)), Floki is portrayed as a skilled boat builder, creating vessels that allowed his people to explore and achieve their goals.
+The name `Dapr Agents` is inspired by both history and fiction. Historically, [Floki Vilgerðarson](https://en.wikipedia.org/wiki/Hrafna-Fl%C3%B3ki_Vilger%C3%B0arson) is known in Norse sagas as the first Norseman to journey to Iceland, embodying a spirit of discovery. In the [Vikings series](https://en.wikipedia.org/wiki/Vikings_(2013_TV_series)), Floki is portrayed as a skilled boat builder, creating vessels that allowed his people to explore and achieve their goals.

 In the same way, this framework equips developers with the tools to build, prototype, and deploy their own agents or fleets of agents, enabling them to experiment and explore the potential of LLM-based workflows.
@@ -2,8 +2,8 @@

-[](https://pypi.python.org/pypi/floki-ai)
+[](https://pypi.org/project/floki-ai/)
-[](https://github.com/Cyb3rWard0g/floki)
-[](https://github.com/Cyb3rWard0g/floki/blob/main/LICENSE)
+[](https://github.com/dapr-sandbox/dapr-agents)
+[](https://github.com/dapr-sandbox/dapr-agents/blob/main/LICENSE)

 
@@ -28,7 +28,7 @@ Dapr provides Floki with a unified programming model that simplifies the develop

 ---

-Install [`Floki`](https://github.com/Cyb3rWard0g/floki) with [`pip`](#) and set up your dapr environment in minutes
+Install [`Dapr Agents`](https://github.com/dapr-sandbox/dapr-agents) with [`pip`](#) and set up your Dapr environment in minutes

 [:octicons-arrow-right-24: Installation](home/installation.md)
@@ -54,6 +54,6 @@ Dapr provides Floki with a unified programming model that simplifies the develop

 Floki is licensed under MIT and available on [GitHub]

-[:octicons-arrow-right-24: License](https://github.com/Cyb3rWard0g/floki/blob/main/LICENSE)
+[:octicons-arrow-right-24: License](https://github.com/dapr-sandbox/dapr-agents/blob/main/LICENSE)

 </div>
@@ -7,7 +7,7 @@ strict: false

 # Repository
 repo_name: Cyb3rWard0g/floki
-repo_url: https://github.com/Cyb3rWard0g/floki
+repo_url: https://github.com/dapr-sandbox/dapr-agents
 edit_uri: edit/main/docs/

 # Copyright
@@ -113,7 +113,7 @@ extra:
       Thanks for your feedback!
   social:
     - icon: fontawesome/brands/github
-      link: https://github.com/Cyb3rWard0g/floki
+      link: https://github.com/dapr-sandbox/dapr-agents
     - icon: fontawesome/brands/python
       link: https://pypi.org/project/floki-ai/
     - icon: fontawesome/brands/x-twitter