Commit
Merge branch 'main' into prefect
jlowin committed Oct 9, 2024
2 parents aad68ae + 33c0701 commit 379f196
Showing 105 changed files with 3,571 additions and 1,067 deletions.
10 changes: 8 additions & 2 deletions .github/labeler.yml
@@ -1,7 +1,13 @@
documentation:
- changed-files:
-  - any-glob-to-any-file: "docs/*"
+  - any-glob-to-any-file: "docs/**"

example:
- changed-files:
- any-glob-to-any-file:
- "examples/**"
- "docs/examples/**"

tests:
- changed-files:
-  - any-glob-to-any-file: "tests/*"
+  - any-glob-to-any-file: "tests/**"
2 changes: 1 addition & 1 deletion README.md
@@ -54,7 +54,7 @@ Next, configure your LLM provider. ControlFlow's default provider is OpenAI, whi
export OPENAI_API_KEY=your-api-key
```

-To use a different LLM provider, [see the LLM configuration docs](https://controlflow.ai/guides/llms).
+To use a different LLM provider, [see the LLM configuration docs](https://controlflow.ai/guides/configure-llms).


## Workflow Example
33 changes: 32 additions & 1 deletion docs/concepts/agents.mdx
@@ -4,6 +4,8 @@ description: The intelligent workers in your AI workflows.
icon: robot
---

import { VersionBadge } from '/snippets/version-badge.mdx'

Agents are the intelligent, autonomous entities that power your AI workflows in ControlFlow. They represent AI models capable of understanding instructions, making decisions, and completing tasks.

```python
@@ -67,7 +69,36 @@ Tools are Python functions that the agent can call to perform specific actions o

Each agent has a model, which is the LLM that powers the agent's responses. This allows you to choose the most suitable model for your needs, based on factors such as performance, latency, and cost.

-ControlFlow supports any LangChain LLM that supports chat and function calling. For more details on how to configure models, see the [LLMs guide](/guides/llms).
+ControlFlow supports any LangChain LLM that supports chat and function calling. For more details on how to configure models, see the [LLMs guide](/guides/configure-llms).

```python
import controlflow as cf


agent1 = cf.Agent(model="openai/gpt-4o")
agent2 = cf.Agent(model="anthropic/claude-3-5-sonnet-20240620")
```

### LLM rules
<VersionBadge version="0.11.0" />

Each LLM provider may have different requirements for how messages are formatted or presented. For example, OpenAI permits system messages to be interspersed between user messages, but Anthropic requires them to be at the beginning of the conversation. ControlFlow uses provider-specific rules to properly compile messages for each agent's API.
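
To make the difference concrete, here is a minimal, hypothetical sketch (not ControlFlow's actual implementation) of what one such compilation rule might do: an Anthropic-style provider requires system messages at the start of the conversation, so the compiler hoists them to the front.

```python
# Hypothetical sketch, not ControlFlow internals: reorder a message list
# so that system messages come first, as Anthropic-style providers require.
def compile_for_anthropic(messages: list[dict]) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest


messages = [
    {"role": "user", "content": "Hello"},
    {"role": "system", "content": "You are a helpful agent."},
]
compiled = compile_for_anthropic(messages)
# the system message is now first in the list
```

An OpenAI-style rule set could leave the list untouched, since interspersed system messages are permitted there.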

For common providers like OpenAI and Anthropic, LLM rules can be automatically inferred from the agent's model. However, you can use a custom `LLMRules` object to override these rules or provide rules for non-standard providers.

Here is an example of how to tell the agent to use the Anthropic compilation rules with a custom model that can't be automatically inferred:

```python
import controlflow as cf

# note: this is just an example
model = CustomAnthropicModel()

agent = cf.Agent(
    model=model,
    llm_rules=cf.llm.rules.AnthropicRules(model=model)
)
```

### Interactivity

53 changes: 53 additions & 0 deletions docs/concepts/tasks.mdx
@@ -264,6 +264,59 @@ task = cf.Task(
)
```

Note that this setting reflects the configuration of the `completion_tools` parameter.

### Completion tools

import { VersionBadge } from '/snippets/version-badge.mdx'

<VersionBadge version="0.10" />

In addition to specifying which agents are automatically given completion tools, you can control which completion tools are generated for a task using the `completion_tools` parameter. This allows you to specify whether you want to provide success and/or failure tools, or even provide custom completion tools.

The `completion_tools` parameter accepts a list of strings, where each string represents a tool to be generated. The available options are:

- `"SUCCEED"`: Generates a tool for marking the task as successful.
- `"FAIL"`: Generates a tool for marking the task as failed.

If `completion_tools` is not specified, both `"SUCCEED"` and `"FAIL"` tools will be generated by default.

You can manually create completion tools and provide them to your agents by calling `task.get_success_tool()` and `task.get_fail_tool()`.

<Warning>
If you exclude one or both completion tools (for example, by passing an empty list), agents may be unable to complete the task or may become stuck in a failure state. Without caps on LLM turns or calls, this could lead to runaway LLM usage. Make sure to manually manage how agents complete tasks if you are using a custom set of completion tools.
</Warning>

Here are some examples:

```python
# Generate both success and failure tools
# (default behavior, equivalent to `completion_tools=None`)
task = cf.Task(
    objective="Write a poem about AI",
    completion_tools=["SUCCEED", "FAIL"],
)

# Only generate a success tool
task = cf.Task(
    objective="Write a poem about AI",
    completion_tools=["SUCCEED"],
)

# Only generate a failure tool
task = cf.Task(
    objective="Write a poem about AI",
    completion_tools=["FAIL"],
)

# Don't generate any completion tools
task = cf.Task(
    objective="Write a poem about AI",
    completion_tools=[],
)
```

By controlling which completion tools are generated, you can customize the task completion process to better suit your workflow needs. For example, you might want to prevent agents from marking a task as failed, or you might want to provide your own custom completion tools instead of using the default ones.
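
Conceptually, completion tools are just callables that move a task into a terminal state. A toy sketch of the idea (illustrative only; the method names mirror the docs, but this is not ControlFlow's implementation):

```python
# Illustrative sketch of the completion-tool concept, not ControlFlow internals:
# each tool is a callable that sets the task's terminal status and result.
class ToyTask:
    def __init__(self, objective: str):
        self.objective = objective
        self.status = "INCOMPLETE"
        self.result = None

    def get_success_tool(self):
        def succeed(result):
            # mark the task successful and record its result
            self.status = "SUCCESSFUL"
            self.result = result
        return succeed

    def get_fail_tool(self):
        def fail(reason):
            # mark the task failed and record the reason
            self.status = "FAILED"
            self.result = reason
        return fail


task = ToyTask("Write a poem about AI")
task.get_success_tool()("Roses are red...")
# task.status is now "SUCCESSFUL"
```

This is why withholding a tool removes the agent's only path to the corresponding terminal state.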

### Name

The name of a task is a string that identifies the task within the workflow. It is used primarily for logging and debugging purposes, though it is also shown to agents during execution to help identify the task they are working on.
2 changes: 1 addition & 1 deletion docs/examples/call-routing.mdx
@@ -80,7 +80,7 @@ def routing_flow():
),
agents=[trainee],
result_type=None,
-        tools=[main_task.create_success_tool()]
+        tools=[main_task.get_success_tool()]
)

if main_task.result == target_department:
102 changes: 102 additions & 0 deletions docs/examples/features/early-termination.mdx
@@ -0,0 +1,102 @@
---
title: Early Termination
description: Control workflow execution with flexible termination logic.
icon: circle-stop
---

import { VersionBadge } from "/snippets/version-badge.mdx"

<VersionBadge version="0.11" />

This example demonstrates how to use termination conditions with the `run_until` parameter to control the execution of a ControlFlow workflow. We'll create a simple research workflow that stops under various conditions, showcasing the flexibility of this feature. In this case, we'll allow research to continue until either two topics are researched or 15 LLM calls are made.

## Code

```python
import controlflow as cf
from controlflow.orchestration.conditions import AnyComplete, MaxLLMCalls
from pydantic import BaseModel


class ResearchPoint(BaseModel):
    topic: str
    key_findings: list[str]


@cf.flow
def research_workflow(topics: list[str]):
    if len(topics) < 2:
        raise ValueError("At least two topics are required")

    research_tasks = [
        cf.Task(f"Research {topic}", result_type=ResearchPoint)
        for topic in topics
    ]

    # Run tasks with termination conditions
    results = cf.run_tasks(
        research_tasks,
        instructions="Research only one topic at a time.",
        run_until=(
            # stop after two tasks complete (if there are more than two topics)
            AnyComplete(min_complete=2)
            # or stop after 15 LLM calls, whichever comes first
            | MaxLLMCalls(15)
        ),
    )

    completed_research = [r for r in results if isinstance(r, ResearchPoint)]
    return completed_research
```

<CodeGroup>

Now, if we run this workflow on 4 topics, it will stop after two topics are researched:

```python Example Usage
# Example usage
topics = [
    "Artificial Intelligence",
    "Quantum Computing",
    "Biotechnology",
    "Renewable Energy",
]
results = research_workflow(topics)

print(f"Completed research on {len(results)} topics:")
for research in results:
    print(f"\nTopic: {research.topic}")
    print("Key Findings:")
    for finding in research.key_findings:
        print(f"- {finding}")
```

```text Result
Completed research on 2 topics:
Topic: Artificial Intelligence
Key Findings:
- Machine Learning and Deep Learning: These are subsets of AI that involve training models on large datasets to make predictions or decisions without being explicitly programmed. They are widely used in various applications, including image and speech recognition, natural language processing, and autonomous vehicles.
- AI Ethics and Bias: As AI systems become more prevalent, ethical concerns such as bias in AI algorithms, data privacy, and the impact on employment are increasingly significant. Ensuring fairness, transparency, and accountability in AI systems is a growing area of focus.
- AI in Healthcare: AI technologies are revolutionizing healthcare through applications in diagnostics, personalized medicine, and patient monitoring. AI can analyze medical data to assist in early disease detection and treatment planning.
- Natural Language Processing (NLP): NLP is a field of AI focused on the interaction between computers and humans through natural language. Recent advancements include transformers and large language models, which have improved the ability of machines to understand and generate human language.
- AI in Autonomous Systems: AI is a crucial component in developing autonomous systems, such as self-driving cars and drones, which require perception, decision-making, and control capabilities to navigate and operate in real-world environments.
Topic: Quantum Computing
Key Findings:
- Quantum Bits (Qubits): Unlike classical bits, qubits can exist in multiple states simultaneously due to superposition. This allows quantum computers to process a vast amount of information at once, offering a potential exponential speed-up over classical computers for certain tasks.
- Quantum Entanglement: This phenomenon allows qubits that are entangled to be correlated with each other, even when separated by large distances. Entanglement is a key resource in quantum computing and quantum communication.
- Quantum Algorithms: Quantum algorithms, such as Shor's algorithm for factoring large numbers and Grover's algorithm for searching unsorted databases, demonstrate the potential power of quantum computing over classical approaches.
- Quantum Error Correction: Quantum systems are prone to errors due to decoherence and noise from the environment. Quantum error correction methods are essential for maintaining the integrity of quantum computations.
- Applications and Challenges: Quantum computing holds promise for solving complex problems in cryptography, material science, and optimization. However, significant technological challenges remain, including maintaining qubit coherence, scaling up the number of qubits, and developing practical quantum software.
```
</CodeGroup>

## Key Concepts

1. **Custom Termination Conditions**: We use a combination of `AnyComplete` and `MaxLLMCalls` conditions to control when the workflow should stop.

2. **Flexible Workflow Control**: By using termination conditions with the `run_until` parameter, we can create more dynamic workflows that adapt to different scenarios. In this case, we're balancing between getting enough research done and limiting resource usage.

3. **Partial Results**: The workflow can end before all tasks are complete, so we handle partial results by filtering for completed `ResearchPoint` objects.

4. **Combining Conditions**: We use the `|` operator to combine multiple termination conditions. ControlFlow also supports `&` for more complex logic.
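
The combinator pattern behind `|` and `&` can be illustrated with a small standalone sketch (hypothetical; ControlFlow's actual condition classes differ, but the idea is the same):

```python
# Hypothetical sketch of condition combinators via operator overloading;
# not ControlFlow's real classes, just the underlying idea.
class Condition:
    def __init__(self, fn):
        self.fn = fn

    def check(self, state) -> bool:
        return self.fn(state)

    def __or__(self, other):  # stop when either condition is met
        return Condition(lambda s: self.check(s) or other.check(s))

    def __and__(self, other):  # stop only when both conditions are met
        return Condition(lambda s: self.check(s) and other.check(s))


two_complete = Condition(lambda s: s["completed"] >= 2)
max_calls = Condition(lambda s: s["llm_calls"] >= 15)

stop = two_complete | max_calls
# stop.check({"completed": 2, "llm_calls": 3})  -> True
# stop.check({"completed": 1, "llm_calls": 3})  -> False
```

Each combined condition is itself a condition, so arbitrarily nested `|` and `&` expressions compose naturally.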

This example demonstrates how termination conditions provide fine-grained control over workflow execution, allowing you to balance between task completion and resource usage. This can be particularly useful for managing costs, handling time-sensitive operations, or creating more responsive AI workflows.
101 changes: 101 additions & 0 deletions docs/examples/features/memory.mdx
@@ -0,0 +1,101 @@
---
title: Using Memory
description: How to use memory to persist information across different conversations
icon: brain
---
import { VersionBadge } from '/snippets/version-badge.mdx'

<VersionBadge version="0.10" />


Memory in ControlFlow allows agents to store and retrieve information across different conversations or workflow executions. This is particularly useful for maintaining context over time or sharing information between separate interactions.

## Setup

In order to use memory, you'll need to configure a [memory provider](/patterns/memory#provider). For this example, we'll use the default Chroma provider. You'll need to `pip install chromadb` to install its dependencies.

## Code

In this example, we'll create a simple workflow that remembers a user's favorite color across different conversations. For simplicity, we'll demonstrate the memory by using two different flows, which represent two different threads.

```python
import controlflow as cf


# Create a memory module for user preferences
user_preferences = cf.Memory(
    key="user_preferences",
    instructions="Store and retrieve user preferences."
)


# Create an agent with access to the memory
agent = cf.Agent(memories=[user_preferences])


# Create a flow to ask for the user's favorite color
@cf.flow
def remember_color():
    return cf.run(
        "Ask the user for their favorite color and store it in memory",
        agents=[agent],
        interactive=True,
    )


# Create a flow to recall the user's favorite color
@cf.flow
def recall_color():
    return cf.run(
        "What is the user's favorite color?",
        agents=[agent],
    )
```

Ordinarily, running the flows above would result in two separate, unconnected conversations. The agent in the `recall_color` flow would have no way of knowing about the information from the first flow, even though it's the same agent, because the conversation histories are not shared.

However, because we gave the agent a memory module and instructions for how to use it, the agent *will* be able to recall the information from the first flow.
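
The underlying idea can be sketched in plain Python (illustrative only, not ControlFlow's implementation): two otherwise unconnected "conversations" hold the same keyed store, so a fact written in one is visible in the other.

```python
# Toy sketch of the shared-memory idea: both flows use the same
# ToyMemory object, so facts persist across them.
class ToyMemory:
    def __init__(self, key: str):
        self.key = key
        self._store: dict[str, list[str]] = {}

    def remember(self, fact: str):
        self._store.setdefault(self.key, []).append(fact)

    def recall(self) -> list[str]:
        return self._store.get(self.key, [])


memory = ToyMemory("user_preferences")

# "first flow": store the preference
memory.remember("favorite color: blue-purple")

# "second flow": the same memory object recalls it
# memory.recall() -> ["favorite color: blue-purple"]
```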

Run the first flow:
<CodeGroup>
```python First flow
remember_color()
```
```text Result
Agent: Hello! What is your favorite color?
User: I really like a blue-purple shade.
Agent: Great, thank you.
```
</CodeGroup>

When we run the second flow, the agent correctly recalls the favorite color:
<CodeGroup>
```python Second flow
result = recall_color()
print(result)
```
```text Result
The user's favorite color is a blue-purple shade.
```
</CodeGroup>

## Key concepts

1. **[Memory creation](/patterns/memory#creating-memory-modules)**: We create a `Memory` object with a unique key and instructions for its use.

```python
user_preferences = cf.Memory(
    key="user_preferences",
    instructions="Store and retrieve user preferences."
)
```

2. **[Assigning memory to agents](/patterns/memory#assigning-memories)**: We assign the memory to an agent, allowing it to access and modify the stored information.

```python
agent = cf.Agent(name="PreferenceAgent", memories=[user_preferences])
```

3. **[Using memory across flows](/patterns/memory#sharing-memories)**: By using the same memory in different flows, we can access information across separate conversations.

This example demonstrates how ControlFlow's memory feature allows information to persist across different workflow executions, enabling more context-aware and personalized interactions.
2 changes: 1 addition & 1 deletion docs/examples/features/private-flows.mdx
@@ -1,5 +1,5 @@
---
-title: Private flows
+title: Private Flows
description: Create isolated execution environments within your workflows.
icon: lock
---
2 changes: 1 addition & 1 deletion docs/examples/features/tools.mdx
@@ -1,5 +1,5 @@
---
-title: Custom tools
+title: Custom Tools
description: Provide tools to expand agent capabilities.
icon: wrench
---
18 changes: 0 additions & 18 deletions docs/examples/library.mdx

This file was deleted.
