Show HN: PicoFlow – Minimal Python workflows for LLM agents
1 point • by shijizhi_1919 • 23 days ago
Hi HN,
I've been experimenting with LLM agents for a while and often felt that, for simple workflows (chat, tool calls, small loops), existing frameworks add a lot of abstraction and boilerplate.
So I built a small Python library called PicoFlow. The goal is simple:
express agent workflows using normal async Python, not framework-specific graphs or chains.
Minimal chat agent
Each step is just an async function, and workflows are composed with >>:
```python
from picoflow import flow, llm, create_agent

LLM_URL = "llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY"

@flow
async def input_step(ctx):
    return ctx.with_input(input("You:"))

agent = create_agent(
    input_step >>
    llm("Answer the user: {input}", llm_adapter=LLM_URL)
)

agent.run()
```
No chains, no graphs, no separate prompt/template objects. You can debug by putting breakpoints directly in the async steps.
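For instance, dropping into the debugger mid-flow is just standard Python; a sketch reusing input_step from the example above (breakpoint() is the standard-library hook, nothing PicoFlow-specific):
```python
from picoflow import flow

@flow
async def input_step(ctx):
    breakpoint()  # plain pdb: inspect ctx, step through, then continue
    return ctx.with_input(input("You:"))
```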
Control flow is just Python
Loops and branching are written with normal Python logic, not DSL nodes:
```python
from picoflow import Flow  # assuming Flow is exported alongside flow/llm/create_agent

def repeat(step):
    async def run(ctx):
        # Re-run the wrapped step with ordinary Python until the context says stop.
        while not ctx.done:
            ctx = await step.acall(ctx)
        return ctx
    return Flow(run)
```
The framework only schedules steps; it doesn't try to own your control flow.
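Branching works the same way; a sketch under the same assumptions as repeat above (a Flow can be built from an async function, and sub-flows expose acall):
```python
def branch(predicate, if_true, if_false):
    async def run(ctx):
        # Ordinary Python branching: route the context to one of two sub-flows.
        chosen = if_true if predicate(ctx) else if_false
        return await chosen.acall(ctx)
    return Flow(run)
```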
Switching model providers = changing the URL
Another design choice: model backends are configured via a single LLM URL.
OpenAI:
```python
LLM_URL = "llm+openai://api.openai.com/v1/chat/completions?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY"
```
Switching to another OpenAI-compatible provider (for example SiliconFlow or a local gateway):
```python
LLM_URL = "llm+openai://api.siliconflow.cn/v1/chat/completions?model=Qwen/Qwen2.5-7B-Instruct&api_key_env=SILICONFLOW_API_KEY"
```
The workflow code doesn't change at all; only the runtime configuration does. This makes A/B testing models and switching providers much cheaper in practice.
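One way to exploit that (not a library feature, just ordinary configuration; PICOFLOW_LLM_URL is a hypothetical variable name): pick the backend from the environment, so the same agent runs against either provider:
```python
import os

# Hypothetical A/B switch: choose the backend at deploy time via an env var,
# falling back to the OpenAI URL from above. The workflow code stays untouched.
LLM_URL = os.environ.get(
    "PICOFLOW_LLM_URL",
    "llm+openai://api.openai.com/v1/chat/completions"
    "?model=gpt-4.1-mini&api_key_env=OPENAI_API_KEY",
)
```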
When this is useful (and when it's not)
PicoFlow is probably useful if you:
- want to prototype agents quickly
- prefer explicit control flow
- don't want to learn a large framework abstraction
It's probably not ideal if you:
- rely heavily on prebuilt components and integrations
- want a batteries-included orchestration platform
Repo:
[https://github.com/the-picoflow/picoflow](https://github.com/the-picoflow/picoflow)
This is still early and opinionated. I'd really appreciate feedback on whether this "workflow as Python" style is useful to others, or whether people are already solving this in better ways.
Thanks!