Show HN: LLML: Data structures => Prompts
1 point • by knrz • 7 months ago
I've been building AI systems for a while and kept hitting the same wall: prompt engineering felt like string concatenation hell. Every complex prompt became a maintenance nightmare of f-strings and template literals.
So I built LLML. Think of it as React for prompts: just as React is data => UI, LLML is data => prompt.
The problem:
```
# We've all written this...
prompt = f"Role: {role}\n"
prompt += f"Context: {json.dumps(context)}\n"
for i, rule in enumerate(rules):
    prompt += f"{i+1}. {rule}\n"
```
The solution:
```
from zenbase_llml import llml

# Compose prompts by composing data
context = get_user_context()
prompt = llml({
    "role": "Senior Engineer",
    "context": context,
    "rules": ["Never skip tests", "Always review deps"],
    "task": "Deploy the service safely",
})
```
Output:
```
<role>Senior Engineer</role>
<context>
...
</context>
<rules>
<rules-1>Never skip tests</rules-1>
<rules-2>Always review deps</rules-2>
</rules>
<task>Deploy the service safely</task>
```
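Since a prompt is just a dict, composing prompts comes down to composing data. A minimal sketch of that pattern (the `base` dict and the merge are illustrative; only the `llml(dict)` call itself is taken from the example above):
```
from zenbase_llml import llml

# Fields shared by every prompt in a hypothetical service.
base = {
    "role": "Senior Engineer",
    "rules": ["Never skip tests", "Always review deps"],
}

# Task-specific fields are merged in with plain dict composition.
deploy_prompt = llml({**base, "task": "Deploy the service safely"})
review_prompt = llml({**base, "task": "Review the open pull requests"})
```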
Why XML-like? We found that LLMs parse structured formats with clear boundaries (<tag>content</tag>) more reliably than JSON or YAML, and the numbered list items (<rules-1>, <rules-2>) prevent ordering confusion.
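To make the contrast concrete, here is the same rules list rendered both ways; the tag form is built by hand purely for illustration, not generated by LLML:
```
import json

rules = ["Never skip tests", "Always review deps"]

# JSON: item boundaries are quotes and brackets; order is purely positional.
print(json.dumps({"rules": rules}))

# Tag style: every item carries its index in its own tag,
# so a model can refer to "rules-2" without counting.
lines = ["<rules>"]
lines += [f"<rules-{i}>{rule}</rules-{i}>" for i, rule in enumerate(rules, 1)]
lines.append("</rules>")
print("\n".join(lines))
```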
Available in Python and TypeScript:
```
pip/poetry/uv/rye install zenbase-llml
npm/pnpm/yarn/bun install @zenbase/llml
```
Experimental Rust and Go implementations are also available for the adventurous :)
Key features:

- ≤1 dependency
- Extensible formatter system (create custom formatters for your domain objects)
- 100% test coverage (TypeScript), 92% (Python)
- Identical output across all language implementations
The formatter system is particularly neat: you can override how any data type is serialized, which makes it easy to handle domain-specific objects or sensitive data.
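The exact registration signature isn't shown in this post, so the snippet below is only a rough sketch of the idea; the `formatters=` keyword argument is an assumption, not the documented API (see the repo for the real signature):
```
from dataclasses import dataclass
from zenbase_llml import llml

@dataclass
class User:
    id: int
    email: str

def format_user(user: User) -> str:
    # Redact sensitive fields before they ever reach the prompt.
    return f"user:{user.id} (email redacted)"

# Hypothetical wiring; the real registration mechanism may differ, see the repo.
prompt = llml(
    {"requester": User(id=42, email="a@example.com")},
    formatters={User: format_user},  # assumed keyword argument, not confirmed
)
```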
GitHub: [https://github.com/zenbase-ai/llml](https://github.com/zenbase-ai/llml)
Would love to hear if others have faced similar prompt engineering challenges and how you've solved them!