Show HN: LLML: Data Structures => Prompts

Author: knrz | 7 months ago
I've been building AI systems for a while and kept hitting the same wall - prompt engineering felt like string-concatenation hell. Every complex prompt became a maintenance nightmare of f-strings and template literals.

So I built LLML - think of it as React for prompts. Just as React is data => UI, LLML is data => prompt.

The Problem:

```
# We've all written this...
prompt = f"Role: {role}\n"
prompt += f"Context: {json.dumps(context)}\n"
for i, rule in enumerate(rules):
    prompt += f"{i+1}. {rule}\n"
```

The Solution:

```
from zenbase_llml import llml

# Compose prompts by composing data
context = get_user_context()
prompt = llml({
    "role": "Senior Engineer",
    "context": context,
    "rules": ["Never skip tests", "Always review deps"],
    "task": "Deploy the service safely",
})

# Output:
# <role>Senior Engineer</role>
# <context>
#   ...
# </context>
# <rules>
#   <rules-1>Never skip tests</rules-1>
#   <rules-2>Always review deps</rules-2>
# </rules>
# <task>Deploy the service safely</task>
```

Why XML-like? We found LLMs parse structured formats with clear boundaries (`<tag>content</tag>`) more reliably than JSON or YAML. The numbered list items (`<rules-1>`, `<rules-2>`) prevent ordering confusion.

Available in Python and TypeScript:

```
pip/poetry/uv/rye install zenbase-llml
npm/pnpm/yarn/bun install @zenbase/llml
```

Experimental Rust and Go implementations are also available for the adventurous :)

Key features:

- ≤1 dependency
- Extensible formatter system (create custom formatters for your domain objects)
- 100% test coverage (TypeScript), 92% (Python)
- Identical output across all language implementations

The formatter system is particularly neat - you can override how any data type is serialized, making it easy to handle domain-specific objects or sensitive data.

GitHub: [https://github.com/zenbase-ai/llml](https://github.com/zenbase-ai/llml)

Would love to hear if others have faced similar prompt engineering challenges and how you've solved them!
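To make the data => prompt idea concrete, here is a minimal sketch of the same transformation - this is NOT the actual zenbase-llml source, and the `formatters` parameter is an assumed API shape for illustration only. It reproduces the tagging and numbered-list conventions from the output above, plus a simple type-based formatter hook:

```python
from datetime import date

def to_prompt(data, formatters=None, indent=0):
    """Sketch of dict => XML-like prompt text (not the real llml)."""
    formatters = formatters or {}
    pad = "  " * indent
    lines = []
    for key, value in data.items():
        # A registered formatter for this value's type wins over the defaults.
        for typ, fmt in formatters.items():
            if isinstance(value, typ):
                value = fmt(value)
                break
        if isinstance(value, dict):
            # Nested dicts become nested tag blocks.
            lines.append(f"{pad}<{key}>")
            lines.append(to_prompt(value, formatters, indent + 1))
            lines.append(f"{pad}</{key}>")
        elif isinstance(value, list):
            # Numbered child tags (<rules-1>, <rules-2>) keep order unambiguous.
            lines.append(f"{pad}<{key}>")
            for i, item in enumerate(value, 1):
                lines.append(f"{pad}  <{key}-{i}>{item}</{key}-{i}>")
            lines.append(f"{pad}</{key}>")
        else:
            lines.append(f"{pad}<{key}>{value}</{key}>")
    return "\n".join(lines)

prompt = to_prompt(
    {
        "role": "Senior Engineer",
        "rules": ["Never skip tests", "Always review deps"],
        "deadline": date(2025, 1, 31),
    },
    # Hypothetical custom formatter: dates serialize as ISO strings.
    formatters={date: lambda d: d.isoformat()},
)
print(prompt)
# <role>Senior Engineer</role>
# <rules>
#   <rules-1>Never skip tests</rules-1>
#   <rules-2>Always review deps</rules-2>
# </rules>
# <deadline>2025-01-31</deadline>
```

The formatter hook is what makes this pattern composable: serialization rules live next to your types rather than being scattered across string templates.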