Show HN: TurnZero – Persistent experts for LLMs

By dmilicev · 25 days ago
To reduce cold starts in AI sessions, I've made a tool that runs as an MCP server and loads context before Turn 0.

Two things happen:

Personal Priors - your workflows and standards load once per session and persist across every supported AI client.

Expert Priors - when a prompt is stack-specific, relevant priors are injected based on semantic similarity (sketched below). This is meant to reduce errors and unwanted behaviour from the AI.

Privacy guarantee: local-first by design. Raw prompts are never stored. Injection is always client-side.

```bash
pipx install turnzero
turnzero setup   # registers the MCP server with Claude Code, Cursor, Claude Desktop, Gemini CLI
turnzero verify  # confirms everything is wired correctly
```

Demo: [https://asciinema.org/a/8IV2yoLNTloSlZo0](https://asciinema.org/a/8IV2yoLNTloSlZo0)

Repo: [https://github.com/turnzero-ai/turnzero](https://github.com/turnzero-ai/turnzero)
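The post doesn't spell out the matching step, so here is a minimal sketch of how semantic-similarity prior selection could work, assuming a hypothetical `embed(text)` function supplied by whatever embedding model the host uses; `select_priors` and the 0.35 threshold are illustrative names and values, not TurnZero's actual API.

```python
# Illustrative sketch only -- not TurnZero's code. Assumes an embed(text) -> list[float]
# function is provided by the embedding model in use.
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def select_priors(prompt: str, prior_embeddings: dict[str, list[float]],
                  embed, threshold: float = 0.35) -> list[str]:
    """Return IDs of stored priors whose embeddings are close enough to the prompt."""
    query = embed(prompt)
    scored = [(name, cosine(query, vec)) for name, vec in prior_embeddings.items()]
    return [name for name, score in sorted(scored, key=lambda s: -s[1])
            if score >= threshold]
```

Only the priors returned by a step like this would be injected into the session, which is how a stack-specific prompt ends up paired with stack-specific guidance instead of the whole library.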
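For readers unfamiliar with MCP, below is a minimal sketch of a server exposing a "priors" tool, written against the official Python MCP SDK's FastMCP helper. The tool name, file location, and behaviour are assumptions for illustration, not TurnZero's implementation; see the repo above for the real server.

```python
# Hypothetical sketch of an MCP server exposing a priors tool -- not TurnZero's code.
# Assumes the official Python MCP SDK is installed: pip install mcp
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("priors-sketch")


@mcp.tool()
def load_personal_priors() -> str:
    """Return stored workflow/standards notes so the client can inject them
    into the conversation before Turn 0. The file location is illustrative."""
    priors_file = Path.home() / ".turnzero" / "priors.md"  # hypothetical path
    return priors_file.read_text() if priors_file.exists() else "No priors configured."


if __name__ == "__main__":
    mcp.run()  # stdio transport by default, so clients like Claude Code can spawn it
```

Registering a server like this with a client (the job of `turnzero setup`) is what makes the priors available before the first user turn.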