Ask HN: How do you give AI agents access to resources without granting excessive permissions?
5 points • by NBenkovich • 1 day ago
To make AI agents more efficient, we need to build feedback loops with real systems: deployments, logs, configs, environments, dashboards.

But this is where things break down.

Most modern apps don't have fine-grained permissions.

Concrete example: Vercel. If I want an agent to read logs or inspect env vars, I have to give it a token that also allows it to modify or delete things. There's no clean read-only or capability-scoped access.

And this isn't just Vercel. I see the same pattern across cloud dashboards, CI/CD systems, and SaaS APIs that were designed around trusted humans, not autonomous agents.

So the real question:

How are people actually restricting AI agents in production today?

Are you building proxy layers that enforce policy? Wrapping APIs with allowlists? Or just accepting the risk?

It feels like we're trying to connect autonomous systems to infrastructure that was never designed for them.

Curious how others are handling this in real setups, not theory.
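
To make the proxy-layer / allowlist idea concrete, here is a minimal sketch: a tiny read-only gateway that holds the real token server-side and only forwards allowlisted GET requests, so the agent never sees the credential at all. The path prefixes, port, and env var name are illustrative assumptions, not a complete or official policy for the Vercel API.

    // read_only_proxy.ts — minimal sketch of a policy-enforcing proxy (assumptions noted below).
    // The agent talks only to this proxy; the real token never leaves this process.
    import * as http from "node:http";

    const UPSTREAM = "https://api.vercel.com";
    const TOKEN = process.env.VERCEL_TOKEN ?? ""; // real credential, kept out of the agent's hands
    const PROXY_PORT = 8787;                      // illustrative choice

    // Only GET requests to these path prefixes are forwarded; everything else is rejected.
    // These prefixes are illustrative, not an exhaustive map of the Vercel API.
    const ALLOWED_PREFIXES = ["/v6/deployments", "/v2/deployments"];

    function isAllowed(method: string | undefined, path: string): boolean {
      if (method !== "GET") return false; // read-only: no mutations pass through
      return ALLOWED_PREFIXES.some((p) => path.startsWith(p));
    }

    const server = http.createServer(async (req, res) => {
      const path = req.url ?? "/";
      if (!isAllowed(req.method, path)) {
        res.writeHead(403, { "content-type": "application/json" });
        res.end(JSON.stringify({ error: "blocked by proxy policy" }));
        return;
      }
      try {
        // Forward the request upstream, attaching the real token server-side.
        const upstream = await fetch(UPSTREAM + path, {
          headers: { authorization: `Bearer ${TOKEN}` },
        });
        const body = await upstream.text();
        res.writeHead(upstream.status, {
          "content-type": upstream.headers.get("content-type") ?? "application/json",
        });
        res.end(body);
      } catch {
        res.writeHead(502, { "content-type": "application/json" });
        res.end(JSON.stringify({ error: "upstream request failed" }));
      }
    });

    server.listen(PROXY_PORT, () => {
      console.log(`read-only proxy listening on http://localhost:${PROXY_PORT}`);
    });

The trade-off is that you now maintain the allowlist yourself, but the blast radius of a misbehaving agent shrinks to whatever the policy permits, instead of whatever the upstream token permits.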