Ask HN: How far are we from figuring out memory for LLMs/agents?

2 points | by wkyleg | 3 days ago
The main issue I experience with LLMs, and the one that most inhibits my further adoption, is agents' inability to remember relevant context.

A few years ago everyone was using RAG, embeddings, and databases on top of models. Now models with access to local markdown and memory files (like OpenClaw) seem to readily outperform those databases using grep and simple UNIX tools.

Is this an inherent issue in scaling LLMs? Does Obsidian really work that much better for most people? Is anyone finding anything that actually outperforms markdown?

At this point, the main bottleneck in my adoption seems to be memory and persistent long-term context, not the quality or reliability of the models.

I'm curious whether there are any technical or scaling metrics we could use to forecast where this will end up going.
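For concreteness, the grep-over-markdown approach the post contrasts with embedding databases can be sketched in a few lines. This is a minimal illustration, not the implementation of any particular tool; the function name, directory layout, and return format are all assumptions:

```python
import pathlib
import re

def search_memory(query: str, memory_dir: str = "memory") -> list[str]:
    """Naive grep-style retrieval: return every line in any markdown
    file under memory_dir that contains the query (case-insensitive),
    tagged with the file it came from. No embeddings, no index."""
    pattern = re.compile(re.escape(query), re.IGNORECASE)
    hits = []
    for path in sorted(pathlib.Path(memory_dir).glob("**/*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if pattern.search(line):
                hits.append(f"{path.name}: {line.strip()}")
    return hits
```

An agent can call something like this as a tool and paste the matching lines back into its context, which is roughly the "memory file plus UNIX tools" pattern the post describes outperforming vector databases.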