AGI Is Mathematically Impossible (3): Kolmogorov Complexity

by ICBTheory, 6 months ago
Hi folks. This is the third part of an ongoing theory I've been developing over the last few years, called the Infinite Choice Barrier (ICB). The core idea is simple:

General intelligence, and AGI in particular, is structurally impossible under certain epistemic conditions.

Not morally, not practically. Mathematically.

The argument splits across three barriers:

1. Computability (Gödel, Turing, Rice): You can't decide what your system can't see.
2. Entropy (Shannon): Beyond a certain point, signal breaks down structurally.
3. Complexity (Kolmogorov, Chaitin): Most real-world problems are fundamentally incompressible.

This paper focuses on (3): Kolmogorov complexity. It argues that most of what humans care about is not just hard to model, but formally unmodellable, because the shortest description of the problem is the problem itself.

In other words: you can't generalize from what can't be compressed. (A small code sketch at the end of this post illustrates the point.)

⸻

Here's the abstract:

There is a common misconception that artificial general intelligence (AGI) will emerge through scale, memory, or recursive optimization. This paper argues the opposite: that as systems scale, they approach the structural limit of generalization itself. Using Kolmogorov complexity, we show that many real-world problems, particularly those involving social meaning, context divergence, and semantic volatility, are formally incompressible and thus unlearnable by any finite algorithm.

This is not a performance issue. It's a mathematical wall. And it doesn't care how many tokens you've got.

The paper isn't light, but it's precise. If you're into limits, structures, and why most intelligence happens outside of optimization, it might be worth your time.

https://philpapers.org/archive/SCHAII-18.pdf

Happy to read your views.
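
P.S. To make the incompressibility point concrete, here's a minimal sketch in Python (my own illustration, not from the paper). Since K(x) itself is uncomputable, zlib serves as a crude stand-in: any real compressor only gives an upper bound on description length. The counting argument underneath: there are fewer than 2^n programs shorter than n bits, so almost all n-bit strings admit no shorter description at all.

    import os
    import zlib

    def description_length(data: bytes) -> int:
        # zlib as a computable proxy: true Kolmogorov complexity K(x)
        # is uncomputable, so this only gives an upper bound on K(x).
        return len(zlib.compress(data, 9))

    # Highly regular data: a short program ("repeat 'ab' 50,000 times")
    # describes it, so it compresses to almost nothing.
    structured = b"ab" * 50_000

    # Random bytes: fewer than 2^n programs are shorter than n bits,
    # so almost every n-bit string is incompressible.
    random_like = os.urandom(100_000)

    print(len(structured), description_length(structured))    # 100000 -> a few hundred bytes
    print(len(random_like), description_length(random_like))  # 100000 -> slightly more than 100000

None of this proves the thesis, of course; it just shows the asymmetry the argument leans on: structure compresses, and by counting, almost nothing has structure.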