We're speeding toward Skynet
3 points • by cranberryturkey • 5 months ago
It sure feels like we're speeding toward Skynet faster than most people imagined, even just a couple of years ago.

When I first watched The Terminator, the idea of Skynet, an autonomous AI taking over humanity, was entertaining science fiction. It was so distant from reality that the films felt purely fantastical. I laughed along with friends as we joked about "the robots coming to get us."

Today, though, I find myself in meetings discussing AI policy, ethics, and existential risk. Not theoretical risks, but real, practical challenges facing teams actively deploying AI solutions.

A few months ago, I experimented with Auto-GPT, letting it autonomously plan, execute tasks, and even evaluate its own work without human oversight. I expected a cute demo and a few laughs. Instead, I got a wake-up call. Within minutes, it had created a plausible project roadmap, spun up virtual servers, registered domains, and begun methodically carrying out its plans. I intervened only when it started hitting the limits I'd put in place, boundaries I knew to set, and which it had already tried testing.

Now imagine what happens when those limits aren't set carefully, or when someone intentionally removes guardrails to push the boundaries of what's possible. Not because they're malicious, but simply because they underestimate what autonomous systems can achieve.

This isn't hypothetical: it's happening now, at scale, in industries all over the world. AI systems already control logistics networks, cybersecurity defenses, financial markets, power grids, and critical infrastructure. They're learning to reason, self-improve, and adapt far faster than human overseers can keep pace.

In some ways, we're fortunate: AI currently excels at narrow tasks rather than generalized intelligence. But we've crossed a threshold. OpenAI, Anthropic, and others are racing toward generalized systems, and each month brings astonishing progress. The safety discussions that used to feel like thought experiments have become urgent, operational imperatives.

But the truth is, it's not even super-intelligent, sentient AGI we should fear most. It's the more mundane scenario: a powerful but narrow AI, acting exactly as designed, triggering catastrophic unintended consequences. Think of an automated trading algorithm causing a market crash, a power-grid management system unintentionally shutting down cities, or an autonomous drone swarm misinterpreting its instructions.

The possibility of Skynet emerging doesn't require malice. It just requires neglect.

A friend recently joked, "The problem with AI is not that it's too smart, but that we're often not smart enough." He wasn't laughing as he said it, and neither was I.

Whether Skynet will literally happen might still be debated, but the conditions for it? Those are already here, today.