Ask HN: How are you doing technical interviews in the era of Claude/ChatGPT?
2 points • by jonjou • 9 days ago
I'm a founder/dev trying to figure out a better way to do technical interviews, because the current state is a nightmare.

Right now, every standard take-home or HackerRank/LeetCode test is easily solved by LLMs. As a result, companies are accidentally hiring what we call vibe coders: candidates who are phenomenal at prompting AI to generate boilerplate, but who completely freeze when the architecture gets complex, when things break, or when the AI subtly hallucinates.

We are working on a new approach, and I want to validate the engineering logic with the people who actually conduct these interviews.

Instead of trying to ban AI (which is a losing battle), we want to test for "AI steering".

The idea:
1. Drop the candidate into a real, somewhat messy sandbox codebase.

2. Let them use whatever AI they want.

3. Inject a subtle architectural shift, a breaking dependency, or an AI hallucination.

4. Measure purely through telemetry (Git diffs, CI/CD runs, debugging paths) how they recover and fix the chaos.

Basically: stop testing syntax, start testing architecture and debugging skills in the age of AI.

Before we spend months building out the backend for this simulation, I need a reality check from experienced leads:
1. Does testing a candidate's ability to "steer" and debug AI-generated code make more sense to you than traditional algorithm questions?

2. How are you currently preventing these "prompt-only" developers from slipping through your own interview loops?

(Not linking anything here because there's nothing to sell yet, just looking for brutal feedback on the methodology.)
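For concreteness, the Git-diff half of step 4's telemetry could start as nothing more than parsing the candidate's commit log. Here is a minimal sketch of that idea; `summarize_session` and the metric names are purely illustrative (not any existing product's API), and it assumes the sandbox repo's history has been exported with `git log --reverse --pretty=format:%ct%x09%s --numstat`:

```python
def summarize_session(log_text: str) -> dict:
    """Reduce a candidate's commit history to coarse steering metrics:
    commit count, total lines touched (churn), and wall-clock minutes
    between the first and last commit of the session."""
    commits = []  # each entry: [epoch_seconds, lines_touched]
    current = None
    for line in log_text.splitlines():
        if not line.strip():
            continue
        parts = line.split("\t")
        if len(parts) == 2 and parts[0].isdigit():
            # Commit header produced by --pretty=format:%ct%x09%s
            current = [int(parts[0]), 0]
            commits.append(current)
        elif current is not None and len(parts) == 3:
            # --numstat body line: "<added>\t<deleted>\t<path>"
            # (binary files show "-" instead of counts, so check isdigit)
            added, deleted = parts[0], parts[1]
            if added.isdigit() and deleted.isdigit():
                current[1] += int(added) + int(deleted)
    if not commits:
        return {"commits": 0, "churn": 0, "minutes": 0.0}
    return {
        "commits": len(commits),
        "churn": sum(c[1] for c in commits),
        "minutes": round((commits[-1][0] - commits[0][0]) / 60, 1),
    }


# Example session: two commits ten minutes apart, the second one
# reverting a hallucinated helper (hypothetical sample data).
sample = (
    "1700000000\tfix auth bug\n"
    "3\t1\tsrc/auth.py\n"
    "1700000600\trevert hallucinated helper\n"
    "0\t12\tsrc/utils.py\n"
)
print(summarize_session(sample))  # {'commits': 2, 'churn': 16, 'minutes': 10.0}
```

Even a toy reducer like this hints at the signal you'd look for: a candidate who recovers from the injected break with a few small, targeted commits looks very different from one who thrashes through huge AI-regenerated diffs.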