Ask HN: How to politely give feedback on co-workers' mostly AI-generated PRs

By chfritz, 2 days ago
I struggle to find the right way to give feedback on pull requests (PRs) that consist mostly of AI-generated code. The co-workers submitting them have learned to disclose this -- I found it frustrating when they didn't -- and now say they have reviewed and iterated on it. But the result is often still what I would describe as "a big contribution off the mark": a lot of code that simply follows the wrong approach.

Usually, when someone has done a lot of work -- which we used to be able to measure in lines of code -- it would seem unfair to criticize them after the fact. A good development process, with ticket discussions, would ensure that someone *doesn't* do a lot of work before there is agreement on the general approach. But with AI this script no longer works, partly because it's "too easy" to produce the work before any direction has been agreed on.

So I'm asking myself, and now HN: is it OK to point out that an entire PR is garbage and should simply be discarded? How can I tell how much "brain juice" a co-worker has spent on it, and how attached they might be to it by now, if I don't even know whether they understand the code they submitted?

I have to admit that I *hate* reviewing huge PRs, and the problem with AI-generated code is that it would often have been much better to find and use an existing open-source library for the task rather than (re-)generate a lot of code. But how will I know that until I've actually taken the time to review and understand the big new proposed contribution? And even if I *do* spend the time to understand the code and the approach it implies, how will I know which parts reflect their genuine opinion and intellect (which I'd be hesitant to criticize) and which are AI fluff I can rip apart without stepping on their toes? If the answer is "let's have a meeting", then I'd say the process has failed.

I'm not sure there is a right answer here, but I would love to hear people's take on this.