Nature | Tools such as ChatGPT threaten the transparency of science


Source: Nature

Originally published: 24 January 2023


It has been clear for several years that artificial intelligence (AI) is gaining the ability to generate fluent language, churning out sentences that are increasingly hard to distinguish from text written by people. Some scientists are already using chatbots as research assistants — to help organize their thinking, generate feedback on their work, assist with writing code and summarize research literature.


But the release of the AI chatbot ChatGPT has brought the capabilities of such tools, known as large language models (LLMs), to a mass audience. Its developers, OpenAI, have made the chatbot free to use and easily accessible for people who don’t have technical expertise. Millions are using it, and the result has been an explosion of fun and sometimes frightening writing experiments.


ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. It has produced research abstracts good enough that scientists found it hard to spot that a computer had written them. Worryingly for society, it could also make spam, ransomware and other malicious outputs easier to produce. Although OpenAI has tried to put guard rails on what the chatbot will do, users are already finding ways around them.


The big worry in the research community is that students and scientists could deceitfully pass off LLM-written text as their own and produce work that is unreliable. Several preprints and published articles have already credited ChatGPT with formal authorship.


That’s why it is high time researchers and publishers laid down ground rules about using LLMs ethically. Nature, along with all Springer Nature journals, has formulated the following two principles, which have been added to our existing guide to authors.


First, no LLM tool will be accepted as a credited author on a research paper. That is because any attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.


Second, researchers using LLM tools should document this use in the methods or acknowledgements sections. If a paper does not include these sections, the introduction or another appropriate section can be used to document the use of the LLM.


From its earliest times, science has operated by being open and transparent about methods and evidence. Researchers should ask themselves how the transparency and trustworthiness that the process of generating knowledge relies on can be maintained if they or their colleagues use software that works in a fundamentally opaque manner.


That is why Nature is setting out these principles: ultimately, research must have transparency in methods, and integrity and truth from authors. This is, after all, the foundation that science relies on to advance.
