
It is time AI started to play by the rules

Creating regulations for something so fast-changing is difficult, but that is no reason not to try

Late last year, California almost passed a law that would force makers of large artificial intelligence models to come clean about the potential for causing large-scale harms. It failed. Now, New York is trying out a law of its own. Such proposals have wrinkles, and risk slowing the pace of innovation. But they are still better than doing nothing.

The risks from AI have increased since California’s fumble last September. Chinese developer DeepSeek has shown that powerful models can be made on a shoestring. Engines capable of complex “reasoning” are supplanting those that simply spit out quick-fire answers. And perhaps the biggest shift: AI developers are furiously building “agents”, designed to carry out tasks and engage with other systems, with minimal human supervision.


How to create rules for something so fast-moving? Even deciding what to regulate is a challenge. Law firm BCLP has tracked hundreds of bills on everything from privacy to accidental discrimination. New York’s bill focuses on safety: large developers would have to create plans to reduce the risk that their models produce mass casualties or large financial losses, withhold models that present “unreasonable risk” and notify state authorities within three days of an incident occurring.

Even with the best intentions, laws governing new technologies can end up ageing like milk. But as AI scales up, so do the concerns. A report published on Tuesday by a band of California AI luminaries outlines a few: for example, OpenAI’s o3 model outperforms 94 per cent of expert virologists. Evidence that a model could facilitate the production of chemical or nuclear weapons, it adds, is emerging in real time.

Disseminating dangerous information to bad actors is only one danger. Models’ adherence to users’ objectives is also raising concerns. Already, the California report notes mounting evidence of “alignment scheming”, where models follow orders in the lab, but not in the wild. Even the pope fears AI could pose a threat to “human dignity, justice and labour”.

Many AI boosters disagree, of course. Venture capital firm Andreessen Horowitz, a backer of OpenAI, argues rules should target users, not models. That lacks logic in a world where agents are designed to act with minimal user input.

Nor does Silicon Valley appear willing to meet in the middle. Andreessen has described the New York law as “stupid”. A lobby group it founded has proposed that New York’s law exempt any developer with $50bn or less of AI-specific revenue, Lex has learned. That would spare OpenAI, Meta and Google — in other words, everyone of substance.


Big Tech should reconsider this stance. Guardrails benefit investors too, and there is scant likelihood of meaningful federal rulemaking. As Lehman Brothers or AIG’s former shareholders can attest, backing a company that brings about systemic calamity is no fun.

The path ahead involves much horse-trading; New York governor Kathy Hochul has until the end of 2025 to request amendments to the state’s bill. Some Republicans in Congress have proposed blocking states from regulating AI altogether. And with every week that passes, AI reveals new powers. The regulatory landscape is a mess, but leaving it to chance will create one far bigger and harder to clean up.
