
Sudden breakthroughs in AI could hold the key to digital progress

TikTok’s recommendation algorithm and OpenAI’s language model are exemplars of key tipping points in deep learning

With the steady pace at which the building blocks of computing technology advance, it is easy to be lulled into a belief in the incremental and predictable nature of digital progress. But that doesn’t take account of the disruptive new applications that suddenly become possible along the way.

There have been few fields that make this case as clearly as deep learning, the main technique behind recent advances in AI. This is a technology that has been many years in the making: it was just a case of waiting for computing power to become abundant and cheap enough, and for data to become available in large enough quantities to train the systems. At that point, the algorithms would start to bootstrap themselves.

Two highly visible current examples have shown just how disruptive the results can be when the technology reaches a critical point. The first case, involving data and algorithms, is TikTok. The huge success of the Chinese-owned app at the centre of a political storm in the US can be traced to many things. Among them are its slick automated editing, freeing of “watermarked” videos to travel beyond its own network, and a format that touched a nerve with its target audience.

But the thing that has excited the techies most has been its use of AI to serve up the videos that are most likely to keep its audience hooked. The results of its personalisation technology have been addictive, yielding the heightened engagement that is gold dust to a social media company.
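TikTok’s system is proprietary, but the personalisation step it relies on can be pictured in generic terms: represent each user and each candidate video as a vector, score candidates by similarity, and serve the highest-scoring ones. The sketch below is an illustration of that generic idea only, with hypothetical names and random data, not a description of TikTok’s actual engine.

```python
# Illustrative sketch of embedding-based ranking, the generic building block
# behind many personalisation systems. All names and data are hypothetical;
# TikTok's real recommendation system is proprietary and far more complex.
import numpy as np

def rank_videos(user_embedding: np.ndarray, video_embeddings: np.ndarray, top_k: int = 5):
    """Score each candidate video against the user's taste vector and
    return the indices of the top_k highest-scoring candidates."""
    scores = video_embeddings @ user_embedding   # one predicted-engagement score per video
    return np.argsort(scores)[::-1][:top_k]      # highest scores first

# Hypothetical example: 1,000 candidate videos with 64-dimensional embeddings.
rng = np.random.default_rng(0)
user = rng.normal(size=64)
videos = rng.normal(size=(1000, 64))
print(rank_videos(user, videos))
```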

When deep learning systems first reached the mainstream, there seemed to be a real risk that start-ups would struggle to compete. Big companies with access to masses of data and computing power would be able to train the most effective models, in turn bringing them more users (and data) and ensuring an unassailable lead.

It turns out that a viral app can act as the flywheel. Recommendation systems have been around for years, but TikTok was still able to achieve meaningful lift-off.

Microsoft, with one of the biggest AI research efforts in the world, is now hoping to buy part or all of the upstart, partly to get access to its deep learning insights — though a White House seemingly bent on barring the app from the US could thwart the effort.

The second example of the sudden breakthroughs that have come from steady advances in the building blocks of AI involves hardware, and it also touches on Microsoft. OpenAI — a San Francisco research organisation that received a $1bn investment from the software company last year — recently released a new, large-scale language system, known as GPT-3, to an invited audience.

There is a race on to build ever-larger language models, where massive volumes of text are ingested by systems that use them to try to gain a better understanding of how language works. OpenAI’s own GPT-2 was one of the first to use the technology for automated writing. Google’s version of the technology, called BERT, now works so well that it has been put to work in the company’s search engine, acting invisibly in the background to decipher what searchers mean with their more complex queries.
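The core task these models scale up is next-word prediction over huge text corpora. GPT-3 itself is only reachable through OpenAI’s invitation-only interface, but the earlier, openly released GPT-2 weights give a feel for what automated writing looks like. A minimal sketch, assuming the Hugging Face transformers library is installed:

```python
# A minimal text-generation sketch using the publicly released GPT-2 weights
# (GPT-3 is available only via OpenAI's invitation-only API).
# Assumes the Hugging Face "transformers" library is installed.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with text it judges statistically likely --
# the same next-word prediction objective that larger models scale up.
prompt = "Steady advances in computing hardware have made it possible to"
outputs = generator(prompt, max_length=60, num_return_sequences=1)
print(outputs[0]["generated_text"])
```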

What would happen if you threw even more computing power at the problem? That is the whole idea behind OpenAI’s research programme, and the reason it took the investment from Microsoft, much of it “in kind” in the form of technology. Earlier this year, the software company revealed that it had built what it claimed was the world’s fifth most powerful supercomputer, to be used exclusively by OpenAI.

The result of all this hardware — along with further adaptations to the algorithms — is an automated writing system that can reportedly do a passable impression of a real person on almost any topic. That may sound like a gimmick with few practical applications, other than spewing out reams of realistic-sounding fake news. But it could eventually lead to the automation of many simple text-based tasks where humans are currently required. By mining the sum of human knowledge, it could also make connections and yield insights that humans haven’t thought of.

The thought experiment involving an infinite number of monkeys, hammering away at an infinite number of typewriters, posits that one of them must eventually write the complete works of Shakespeare. Far more interesting, though, could be the many other things the monkeys would come up with along the way, including the oeuvres of writers who never existed.

It would still take human intelligence to “understand” the systems’ mindless output. But as with TikTok’s recommendation engine, the results, if properly channelled, could be significant.
