The Language Model Too Dangerous to Release

Published: March 25, 2019, 1:39 a.m.

OpenAI recently created a cutting-edge new natural language processing model, but unlike all their other projects so far, they have not released it to the public. Why? It seems to be a little too good. It can answer reading comprehension questions, summarize text, translate from one language to another, and generate realistic fake text. This last capability, in particular, raised concerns inside OpenAI that the raw model could be dangerous if bad actors had access to it, so researchers will spend the next six months studying the model (and reading comments from you, if you have strong opinions here) to decide what to do next. Regardless of where this lands from a policy perspective, it's an impressive model, and the snippets of auto-generated text released so far are remarkable. We're covering the methodology, the results, and a bit of the policy implications in our episode this week.