
GPT-2, Artificial Intelligence Text-Generator Is Being Released In Full


As TheNextWeb (TNW) reports, OpenAI, the non-profit organization behind a number of artificial intelligence projects, has just published the final model in its planned staged release of GPT-2, a text generator that has caused quite a debate since its release was announced in February.

Based on OpenAI’s research paper titled Language Models are Unsupervised Multitask Learners, “GPT-2 uses machine learning to generate novel text based on limited input.” What that means is that a user can type in a sentence or two about any subject, and the AI generator will come up with a text that has some relation to the original input. In essence, as TNW notes, “unlike most ‘text generators’ it doesn’t output pre-written strings. GPT-2 makes up text that didn’t previously exist.”
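As a rough illustration (not OpenAI's code), an autoregressive language model like GPT-2 extends a prompt by repeatedly sampling the next word from a probability distribution conditioned on what came before. A minimal toy sketch, using a hypothetical hand-made bigram table in place of GPT-2's learned weights:

```python
import random

# Toy bigram "language model": maps a word to possible next words with
# weights. This table is invented for illustration; GPT-2 does the same
# thing at vastly larger scale, with learned probabilities over a
# vocabulary of subword tokens.
BIGRAMS = {
    "the":   (["cat", "dog", "model"], [0.4, 0.4, 0.2]),
    "cat":   (["sat", "ran"],          [0.6, 0.4]),
    "dog":   (["ran", "sat"],          [0.7, 0.3]),
    "model": (["ran"],                 [1.0]),
    "sat":   (["down"],                [1.0]),
    "ran":   (["away"],                [1.0]),
}

def generate(prompt: str, max_words: int = 5, seed: int = 0) -> str:
    """Extend the prompt by sampling one word at a time."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        last = words[-1]
        if last not in BIGRAMS:
            break  # no known continuation for this word
        choices, weights = BIGRAMS[last]
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Because each step samples rather than looks up a fixed answer, different seeds produce different continuations of the same prompt — which is why, as the article notes, the output is novel text rather than a pre-written string.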

In a tweet, Scott B. Weingart, program director of Carnegie Mellon University Libraries, gave a concrete example.


OpenAI was initially concerned about possible malicious uses of its system, so back in February 2019 it decided to release GPT-2 in four stages over eight months. As the organization explained in its blog, “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.”

As explained, the full model contains 1.5 billion parameters. “The more parameters a model is trained with, the ‘smarter’ it appears to be – just like humans, practice makes perfect.”

TNW notes that OpenAI initially released a model with 124 million parameters, subsequently followed by releases with 355 million and 774 million. According to them, after testing the released models, “each iteration showed a significant improvement in capability over previous iterations.”
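For a rough sense of what those parameter counts mean in practice, some back-of-the-envelope arithmetic (our own, not from the article): assuming 32-bit floats, each parameter takes 4 bytes, so the weights alone for each release stage occupy on the order of:

```python
# Approximate weight storage for each GPT-2 release stage,
# assuming 4 bytes (one 32-bit float) per parameter.
BYTES_PER_PARAM = 4

stages = {
    "small (124M)": 124_000_000,
    "medium (355M)": 355_000_000,
    "large (774M)": 774_000_000,
    "full (1.5B)": 1_500_000_000,
}

for name, params in stages.items():
    gigabytes = params * BYTES_PER_PARAM / 1e9
    print(f"{name}: ~{gigabytes:.1f} GB of weights")
```

By this estimate the full 1.5-billion-parameter model needs roughly 6 GB just to store its weights, about twelve times the initial 124M release — one concrete way the staged rollout scaled up.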

Alongside the release, OpenAI published GPT-2 detection models intended “to preemptively combat misuse.” By their own admission in a blog post, these detection models still need additional work to reach the quality level achieved so far in GPT-2 itself.

Those interested can download the GPT-2 model on GitHub, check out the accompanying model card, and read OpenAI‘s blog post.

Former diplomat and translator for the UN, currently freelance journalist/writer/researcher, focusing on modern technology, artificial intelligence, and modern culture.