Facebook Creates Natural Sounding Open-Domain Chatbot Trained On 1.5bn Reddit Posts
Facebook has recently launched an open-domain chatbot dubbed “Blender”. As the BBC reported, Blender is Facebook’s largest-ever open-domain chatbot, and according to the company it can display empathy, personality, and knowledge, outperforming other chatbots and feeling more human in conversation. Blender is available in three variants of increasing size and sophistication: a 90 million parameter neural model, a 2.7 billion parameter model, and a 9.4 billion parameter model. The 9.4 billion parameter model is approximately 3.6 times larger than the largest previously existing chatbot.
Facebook claims that Blender is the first widely available chatbot to blend a complex, diverse set of conversational skills such as knowledge and empathy. The chatbot can discuss a variety of topics and sustain a conversation across up to 14 conversational “turns”. Reportedly, approximately 49% of people who interacted with Blender said they would prefer talking to the bot over talking to another human.
Blender was trained on public-domain conversations collected from social media sites like Reddit, a dataset containing approximately 1.5 billion conversation examples. While this training data and the sophistication of the model enabled the chatbot to carry out complex discussions with a more natural, human-like feel, the approach also introduced some problems. According to the researchers, Blender would sometimes respond to conversational prompts aggressively or with offensive language. Additionally, the chatbot would occasionally respond with falsehoods and made-up facts. The research team is optimistic that revisions to the model will address these problems, but they chose to release the models anyway.
According to a Facebook spokesperson, as quoted by the BBC, “releasing models is essential to enable full, reliable insights into their capabilities.”
The Facebook researchers will still need to overcome two major challenges to be successful, according to Dave Choplin, chief executive of the artificial intelligence consultancy The Envisioners. As quoted by the BBC, Choplin believes the researchers will struggle to replicate the many nuances of conversation that humans take for granted, and that the data used to train the model may limit it in ways that Facebook’s researchers will find difficult to overcome.
“As great a platform as Reddit is, training algorithms based on the conversations you find there is going to get you a lot of chaff amongst the wheat,” said Choplin.
Facebook compared its Blender chatbot with Google’s contemporary chatbot, Meena. According to comparisons displayed in Facebook’s blog post, approximately 75% of respondents who had conversed with both bots said they would rather have a prolonged discussion with Blender than with Meena. Additionally, approximately 67% of respondents stated that Blender sounded more natural and human-like than Meena in long conversations.
In the future, the Facebook research team wants to improve the models’ conversational ability, designing architectures and new loss functions that can support even longer conversations. The researchers also aim to address gender bias and potentially harmful language by building better classifiers and assembling more representative training data.