Google says the conversational neural network, which packs 2.6 billion parameters, can chat with people more naturally than any other chatbot out there. The team trained the model end to end on 40 billion words (341 GB of text, including social media conversations) as a seq2seq (sequence-to-sequence) model built on the Evolved Transformer, a variant of Google's Transformer: a neural network that compares the words in a passage to each other to work out the relationships between them.
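For readers curious what "comparing words to each other" looks like in practice, here is a minimal sketch of the scaled dot-product attention at the heart of Transformer-style models. It is a generic illustration, not Meena's actual implementation (which uses the Evolved Transformer and isn't public), and the toy token embeddings are made up.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Generic Transformer-style attention: every token's query is scored
    against every other token's key, and the scores decide how to mix the
    value vectors into a context-aware representation of each token."""
    d_k = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d_k)          # pairwise word-to-word comparison
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ values

# Hypothetical example: 4 tokens, each represented by an 8-dimensional embedding.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # (4, 8)
```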

Meena has a single Evolved Transformer encoder block and 13 Evolved Transformer decoder blocks. The encoder block helps it understand the context of the conversation, while the decoder blocks help it formulate a response. Google claims Meena has 1.7x the model capacity of OpenAI's GPT-2 and was trained on 8.5x more data.

The team of researchers also devised a new metric called Sensibleness and Specificity Average (SSA) to measure how sensible and specific a response is (a rough worked example appears at the end of this article).

This is not the first time Google has experimented with chatbots. In 2015, it released a paper on a model that helped with tech support, and since then the company has developed a ton of language models to better understand the context of a conversation. There have also been other conversational apps, such as Replika, which claims to be like a friend who's always ready to talk. In my personal experience, the app can be a bit hit-and-miss.

Unfortunately, Google isn't releasing the bot to the open-source community for now. That, however, might change in the future. You can read about Meena in detail in this paper.
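To make the SSA metric more concrete, here is a back-of-the-envelope sketch of how it is computed, assuming the straightforward averaging of sensibleness and specificity the researchers describe. The rater labels below are invented purely for illustration.

```python
# Hypothetical rater labels: each chatbot response is judged on whether it
# makes sense in context ("sensible") and whether it is detailed rather than
# vague ("specific"). A response that isn't sensible can't be specific.
labels = [
    {"sensible": True,  "specific": True},   # on-topic and detailed
    {"sensible": True,  "specific": False},  # safe but vague, e.g. "I don't know"
    {"sensible": False, "specific": False},  # nonsensical reply
    {"sensible": True,  "specific": True},
]

sensibleness = sum(r["sensible"] for r in labels) / len(labels)
specificity = sum(r["specific"] for r in labels) / len(labels)
ssa = (sensibleness + specificity) / 2   # SSA is simply the average of the two rates
print(f"Sensibleness {sensibleness:.1%}, Specificity {specificity:.1%}, SSA {ssa:.1%}")
```

Averaging the two rates rewards a bot that is both coherent and substantive, rather than one that plays it safe with vague but inoffensive answers.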
