Now that nearly every mobile device and appliance has adopted, or at least experimented with, voice control, conversational AI is quickly becoming the new frontier. Instead of handling a single query and providing a single response or action, conversational AI aims to provide a real-time interactive system that can span multiple questions, answers, and comments. While the fundamental building blocks of conversational AI, like BERT and RoBERTa for language modeling, are similar to those for one-shot speech recognition, the concept comes with additional performance requirements for training, inference, and model size. Today, Nvidia released and open-sourced three technologies designed to address those issues.
Faster Training of BERT
While in many cases it’s possible to use a pre-trained language model for new tasks with just some fine-tuning, for optimal performance in a particular context re-training is a necessity. Nvidia has demonstrated that it can now train BERT (Google’s reference language model) in under an hour on a DGX SuperPOD consisting of 1,472 Tesla V100-SXM3-32GB GPUs, 92 DGX-2H servers, and 10 Mellanox InfiniBand adapters per node. No, I don’t even want to try to estimate the per-hour rental for one of those. But since models like this have typically taken days to train even on high-end GPU clusters, this will definitely shorten time to market for companies that can afford the cost.
Faster Language Model Inferencing
For natural conversations, the industry benchmark is a 10ms response time. Understanding the query and coming up with a suggested reply is only one part of the process, so that step needs to take well under 10ms. By optimizing BERT with TensorRT 5.1, Nvidia has inference down to 2.2ms on an Nvidia T4. What’s cool is that a T4 is actually within reach of just about any serious project. I used them in the Google Compute Cloud for my text generation system; a 4-vCPU virtual server with a T4 rented for just over $1/hour when I did the project.
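To make the budget concrete, here is a minimal sketch of how you might check whether an inference step fits inside that 10ms window. The `fake_model` function is a hypothetical stand-in for an optimized BERT call; only the timing logic is the point.

```python
import time

LATENCY_BUDGET_MS = 10.0  # conversational response-time benchmark

def fake_model(query: str) -> str:
    """Hypothetical stand-in for an optimized BERT inference call."""
    # A real deployment would invoke an optimized engine (e.g., TensorRT) here.
    return query.upper()

def p95_latency_ms(fn, query: str, runs: int = 200) -> float:
    """Time repeated calls and report the 95th-percentile latency in ms."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(query)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return samples[int(len(samples) * 0.95)]

p95 = p95_latency_ms(fake_model, "what's the weather tomorrow?")
print(f"p95 latency: {p95:.3f}ms, within budget: {p95 < LATENCY_BUDGET_MS}")
```

Measuring a high percentile rather than the mean matters here: a conversation feels broken on the slow outliers, not the average turn.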
Support for Even Larger Models
One of the Achilles’ heels of neural networks is the requirement that all of a model’s parameters (primarily its weights) be in memory at once. That limits the complexity of the model that can be trained on a GPU to the size of the GPU’s RAM. In my case, for example, my desktop Nvidia GTX 1080 can only train models that fit in its 8GB. I can train larger models on my CPU, which has access to more RAM, but it takes much longer. The full GPT-2 language model has 1.5 billion parameters, for example, and an extended version has 8.3 billion.
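A quick back-of-the-envelope calculation shows why those parameter counts are a problem. Assuming 32-bit floats (4 bytes per parameter), and ignoring the extra memory that training needs for gradients, optimizer state, and activations:

```python
# Rough GPU memory needed just to hold a model's weights in float32.
BYTES_PER_PARAM = 4  # assumes 32-bit floats

def weight_memory_gb(num_params: int) -> float:
    """Gigabytes required to store num_params float32 weights."""
    return num_params * BYTES_PER_PARAM / 1024**3

for name, params in [("GPT-2 (full)", 1_500_000_000),
                     ("Extended model", 8_300_000_000)]:
    print(f"{name}: {weight_memory_gb(params):.1f} GB for weights alone")
```

The full GPT-2 needs roughly 5.6 GB just for its weights, already crowding my 1080’s 8GB before training overhead, and the 8.3-billion-parameter model needs over 30 GB, beyond even a 32GB V100.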
Nvidia, though, has come up with a way for multiple GPUs to work on the language modeling task in parallel. As with the other announcements today, it has open-sourced the code that makes this happen. I’ll be really curious to see whether the technique is specific to language models or can be applied to enable multi-GPU training of other classes of neural networks.
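The general idea behind this kind of model parallelism can be sketched in a few lines. Below, a weight matrix is split column-wise between two simulated "devices"; each computes its slice of the output, and concatenating the slices reproduces the full layer's result. This is purely illustrative of the splitting concept, not Nvidia's released code, which handles the real multi-GPU communication.

```python
# Toy sketch of tensor (model) parallelism: shard one layer's weight
# matrix across devices so no single device must hold the whole model.

def matmul(x, w):
    """Multiply row vector x (list) by matrix w (list of rows)."""
    cols = len(w[0])
    return [sum(x[i] * w[i][j] for i in range(len(x))) for j in range(cols)]

def split_columns(w, parts):
    """Split matrix w column-wise into `parts` shards, one per device."""
    cols = len(w[0])
    step = cols // parts
    return [[row[p * step:(p + 1) * step] for row in w] for p in range(parts)]

x = [1.0, 2.0, 3.0]            # input activation
w = [[1, 2, 3, 4],             # full 3x4 weight matrix
     [5, 6, 7, 8],
     [9, 10, 11, 12]]

shards = split_columns(w, 2)                     # each "GPU" holds half the columns
partials = [matmul(x, shard) for shard in shards]
y_parallel = partials[0] + partials[1]           # gather: concatenate the slices
assert y_parallel == matmul(x, w)                # matches the single-device result
print(y_parallel)                                # → [38.0, 44.0, 50.0, 56.0]
```

Because each shard's computation is independent until the final gather, the per-device memory and compute both shrink as you add devices, which is exactly what makes an 8.3-billion-parameter model trainable.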
Along with these developments, and the release of the code on GitHub, Nvidia announced that it will be partnering with Microsoft to improve Bing search results, as well as with Clinc on voice agents, Passage AI on chatbots, and RecordSure on conversational analytics.