Julien Salinas, Founder & CTO of NLP Cloud – Interview Series
Julien Salinas is the Founder & CTO of NLP Cloud. The NLP Cloud platform serves high-performance, production-ready NLP models based on spaCy and Hugging Face Transformers, for multiple use cases including NER, sentiment analysis, text classification, summarization, question answering, text generation, translation, language detection, grammar and spelling correction, intent classification, and semantic similarity.
What initially got you interested in computer science?
I started programming in… business school! I know it sounds surprising. I quickly realized that business itself was boring and that I would be limited if I didn't have the technical skills to carry out my projects.
The first project at the time was a small website for my music teacher, then another one for my family, then I started learning Python… and so on. Now I've been a Python/Go developer and DevOps engineer for 15 years.
Could you share the genesis story behind NLP Cloud?
It started 2 years ago when I realized that, as a developer, it was kind of hard to properly deploy machine learning models into production.
I was amazed by the progress made by frameworks like Hugging Face Transformers and spaCy, and I was able to leverage very advanced NLP models in my projects. But using these models in production was another beast and, surprisingly, I couldn't find any compelling no-ops cloud platform for NLP on the market.
So, I decided to start my own platform for deploying NLP models. Very quickly we got great customer feedback, and we added many features based on it (pre-trained models, fine-tuning, a playground…).
The NLP Cloud platform supports the GPT-3 open-source alternative GPT-J. What is GPT-J specifically?
GPT-J was released by a collective of researchers called EleutherAI in June 2021. They believe that GPT-3 should be an open-source model, like its predecessors (GPT and GPT-2). They argue that, even though we should all be concerned about potential misuse of powerful AI models like GPT, that is not a good reason to keep these models closed-source. Quite the opposite: they believe that keeping AI models open-source is the best way for the community to understand how these models work under the hood, and then to make sure they don't behave in harmful ways (misogyny, racism, …).
GPT-J is a direct equivalent of GPT-3 Curie, as both have roughly 6 billion parameters.
The two can almost be used interchangeably.
Why is GPT-J a superior alternative to GPT-3?
GPT-3 is exclusively licensed to Microsoft, and the only way for most people to use it is through the official GPT-3 API.
But this API is very expensive and extremely restrictive: you need to request access and, even if your application is accepted, your access can be shut down at any time if they consider that your business model doesn't comply with their guidelines. For example, you can't generate “open-ended” text (long text made up of several paragraphs), as it's against their policy.
There are no such restrictions with GPT-J: it's open-source, and anyone can install and use it.
What were some of the technical challenges with integrating GPT-J on NLP Cloud?
GPT-J is complex to install because of its high resource consumption (RAM, CPU, GPU…). It works without a GPU, but it's so slow that it's impractical.
In the end, the hardware needed to run GPT-J is very expensive, so, in order to lower costs, we had to work on many implementation details.
Also, to ensure high availability of GPT-J on NLP Cloud and make it suited for production, we had to work on redundancy and failover strategies, which can be quite challenging.
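As a rough illustration of what running GPT-J involves (this is a generic Hugging Face Transformers sketch, not NLP Cloud's actual deployment), the checkpoint can be loaded with the `EleutherAI/gpt-j-6B` model id; casting to float16 is one common way to roughly halve the memory footprint, and the prompt below is purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-J 6B: the fp32 checkpoint weighs ~24 GB on disk, so a GPU with
# enough memory (or a half-precision cast) is needed for practical use.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=dtype
).to(device)

prompt = "GPT-J is an open-source language model that"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Sample up to 40 new tokens after the prompt.
output_ids = model.generate(
    inputs.input_ids, max_new_tokens=40, do_sample=True, top_p=0.9
)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

On CPU this runs, but generation latency is high enough to be impractical for production, which is exactly the constraint described above.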
Could you discuss some of the pre-trained AI models that are offered?
We are doing our best to select the best pre-trained AI model for each use case.
For text summarization, the best one – in our opinion – is Facebook's Bart Large CNN, which gives very good results but can be quite slow without a GPU.
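For reference, this model is available on the Hugging Face Hub as `facebook/bart-large-cnn` and can be tried locally with the `summarization` pipeline; the article text below is just a sample, not from NLP Cloud:

```python
from transformers import pipeline

# Summarization with Facebook's BART Large CNN checkpoint.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and the tallest structure in Paris. Its base is square, "
    "measuring 125 metres on each side. During its construction, the "
    "Eiffel Tower surpassed the Washington Monument to become the tallest "
    "man-made structure in the world."
)

# do_sample=False makes the output deterministic (greedy/beam decoding).
result = summarizer(article, max_length=60, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```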
For text classification, we implemented Facebook's Bart Large MNLI (for English classification) and Joe Davison's XLM Roberta Large XNLI (for non-English languages). Both are fast and very accurate.
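Both models are used for zero-shot classification: you supply candidate labels at inference time instead of training a classifier. A minimal sketch with the English model (`facebook/bart-large-mnli` on the Hugging Face Hub; the sentence and labels below are illustrative – for non-English text you would swap in `joeddav/xlm-roberta-large-xnli`):

```python
from transformers import pipeline

# Zero-shot classification: the NLI model scores each candidate label
# as a hypothesis against the input sentence.
classifier = pipeline(
    "zero-shot-classification", model="facebook/bart-large-mnli"
)

result = classifier(
    "The new GPU drastically cuts model inference time.",
    candidate_labels=["hardware", "sports", "cooking"],
)

# Labels come back sorted by score, highest first.
print(result["labels"][0])
```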
For question answering, we use Deepset's Roberta Base Squad 2. It is fast and accurate, but for more advanced question answering you might want to use GPT-J.
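This is extractive question answering: the model pulls the answer span out of a context you provide. A small sketch with the Hub checkpoint `deepset/roberta-base-squad2` (question and context below are made up for illustration):

```python
from transformers import pipeline

# Extractive QA: the model locates the answer span inside the context.
qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

result = qa(
    question="Who founded NLP Cloud?",
    context="NLP Cloud was founded by Julien Salinas, who serves as its CTO.",
)

# The result includes the answer text, a confidence score, and the
# character offsets of the span within the context.
print(result["answer"])
```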
And many others!
What are some of the best use cases for NLP Cloud?
The most popular use cases seem to be text summarization, text classification, and text generation with GPT-J for product description generation, paraphrasing, article generation…
But the use cases we can see among our customers are extremely diverse, and it's quite impressive to witness so many great ideas coming up!
Is there anything else that you would like to share about NLP Cloud?
It seems to us that AI for text understanding and text generation is finally being used “for real” in actual products and internal workflows, by more and more companies.
It is great to see that NLP is no longer purely a research field, and that there are real business use cases that can leverage it.
At NLP Cloud we'll keep doing our best to make it easy for anyone to test and use NLP in production.
Thank you for the great interview. Readers who wish to learn more should visit NLP Cloud.