GluonNLP: NLP made easy

Get Started: A Quick Example

Here is a quick example that downloads a pre-trained word embedding model and then computes the cosine similarity between two words.

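A minimal sketch of such an example, assuming the pre-trained GloVe vectors glove.6B.50d (any other source supported by nlp.embedding.create works the same way):

import mxnet as mx
import gluonnlp as nlp

# Download and create pre-trained GloVe word embeddings
# (50-dimensional vectors trained on 6B tokens are assumed here).
glove = nlp.embedding.create('glove', source='glove.6B.50d')

# Cosine similarity between two word vectors.
def cos_sim(x, y):
    return mx.nd.dot(x, y) / (x.norm() * y.norm())

print(cos_sim(glove['baby'], glove['infant']))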

Model Zoo

Word Embedding (model_zoo/word_embeddings/index.html)

Mapping words to vectors.

Language Modeling (model_zoo/language_model/index.html)

Learning the distribution and representation of sequences of words.

Machine Translation (model_zoo/machine_translation/index.html)

From “Hello” to “Bonjour”.

Text Classification (model_zoo/text_classification/index.html)

Categorize texts and documents.

Sentiment Analysis (model_zoo/sentiment_analysis/index.html)

Classifying polarity of emotions and opinions.

Parsing (model_zoo/parsing/index.html)

Dependency parsing.

Natural Language Inference (model_zoo/natural_language_inference/index.html)

Determine if the premise semantically entails the hypothesis.

Text Generation (model_zoo/text_generation/index.html)

Generating language from models.

BERT (model_zoo/bert/index.html)

Transfer pre-trained language representations to language understanding tasks.

And more in tutorials.
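Many of these pre-trained models can be obtained directly in Python through the model zoo API. As a minimal sketch, assuming the standard_lstm_lm_200 language model pre-trained on WikiText-2 is available through nlp.model.get_model:

import gluonnlp as nlp

# Load a pre-trained LSTM language model and its matching vocabulary;
# the weights are downloaded on first use.
model, vocab = nlp.model.get_model('standard_lstm_lm_200',
                                   dataset_name='wikitext-2',
                                   pretrained=True)
print(model)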

Installation

GluonNLP relies on a recent version of MXNet. The easiest way to install MXNet is through pip. The following command installs the latest version of MXNet.

pip install --upgrade 'mxnet>=1.3.0'

Note

There are other pre-built MXNet packages that enable GPU support and accelerate CPU performance; please refer to this tutorial for details. Some training scripts are recommended to run on GPUs; if you don't have a GPU machine at hand, you may consider running on AWS.
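For example, a CUDA-enabled build can be installed with a command like the following (the package name, assumed here to be mxnet-cu92 for CUDA 9.2, depends on your CUDA version; check the MXNet installation guide for the exact name):

pip install --upgrade 'mxnet-cu92>=1.3.0'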

After installing MXNet, you can install the GluonNLP toolkit by running

pip install gluonnlp
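To check that both packages are importable and to see which versions were installed, you can run:

python -c "import mxnet, gluonnlp; print(mxnet.__version__, gluonnlp.__version__)"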

About GluonNLP

Hint

You can find the documentation for our master development branch here.

GluonNLP provides implementations of state-of-the-art (SOTA) deep learning models in NLP, as well as building blocks for text data pipelines and models. It is designed for engineers, researchers, and students to quickly prototype research ideas and products based on these models. This toolkit offers five main features:

  1. Training scripts to reproduce SOTA results reported in research papers.
  2. Pre-trained models for common NLP tasks.
  3. Carefully designed APIs that greatly reduce the implementation complexity.
  4. Tutorials to help get started on new NLP tasks.
  5. Community support.

This toolkit assumes that users have basic knowledge of deep learning and NLP. Otherwise, please refer to an introductory course such as Deep Learning—The Straight Dope or Stanford CS224n.