Flair Tutorial on Document Classification

(C) 2019 by Damir Cavar

Version: 0.2, September 2019

Download: This and various other Jupyter notebooks are available from my GitHub repo.

This material was used in my Advanced Topics in AI class, an introduction to Deep Learning environments, taught in Spring 2019 at Indiana University.

Corpus Preparation

We will use a script to split a corpus, stored in the FastText format, into three parts:

  • training set
  • development set
  • test set

We will use the development set to detect over-fitting during training.
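
Each line of these files starts with one or more __label__&lt;class&gt; prefixes, followed by the text of the document. For example, two lines might look like this (the labels here are hypothetical):

  __label__POSITIVE This product exceeded my expectations .
  __label__NEGATIVE The item broke after two days .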

In [ ]:
from flair.data_fetcher import NLPTaskDataFetcher
from flair.data import TaggedCorpus
from pathlib import Path

Set the path to the corpus files:

In [ ]:
data_folder = Path('./data')

Load the corpus files:

In [ ]:
corpus: TaggedCorpus = NLPTaskDataFetcher.load_classification_corpus(data_folder,
                                                                     test_file='test.txt',
                                                                     dev_file='dev.txt',
                                                                     train_file='train.txt')

Print out some stats for the corpus:

In [ ]:
stats = corpus.obtain_statistics()
print(stats)

Load the modules for training a network:

In [ ]:
from flair.data_fetcher import NLPTask
from flair.embeddings import WordEmbeddings, FlairEmbeddings, DocumentRNNEmbeddings
from flair.models import TextClassifier
from flair.trainers import ModelTrainer

Create a label dictionary:

In [ ]:
label_dict = corpus.make_label_dictionary()

Load the different word embeddings:

In [ ]:
word_embeddings = [WordEmbeddings('glove'),
                   FlairEmbeddings('news-forward'),
                   FlairEmbeddings('news-backward'),
                   ]

The three embedding models will be concatenated and should give state-of-the-art results. If this setup is too slow on your computer, try it first without the FlairEmbeddings.
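
For illustration (this cell is not part of the classification pipeline itself), Flair combines a list of token embeddings through its StackedEmbeddings class; embedding a sentence with the word_embeddings list defined above shows the concatenated word-level vectors:

In [ ]:
from flair.data import Sentence
from flair.embeddings import StackedEmbeddings

# stack the three models; each token vector is the concatenation of its
# GloVe and Flair embeddings
stacked_embeddings = StackedEmbeddings(embeddings=word_embeddings)

example = Sentence('The grass is green .')
stacked_embeddings.embed(example)

# every token now carries one concatenated vector
for token in example.tokens:
    print(token.text, token.embedding.size())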

Document embeddings generate one embedding for an entire text. The embeddings produced are PyTorch vectors. There are two different methods for deriving a document embedding from the word embeddings:

  • Pooling Operation
  • RNN

Pooling Operation

This approach applies a pooling operation over all word embeddings in a document. The default operation is mean, which averages the embeddings of all words in the sentence. The resulting vector is used as the document embedding.

To create a mean document embedding, simply create any number of TokenEmbeddings first and put them in a list. Afterwards, instantiate the DocumentPoolEmbeddings with this list of TokenEmbeddings. If you want to create a document embedding using GloVe embeddings together with FlairEmbeddings, use the following code:

In [ ]:
from flair.embeddings import WordEmbeddings, FlairEmbeddings, DocumentPoolEmbeddings, Sentence

glove_embedding = WordEmbeddings('glove')
flair_embedding_forward = FlairEmbeddings('news-forward')
flair_embedding_backward = FlairEmbeddings('news-backward')

document_embeddings = DocumentPoolEmbeddings([glove_embedding,
                                              flair_embedding_backward,
                                              flair_embedding_forward])

Now, create an example sentence and call the embedding's embed() method.

In [ ]:
sentence = Sentence('The grass is green . And the sky is blue .')

document_embeddings.embed(sentence)

print(sentence.get_embedding())

Since the document embedding is derived from word embeddings, its dimensionality depends on the dimensionality of the word embeddings you are using.
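
You can verify this by inspecting the size of the returned vector; for the GloVe plus Flair stack above, it is the sum of the individual word embedding dimensions:

In [ ]:
# the pooled document vector is as long as the concatenated word
# embeddings of the stacked models loaded above
print(sentence.get_embedding().size())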

Besides the mean pooling operation, you can also use min or max pooling. Simply pass the pooling operation you want to the DocumentPoolEmbeddings constructor:

In [ ]:
document_embeddings = DocumentPoolEmbeddings([glove_embedding,
                                              flair_embedding_backward,
                                              flair_embedding_forward],
                                             mode='min')

Use an RNN to obtain Embeddings

The RNN takes the word embeddings of every token in the document as input and provides its last output state as the document embedding. You can choose which type of RNN you wish to use.

Create a document embeddings RNN:

In [ ]:
from flair.embeddings import WordEmbeddings, DocumentRNNEmbeddings

glove_embedding = WordEmbeddings('glove')

document_embeddings = DocumentRNNEmbeddings([glove_embedding])

By default, a GRU-type RNN is instantiated. Now, create an example sentence and call the embedding's embed() method.

See Cho et al. (2014) for the GRU (Gated Recurrent Unit). It aims to solve the vanishing gradient problem that affects standard recurrent neural networks (RNNs). The GRU can also be considered a variation on the LSTM, because both are designed similarly and, in some cases, produce comparably good results.

In [ ]:
sentence = Sentence('The grass is green . And the sky is blue .')

document_embeddings.embed(sentence)

print(sentence.get_embedding())

This will output a single embedding for the complete sentence. The embedding dimensionality depends on the number of hidden states you are using and whether the RNN is bidirectional or not.
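
For example, here is a sketch (with hypothetical settings, not part of the tutorial pipeline) of a bidirectional GRU, which processes the sentence in both directions and therefore yields a larger document vector than the unidirectional default:

In [ ]:
# hypothetical illustration: a bidirectional GRU with 256 hidden states;
# the resulting document embedding is larger than in the unidirectional case
document_bi_embeddings = DocumentRNNEmbeddings([glove_embedding],
                                               hidden_size=256,
                                               bidirectional=True)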

If you want to use a different type of RNN, you need to set the rnn_type parameter in the constructor. So, to initialize a document RNN embedding with an LSTM, do:

In [ ]:
from flair.embeddings import WordEmbeddings, DocumentRNNEmbeddings

glove_embedding = WordEmbeddings('glove')

document_lstm_embeddings = DocumentRNNEmbeddings([glove_embedding], rnn_type='LSTM')

Note that while DocumentPoolEmbeddings are immediately meaningful, DocumentRNNEmbeddings need to be tuned on the downstream task. This happens automatically in Flair if you train a new model with these embeddings. Once the model is trained, you can access the tuned DocumentRNNEmbeddings object directly from the classifier object and use it to embed sentences.

The model takes word embeddings, puts them into an RNN to obtain a text representation, and finally passes the text representation through a linear layer to get the actual class label. The model can handle single-label and multi-label data sets.

In [ ]:
from flair.models import TextClassifier
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict, multi_label=False)
document_embeddings = classifier.document_embeddings

sentence = Sentence('The grass is green . And the sky is blue .')

document_embeddings.embed(sentence)

print(sentence.get_embedding())

DocumentRNNEmbeddings have a number of hyper-parameters that can be tuned to improve learning:

  • hidden_size: the number of hidden states in the RNN.
  • rnn_layers: the number of layers for the RNN.
  • reproject_words: boolean value, indicating whether to reproject the token embeddings in a separate linear layer before putting them into the RNN or not.
  • reproject_words_dimension: output dimension of reprojecting the token embeddings. If None, the same output dimension as before will be taken.
  • bidirectional: boolean value, indicating whether to use a bidirectional RNN or not.
  • dropout: the dropout value to be used.
  • word_dropout: the word dropout value to be used; if 0.0, word dropout is not used.
  • locked_dropout: the locked dropout value to be used; if 0.0, locked dropout is not used.
  • rnn_type: one of 'RNN', 'LSTM', 'RNN_TANH', or 'RNN_RELU'.

In our current example of Amazon reviews, we will use the following settings:

In [ ]:
document_embeddings: DocumentRNNEmbeddings = DocumentRNNEmbeddings(word_embeddings,
                                                                   hidden_size=512,
                                                                   reproject_words=True,
                                                                   reproject_words_dimension=256,
                                                                   )

...

Create a classifier using the document_embeddings:

In [ ]:
classifier = TextClassifier(document_embeddings, label_dictionary=label_dict, multi_label=False)

Create a trainer from the classifier and the corpus:

In [ ]:
trainer = ModelTrainer(classifier, corpus)

Train the model:

In [ ]:
trainer.train('resources/taggers/ag_news',
              learning_rate=0.1,
              mini_batch_size=32,
              anneal_factor=0.5,
              patience=5,
              max_epochs=150)
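
After training finishes, the trainer writes final-model.pt (and best-model.pt) to the output folder. Here is a minimal sketch, assuming those default file names, for loading the trained classifier and labeling a new sentence:

In [ ]:
from flair.data import Sentence
from flair.models import TextClassifier

# load the trained model (newer Flair releases use TextClassifier.load instead)
classifier = TextClassifier.load_from_file('resources/taggers/ag_news/final-model.pt')

sentence = Sentence('This product works great and arrived on time .')
classifier.predict(sentence)

# the predicted label(s) are attached to the sentence
print(sentence.labels)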

Visualize the training curve:

In [ ]:
from flair.visual.training_curves import Plotter
plotter = Plotter()
plotter.plot_training_curves('resources/taggers/ag_news/loss.tsv')
plotter.plot_weights('resources/taggers/ag_news/weights.txt')

(C) 2019 by Damir Cavar