Universal Language Model Fine-Tuning (ULMFiT)

ULMFiT is a method for applying pre-trained language models to different Natural Language Processing tasks; in effect, it brings the concept of transfer learning to NLP. A language model is first pre-trained on a large general-domain corpus and then fine-tuned for the target task using several dedicated techniques.
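One of the fine-tuning techniques is a slanted triangular learning rate schedule: the rate rises linearly for a short fraction of training, then decays linearly. Below is a minimal pure-Python sketch of that schedule using the default hyperparameters reported in the ULMFiT paper (`cut_frac=0.1`, `ratio=32`); the function name is illustrative, not part of any library API.

```python
import math

def slanted_triangular_lr(t, total_steps, cut_frac=0.1, ratio=32, lr_max=0.01):
    """Learning rate at iteration t of total_steps.

    The rate increases linearly until step cut = floor(total_steps * cut_frac),
    then decreases linearly; ratio bounds how much smaller the lowest rate
    is compared to lr_max.
    """
    cut = math.floor(total_steps * cut_frac)
    if t < cut:
        p = t / cut                                   # linear warm-up phase
    else:
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))  # linear decay phase
    return lr_max * (1 + p * (ratio - 1)) / ratio
```

With these defaults the rate starts at `lr_max / ratio`, peaks at `lr_max` one tenth of the way through training, and decays back to `lr_max / ratio` by the final step.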

Comparing Universal Language Model Fine-Tuning (ULMFiT) with traditional approaches

  • It uses a single architecture and training procedure across tasks.
  • It requires no additional in-domain documents or labels beyond the target-task data.
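A technique that makes this single training procedure work well is discriminative fine-tuning: each layer of the network gets its own learning rate, decaying by a constant factor (2.6 in the ULMFiT paper) from the top layer down, since lower layers capture more general features and should change less. A minimal sketch; the function name and layer count are illustrative:

```python
def discriminative_lrs(lr_top, n_layers, factor=2.6):
    """Per-layer learning rates, largest at the top (output) layer.

    Each layer below the top is trained with a rate `factor` times
    smaller than the layer above it.
    """
    return [lr_top / factor ** (n_layers - 1 - i) for i in range(n_layers)]
```

For example, with a top-layer rate of 0.01 and three layer groups, the lower layers receive roughly 0.0038 and 0.0015, so the pre-trained general features are disturbed the least.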

Uses of Universal Language Model Fine-Tuning (ULMFiT)

  • In projects where tasks vary in document size, number of documents, and label type.
  • In tasks that should work without custom pre-processing or feature engineering.
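When fine-tuning the classifier, ULMFiT also uses gradual unfreezing: rather than training all layers at once, layer groups are unfrozen one at a time starting from the last, with a round of fine-tuning after each step, which reduces the risk of catastrophic forgetting. A simplified sketch of the unfreezing order; the layer-group names below are illustrative:

```python
def gradual_unfreezing(layer_groups):
    """Return the sequence of trainable layer sets, stage by stage.

    At each stage the topmost still-frozen group is unfrozen and joins
    the set of trainable layers; in a real training loop one would
    fine-tune the model for one epoch after each stage.
    """
    frozen = list(layer_groups)
    trainable = []
    stages = []
    while frozen:
        trainable.insert(0, frozen.pop())  # unfreeze the topmost frozen group
        stages.append(list(trainable))
    return stages
```

Applied to a model with groups `["embedding", "lstm_1", "lstm_2", "head"]`, the first stage trains only the head and the final stage trains the whole network.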