Deep Learning Implementation: Multilabel Classifier

No Bullshit: Multi-label Text Classification Using Keras

Discusses Implementation details with code

Pritish Jadhav
2 min read · Feb 8, 2022

The goal of this blog is to dig right into the weeds of implementing a multilabel text classifier using Keras. If you have clicked on this story, I assume you are familiar with the problem formulation and how it differs from a traditional multiclass classifier. So let’s cut right to the chase.


Implementation notes:

  • In a multilabel text classifier, each training example can carry multiple labels at once. As a result, scikit-learn’s LabelBinarizer should be replaced by MultiLabelBinarizer.
  • “Sigmoid” activation should be used in the final layer instead of “Softmax” activation.
  • Each class in a multilabel setup is modeled as an independent Bernoulli random variable, and hence we use the binary_crossentropy loss function instead of the categorical_crossentropy loss.
  • Finally, the choice of evaluation metric is pivotal while training a multilabel classifier. Traditional metrics like plain accuracy do NOT make sense in the multilabel universe, since a prediction can be partially correct. We shall leverage the α-evaluation metric introduced by Boutell et al. in Learning multi-label scene classification.

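The α-evaluation metric from Boutell et al. can be sketched in a few lines of NumPy. The formula below is the standard per-example form, score = (1 − (β·|missed| + γ·|false positives|) / |Y ∪ P|)^α, averaged over the batch; the default parameter values here (α = β = γ = 1, which reduces to Jaccard similarity) are illustrative choices, not values prescribed in this post:

```python
import numpy as np

def alpha_evaluation_score(y_true, y_pred, alpha=1.0, beta=1.0, gamma=1.0):
    """Mean per-example alpha-evaluation score (Boutell et al.).

    y_true, y_pred: binary indicator arrays of shape (n_samples, n_labels).
    alpha sharpens the penalty for errors; beta/gamma weight missed labels
    vs. falsely predicted labels.
    """
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)

    union = np.logical_or(y_true, y_pred).sum(axis=1)        # |Y ∪ P|
    missed = np.logical_and(y_true, ~y_pred).sum(axis=1)     # in Y but not P
    false_pos = np.logical_and(~y_true, y_pred).sum(axis=1)  # in P but not Y

    union = np.maximum(union, 1)  # guard against 0/0 when both sets are empty
    scores = (1.0 - (beta * missed + gamma * false_pos) / union) ** alpha
    return scores.mean()
```

With α = β = γ = 1 a perfect prediction scores 1.0, a prediction that misses one of two true labels scores 0.5, and the metric degrades gracefully with partial matches, unlike exact-match accuracy.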
That's all you need to know to train a robust neural network for solving the multilabel classification problem.

Implementation:
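Putting the notes above together, here is a minimal end-to-end sketch. The toy corpus, label names, vocabulary size, and layer widths are illustrative assumptions (the original post's exact code is not reproduced here); the essential pieces are MultiLabelBinarizer for the targets, a sigmoid output layer, and binary_crossentropy as the loss:

```python
import numpy as np
from sklearn.preprocessing import MultiLabelBinarizer
from tensorflow import keras
from tensorflow.keras import layers

# Toy corpus: each example carries a SET of labels (hypothetical data).
texts = ["cheap flights to paris", "paris hotel deals", "flight delayed again"]
labels = [{"travel", "flights"}, {"travel", "hotels"}, {"flights"}]

# MultiLabelBinarizer (not LabelBinarizer) turns label sets into 0/1 vectors.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)
num_labels = len(mlb.classes_)

# Vectorize raw text into padded integer sequences.
vectorizer = layers.TextVectorization(max_tokens=1000, output_sequence_length=16)
vectorizer.adapt(texts)
x = vectorizer(np.array(texts))

model = keras.Sequential([
    layers.Embedding(input_dim=1000, output_dim=32),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    # Sigmoid (not softmax): each label is an independent Bernoulli.
    layers.Dense(num_labels, activation="sigmoid"),
])

# binary_crossentropy (not categorical_crossentropy) for multilabel targets.
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=2, verbose=0)
```

At inference time, each output unit is an independent probability, so labels are assigned by thresholding (e.g. predict every label whose probability exceeds 0.5) rather than taking an argmax.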

Final Thoughts:

  • This blog post aims at laying out the deal-breaking differentiators between a multiclass classifier and a multilabel classifier.
  • If you are looking for a more comprehensive discussion on various aspects of a Multilabel classifier, I would encourage you to check out the blogs mentioned below.
  • As always, feel free to reach out to me via LinkedIn if you have any questions or feedback.
