Why are precision, recall and F1 score equal when using micro averaging in a multi-class problem?

In a recent project I was wondering why I got the exact same value for precision, recall and the F1 score when using scikit-learn’s metrics. The project is about a simple classification problem where each input is mapped to exactly \(1\) of \(n\) classes. I was using micro averaging for the metric functions, which means the following according to sklearn’s documentation:

Calculate metrics globally by counting the total true positives, false negatives and false positives.

According to the documentation this behaviour is correct:

Note that for “micro”-averaging in a multiclass setting with all labels included will produce equal precision, recall and F, while “weighted” averaging may produce an F-score that is not between precision and recall.

After thinking about it a bit I figured out why this is the case. In this article, I will explain the reasons.
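
The observation is easy to reproduce with scikit-learn. A minimal sketch, where the labels and predictions are made up purely for illustration:

from sklearn.metrics import precision_score, recall_score, f1_score

# hypothetical single-label multi-class ground truth and predictions
y_true = [0, 1, 2, 2, 1, 0, 1, 2]
y_pred = [0, 2, 2, 1, 1, 0, 1, 0]

print(precision_score(y_true, y_pred, average='micro'))  # 0.625
print(recall_score(y_true, y_pred, average='micro'))     # 0.625
print(f1_score(y_true, y_pred, average='micro'))         # 0.625

All three calls return the same value, no matter how the predictions are chosen.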

Read More

Lemmatize whole sentences with Python and nltk’s WordNetLemmatizer

Lemmatization is the process of converting words (e.g. in a sentence) to their lemmas, i.e. their dictionary forms, while respecting their context. For example, the sentence “You are not better than me” would become “You be not good than me”. This is useful for NLP preprocessing, for example to train doc2vec models. The Python module nltk.stem contains a class called WordNetLemmatizer. In order to use it, one must provide both the word and its part-of-speech tag (adjective, noun, verb, …) because lemmatization is highly dependent on context.
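
As a minimal sketch of how this could look (the helper penn_to_wordnet is my own and not part of nltk; the nltk resources punkt, averaged_perceptron_tagger and wordnet must be downloaded first):

import nltk
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer

def penn_to_wordnet(tag):
  # Map a Penn Treebank tag from nltk.pos_tag to a WordNet POS constant.
  if tag.startswith('J'):
    return wordnet.ADJ
  if tag.startswith('V'):
    return wordnet.VERB
  if tag.startswith('R'):
    return wordnet.ADV
  return wordnet.NOUN

lemmatizer = WordNetLemmatizer()
sentence = "You are not better than me"
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
lemmas = [lemmatizer.lemmatize(word, penn_to_wordnet(tag))
          for word, tag in tagged]
print(' '.join(lemmas))  # e.g. "You be not good than me" (depends on the tagger)

Read More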

NLP: Approaches for Sentence Embeddings (Overview)

In 2013, Mikolov et al. published “Distributed Representations of Words and Phrases and their Compositionality”, a paper about a new approach to representing words as dense vectors. This was an improvement over the alternative of representing words as one-hot vectors, because dense vector embeddings encode some of the meaning of the words they represent. In other words, words with similar meaning are close to each other in the vector space of the embedding. For example, “blue” would be close to “red” but far from “cat”. A commonly used name for their approach is word2vec.
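
To illustrate the idea, here is a minimal sketch using gensim’s downloader API with pretrained GloVe vectors (a related embedding model, chosen here only because it is small to download; the model name is my assumption and not part of the paper):

import gensim.downloader as api

# downloads the pretrained vectors on first use
vectors = api.load("glove-wiki-gigaword-50")

print(vectors.similarity("blue", "red"))  # relatively high
print(vectors.similarity("blue", "cat"))  # noticeably lower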

Read More

Calculate power set (set of all subsets) in Python without recursion

If you want to calculate a set containing all subsets of a set (also called the power set), you can either choose a recursive approach or try this iterative approach, which is faster than the recursive one.

def get_subsets(fullset):
  listrep = list(fullset)
  n = len(listrep)

  subsets = []
  # Every integer i in [0, 2**n) encodes one subset: bit k of i
  # determines whether listrep[k] is included.
  for i in range(2**n):
    subset = []
    for k in range(n):
      if i & 1 << k:
        subset.append(listrep[k])
    subsets.append(subset)

  return subsets

subsets = get_subsets(set([1, 2, 3, 4]))
print(subsets)
print(len(subsets))  # 2**4 = 16

You can also find a shorter version at the end of the article, but the algorithm above is better suited for understanding the principle.
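
For illustration, one common shorter formulation (not necessarily the version from the article) builds on itertools and yields tuples instead of lists:

from itertools import chain, combinations

def get_subsets_short(fullset):
  listrep = list(fullset)
  # chain together the subsets of every size from 0 to len(listrep)
  return list(chain.from_iterable(
      combinations(listrep, r) for r in range(len(listrep) + 1)))

print(get_subsets_short(set([1, 2, 3, 4])))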

Read More

Image Style Transfer using Convolutional Neural Networks

There are many tasks in image processing that can be solved with Convolutional Neural Networks (CNNs). One of these tasks is called image style transfer. The goal of image style transfer is to apply the style of one image to the content of another image. This way you can create a drawing of yourself in the style of Van Gogh, for example.

In this article, I am going to explain how style can be extracted from one image and transferred to the content of another.
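
At the core of the CNN-based approach by Gatys et al., style is represented by Gram matrices of a layer’s feature maps. A minimal NumPy sketch, where the shape convention and normalization are my own assumptions for illustration:

import numpy as np

def gram_matrix(features):
  # features: activations of one CNN layer, shape (channels, height, width).
  # The Gram matrix holds the correlations between the feature maps
  # and serves as the style representation of that layer.
  c, h, w = features.shape
  f = features.reshape(c, h * w)
  return f @ f.T / (h * w)

style_layer = np.random.rand(64, 32, 32)  # stand-in for real activations
print(gram_matrix(style_layer).shape)     # (64, 64)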

I also wrote an overview paper on Image Style Transfer using Convolutional Neural Networks for a computer vision seminar at my university.

Read More