A straightforward implementation of KL Divergence. Note that both lists must be probability distributions, i.e. each must sum to one. This is not checked internally, and unnormalized inputs will produce misleading results.
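Because the function does not validate its inputs, a caller may want to normalize and check them first. A minimal sketch; the helper `as_distribution` is hypothetical and not part of nctx:

```r
# Hypothetical helper: turn a non-negative numeric vector into a
# probability distribution and verify that it sums to one.
as_distribution <- function(x) {
  stopifnot(is.numeric(x), all(x >= 0), sum(x) > 0)
  p <- x / sum(x)
  stopifnot(isTRUE(all.equal(sum(p), 1)))
  p
}
```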
dist_kl_divergence(vec1, vec2)
| Argument | Description |
|---|---|
| vec1 | A list |
| vec2 | Another list |
Returns the KL Divergence between vec1 and vec2.
The KL Divergence does not fulfil the triangle inequality and is therefore not a metric. See https://en.wikipedia.org/wiki/Kullback–Leibler_divergence. Both lists fed into this function must have the same length and contain only numeric values.
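For reference, the KL Divergence between two discrete distributions p and q is commonly defined as the sum over i of p_i * log(p_i / q_i). The sketch below assumes dist_kl_divergence follows this definition with the natural logarithm, which is consistent with the checked value in the example; the helper `kl_manual` is not part of nctx:

```r
# Sketch of the usual discrete KL divergence, sum(p * log(p / q)),
# using the natural logarithm; terms where p is zero contribute nothing.
kl_manual <- function(p, q) {
  stopifnot(length(p) == length(q), all(p >= 0), all(q >= 0))
  sum(ifelse(p > 0, p * log(p / q), 0))
}
```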
```r
library(RUnit)
suppressMessages(library(nctx))
a <- c(-0.3805950, -1.4635000,  1.7565629,  1.1039740,  0.4493004,
        0.4984236, -0.8446116,  2.2833076,  0.2598573, -0.9920936)
b <- c( 0.03065272, 0.08561547, 1.35419445, 1.21674446, 1.46020546,
        1.75870975, -0.46519233, 0.03100334, -0.12786839, 0.04064652)
# Turn the raw values into probability distributions before calling the function
a <- abs(a)
a <- a / sum(a)
b <- abs(b)
b <- b / sum(b)
checkEquals(dist_kl_divergence(a, b), 1.3689655011086466)
#> [1] TRUE
```
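KL Divergence is also not symmetric, so swapping the arguments generally changes the result; this is another way to see that it cannot be a metric. A minimal sketch reusing the normalized vectors a and b from the example above:

```r
# Swapping the arguments generally yields a different value, illustrating
# that KL Divergence is not symmetric (and hence not a metric).
dist_kl_divergence(a, b)  # 1.3689655... as checked above
dist_kl_divergence(b, a)  # generally differs from dist_kl_divergence(a, b)
```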