A straightforward implementation of the Jensen-Shannon (JS) divergence. Note that both lists must be probability distributions, i.e. they must sum to one. This is not checked internally; if the inputs are not normalized, the function will silently return meaningless results.
```r
dist_js_divergence(vec1, vec2)
```
| Argument | Description |
|---|---|
| `vec1` | A numeric list representing a probability distribution |
| `vec2` | Another numeric list of the same length, also a probability distribution |
Returns the Jensen-Shannon divergence between `vec1` and `vec2`.
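Because the probability-distribution precondition is not checked internally, it can help to validate inputs before calling. A minimal sketch of such a guard (the helper name `check_prob` is hypothetical and not part of nctx):

```r
# Hypothetical guard: stop early if a vector is not a probability distribution.
check_prob <- function(v) {
  stopifnot(is.numeric(v), all(v >= 0))
  if (!isTRUE(all.equal(sum(v), 1))) {
    stop("vector must sum to one")
  }
  invisible(v)
}
```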
The JS divergence is based on the KL divergence, and its square root yields a metric (the Jensen-Shannon distance). See https://en.wikipedia.org/wiki/Jensen%E2%80%93Shannon_divergence. Both lists fed into this function must have the same length and contain only numeric values.
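Concretely, for probability distributions $P$ and $Q$ with mixture $M = \tfrac{1}{2}(P + Q)$, the standard definition is

$$
\mathrm{JSD}(P \parallel Q) = \frac{1}{2} D_{\mathrm{KL}}(P \parallel M) + \frac{1}{2} D_{\mathrm{KL}}(Q \parallel M),
\qquad
D_{\mathrm{KL}}(P \parallel Q) = \sum_i p_i \log \frac{p_i}{q_i}.
$$

A plain-R sketch of this definition can be useful for cross-checking results. This is not the library's implementation; it uses the natural logarithm and assumes strictly positive entries so the logarithms are defined:

```r
# Reference sketch of the JS divergence, following the definition above.
js_divergence_ref <- function(p, q) {
  kl <- function(x, y) sum(x * log(x / y))  # KL divergence; assumes x, y > 0
  m <- (p + q) / 2                          # mixture distribution M
  0.5 * kl(p, m) + 0.5 * kl(q, m)
}
```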
```r
library(RUnit)
suppressMessages(library(nctx))

a <- c(-0.3805950, -1.4635000, 1.7565629, 1.1039740, 0.4493004,
       0.4984236, -0.8446116, 2.2833076, 0.2598573, -0.9920936)
b <- c(0.03065272, 0.08561547, 1.35419445, 1.21674446, 1.46020546,
       1.75870975, -0.46519233, 0.03100334, -0.12786839, 0.04064652)

# Turn the raw vectors into probability distributions:
# take absolute values, then normalise so each sums to one.
a <- abs(a)
a <- a / sum(a)
b <- abs(b)
b <- b / sum(b)

checkEquals(dist_js_divergence(a, b), 0.21286088091616043)
#> [1] TRUE
```
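Since the square root of the JS divergence is a metric (the Jensen-Shannon distance), the result is symmetric in its arguments and can be used directly as a distance. A short demonstration, reusing `a` and `b` from the example above:

```r
# JS divergence is symmetric, so both orderings should yield the same distance.
d_ab <- sqrt(dist_js_divergence(a, b))
d_ba <- sqrt(dist_js_divergence(b, a))
checkEquals(d_ab, d_ba)  # expected to pass
```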