% Volker Strobel edited subsection_Evaluation_Scheme_label_sec__.tex almost 8 years ago
% Commit id: 53d38b53409592fc52b06b74c11ed9920ac91fc6
\begin{align}
\ell(x, y) &= x - y\\
d_a(h_i, h_j) &= \text{CS}(h_i, h_j)\\
d_e(h_i, h_j) &= f_X(pos_i \mid \mu, \Sigma) = f_X(x_i, y_i \mid \mu, \Sigma)
\end{align}
\begin{align}
\Sigma =
\begin{bmatrix}
\sigma^2 & 0\\
0 & \sigma^2
\end{bmatrix}
\end{align}
The
cosine similarity (CS) is defined as:
\begin{align}
\text{CS}(h_i, h_j) = \frac{h_i^\top h_j}{\lVert h_i \rVert \, \lVert h_j \rVert}
\end{align}
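As an illustrative sketch (the helper name is my own, not part of the paper), the cosine similarity of two histograms can be computed as:

```python
import math

def cosine_similarity(h_i, h_j):
    """Cosine similarity of two histograms given as sequences of floats.

    For non-negative histogram entries the result lies in [0, 1]."""
    dot = sum(a * b for a, b in zip(h_i, h_j))
    norm_i = math.sqrt(sum(a * a for a in h_i))
    norm_j = math.sqrt(sum(b * b for b in h_j))
    return dot / (norm_i * norm_j)
```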
The cosine similarity has the convenient property that its values are bounded between $-1$ and $1$. In the present case, since the elements of $h_i$ and $h_j$ are non-negative, it is even bounded between $0$ and $1$. The function $f_X$ denotes the non-normalized probability density function of the normal distribution; in the one-dimensional case, $f_X(x \mid \mu, \sigma^2) = e^{- \frac{(x - \mu)^2}{2 \sigma ^ 2}}$. This function is also bounded between $0$ and $1$, which makes the functions $f_X$ and $\text{CS}$ easily comparable.
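A minimal sketch of this non-normalized density in the one-dimensional case (the function name mirrors the paper's notation; the implementation is my own):

```python
import math

def f_X(x, mu, sigma):
    """Non-normalized Gaussian density: equals 1 at x == mu and
    decays toward 0 as x moves away from the mean."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
```

Because the normalization constant is dropped, the maximum value is exactly $1$, matching the upper bound of the cosine similarity.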
The idea behind the global loss function $L$ is that histograms at nearby positions
should be similar, and that this similarity should decrease the further apart
two positions are. This is modeled as a two-dimensional Gaussian with zero
covariance. The variance depends on the