similarity was determined, as well as the standard deviation of the
cosine similarity. Comparing the cosine similarity between the
histograms has the advantage that the number of samples can be
determined independently of a specific task.
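The comparison described above can be sketched as follows. This is a minimal illustration, not the thesis code: the base histogram, the bin counts, and the resampling procedure are all hypothetical, standing in for histograms extracted from the same image by random sampling.

```python
import math
import random

def cosine_similarity(h1, h2):
    """Cosine similarity between two histograms given as lists of counts."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2)

def resample(hist, n=100):
    """Draw n samples according to the bin distribution of hist,
    mimicking a histogram re-extracted via random sampling."""
    total = sum(hist)
    probs = [h / total for h in hist]
    counts = [0] * len(hist)
    for _ in range(n):
        r = random.random()
        acc = 0.0
        idx = len(hist) - 1  # fall back to last bin on float round-off
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                idx = i
                break
        counts[idx] += 1
    return counts

random.seed(0)
base = [10, 25, 40, 15, 10]  # hypothetical underlying bin distribution

# Mean and standard deviation of the cosine similarity between
# repeatedly re-extracted histograms of the same image.
sims = [cosine_similarity(resample(base), resample(base)) for _ in range(50)]
mean = sum(sims) / len(sims)
std = math.sqrt(sum((s - mean) ** 2 for s in sims) / len(sims))
```

Because the histograms are resampled from the same distribution, the mean similarity should be close to (but below) 1, and the standard deviation shrinks as the number of samples per histogram grows.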
\section{Analysis -- Setting the Baseline for $k$NN and Determining $k$}
\label{sec:numtextons}
In a standard setting, the training error $\epsilon_t$ of a
$k$-nearest neighbor algorithm with $k = 1$ is $\epsilon_t = 0$,
because the nearest neighbor of a sample is the sample itself. In this
scenario, however, we deal with random sampling, such that each image
is represented by a slightly different histogram each time the
histogram is extracted.
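This effect can be sketched with a small simulation. The image names, the underlying bin distributions, and the resampling routine below are hypothetical; the point is only that when the query histogram is a fresh random re-extraction rather than the stored training histogram itself, a 1-NN classifier can misassign images whose distributions are similar, so the training error need not be zero.

```python
import math
import random

def cosine_similarity(h1, h2):
    """Cosine similarity between two histograms given as lists of counts."""
    dot = sum(a * b for a, b in zip(h1, h2))
    n1 = math.sqrt(sum(a * a for a in h1))
    n2 = math.sqrt(sum(b * b for b in h2))
    return dot / (n1 * n2)

def resample(hist, n=50):
    """Draw n samples according to the bin distribution of hist."""
    total = sum(hist)
    probs = [h / total for h in hist]
    counts = [0] * len(hist)
    for _ in range(n):
        r = random.random()
        acc = 0.0
        idx = len(hist) - 1  # fall back to last bin on float round-off
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                idx = i
                break
        counts[idx] += 1
    return counts

random.seed(1)
# Hypothetical bin distributions of three images; img_a and img_b are similar.
images = {
    "img_a": [30, 30, 20, 20],
    "img_b": [28, 32, 20, 20],
    "img_c": [5, 5, 45, 45],
}
# One stored (training) histogram per image, itself a random extraction.
train = {name: resample(dist) for name, dist in images.items()}

errors = 0
trials = 200
for _ in range(trials):
    name = random.choice(list(images))
    query = resample(images[name])  # freshly extracted histogram of the same image
    nearest = max(train, key=lambda t: cosine_similarity(query, train[t]))
    if nearest != name:
        errors += 1

train_error = errors / trials  # can exceed 0, unlike the deterministic 1-NN case
```

In the deterministic case the query would be bit-identical to its stored histogram and `train_error` would be exactly 0; under resampling, confusions between the two similar images push it above zero.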