Volker Strobel edited subsection_Texton_Dictionary_Generation_label__.tex
almost 8 years ago
Commit id: baf93bba76fcfd521af3303f7cd518bbe1f0f8d4
\subsection{Texton Dictionary Generation}
\label{sec:text-dict-gener}
The proposed approach requires a dictionary of textons, such that the histograms can be determined. Different maps and environmental settings require different textons, and a different number of them. While we set the number of textons to 20 for all maps, this parameter is map-dependent and could ideally be adapted to the given map. To learn a suitable dictionary for a given environment, image patches were clustered. The resulting cluster centers---the prototypes of the clustering result---are the textons~\cite{varma2003texture}. The clustering was performed using a competitive learning scheme with a winner-take-all strategy.
In the beginning, the dictionary is initialized with 20 random patches from the first image, which form the first guess for the cluster centers. Then, new image patches are extracted and compared to each texton in the tentative dictionary using the Euclidean distance. The texton most similar to the current patch is declared the ``winner''. This texton is then adapted to be more similar to the current patch, by calculating the difference in pixel values between the texton and the current image patch and updating the texton with a learning rate of $\alpha = 0.02$. The first 100 images of each dataset were used to generate the dictionary. For each image, 1000 randomly selected image patches of size $w \times h = 6 \times 6$ were extracted, yielding $100{,}000$ image patches in total that were clustered. An example of a learned dictionary can be found in Figure~\ref{fig:dictionary}.
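The winner-take-all update described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the random patch sampling, and the use of grayscale NumPy arrays are assumptions, while the parameter values ($20$ textons, $6 \times 6$ patches, $1000$ patches per image, $\alpha = 0.02$) follow the text.

```python
import numpy as np

def learn_texton_dictionary(images, n_textons=20, patch_size=6,
                            patches_per_image=1000, alpha=0.02, seed=0):
    """Learn a texton dictionary by winner-take-all competitive learning.

    `images` is a list of 2-D grayscale arrays. All names here are
    illustrative; only the parameter values come from the text.
    """
    rng = np.random.default_rng(seed)

    def sample_patches(img, n):
        # Extract n random patches and flatten each to a vector.
        h, w = img.shape
        ys = rng.integers(0, h - patch_size + 1, size=n)
        xs = rng.integers(0, w - patch_size + 1, size=n)
        return np.stack([img[y:y + patch_size, x:x + patch_size].ravel()
                         for y, x in zip(ys, xs)]).astype(float)

    # Initialize the dictionary with random patches from the first image,
    # which form the first guess for the cluster centers.
    textons = sample_patches(images[0], n_textons)

    for img in images:
        for patch in sample_patches(img, patches_per_image):
            # Winner: the texton closest to the patch in Euclidean distance.
            winner = np.argmin(np.linalg.norm(textons - patch, axis=1))
            # Move the winning texton towards the current patch.
            textons[winner] += alpha * (patch - textons[winner])
    return textons
```

Because only the winning texton is updated for each patch, frequently occurring patch patterns attract their nearest texton, so the dictionary gradually adapts to the texture statistics of the environment.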