\subsection{Autoencoders}

In 2007, Yoshua Bengio, professor at the University of Montreal, presented an alternative to RBMs: autoencoders\cite{NIPS2006_3048}. Autoencoders are neural networks that learn to compress and process their input data. Through this processing, the most relevant features of the input data are extracted and can then be used to solve our machine learning problem, such as recognizing objects in images, more easily.

Usually autoencoders have at least 3 layers:
\begin{itemize}
\item an input layer (if we work with images, this will correspond to the pixels of an image)
\item one or more hidden layers, which do the actual processing
\item an output layer, whose values are set to be equal to those of the input layer
\end{itemize}
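
To make this concrete, a common formulation is sketched below (the notation $W$, $b$, $s$ and the squared-error loss are conventional choices for illustration, not taken from the cited paper). For an input vector $x$, the hidden layer computes an encoding
\begin{equation}
h = s(W x + b),
\end{equation}
where $s$ is a nonlinearity such as the sigmoid, and the output layer computes a reconstruction
\begin{equation}
\hat{x} = s(W' h + b').
\end{equation}
Training adjusts the weights so that a reconstruction error, for example $\lVert x - \hat{x} \rVert^2$, becomes as small as possible. Because the hidden layer is typically smaller than the input, the network is forced to keep only the most informative features of $x$ in $h$, which is what makes the learned representation useful for tasks such as object recognition.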