\subsection{Restricted Boltzmann Machines}

Deep learning had its first major success in 2006, when Geoffrey Hinton and Ruslan Salakhutdinov published a paper introducing the first efficient and fast training algorithm for Restricted Boltzmann Machines (RBMs)\cite{Hinton_2006}. As the name suggests, RBMs are a type of Boltzmann machine with some constraints. Boltzmann machines were proposed by Geoffrey Hinton and Terry Sejnowski\cite{Ackley_1985} in 1985 and were the first neural networks that could learn internal representations (models) of the input data and then use those representations to solve different problems (such as completing images with missing parts). They were not used for a long time because, without any constraints, the learning algorithm for the internal representation was very inefficient. By definition, Boltzmann machines are generative stochastic recurrent neural networks. The stochastic part means that they have a probabilistic element: the neurons that make up the network do not fire deterministically, but with a certain probability determined by their inputs. The fact that they are generative means that they learn the joint probability distribution of the input data, which can then be used to generate new data similar to the original.
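
To make the stochastic firing rule concrete, in the standard Boltzmann machine formulation (the weights $w_{ij}$ and bias $b_i$ below are conventional notation, not defined in this section) the probability that a binary unit $s_i$ turns on is a logistic function of the total input it receives from the other units:

\begin{equation}
p(s_i = 1) = \frac{1}{1 + \exp\left(-b_i - \sum_{j} w_{ij} s_j\right)}
\end{equation}

where $s_j \in \{0, 1\}$ are the states of the units connected to unit $i$. Sampling each unit's state from this distribution, rather than thresholding its input, is what makes the network stochastic.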