Restricted Boltzmann Machines (RBM)
Created: 2022-12-20 16:38
#note
Invented by Geoffrey Hinton, a restricted Boltzmann machine is an algorithm useful for dimensionality reduction, classification, regression, collaborative filtering, feature learning and topic modelling.
RBMs are shallow, two-layer neural nets that constitute the building blocks of deep-belief networks. The first layer of the RBM is called the visible, or input, layer, and the second is the hidden layer.
Note: While RBMs are occasionally used, most practitioners in the machine-learning community have largely abandoned them in favour of generative adversarial networks or variational autoencoders.
A hidden node's result is computed by multiplying each input by a separate weight, summing these products, adding a bias, and then passing the result through an activation function.
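The hidden-node computation above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation; the names (`W`, `b_hidden`, `visible`) and the sigmoid activation are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # one weight per (visible, hidden) pair
b_hidden = np.zeros(n_hidden)                          # hidden-layer bias

def sigmoid(x):
    # Common choice of activation for binary RBM units
    return 1.0 / (1.0 + np.exp(-x))

visible = rng.integers(0, 2, size=n_visible).astype(float)  # binary input vector

# Each hidden node: weighted sum of all visible inputs, plus bias,
# passed through the activation function.
hidden_probs = sigmoid(visible @ W + b_hidden)
```

Because every visible node connects to every hidden node, the whole layer is computed as a single matrix-vector product.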
Since inputs from all visible nodes are being passed to all hidden nodes, an RBM can be defined as a symmetrical bipartite graph.
Training relies on backward passes, during which the model tries to reconstruct the inputs. To measure the distance between its estimated probability distribution and the ground-truth distribution of the input, RBMs use the Kullback-Leibler divergence.
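One reconstruction-based training step can be sketched as below, using contrastive divergence (CD-1), the standard RBM training procedure. This is a hedged sketch with illustrative names (`v0`, `W`, `lr`), not the exact algorithm any particular library uses.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible-layer bias
b_h = np.zeros(n_hidden)    # hidden-layer bias

v0 = rng.integers(0, 2, size=n_visible).astype(float)  # training example

# Forward pass: sample hidden states from the visible input.
h0_prob = sigmoid(v0 @ W + b_h)
h0 = (rng.random(n_hidden) < h0_prob).astype(float)

# Backward pass: reconstruct the visible layer from the hidden sample,
# then recompute hidden probabilities from that reconstruction.
v1_prob = sigmoid(h0 @ W.T + b_v)
h1_prob = sigmoid(v1_prob @ W + b_h)

# CD-1 update: move weights toward the data statistics
# and away from the reconstruction statistics.
W += lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))
b_v += lr * (v0 - v1_prob)
b_h += lr * (h0_prob - h1_prob)
```

Repeating this step over many examples drives the reconstructions toward the data, which implicitly lowers the KL divergence between the model's distribution and the input distribution.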
An RBM model, combined with collaborative-filtering models in an ensemble, won the Netflix Prize.
References
Code
Tags
#dl