\documentclass[10pt]{article}
\usepackage{fullpage}
\usepackage{setspace}
\usepackage{parskip}
\usepackage{titlesec}
\usepackage[section]{placeins}
\usepackage{xcolor}
\usepackage{breakcites}
\usepackage{lineno}
\usepackage{hyphenat}
\PassOptionsToPackage{hyphens}{url}
\usepackage[colorlinks = true,
linkcolor = blue,
urlcolor = blue,
citecolor = blue,
anchorcolor = blue]{hyperref}
\usepackage{etoolbox}
\makeatletter
\patchcmd\@combinedblfloats{\box\@outputbox}{\unvbox\@outputbox}{}{%
\errmessage{\noexpand\@combinedblfloats could not be patched}%
}%
\makeatother
\usepackage[round]{natbib}
\let\cite\citep
\renewenvironment{abstract}
{{\bfseries\noindent{\abstractname}\par\nobreak}\footnotesize}
{\bigskip}
\titlespacing{\section}{0pt}{*3}{*1}
\titlespacing{\subsection}{0pt}{*2}{*0.5}
\titlespacing{\subsubsection}{0pt}{*1.5}{0pt}
\usepackage{authblk}
\usepackage{graphicx}
\usepackage[space]{grffile}
\usepackage{latexsym}
\usepackage{textcomp}
\usepackage{longtable}
\usepackage{tabulary}
\usepackage{booktabs,array,multirow}
\usepackage{amsfonts,amsmath,amssymb}
\providecommand\citet{\cite}
\providecommand\citep{\cite}
\providecommand\citealt{\cite}
% You can conditionalize code for latexml or normal latex using this.
\newif\iflatexml\latexmlfalse
\AtBeginDocument{\DeclareGraphicsExtensions{.pdf,.PDF,.eps,.EPS,.png,.PNG,.tif,.TIF,.jpg,.JPG,.jpeg,.JPEG}}
\usepackage[utf8]{inputenc}
\usepackage[english]{babel}
\begin{document}
\title{Notes - QML}
\author[1]{Luca Innocenti}%
\affil[1]{Affiliation not available}%
\vspace{-1em}
\date{\today}
\begingroup
\let\center\flushleft
\let\endcenter\endflushleft
\maketitle
\endgroup
\sloppy
\section{Reviews}
\cite{Wittek_2014, Schuld_2014,Schuld_2014a,Biamonte_2017,perdomo2017opportunities}
\section{Quantum-assisted machine learning}
This is the class of quantum algorithms that are devised as generalizations of classical machine learning algorithms.
The idea is usually to have a quantum algorithm able to process ``big data''.
One of the first works on this topic is \cite{Harrow_2009}.
Others include \cite{Lloyd_2014,Rebentrost_2014,lloyd2013quantum}.
While most (all?) quantum algorithms for ML developed so far have been formulated in the discrete-variable setting, \cite{Lau_2017} show how to generalize the core ideas of \cite{Lloyd_2014,Rebentrost_2014,lloyd2013quantum} to the continuous-variable (CV) setting.
The main drawback is that the interactions required by their protocol are highly non-trivial, even though they claim these are ``hard but within reach of near term technology''.
For example, they require very strong Kerr nonlinearities.
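For context, a brief summary of my own: the HHL algorithm of \cite{Harrow_2009}, the primitive underlying many of the ``big data'' algorithms cited here, prepares a quantum state proportional to the solution of a linear system,
\[
|x\rangle \propto A^{-1}|b\rangle,
\]
in time $\tilde O\!\left(\log(N)\, s^2 \kappa^2 / \epsilon\right)$ for an $s$-sparse $N\times N$ matrix $A$ with condition number $\kappa$ and target precision $\epsilon$, whereas classical methods scale polynomially in $N$.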
\begin{itemize}
\tightlist
\item
Lau et al.~\cite{Lau_2017}:~\emph{Quantum machine learning over
infinite dimensions}.
\item
Fujii and Nakajima \cite{Fujii_2017},~\emph{Harnessing disordered
ensemble quantum dynamics for machine learning.}
\end{itemize}
\section{Quantum learning theory}
\cite{Monras_2017}
\section{Quantum reinforcement learning}
\cite{Daoyi_Dong_2008,Paparo_2014,Dunjko_2016,crawford2016reinforcement,Lamata_2017, cardenas2017generalized}
Dong et al.~\cite{Daoyi_Dong_2008} propose a very generic procedure to ``quantize'' a reinforcement learning framework (at least, I think so; the paper is very unclear).
Paparo et al.~\cite{Paparo_2014} consider the potential advantages of a quantum agent embedded in a classical environment.
This means that the agent cannot interact coherently with the environment, but it can process information previously extracted from the environment in a quantum processor.
Roughly speaking, this means that they are considering how much better a classical device can learn about its environment when endowed with a quantum oracle.
The main reinforcement learning model they consider is the projective simulation model~\cite{Briegel_2012}.
Crawford et al.~\cite{crawford2016reinforcement} investigate whether quantum annealers can be used to outperform classical computers in reinforcement learning tasks.
More specifically, they use simulated quantum annealing to demonstrate the advantage of RL using quantum Boltzmann machines over its classical counterpart.
The goal is to find tasks suitable for near-term devices to demonstrate quantum advantages.
Dunjko et al.~\cite{Dunjko_2016} also present a general framework to quantize a general reinforcement learning problem, and provide conditions to determine whether a given environment can be used to obtain quantum speedups.
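The projective simulation (PS) model mentioned above can be made concrete with a minimal sketch. What follows is my own simplified illustration of the simplest, two-layered PS agent (percept clips directly connected to action clips, hopping probabilities proportional to learned $h$-values, with damping toward the initial value), not code from any of the papers:

```python
import numpy as np

class TwoLayerPSAgent:
    """Minimal two-layered projective-simulation agent: percepts and
    actions are clips, and an action is sampled with probability
    proportional to the h-value of the percept-action edge."""

    def __init__(self, n_percepts, n_actions, damping=0.01, seed=0):
        self.h = np.ones((n_percepts, n_actions))  # uniform initial weights
        self.damping = damping
        self.rng = np.random.default_rng(seed)

    def act(self, percept):
        p = self.h[percept] / self.h[percept].sum()
        return self.rng.choice(len(p), p=p)

    def learn(self, percept, action, reward):
        # relax all h-values toward 1, then reinforce the edge just used
        self.h += -self.damping * (self.h - 1.0)
        self.h[percept, action] += reward

# Toy task: the rewarded action is the one matching the percept.
agent = TwoLayerPSAgent(n_percepts=2, n_actions=2)
for _ in range(500):
    percept = agent.rng.integers(2)
    action = agent.act(percept)
    agent.learn(percept, action, reward=1.0 if action == percept else 0.0)
# After training, the rewarded edges dominate the h-matrix.
```

The damping term is what lets the agent forget and re-adapt if the environment changes, which is the feature Paparo et al. build on.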
\section{Machine learning of many-body physics}
Carleo and Troyer~\cite{Carleo_2017} marked the beginning of a surge of papers applying artificial neural networks (usually restricted Boltzmann machines) to a number of different problems in quantum many-body theory.
The core idea is to take the probability distribution that an RBM defines over its visible neurons (obtained by summing over the configurations of the hidden neurons), and use it as an Ansatz for the structure of a quantum state.
Notably, they largely bypass the standard theory of RBMs, exploiting only the analytical expression obtained when one sums over the hidden neurons (and possibly the theorems stating the universality of the resulting expression?).
The Ansatz obtained is used to train the network into representing the ground state of a specified Hamiltonian.
In other words, the algorithm looks for the network parameters such that the corresponding quantum state minimizes the energy of a given Hamiltonian.
Many methods could be used for the training here, but they claim the one that worked best was a reinforcement-learning approach based on an MCMC algorithm.
They also note in the Supplementary Material that other approaches, such as SGD, might work just as well (or better).
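The analytical relation in question, for spin variables $s_j, h_i \in \{-1,+1\}$, is $\Psi(s) = \sum_h e^{\sum_j a_j s_j + \sum_i b_i h_i + \sum_{ij} W_{ij} h_i s_j} = e^{\sum_j a_j s_j} \prod_i 2\cosh\big(b_i + \sum_j W_{ij} s_j\big)$. A minimal numerical sketch (my own illustration, not code from the paper; real parameters for simplicity, whereas \cite{Carleo_2017} use complex ones):

```python
import numpy as np
from itertools import product

def rbm_amplitude(s, a, b, W):
    """Closed-form amplitude psi(s) after summing over the hidden spins:
    exp(sum_j a_j s_j) * prod_i 2 cosh(b_i + sum_j W_ij s_j)."""
    return np.exp(a @ s) * np.prod(2 * np.cosh(b + W @ s))

def rbm_amplitude_explicit(s, a, b, W):
    """Same amplitude, summing explicitly over all hidden configurations."""
    total = 0.0
    for h in product([-1, 1], repeat=len(b)):
        h = np.asarray(h)
        total += np.exp(a @ s + b @ h + h @ (W @ s))
    return total

# The two expressions agree on a random instance:
rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 3
s = rng.choice([-1, 1], size=n_visible)
a, b = rng.normal(size=n_visible), rng.normal(size=n_hidden)
W = rng.normal(size=(n_hidden, n_visible))
assert np.isclose(rbm_amplitude(s, a, b, W),
                  rbm_amplitude_explicit(s, a, b, W))
```

The point of the closed form is that the sum over the $2^{n_{\mathrm{hidden}}}$ hidden configurations never has to be performed explicitly during training.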
\hrule
\begin{itemize}
\item
Mills et al.~\cite{Mills_2017} trained a convolutional neural network to predict, to within chemical accuracy, the ground-state energy of an electron in four classes of confining two-dimensional electrostatic potentials.
\item
Kaubruegger et al.~\cite{kaubruegger2017chiral} investigate ``\textit{as to what extent the flexibility of artificial neural networks can be used to efficiently study systems that host chiral topological phases such as fractional quantum Hall phases.}''
They use restricted Boltzmann machines and variational Monte Carlo for optimization.
\item
Liu et al.~\cite{liu2017machine} train two-dimensional hierarchical tensor networks to solve image recognition problems, using a training algorithm derived from the multipartite entanglement renormalization ansatz.
They also more generally study the relations between tensor network states and deep learning architectures.
\end{itemize}
\begin{itemize}
\tightlist
\item
Dong-Ling Deng et al.~\cite{Deng2017}:~~\emph{Machine Learning Bell
Nonlocality in Quantum Many-body Systems}.
\item
Glasser et al.~\cite{Glasser_2017}:~\emph{Neural-Network Quantum
States, String-Bond States, and Chiral Topological States}.
\item
Gao and Duan~\cite{Gao2017}:~\emph{Efficient representation of
quantum many-body states with deep neural networks}.
\end{itemize}
\section{ML with D-Wave devices and quantum annealers}
{\label{140039}}
\begin{itemize}
\tightlist
\item
Benedetti et al.~\cite{Benedetti_2017}:~\emph{Quantum-Assisted Learning
of Hardware-Embedded Probabilistic Graphical Models}.
\item
Mott et al.~\cite{Mott2017}:~\emph{Solving a Higgs optimization
problem with quantum annealing for machine learning}.
\end{itemize}
\section{Other references}
{\label{658664}}
List of references on QML, maintained by Roger Melko:
\href{https://physicsml.github.io/pages/papers.html}{Link}.
\begin{itemize}
\tightlist
\item
Breuckmann et al.~\cite{Breuckmann_2017} show how CNNs can be used
effectively to tackle quantum fault-tolerance problems. From the paper:
``\emph{In this work the existence of local decoders for higher
dimensional codes leads us to use a low-depth convolutional neural
network to locally assign a likelihood of error on each qubit.}''
\item
Liu and Rebentrost~\cite{Liu_2017}:~\emph{Quantum machine learning
for quantum anomaly detection}.
\item
Romero et al.~\cite{Romero_2017}:~\emph{Quantum autoencoders for
efficient compression of quantum data}. ``\emph{The quantum autoencoder
is trained to compress a particular dataset of quantum states, where a
classical compression algorithm cannot be employed. The parameters of
the quantum autoencoder are trained using classical optimization
algorithms. We show an example of a simple programmable circuit that
can be trained as an efficient autoencoder. We apply our model in the
context of quantum simulation to compress ground states of the Hubbard
model and molecular Hamiltonians.}'' Their idea is to implement the
autoencoder as a quantum circuit that takes an
input~\(\rho\), evolves it through a map~\(\mathcal E(\rho)\),
and then measures a number of the outputs. The state is thus
``reduced'' to the fewer degrees of freedom that are left after the
measurements. The decoding operation is just the opposite of this.
\item
Rocchetto et al.~\cite{Rocchetto_2017}:~\emph{Experimental learning of
quantum states}. They present a probabilistic setting in which quantum
states can be learned using only a linear number of measurements, and
present an experimental demonstration of this protocol with up to six
qubits. They work in the context of the Probably Approximately Correct
(PAC) model, introduced by Valiant in 1984.
\item
Huembeli et al.~\cite{Huembeli_2017}:~ \emph{Adversarial Domain
Adaptation for Identifying Phase Transitions}.~
\item
Zhao-Yu Han et al.~\cite{Han_2017}:~\emph{Efficient Quantum
Tomography with Fidelity Estimation}.
\item
Yudong Cao et al.~\cite{Cao_2017}:~\emph{Quantum Neuron: an
elementary building block for machine learning on quantum computers}.
\item
Lumino et al.~\cite{Lumino_2017}:~\emph{Experimental Phase Estimation
Enhanced By Machine Learning}.
\item
Agresti et al.~\cite{Agresti_2017}:~ \emph{Pattern recognition
techniques for Boson Sampling validation}.
\item
Melnikov et al.~\cite{Melnikov_2017}: ~\emph{Active learning machine
learns to create new quantum experiments}.~They develop an algorithm
that, using the projective simulation model, is able to automatically
discover schemes to create a variety of entangled states.
\item
Otterbach et al.~\cite{Otterbach_2017}:~\emph{Unsupervised Machine
Learning on a Hybrid Quantum Computer}. They solve a clustering
problem, translating it into a combinatorial optimization problem,
that can be solved via the Quantum Approximate Optimization algorithm
(\href{https://arxiv.org/abs/1411.4028}{Farhi 2014},
~\href{https://arxiv.org/abs/1709.03489}{Hadfield 2017}). They
implement this algorithm on the Rigetti 19-qubit architecture. More
specifically, the clustering problem is rephrased as a MAXCUT problem.
\end{itemize}
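The clustering-to-MAXCUT rephrasing in the last item can be sketched classically. This is my own illustration, with the MAXCUT step (which Otterbach et al. delegate to QAOA on the Rigetti chip) replaced by brute force:

```python
import numpy as np
from itertools import product

def distance_weights(points):
    """Complete graph on the data points, weighted by Euclidean distance."""
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def maxcut_bruteforce(w):
    """Exhaustive MAXCUT over all bipartitions; this is the step handed
    to the quantum optimizer in the hybrid scheme."""
    n = w.shape[0]
    best_val, best_z = -1.0, None
    for bits in product([0, 1], repeat=n):
        z = np.array(bits)
        # total weight of edges crossing the partition (each counted twice)
        val = 0.5 * np.sum(w * (z[:, None] != z[None, :]))
        if val > best_val:
            best_val, best_z = val, z
    return best_z, best_val

# Two well-separated pairs of points: the maximum cut of the
# distance-weighted graph recovers the two clusters.
points = np.array([[0., 0.], [0., 1.], [10., 0.], [10., 1.]])
labels, _ = maxcut_bruteforce(distance_weights(points))
assert labels[0] == labels[1] and labels[2] == labels[3]
assert labels[0] != labels[2]
```

Maximizing the cut of the distance-weighted complete graph puts far-apart points on opposite sides of the partition, which is exactly a 2-clustering.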
\selectlanguage{english}
\FloatBarrier
\bibliographystyle{plainnat}
\bibliography{bibliography/converted_to_latex.bib%
}
\end{document}