Comparison of Policy Functions from the Optimal Learning and Adaptive Control Frameworks
Abstract
\iflatexml
\documentclass[a4paper, 11pt]{article}
\fi
\usepackage[T1]{fontenc}
\usepackage{graphicx}
\usepackage{amsthm, amsmath}
\usepackage{natbib}
\usepackage{mtpro2}
\renewcommand{\figurename}{Figure}
In this paper we compare the policy function obtained by
\cite{bw2002} to the one obtained with adaptive control methods. An
integral part of the \textit{optimal learning} method used by Beck and
Wieland is the computation of a policy function that provides the
optimal control as a feedback function of the state of the system.
Computing this function is not necessary, however, when performing
Monte Carlo experiments with adaptive control methods. We have
therefore modified our software to obtain the policy function for
comparison with the BW results.\\
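Schematically, such a policy function maps the extended state of the learning problem into the optimal control. Writing $x_t$ for the observed state, $b_t$ for the current parameter estimate, and $\sigma^2_t$ for its variance (illustrative notation, not taken from \cite{bw2002}), the feedback form is
\[
u_t \;=\; G\!\left(x_t,\, b_t,\, \sigma^2_t\right),
\]
where in the optimal learning approach $G$ is computed by dynamic programming, whereas adaptive control methods generate $u_t$ period by period without tabulating $G$.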
\\
\emph{\textbf{Keywords}} Active learning, dual control, optimal experimentation,
stochastic optimization, time-varying parameters, numerical
experiments.\\
\\
\emph{\textbf{JEL Classification}}: C63, E61.