The full derivation was added to the Appendix; specifically:

The proof is given in Introduction to the Theory of Statistics (1974) by Mood, Graybill, and Boes \cite{boes1974introduction}, Section 2.3, Theorem 3:

Let \(X\) and \(Y\) be two random variables for which \(var[XY]\) exists; then

\[\begin{split} var[XY] ={}& \mu_Y^2var[X] + \mu_X^2var[Y] + 2\mu_X\mu_Ycov[X,Y] \\ & - (cov[X,Y])^2 + E[(X-\mu_X)^2(Y-\mu_Y)^2] \\ & + 2\mu_YE[(X-\mu_X)^2(Y-\mu_Y)] + 2\mu_XE[(X-\mu_X)(Y-\mu_Y)^2] \end{split}\]
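Because the theorem holds for any joint distribution with the required moments, it holds in particular for the empirical distribution of a finite sample, so plug-in sample moments must satisfy it exactly up to floating-point rounding. The following is a minimal numerical sketch of that check; the test distribution, variable names, and sample size are illustrative choices of mine, not from the reference:

```python
# Check the variance-of-a-product identity on the empirical distribution
# of a sample: with plug-in moments (ddof=0) the identity is algebraic,
# so it holds to machine precision, not just approximately.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Dependent, skewed X and Y so that every term in the formula is exercised.
u = rng.exponential(size=n)
x = 1.0 + u
y = 2.0 + 0.5 * u + rng.normal(size=n)

mu_x, mu_y = x.mean(), y.mean()
dx, dy = x - mu_x, y - mu_y
var_x, var_y = (dx**2).mean(), (dy**2).mean()
cov_xy = (dx * dy).mean()

rhs = (mu_y**2 * var_x + mu_x**2 * var_y + 2 * mu_x * mu_y * cov_xy
       - cov_xy**2 + (dx**2 * dy**2).mean()
       + 2 * mu_y * (dx**2 * dy).mean() + 2 * mu_x * (dx * dy**2).mean())
lhs = (x * y).var()  # ddof=0: the variance of the empirical distribution

assert np.isclose(lhs, rhs)
```

Since this is an identity rather than a sampling approximation, the assertion passes even for small \(n\).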

This expression can be obtained by computing \(E[XY]\) and \(E[(XY)^2]\) after writing \(XY\) as

\[XY = \mu_X\mu_Y + (X-\mu_X)\mu_Y + (Y-\mu_Y)\mu_X + (X-\mu_X)(Y-\mu_Y)\]
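This decomposition is a purely algebraic identity, which a short symbolic check confirms (a sketch using sympy; the check is mine, not part of the reference):

```python
# Expand the four-term decomposition of XY and confirm it reduces to XY.
import sympy as sp

X, Y, mu_X, mu_Y = sp.symbols('X Y mu_X mu_Y')
decomp = (mu_X * mu_Y + (X - mu_X) * mu_Y
          + (Y - mu_Y) * mu_X + (X - mu_X) * (Y - mu_Y))
assert sp.expand(decomp - X * Y) == 0
```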

Taking expectations of this decomposition gives \(E[XY] = \mu_X\mu_Y + cov[X,Y]\), which is the source of the \(-(cov[X,Y])^2\) term. If \(X\) and \(Y\) are independent, then \(E[XY] = \mu_X\mu_Y\), the covariance terms are 0, and

\[E[(X-\mu_X)^2(Y-\mu_Y)^2] = E[(X-\mu_X)^2]E[(Y-\mu_Y)^2] = var[X]var[Y]\]

and

\[\mu_YE[(X-\mu_X)^2(Y-\mu_Y)] = \mu_YE[(X-\mu_X)^2]E[(Y-\mu_Y)] = 0\]

and similarly

\[\mu_XE[(X-\mu_X)(Y-\mu_Y)^2] = \mu_XE[(X-\mu_X)]E[(Y-\mu_Y)^2] = 0\]

which gives

\[var[XY] = \mu_X^2var[Y] + \mu_Y^2var[X] + var[X]var[Y]\]
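A quick Monte Carlo spot-check of this independent case follows (the distributions, sample size, and tolerance below are illustrative choices of mine, not from the text):

```python
# Compare the empirical variance of XY against the closed-form prediction
# for independent X and Y; the tolerance accounts for sampling error.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
mu_x, var_x = 2.0, 3.0
mu_y, var_y = -1.0, 0.5

x = rng.normal(mu_x, np.sqrt(var_x), size=n)
y = rng.normal(mu_y, np.sqrt(var_y), size=n)  # drawn independently of x

predicted = mu_x**2 * var_y + mu_y**2 * var_x + var_x * var_y  # = 6.5
empirical = (x * y).var()
assert np.isclose(empirical, predicted, rtol=0.02)
```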