Inverse and Exchange Matrix, Quantum Mechanics, Slater Determinants

    Inverse

    For a non-singular 3x3 matrix \(A\) with determinant \(|A|\), defined by \[A=\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}\] we can calculate the inverse as \[A^{-1}=\frac{1}{|A|} \begin{bmatrix} \begin{vmatrix} a_{22} & a_{23} \\ a_{32} & a_{33}\end{vmatrix} & \begin{vmatrix} a_{13} & a_{12} \\ a_{33} & a_{32}\end{vmatrix} & \begin{vmatrix} a_{12} & a_{13} \\ a_{22} & a_{23}\end{vmatrix} \\ \begin{vmatrix} a_{23} & a_{21} \\ a_{33} & a_{31}\end{vmatrix} & \begin{vmatrix} a_{11} & a_{13} \\ a_{31} & a_{33}\end{vmatrix} & \begin{vmatrix} a_{13} & a_{11} \\ a_{23} & a_{21}\end{vmatrix} \\ \begin{vmatrix} a_{21} & a_{22} \\ a_{31} & a_{32}\end{vmatrix} & \begin{vmatrix} a_{12} & a_{11} \\ a_{32} & a_{31}\end{vmatrix} & \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{vmatrix} \end{bmatrix}\]
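
    This formula is easy to check numerically; the following is a minimal numpy sketch (a verification aid, not part of the derivation) that builds the inverse entry by entry from 2x2 minors and compares against numpy's inverse:

    ```python
    # Minimal check of the cofactor formula against numpy's inverse.
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((3, 3))

    def minor(A, i, j):
        """A with row i and column j removed (0-based)."""
        return np.delete(np.delete(A, i, axis=0), j, axis=1)

    # (A^{-1})_{ij} = (-1)^{i+j} |minor(A, j, i)| / |A|   (note the j, i transpose)
    inv = np.array([[(-1) ** (i + j) * np.linalg.det(minor(A, j, i))
                     for j in range(3)] for i in range(3)]) / np.linalg.det(A)

    assert np.allclose(inv, np.linalg.inv(A))
    ```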

    If we define an operation \(RCR_{ij}(A):=R_{ij}\), which removes the \(i^{th}\) column and the \(j^{th}\) row of \(A\), then this is expressible as \[A^{-1}=\frac{1}{|A|}\begin{bmatrix} |R_{11}| & |R_{12}J| & |R_{13}| \\ |R_{21}J| & |R_{22}| & |R_{23}J| \\ |R_{31}| & |R_{32}J| & |R_{33}| \end{bmatrix}\]

    Where \[J=\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\]

    Clearly the condition for right multiplication by the \(J\) matrix is that \(i+j\) is odd.

    Alternatively \[|A|(A^{-1})_{ij}=|R_{ij}J^{(i+j)}|=|J^{(i+j)}R_{ij}|\] where, since \(J^2=I\), the \(J\) factor survives exactly when \(i+j\) is odd.
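
    The \(R_{ij}\)-plus-\(J\) construction can likewise be checked numerically; a minimal sketch, assuming the \(RCR_{ij}\) operation defined above (0-based indices here, which leave the parity of \(i+j\) unchanged):

    ```python
    # Sketch of the R_{ij} / J construction (0-based indices; the parity of
    # i + j is the same as for the draft's 1-based indices).
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    J = np.array([[0.0, 1.0], [1.0, 0.0]])   # 2x2 exchange matrix

    def R(A, i, j):
        """RCR_{ij}: remove the i-th column and the j-th row of A."""
        return np.delete(np.delete(A, j, axis=0), i, axis=1)

    inv = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            M = R(A, i, j)
            if (i + j) % 2 == 1:             # right-multiply by J when i + j is odd
                M = M @ J
            inv[i, j] = np.linalg.det(M)
    inv /= np.linalg.det(A)

    assert np.allclose(inv, np.linalg.inv(A))
    ```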

    This likely works because for any 2x2 matrix \(|A|=-|AJ_2|=-|J_2A|=|J_2AJ_2|\). The corresponding property \(|A|=-|AJ_3|\) also holds for a 3x3 matrix, since \(|J_3|=-1\).

    Extrapolating backwards to a two by two matrix, we get the correct formula on the proviso that we define \(J_1\equiv-1\). This makes some sense, as for any \(n\) we have \(J_nJ_n=I\) and hence \(J_nJ_nA=A\), which the convention \(J_1J_1=(-1)(-1)=1\) respects.

    We can further extrapolate to the inverse of a 1x1 matrix \(A=[a_{11}]\): taking \(R_{11}\) to be the empty \(0\times0\) matrix, whose determinant is \(1\) by convention, the formula gives \((A^{-1})_{11}=1/|A|=1/a_{11}\), which again is the inverse of the 1x1 matrix.
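
    A small sketch of the two by two extrapolation under the convention \(J_1\equiv-1\), so that \(J_1^{\,i+j}\) is just a sign (the code is illustrative, not from the draft):

    ```python
    # Sketch of the 2x2 case with the convention J_1 = -1, so |r J_1^{i+j}|
    # for a 1x1 "minor" r is just (-1)^{i+j} times its single entry.
    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((2, 2))

    inv = np.empty((2, 2))
    for i in range(2):
        for j in range(2):
            r = np.delete(np.delete(A, j, axis=0), i, axis=1)  # 1x1 minor R_{ij}
            inv[i, j] = (-1) ** (i + j) * r[0, 0]              # J_1^{i+j} = (-1)^{i+j}
    inv /= np.linalg.det(A)

    assert np.allclose(inv, np.linalg.inv(A))
    ```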

    Justification that \(J_1=-1\):

    For \(J_2\) and \(J_3\), \(|J_n|=-1\); in general \(|J_n|=(-1)^{\lfloor n/2\rfloor}\), since reversing the \(n\) rows of the identity takes \(\lfloor n/2\rfloor\) row swaps.
    \(|AB|=|A||B|\).
    Therefore \(|AJ_n|=-|A|\) for \(n=2,3\).
    If one extrapolates to the case \(n=1\), then for \(|AJ_1|=-|A|\) to remain true we must take \(J_1=-1\).
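
    The determinant pattern is easy to tabulate; a short sketch checking \(|J_n|=(-1)^{\lfloor n/2\rfloor}\) for small \(n\):

    ```python
    # Check |J_n| = (-1)^{floor(n/2)}: reversing n rows takes floor(n/2) swaps.
    import numpy as np

    for n in range(1, 9):
        J_n = np.fliplr(np.eye(n))       # n x n antidiagonal exchange matrix
        det = round(np.linalg.det(J_n))
        assert det == (-1) ** (n // 2)
        print(n, det)                    # -1 for n = 2, 3, 6, 7; +1 for n = 1, 4, 5, 8
    ```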

    Other Concept

    Looking at the same formula, we can define four 3x3 matrices from, respectively, the top-left, top-right, bottom-left and bottom-right entries of the nine 2x2 determinants: \[A^{TL}=\begin{bmatrix} a_{22} & a_{13} & a_{12} \\ a_{23} & a_{11} & a_{13} \\ a_{21} & a_{12} & a_{11} \end{bmatrix} \\ A^{TR}=\begin{bmatrix} a_{23} & a_{12} & a_{13} \\ a_{21} & a_{13} & a_{11} \\ a_{22} & a_{11} & a_{12} \end{bmatrix} \\ A^{BL}=\begin{bmatrix} a_{32} & a_{33} & a_{22} \\ a_{33} & a_{31} & a_{23} \\ a_{31} & a_{32} & a_{21} \end{bmatrix} \\ A^{BR}=\begin{bmatrix} a_{33} & a_{32} & a_{23} \\ a_{31} & a_{33} & a_{21} \\ a_{32} & a_{31} & a_{22} \end{bmatrix}\]

    These matrices are constructed from the rows of the original matrix as follows: if \(R_i\) is the \(i^{th}\) row of the original matrix, and \(P_n\) is an operator which cyclically shifts the entries of that row \(n\) places to the right (so \(P_3=P_0=I\)), we have \[A^{TL}=\begin{bmatrix} R_2^TP_2 & R_1^TP_1 & R_1^TP_2 \end{bmatrix} =A^{r(211)}_{p(212)}\\ A^{TR}=\begin{bmatrix} R_2^TP_1 & R_1^TP_2 & R_1^TP_1 \end{bmatrix} =A^{r(211)}_{p(121)}\\ A^{BL}=\begin{bmatrix} R_3^TP_2 & R_3^TP_1 & R_2^TP_2 \end{bmatrix} =A^{r(332)}_{p(212)}\\ A^{BR}=\begin{bmatrix} R_3^TP_1 & R_3^TP_2 & R_2^TP_1 \end{bmatrix} =A^{r(332)}_{p(121)}\]

    These matrices are then such that \[\frac{1}{|A|}(A^{TL} \circ A^{BR} - A^{TR} \circ A^{BL}) = A^{-1}\]

    Where \(\circ\) is the Hadamard product or element-wise product.
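
    A sketch (with an illustrative helper `build`, not from the draft) constructing the four matrices from the cycled rows and checking the Hadamard identity:

    ```python
    # Sketch: build A^{TL}, A^{TR}, A^{BL}, A^{BR} from cycled rows of A and
    # check the Hadamard-product identity against numpy's inverse.
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((3, 3))

    def build(rows, shifts):
        """Columns are the given 1-based rows of A, each cyclically shifted right."""
        return np.column_stack([np.roll(A[r - 1], s) for r, s in zip(rows, shifts)])

    A_TL = build((2, 1, 1), (2, 1, 2))
    A_TR = build((2, 1, 1), (1, 2, 1))
    A_BL = build((3, 3, 2), (2, 1, 2))
    A_BR = build((3, 3, 2), (1, 2, 1))

    inv = (A_TL * A_BR - A_TR * A_BL) / np.linalg.det(A)   # * is element-wise
    assert np.allclose(inv, np.linalg.inv(A))
    ```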

    Imaginary Equivalence

    If we have the parallel that \(AJ\) is to \(A\) what \(-a\) is to \(a\), then what of the parallel of \(ia\) to \(a\)? Consider the operation \[A \to J((JA)J) = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\bigg(\bigg(\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} a & b \\ c & d \end{bmatrix}\bigg)\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\bigg) =\begin{bmatrix} b & a \\ d & c\end{bmatrix}\] This just ends up back at the \(-1\) equivalence, since \(J((JA)J)=(JJ)AJ=AJ\).

    Slater Determinants

    If we have the matrix of individual wavefunctions for a two-electron system \[\begin{bmatrix} \phi_1(r_1) & \phi_2(r_1) \\ \phi_1(r_2) & \phi_2(r_2) \end{bmatrix}\]

    We know that the antisymmetric wavefunction of that system is, up to normalization, \[\Psi(r_1,r_2)= \begin{vmatrix} \phi_1(r_1) & \phi_2(r_1) \\ \phi_1(r_2) & \phi_2(r_2) \end{vmatrix}\]

    But from the formula \(|A|(A^{-1})_{ij}=|R_{ij}J^{(i+j)}|\) above we know that \(|A|A^{-1}=M\), where \(M\) is some matrix each element of which is itself a determinant; in the Slater case, the \((i,j)\) element is the smaller determinant obtained by removing orbital \(i\) and coordinate \(r_j\).

    For growing numbers of electrons these states can be expressed in terms of the old states. \[\Psi(r_1)=\phi_1(r_1) \\ \sqrt{2}\Psi(r_1,r_2)=\phi_1(r_1)\phi_2(r_2)-\phi_2(r_1)\phi_1(r_2) = \Psi(r_1)\phi_2(r_2)-\phi_2(r_1)\Psi(r_2) \\ \sqrt{6}\Psi(r_1,r_2,r_3)=\dots=\sqrt{2}\big(\Psi(r_1,r_2)\phi_3(r_3)+\Psi(r_2,r_3)\phi_3(r_1)+\Psi(r_3,r_1)\phi_3(r_2)\big)\] (the factor of \(\sqrt{2}\) on the right carries the two-electron normalization through).
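
    The three-electron expansion can be checked numerically by letting random numbers stand in for the orbital values \(\phi_j(r_i)\); a minimal sketch, with \(M_{ij}=\phi_j(r_i)\):

    ```python
    # Sketch: random numbers stand in for the orbital values, M[i, j] = phi_{j+1}(r_{i+1}),
    # so det(M) = sqrt(6) Psi(r1,r2,r3) and each 2x2 sub-determinant is sqrt(2) Psi(.,.).
    import numpy as np

    rng = np.random.default_rng(4)
    M = rng.standard_normal((3, 3))

    def det2(a, b):
        """sqrt(2) * Psi(r_a, r_b): the 2x2 determinant on rows a, b (0-based)."""
        return np.linalg.det(M[[a, b]][:, :2])

    phi3 = M[:, 2]                      # phi_3 evaluated at r1, r2, r3
    lhs = np.linalg.det(M)              # sqrt(6) * Psi(r1, r2, r3)
    rhs = (det2(0, 1) * phi3[2]         # sqrt(2) Psi(r1,r2) phi_3(r3)
           + det2(1, 2) * phi3[0]       # sqrt(2) Psi(r2,r3) phi_3(r1)
           + det2(2, 0) * phi3[1])      # sqrt(2) Psi(r3,r1) phi_3(r2)
    assert np.isclose(lhs, rhs)
    ```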

    This ends with the general recursion (coordinate indices taken cyclically, so \(r_{N+k}\equiv r_k\)) \[\sqrt{N}\,\Psi(r_1,\dots,r_N)=\sum_{i=1}^{N}(-1)^{(i+N)+(i-1)(N-i)}\,\Psi(r_{i+1},r_{i+2},\dots,r_{i+N-1})\,\phi_N(r_i)\] where the sign comes from the cofactor expansion along the last column together with the cyclic reordering of the remaining coordinates; for odd \(N\) every sign is \(+1\), as in the three-electron case above.
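
    A sketch checking this recursion (the sign factor is a reconstruction of the cofactor bookkeeping, not taken verbatim from the draft) for \(N=2,\dots,6\):

    ```python
    # Sketch checking the cyclic recursion for N = 2..6, with M[i, j] = phi_{j+1}(r_{i+1})
    # and random values standing in for the orbitals; det(M) = sqrt(N!) Psi(r_1..r_N).
    import numpy as np

    rng = np.random.default_rng(5)
    for N in range(2, 7):
        M = rng.standard_normal((N, N))
        total = 0.0
        for i in range(1, N + 1):                          # 1-based coordinate index
            rows = [(i + k - 1) % N for k in range(1, N)]  # cyclic r_{i+1}..r_{i+N-1}, 0-based
            minor = np.linalg.det(M[rows][:, :N - 1])      # sqrt((N-1)!) Psi of those coords
            sign = (-1) ** ((i + N) + (i - 1) * (N - i))   # +1 for every term when N is odd
            total += sign * minor * M[i - 1, N - 1]        # ... * phi_N(r_i)
        # total = sqrt((N-1)!) * sqrt(N) * Psi = sqrt(N!) * Psi = det(M)
        assert np.isclose(total, np.linalg.det(M))
    ```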

    But we know \[\Psi(r_1,\dots,r_N)=\prod_i \psi_{n_i}(r_i)\]

    Combination

    If