    Polynomial Representation in Alternative Matrix Form

    Abstract

    A new representation of polynomials is investigated by defining the concepts of summation (\(\Sigma\)) and product (\(\Pi\)) inverses. An augmented matrix product is then defined, under which two polynomials can be multiplied while the individual terms of the product are retained in the resulting matrix. The approach is then shown to extend to compositions of polynomials by using block matrices.

    Introduction

    This paper addresses the problem \(P_3(x) = P_1(x) P_2(x)\), where each \(P\) is a polynomial with (for the moment) constant powers of \(x\). After searching for a suitable representation, the following technique was found.

    Let \[P = \alpha x^2 + \beta x + \gamma.\]

    Here we introduce the concept of an inverse sum, which turns a string of terms such as the polynomial above into a vector, for example \[P=(\alpha x^2+\beta x+\gamma) \to v_i = \begin{bmatrix} \alpha x^2 \\ \beta x \\ \gamma \end{bmatrix}\]

    such that \[P = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} \cdot \begin{bmatrix} \alpha x^2 \\ \beta x \\ \gamma \end{bmatrix}\]
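
    As a concrete check, the inverse sum can be sketched in Python with the sympy library (the helper name term_vector is our own, not a standard API):

        import sympy as sp

        x, alpha, beta, gamma = sp.symbols('x alpha beta gamma')

        def term_vector(p, var):
            """Inverse sum: split a polynomial into the column vector of its
            terms, ordered from the highest power of var down to the constant."""
            coeffs = sp.Poly(p, var).all_coeffs()   # [alpha, beta, gamma]
            n = len(coeffs)
            return sp.Matrix([c * var**(n - 1 - i) for i, c in enumerate(coeffs)])

        P = alpha*x**2 + beta*x + gamma
        v = term_vector(P, x)      # Matrix([[alpha*x**2], [beta*x], [gamma]])

        # Dotting with a row of ones undoes the inverse sum and recovers P.
        assert sp.expand((sp.ones(1, 3) * v)[0] - P) == 0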

    We then extrude the vector (the inverse product) as follows \[v_i \to M_{ij} = \begin{bmatrix} x & x & \alpha \\ 1 & x & \beta \\ 1 & 1 & \gamma \end{bmatrix}\]

    such that \[v_i = \begin{bmatrix} x & x & \alpha \\ 1 & x & \beta \\ 1 & 1 & \gamma \end{bmatrix} \star \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}\]

    where the \(\star\) augmented matrix-vector product is defined as \[M \star u = \prod_{j} M_{ij}u_j = v_i\]
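
    A minimal sketch of the extrusion and the \(\star\) matrix-vector product, under the same sympy assumption (the names representation_matrix and star_mv are ours):

        import sympy as sp

        def representation_matrix(coeffs, var):
            """Inverse product: extrude a coefficient list [c_{n-1}, ..., c_0]
            into an n x n matrix.  Row i carries i ones, then n-1-i copies
            of var, then its coefficient in the last column."""
            n = len(coeffs)
            return sp.Matrix([[1]*i + [var]*(n - 1 - i) + [coeffs[i]]
                              for i in range(n)])

        def star_mv(M, u):
            """Augmented matrix-vector product: v_i = prod_j M_ij * u_j."""
            return sp.Matrix([sp.Mul(*[M[i, j] * u[j] for j in range(M.cols)])
                              for i in range(M.rows)])

        x, alpha, beta, gamma = sp.symbols('x alpha beta gamma')
        M = representation_matrix([alpha, beta, gamma], x)
        print(star_mv(M, sp.ones(3, 1)))  # Matrix([[alpha*x**2], [beta*x], [gamma]])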

    and the \(\star\) augmented matrix-matrix product is defined as \[A \star B = \prod_{j} A_{ij}B_{jk} = C_{ik}\]

    as opposed to the regular matrix-matrix product \[A \cdot B = \sum_{j} A_{ij}B_{jk} = C_{ik}\]
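
    The matrix-matrix version differs from the ordinary product only in swapping the sum over \(j\) for a product; a matching sketch (star_mm is again our own name):

        import sympy as sp

        def star_mm(A, B):
            """Augmented matrix-matrix product: C_ik = prod_j A_ij * B_jk,
            i.e. the ordinary matrix product with the sum over j replaced
            by a product over j."""
            return sp.Matrix(A.rows, B.cols,
                             lambda i, k: sp.Mul(*[A[i, j] * B[j, k]
                                                   for j in range(A.cols)]))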

    With these tools it is possible to proceed to the multiplication of polynomials.

    Multiplying Polynomials

    For successful polynomial multiplication it was found that the right-hand matrix in the \(\star\) product must be the transpose of its representation matrix. For example, if there are two polynomials \[P_1=\alpha x^2+\beta x+\gamma,\] \[P_2=a x^2+b x+c,\]

    which can be mapped to their respective matrix representations \[P_1 \to A_{ij} = \begin{bmatrix} x & x & \alpha \\ 1 & x & \beta \\ 1 & 1 & \gamma \end{bmatrix},\]

    \[P_2 \to B_{ij} = \begin{bmatrix} x & x & a \\ 1 & x & b \\ 1 & 1 & c \end{bmatrix},\]

    then their product \(P_3\) will be \[P_3=\sum_{i,k} A_{ij} \star B_{jk}^T.\]

    This can be written explicitly in matrix form as \[P_3=\sum_{i,k} \begin{bmatrix} x & x & \alpha \\ 1 & x & \beta \\ 1 & 1 & \gamma \end{bmatrix} \star \begin{bmatrix} x & 1 & 1 \\ x & x & 1 \\ a & b & c \end{bmatrix},\]

    which, keeping the positions of terms for clarity, is \[P_3=\sum_{i,k} \begin{bmatrix} xxxx \alpha a & x1xx \alpha b & x1x1 \alpha c \\ 1xxx \beta a & 11xx \beta b & 11x1 \beta c \\ 1x1x \gamma a & 111x \gamma b & 1111 \gamma c \end{bmatrix} = \sum_{i,k} \begin{bmatrix} \alpha a x^4 & \alpha b x^3 & \alpha c x^2\\ \beta a x^3& \beta b x^2 & \beta c x\\ \gamma a x^2 & \gamma b x& \gamma c \end{bmatrix}.\]

    After the sum it can be seen that \[P_3 = \alpha a x^4 + (\alpha b + \beta a)x^3 + (\alpha c + \beta b + \gamma a)x^2 + (\beta c + \gamma b)x + \gamma c,\]

    which is indeed \((\alpha x^2+\beta x+\gamma)(a x^2+b x+c)\) by standard multiplication.
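
    Putting the sketches together, the whole calculation above can be verified mechanically (reusing representation_matrix and star_mm from the earlier sketches):

        import sympy as sp

        x, alpha, beta, gamma, a, b, c = sp.symbols('x alpha beta gamma a b c')

        A = representation_matrix([alpha, beta, gamma], x)   # P1
        B = representation_matrix([a, b, c], x)              # P2
        C = star_mm(A, B.T)     # the 3 x 3 matrix of the nine terms above

        P3 = sum(C)             # sum over all i and k
        P1 = alpha*x**2 + beta*x + gamma
        P2 = a*x**2 + b*x + c
        assert sp.expand(P3 - P1*P2) == 0   # agrees with standard multiplication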

    To obtain the representation of \(P_3\), the process can be repeated, giving \[P_3 \to C_{ij}= \begin{bmatrix} x & x & x & x & \alpha a \\ 1 & x & x & x & \alpha b + \beta a \\ 1 & 1 & x & x & \alpha c + \beta b + \gamma a \\ 1 & 1 & 1 & x & \beta c + \gamma b \\ 1 & 1 & 1 & 1 & \gamma c \end{bmatrix}.\]
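
    The same five-row representation can be generated programmatically (a sketch reusing representation_matrix and star_mv from above; the coefficients are obtained here by ordinary expansion):

        import sympy as sp

        x, alpha, beta, gamma, a, b, c = sp.symbols('x alpha beta gamma a b c')
        P1 = alpha*x**2 + beta*x + gamma
        P2 = a*x**2 + b*x + c

        # Coefficients of P3, highest power first, via ordinary expansion.
        coeffs3 = sp.Poly(sp.expand(P1*P2), x).all_coeffs()
        C3 = representation_matrix(coeffs3, x)   # the 5 x 5 matrix above

        # Collapsing the representation recovers P3.
        assert sp.expand(sum(star_mv(C3, sp.ones(5, 1))) - P1*P2) == 0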

    The coefficient column is clearly the discrete convolution of the two independent coefficient vectors (the rightmost columns of the two representations). This can be separated into a sum of polynomial representations \[P_3 \to C_{ij}= \begin{bmatrix} x & x & x & x & \alpha a \\ 1 & x & x & x & \alpha b \\ 1 & 1 & x & x & \alpha c \\ 1 & 1 & 1 & x & 0 \\ 1 & 1 & 1 & 1 & 0 \end{bmatrix} + \begin{bmatrix} x & x & x & x & 0 \\ 1 & x & x & x & \beta a \\ 1 & 1 & x & x & \beta b \\ 1 & 1 & 1 & x & \beta c \\ 1 & 1 & 1 & 1 & 0 \end{bmatrix} + \begin{bmatrix} x & x & x & x & 0\\ 1 & x & x & x & 0\\ 1 & 1 & x & x & \gamma a \\ 1 & 1 & 1 & x & \gamma b \\ 1 & 1 & 1 & 1 & \gamma c \end{bmatrix}.\]

    Any row with a coefficient of \(0\) can be deleted, leading to \[P_3 \to C_{ij}= \begin{bmatrix} x & x & x & x & \alpha a \\ 1 & x & x & x & \alpha b \\ 1 & 1 & x & x & \alpha c \end{bmatrix} + \begin{bmatrix} 1 & x & x & x & \beta a \\ 1 & 1 & x & x & \beta b \\ 1 & 1 & 1 & x & \beta c \end{bmatrix} + \begin{bmatrix} 1 & 1 & x & x & \gamma a \\ 1 & 1 & 1 & x & \gamma b \\ 1 & 1 & 1 & 1 & \gamma c \end{bmatrix}.\]

    Any column which is constant throughout a matrix can be taken out as a common monomial factor. In this case that is two columns of \(x\) and the factor \(\alpha\) from the coefficient column in the first term (giving \(\alpha x^2\)); one column of \(1\), one column of \(x\), and the factor \(\beta\) in the second term (giving \(\beta x\)); and so on, leading to \[P_3 \to C_{ij}= \alpha x^2\begin{bmatrix} x & x & a \\ 1 & x & b \\ 1 & 1 & c \end{bmatrix} + \beta x \begin{bmatrix} x & x & a \\ 1 & x & b \\ 1 & 1 & c \end{bmatrix} + \gamma \begin{bmatrix} x & x & a \\ 1 & x & b \\ 1 & 1 & c \end{bmatrix}.\]

    This clearly leaves a common factor of the polynomial represented by the matrix, namely \(P_2\). The equation could equally have been written in representation form; however, it is not yet clear how to move directly from a product of two polynomial representations to a single new (larger) representation.
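
    This factoring of common columns can also be checked mechanically; the sketch below reuses star_mv from above, and shifted_block is our own illustrative helper, not notation from the text:

        import sympy as sp

        def shifted_block(t, c1, c2, var):
            """Hypothetical helper: the t-th surviving block after the zero
            rows are deleted -- rows t .. t+len(c2)-1 of the full pattern,
            with coefficient column c1[t] * c2."""
            n = len(c1) + len(c2) - 1   # rows of the full representation
            return sp.Matrix([[1]*(t + j) + [var]*(n - 1 - t - j) + [c1[t] * cj]
                              for j, cj in enumerate(c2)])

        x, alpha, beta, gamma, a, b, c = sp.symbols('x alpha beta gamma a b c')
        c1, c2 = [alpha, beta, gamma], [a, b, c]
        P2 = a*x**2 + b*x + c

        # Each block collapses to one term of P1 times the whole of P2.
        for t in range(3):
            block = shifted_block(t, c1, c2, x)
            block_poly = sum(star_mv(block, sp.ones(5, 1)))
            assert sp.expand(block_poly - c1[t] * x**(2 - t) * P2) == 0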

    It is hoped that, with some clever notation, the long strings of \(x\)'s and \(1\)'s need not be written out in the future.