ABSTRACT We investigate an apparently fundamental operation on commonly occurring mathematical series. MAIN If we take the hypergeometric series for example: $$ {}_2F_1(a,b;c;x) = \sum_{k=0}^\infty \frac{(a)_k (b)_k}{(c)_k\, k!} x^k $$ we can notice that for the transform $$ \mathcal{T}_n[f(n)](m) = \prod_{n=1}^m f(n) = g(m) $$ which is almost the indefinite product, we have the inverse transform of the summation kernel (the term ratio) $$ \mathcal{T}^{-1}_k\left[\frac{(a)_k (b)_k}{(c)_k\, k!}\right](n) = \frac{(n+a-1)(n+b-1)}{(n+c-1)\,n} $$ we now proceed to define a generating function of this new kernel as $$ G(t) = \sum_{n=1}^\infty \frac{(n+a-1)(n+b-1)}{(n+c-1)\,n}\, t^n $$ with (by partial fractions; a limit is taken when $c = 1$) $$ G(t) = \frac{t}{1-t} - \frac{(a-1)(b-1)}{c-1}\log(1-t) - \frac{(c-a)(c-b)}{c\,(c-1)}\, t\; {}_2F_1(1,c;c+1;t) $$ for an example of $f(x) = \frac{2}{\pi}K(x)$ with a = b = 1/2, c = 1, then $$ G(t) = \frac{\operatorname{Li}_2(t)}{4}-\frac{t}{t-1}+\log (1-t) $$ we have that the inverse Z-transform of $G(\frac{1}{t})$ gives $$ \mathcal{Z}^{-1}\left[G\left(\tfrac{1}{t}\right)\right](n) = \frac{(2n-1)^2}{4n^2}, \quad n>0 $$ and we can re-extrude this as $$ \prod_{k=1}^n \frac{(2k-1)^2\, x}{4k^2} = \frac{\Gamma(\frac{1}{2}+n)^2\, x^n}{\pi\, \Gamma(1 + n)^2} $$ yielding $$ \sum_{n=0}^\infty \prod_{k=1}^n \frac{(2k-1)^2\, x}{4k^2} = \frac{2}{\pi}K(x) $$ TRANSFORM From this we essentially have an operator $Q_x$ that maps a function to the discrete difference reduced term; for the hypergeometric example this would be $$ Q_x[\,{}_2F_1(a,b;c;x)](n) = \frac{(n+a-1)(n+b-1)}{(n+c-1)\,n}. $$ The inverse operation $Q^{-1}$ is then exactly $$ Q^{-1}_n[\square](x) = \sum_{m=0}^\infty \prod_{n=1}^m x\, \square $$ this is used as $$ Q^{-1}_n[f(n)](x) = \sum_{m=0}^\infty \prod_{n=1}^m x\, f(n) $$ this means the operator Q is a composition of the "coefficient of" operator, commonly denoted $[x^n]$, and the discrete difference derivative $\Delta^*_n$, as $$ Q_x[\square](n) = \Delta^*_n\, [x^n]\, \square $$ CONNECTION TO MELLIN TRANSFORM We can connect this to the Mellin transform and the Ramanujan master theorem, which essentially extracts coefficients. For a function $$ f(x) = \sum_{k=0}^\infty \frac{(-1)^k}{k!}\, \phi(k)\, x^k $$ we have that the Mellin transform is related to the coefficient function by $$ \mathcal{M}[f](s) = \Gamma(s)\,\phi(-s) $$ for suitable functions. In effect this becomes the method of coefficient extraction, but brings a sign flip, σ, operation in.
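The kernel-to-function round trip above can be spot-checked numerically; a minimal sketch, assuming the parameter convention $K(m) = \mathrm{EllipticK}[m]$ so that ${}_2F_1(\frac12,\frac12;1;x) = \frac{2}{\pi}K(x)$:

```python
# Rebuild f(x) = (2/pi) K(x) by summing cumulative products of the kernel
# (2k-1)^2 x / (4 k^2); mpmath's ellipk uses the parameter convention.
from mpmath import mp, mpf, pi, ellipk

mp.dps = 30
x = mpf(1) / 3

total, term = mpf(0), mpf(1)
for k in range(1, 200):
    total += term                     # term = prod_{j<k} (2j-1)^2 x / (4 j^2)
    term *= (2 * k - 1) ** 2 * x / (4 * k ** 2)

assert abs(total - 2 / pi * ellipk(x)) < mpf(10) ** -25
```

The ratio of consecutive terms tends to $x$, so 200 terms are ample at $x = 1/3$.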
Thus for a function defined as in equation [eqn:RMT] we have $$ Q f = \Delta^*\, \sigma\, \Gamma(s)^{-1}\, \mathcal{M} f $$ then an operator G would indicate summing over positive non-zero integers $$ G_n[\square](t) = \sum_{n=1}^\infty t^n\, \square $$ some important identities that are not immediately obvious help when reducing more complex series expansions such as elliptic integrals: $$ \prod_{k=1}^n \frac{(2k)!}{(2k-2)!} = \Gamma(2n+1) \\ \prod_{k=1}^n \frac{(mk)!}{(mk-m)!} = \Gamma(mn+1) \\ \prod_{k=1}^n \frac{(mk+b)!}{(mk-m+b)!} = \frac{(mn+b)!}{b!} \\ \prod_{k=1}^n \frac{3-2k}{1-2k} = \frac{1}{1-2n} $$ EXAMPLES Transforms from function to generating function: $$ G Q[e^x] = -\log(1-t) \\ G Q[e^{-x}] = \log(1-t) \\ GQ\left[\frac{1}{1-x}\right] = \frac{t}{1-t} \\ GQ\left[\frac{1}{1+x}\right] = -\frac{t}{1-t} \\ GQ\left[I_0(\sqrt{x})\right] = \frac{\operatorname{Li}_2(t)}{4} \\ GQ\left[J_0(\sqrt{x})\right] = -\frac{\operatorname{Li}_2(t)}{4} \\ GQ\left[\tfrac{2}{\pi} K(x)\right] = \frac{\operatorname{Li}_2(t)}{4}-\frac{t}{t-1}+\log (1-t) \\ GQ\left[\tfrac{2}{\pi} E(x)\right] = \frac{3\operatorname{Li}_2(t)}{4}+\frac{t}{1-t}+2\log (1-t) \\ GQ\left[\frac{\sin^{-1}(\sqrt{x})}{\sqrt{x}}\right] = 3 + \frac{1}{1-t} - \frac{4\tanh^{-1}(\sqrt{t})}{\sqrt{t}} - \frac{1}{2}\log(1-t) \\ GQ\left[\frac{3\sin^{-1}(\sqrt{x}/3)}{\sqrt{x}}\right] = -\frac{t}{9(t-1)}-\frac{1}{18} \log (1-t)-\frac{4\tanh^{-1}(\sqrt{t})}{9\sqrt{t}}+\frac{4}{9} \\ GQ\left[(1-x)^{-5/9}\right] = \frac{t}{1-t} + \frac{4}{9} \log(1-t) \\ GQ\left[(1-x)^{a-1}\right] = \frac{t}{1-t} + a \log(1-t)\\ GQ\left[\frac{\tanh^{-1}(\sqrt{x})}{\sqrt{x}}\right] = \frac{t}{1-t} + 2 - \frac{2\tanh^{-1}(\sqrt{t})}{\sqrt{t}} \\ \cosh(\sqrt{x}) \to \sqrt{t}\tanh^{-1}(\sqrt{t}) + \frac{1}{2}\log(1-t) \\ \cos(\sqrt{x}) \to -\sqrt{t}\tanh^{-1}(\sqrt{t}) - \frac{1}{2}\log(1-t) \\ \sum_{k=0}^\infty \frac{x^k}{(3k)!} \to \frac{t}{2} \, {}_2F_1\left(\tfrac{1}{3},1;\tfrac{4}{3};t\right)-\frac{t}{2} \, {}_2F_1\left(\tfrac{2}{3},1;\tfrac{5}{3};t\right)-\frac{1}{6} \log (1-t) $$ here we see that $$ GQ\left[\tfrac{2}{\pi} K(x)\right] = GQ\left[I_0(\sqrt{x})\right] + GQ\left[\frac{1}{1-x}\right] + G Q[e^{-x}] $$ There could be some secret equivalence between the function on the left and those on the right. I.e. the elliptic K function may transform under an operator and the combination of functions on the right may transform in analogy under a different operator. For example $$ tD_t : \mp\log(1-t) \to \pm\frac{t}{1-t} $$ so this meta derivative converts $$ e^{\pm x} \to \frac{1}{1\mp x} $$ this can be seen to be similar to an inverse Borel transform! From the above list of transforms it is clear that we see repeating units or "elements", for example log(1 − t) is very common.
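Two of these table entries can be spot-checked numerically; a sketch, using the fact that the kernel of $f$ is its term ratio $a_n/a_{n-1}$ and $GQ[f](t) = \sum_{n\ge1} \mathrm{kernel}(n)\, t^n$:

```python
# cosh(sqrt(x)) = sum x^k/(2k)! has kernel 1/(2n(2n-1));
# sum x^k/(3k)!   has kernel 1/(3n(3n-1)(3n-2)).
from mpmath import mp, mpf, sqrt, atanh, log, hyp2f1

mp.dps = 25
t = mpf(1) / 5

g1 = sum(t ** n / (2 * n * (2 * n - 1)) for n in range(1, 120))
assert abs(g1 - (sqrt(t) * atanh(sqrt(t)) + log(1 - t) / 2)) < mpf(10) ** -20

g2 = sum(t ** n / (3 * n * (3 * n - 1) * (3 * n - 2)) for n in range(1, 120))
rhs = (t / 2 * hyp2f1(mpf(1) / 3, 1, mpf(4) / 3, t)
       - t / 2 * hyp2f1(mpf(2) / 3, 1, mpf(5) / 3, t)
       - log(1 - t) / 6)
assert abs(g2 - rhs) < mpf(10) ** -20
```

The second check rests on ${}_2F_1(\frac13,1;\frac43;t) = \sum_k \frac{t^k}{3k+1}$ and ${}_2F_1(\frac23,1;\frac53;t) = \sum_k \frac{2t^k}{3k+2}$.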
It may be instructive to find a naming system for these units to give a compact representation of the resulting function. Whether these elements form some kind of basis for the underlying function space is yet to be investigated. We appear to have functions of the form $x\,{}_2F_1(a, b; c; x)$, or at least for shorthand $$ -\log(1-t) = t\,{}_2F_1(1,1;2;t) = t_{1,1;2} \\ \sqrt{t}\sin^{-1}(\sqrt{t}) = t\,{}_2F_1\left(\tfrac{1}{2},\tfrac{1}{2};\tfrac{3}{2};t\right) = t_{\frac{1}{2},\frac{1}{2};\frac{3}{2}} $$ with this we can immediately see $$ \sum_{k=0}^\infty \frac{x^k}{(3k)!} \to \frac{t_{\frac{1}{3},1;\frac{4}{3}}}{2}-\frac{t_{\frac{2}{3},1;\frac{5}{3}}}{2}+\frac{t_{1,1;2}}{6} = \begin{pmatrix} \frac{1}{2} & -\frac{1}{2} & \frac{1}{6} \end{pmatrix} \begin{pmatrix} t_{\frac{1}{3},1;\frac{4}{3}} \\ t_{\frac{2}{3},1;\frac{5}{3}} \\ t_{1,1;2} \end{pmatrix} $$ important terms might include $$ \sum_{n=1}^\infty H_n t^n = -\frac{\log(1-t)}{1-t} = \frac{t_{1,1;2}}{1-t} $$ to handle this we would need to evaluate $$ \prod_{k=1}^n H_k = f(n) $$ and apparently little is understood about these terms in OEIS A097423 and A097424. DERIVATIVES Consider the derivative of a sequence; we have $$ \frac{d}{dx} \sum_{k=0}^\infty a_k x^k = \sum_{k=0}^\infty (k+1)a_{k+1}x^k $$ where we have made sure to keep the sequence index running from 0 to ∞. We can write $$ \Delta^*_k[\,k+1\,](n) = \frac{n+1}{n} $$ which tells us $$ GQ[f'(x)] = \sum_{n=1}^\infty \frac{n+1}{n}\, \Delta^*_k[a_{k+1}](n)\, t^n $$ for example, for $e^x$ we have $a_k = 1/k!$, and the derivative gives $$ GQ[e^x] = \sum_{n=1}^\infty \frac{n+1}{n}\, \Delta^*_k\left[\frac{1}{(k+1)!}\right](n)\, t^n $$ and $\Delta^*_k\left[\frac{1}{(k+1)!}\right](n) = (n+1)^{-1}$, which consistently gives $$ GQ[e^x] = \sum_{n=1}^\infty \frac{t^n}{n} = -\log(1-t) $$ this is powerful, and we can use this to calculate unknown kernels $\Delta^*_k$, and potentially solve differential equations in a mirror domain. In general we have a beautiful relationship $$ \frac{d^n}{dx^n} \sum_{k=0}^\infty a_k x^k = \sum_{k=0}^\infty (k+1)_n\, a_{k+n}\, x^k $$ for $e^x$ this means $$ \Delta^*_k\left[\frac{(k+1)_n}{\Gamma(k+n+1)}\right](m) = \Delta^*_k\left[\frac{1}{\Gamma(k+1)}\right](m) = \frac{1}{m} $$
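The derivative rule can be checked end-to-end on $e^x$; a sketch (the kernel here is just the exact term ratio, computed with rationals):

```python
# Kernel of the shifted coefficients a_{k+1} = 1/(k+1)! is 1/(n+1);
# weighting by (n+1)/n restores 1/n, whose generating function is -log(1-t).
from fractions import Fraction
from math import factorial, log, isclose

def kernel(seq, n):              # Delta*_k evaluated at n: a_n / a_{n-1}
    return seq(n) / seq(n - 1)

a_shift = lambda k: Fraction(1, factorial(k + 1))
for n in range(1, 10):
    assert kernel(a_shift, n) == Fraction(1, n + 1)

t = 0.3
g = float(sum(Fraction(n + 1, n) * kernel(a_shift, n) * Fraction(3, 10) ** n
              for n in range(1, 80)))
assert isclose(g, -log(1 - t), rel_tol=1e-12)
```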
ABSTRACT Consider a partial product transform and how this is related to gamma functions and Pochhammer symbols. MAIN There is a nice mapping between pairs of functions under a transform of a function f(n) we will loosely define as $$ \mathcal{T}[f] = \prod_{n=1}^m f(n) \to g(m) $$ some key examples include $$ \mathcal{T}[n^k] = \Gamma^k(m+1) \\ \mathcal{T}[n+k-1] = \frac{\Gamma(m+k)}{\Gamma(k)}\\ \mathcal{T}\left[1+\frac{z}{n}\right] = \frac{(1+z)_m}{m!} \\ \mathcal{T}\left[\left(1+\frac{z}{n}\right)e^{-z/n}\right] = \frac{(1+z)_m}{e^{z H_m}\, m!} \\ z e^{\gamma z}\,\mathcal{T}\left[\left(1+\frac{z}{n}\right)e^{-z/n}\right] = \frac{z\, e^{-z\psi(m+1)}\,(1+z)_m}{m!} = \frac{1}{\Gamma(z;m)}\\ \mathcal{T}\left[1+\frac{1}{n}\right] = 1+m \\ \mathcal{T}\left[\left(1+\frac{z}{n^2}\right)e^{-z/n^2}\right] = \frac{(1-i\sqrt{z})_m(1+i\sqrt{z})_m}{e^{z H^{(2)}_m}\, m!^2} \\ $$ here Γ(z; m) is a truncation of the infinite product representation. In the limit m → ∞ we get 1/Γ(z), so for simple f(n) this has a lot to do with gamma functions and Pochhammer symbols. Here we consider that $$ \mathcal{T}[a_0 + a_1 n] = a_1^m\, p\!\left(1+\frac{a_0}{a_1};m\right) = a_1^m\, p(1+\lambda;m) $$ for Pochhammer symbol $p(x, m) = (x)_m$. We notice that λ is the eigenvalue of the 1 × 1 matrix $$ M_1 = \frac{1}{a_1}\begin{pmatrix} a_0 \end{pmatrix} $$ We also have $$ \mathcal{T}[a_0 + a_1 n + a_2 n^2] = a_2^m\, p(1+\lambda_1;m)\,p(1+\lambda_2;m) $$ where we have $$ \lambda_{1,2} = \frac{a_1 \pm \sqrt{a_1^2 - 4 a_0 a_2}}{2a_2} $$ we note that these are the eigenvalues of $$ M_2 = \frac{1}{a_2}\begin{pmatrix} a_1 & i a_2 \\ i a_0 & 0 \end{pmatrix} $$ we can then form an ansatz that for any number of terms there is a matrix that gives $$ \mathcal{T}\left[\sum_{k=0}^N a_k n^k\right] = a_N^m \prod_{k=1}^N p(1+\lambda_k,m) $$ for eigenvalues λk of a matrix of the form $$ M_N = \frac{1}{a_N}\begin{pmatrix} a_? & (-1)^{1/N}a_? & \cdots & (-1)^{(N-1)/N} a_? \\ \vdots & & & \end{pmatrix} $$ we can likely absorb the factor of $a_N^m$ into the Pochhammer symbols. We also have a similar relation $$ \mathcal{T}\left[a_0 + \frac{a_1}{n} + \frac{a_2}{n^2} + \cdots\right] \to \frac{a_0^m}{m!^N} \prod_\lambda p(1+\lambda,m) $$ for similar expressions it seems we can express $$ \prod_{n=1}^m \left(1 + \frac{z}{n^k}\right) = \frac{1}{m!^k} \prod_\lambda (1+\lambda)_m $$ where λ are the eigenvalues of the k × k circulant matrix $$ M_k = \begin{pmatrix} 0 & 0 & \cdots & ((-1)^{k+1} z)^{1/k} \\ ((-1)^{k+1} z)^{1/k} & 0 & \cdots & 0 \\ 0 & ((-1)^{k+1} z)^{1/k} & \cdots & 0 \end{pmatrix} $$ which are essentially roots of unity scaled by $z^{1/k}$.
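The quadratic case can be verified directly; a sketch, diagonalising the proposed $M_2$ numerically:

```python
# Check T[a0 + a1 n + a2 n^2] = a2^m (1+l1)_m (1+l2)_m, where l1, l2
# are the eigenvalues of M2 = (1/a2) [[a1, i a2], [i a0, 0]].
import numpy as np

def pochhammer(z, m):
    r = 1.0 + 0.0j
    for j in range(m):
        r *= z + j
    return r

a0, a1, a2, m = 3.0, 2.0, 5.0, 6
lhs = 1.0
for n in range(1, m + 1):
    lhs *= a0 + a1 * n + a2 * n ** 2

M2 = np.array([[a1, 1j * a2], [1j * a0, 0.0]]) / a2
l1, l2 = np.linalg.eigvals(M2)
rhs = a2 ** m * pochhammer(1 + l1, m) * pochhammer(1 + l2, m)
assert abs(lhs - rhs) < 1e-9 * abs(lhs)
```

$M_2$ has trace $a_1/a_2$ and determinant $a_0/a_2$, exactly the symmetric functions the factorisation $a_2(n+\lambda_1)(n+\lambda_2)$ needs.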
This is nice because the structure is only related to the power, and the limit to infinity does not require a very large matrix. Consider nice formulae such as $$ \frac{\psi(z)}{\Gamma(z)} = -e^{2\gamma z}\prod_{k=0}^\infty\left(1-\frac{z}{x_k} \right)e^{z/x_k} $$ where xk are the zeroes of ψ(x). This means the n in the previous series turns into a function. Consider for example a linear function $$ \prod_{n=1}^m \left(1 + \frac{z}{a_0 + a_1 n}\right) = \frac{\left(1+ \frac{a_0+z}{a_1}\right)_m}{\left(1+ \frac{a_0}{a_1}\right)_m} $$ and a non-linear one $$ \prod_{n=1}^m \left(1 + \frac{z}{a_0 + a_1 n + a_2 n^2}\right) = \frac{(1+ \lambda_1(z))_m (1+ \lambda_2(z))_m}{(1+ \lambda_1(0))_m (1+ \lambda_2(0))_m} $$ where for λ(z), the constant term a₀ → a₀ + z in the previous root expressions. It is interesting to consider whether there is a generalised relationship between the product of Pochhammers of some function of the elements (i.e. the roots of the characteristic polynomial), and some finite product with prodegrand relating to the matrix elements. That is to say, for a 2 × 2 matrix, what is the f such that $$ \mathcal{T}[f(A)](m) = (1+\lambda_1(p(A)))_m (1+\lambda_2(p(A)))_m $$ where p(A)=t² − tr(A)t + det(A)? In this language we find that $$ \prod_{n=1}^m \left(\det(A) + \operatorname{tr}(A)\, n + n^2 \right) = (1+\lambda_1)_m (1+ \lambda_2)_m $$ this will likely carry across to other eigensystems. The general expectation is then that $$ \prod_{n=1}^m \left(\sum_{\alpha=0}^N c_\alpha n^\alpha\right) = \prod_{k=1}^N (1+\lambda_k)_m $$ where $$ c_i = \sum_{\{k_l\}} \prod_{l=1}^N \frac{(-1)^{(l+1)k_l}}{l^{k_l}\, k_l !}\, \operatorname{tr}(A^l)^{k_l} $$ where the sum ranges over $$ \sum_{l=1}^N l\, k_l = N-i $$ it then motivates a simple investigation into the distribution of these products of Pochhammers of eigenvalues, especially for random matrices.
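The linear shifted-product identity telescopes, so it can be confirmed exactly with rational arithmetic; a sketch:

```python
# prod_{n=1}^m (1 + z/(a0 + a1 n)) = (1 + (a0+z)/a1)_m / (1 + a0/a1)_m,
# since each factor is (n + (a0+z)/a1) / (n + a0/a1).
from fractions import Fraction

def pochhammer(x, m):
    r = Fraction(1)
    for j in range(m):
        r *= x + j
    return r

a0, a1, z, m = Fraction(2), Fraction(3), Fraction(5), 7
lhs = Fraction(1)
for n in range(1, m + 1):
    lhs *= 1 + z / (a0 + a1 * n)
rhs = pochhammer(1 + (a0 + z) / a1, m) / pochhammer(1 + a0 / a1, m)
assert lhs == rhs
```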
Companion Matrices We can use the companion matrix [1] construction to evaluate some of these products, for example $$ \prod_{n=1}^m 8-6n+n^2 = (-3)_m(-1)_m = (1+\lambda_1)_m(1+\lambda_2)_m $$ if we consider the companion matrix of the polynomial 8 − 6n + n² (with signs arranged so the eigenvalues are the negated roots) we get $$ \begin{pmatrix} 0 & -1 \\ 8 & -6 \end{pmatrix} \to \lambda_k = (-4,-2) $$ likewise $$ \prod_{n=1}^m 24-10n-3n^2+n^3 = (4)_m(-3)_m(-1)_m = (1+\lambda_1)_m(1+\lambda_2)_m(1+\lambda_3)_m $$ if we consider the companion matrix of the polynomial 24 − 10n − 3n² + n³ we get $$ \begin{pmatrix} 0 & -1 & 0 \\ 0 & 0 & -1 \\ 24 & -10 & -3 \end{pmatrix} \to \lambda_k = (3,-4,-2) $$ INFINITE CASE One question is whether we can continue to the infinite expansion type situation? The eigenvalues of an infinite matrix are quite different to those of the finite truncations, and care should be taken. But what we have established is that the 'prodegrand' relates to the characteristic equation of an operator, and the result relates to the eigenvalues of that operator... Consider then for example the Schrödinger equation and its eigenvalues. INDEFINITE PRODUCT There is a clear link with the indefinite product [2], which appears to be the transform 𝒯. There are further examples there such as $$ \prod_x a^{1/x} = C\, a^{\psi(x)} $$ we can find similar relationships $$ \prod_x e^{\frac{1}{a x + b}} = C\, e^{\left( H_{x+\frac{b}{a}}-H_{\frac{b}{a}}\right)/a} $$ there is again a link to eigenvalues and roots as we get clean forms using the example above $$ \prod_x e^{\frac{1}{8-6x+x^2}} = C\, e^{\frac{5-2x}{2 (x-3) (x-2)}} $$ by inspecting a system with eigenvalues π, γ $$ \prod_x e^{\frac{1}{\pi \gamma-(\pi + \gamma)x+x^2}} = \exp \left(\frac{H_{x-\gamma }-H_{x-\pi }+H_{-\pi }-H_{-\gamma }}{\gamma -\pi }\right) $$ similarly $$ \prod_x e^{\frac{1}{(x-\pi)(x-\gamma)(x-e)}}= \exp \left(\frac{(\pi -\gamma ) H_{x-e}+(e-\pi ) H_{x-\gamma }+(\gamma -e) H_{x-\pi }+(e-\gamma ) H_{-\pi }+(\pi -e) H_{-\gamma }+(\gamma -\pi ) H_{-e}}{(\gamma -e) (\pi -e) (\pi -\gamma )}\right) $$ by this logic, we might consider extending to functions such as $$ \prod_x \Gamma(x) = C\, G(1+x) $$ where the "roots" are now the poles of Γ, i.e. for x ∈ {0, −1, −2, ⋯}.
Here G(x) is the Barnes-G function. $$ \prod_x e^{\frac{1}{\Gamma(x)}} = e^{\sum_x \frac{1}{\Gamma(x)}} $$ Matrix Representation of a Function Consider a matrix representation of a function $$ F\mathbf{x} = \mathbf{f} $$ for a matrix F and domain vector x resulting in a function vector f at each point in the domain... Can we have a series of 2 × 2, 3 × 3 and so on matrices that map the sample points $$ (0,1) \to (0,0) \\ \left(0,\tfrac{1}{2},1\right)\to\left(0,\cdot,0\right) \\ \left(0,\tfrac{1}{3},\tfrac{2}{3},1\right) \to \left(0,\cdot,\cdot,0\right) $$ PRODUCT INTEGRAL Consider the case where m is no longer an integer. We can analytically continue some of the right hand expressions to consider fractional values of the product. ITERATION OF PRODUCT Consider a transform (which is the indefinite sum) $$ Q[f] = \log\prod_x e^{f(x)} = \sum_x f(x) $$ we find that $$ Q^{(n)}[x] = \frac{\Gamma(x+1)}{\Gamma(2+n)\Gamma(x-n)} = \frac{1}{(n+1)!}\prod_{k=0}^{n} (x-k) $$ where the n implies iterating n times. We also have $$ \frac{Q^{(n)}[x^2]}{Q^{(n)}[x]} = \frac{2x-n}{n+2} $$ $$ \frac{Q^{(n)}[x^3]}{Q^{(n)}[x]} = \frac{6(x+1)(x-n-1)+(n+2)(n+3)}{(n+2)(n+3)} $$ $$ \frac{Q^{(n)}[x+1]}{Q^{(n)}[2x+1]} = \frac{x+1}{2x-n+1} $$ $$ Q^{(n)}\left[x+\frac{\pi}{\gamma}\right] = \frac{(-1)^{n-1}\, x\, (\gamma(x-n) + \pi(n+1))\,(1-x)_{n-1}}{\gamma\, \Gamma(n+2)} $$ $$ \frac{Q^{(n)}[ax+b]}{Q^{(n)}[cx+d]} = \frac{a(x-n)+(n+1)b}{c(x-n)+(n+1)d} $$ $$ \frac{Q^{(n)}[1+x+x^2]}{Q^{(n)}[1+x]} = \frac{2x^2-2nx+2x+n^2+n+2}{(n+2)(x+1)} $$ $$ \frac{Q^{(n)}[1+x+x^2+x^3]}{Q^{(n)}[1+x+x^2]} = \frac{(n+1)(n+2)(n+3)+(3n^2+7n+6)(x-n)+(8n+6)(x-n)^2+6(x-n)^3}{(n+3)\left((n+1)(n+2)+2(n+2)(x-n)+2(x-n)(x-n-1)\right)} $$ REFERENCES [1] - http://www.math.utah.edu/~gustafso/s2017/2270/labs/lab7-polyroot-qrmethod.pdf [2] - https://en.wikipedia.org/wiki/Indefinite_product
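Both the companion-matrix construction and the iterated-Q formulas above can be spot-checked; a sketch, assuming the indefinite-sum convention $Q[f](x) = \sum_{k=0}^{x-1} f(k)$:

```python
# Sketch 1: eigenvalues of the sign-flipped companion matrix of 8-6n+n^2
# are (-4, -2); a non-degenerate product check on 1 + n + n^2.
import numpy as np
from fractions import Fraction
from math import comb

lams = np.linalg.eigvals(np.array([[0.0, -1.0], [8.0, -6.0]]))
assert np.allclose(sorted(lams.real), [-4.0, -2.0])

l1, l2 = np.linalg.eigvals(np.array([[0.0, -1.0], [1.0, 1.0]], dtype=complex))
m, rhs = 6, 1.0 + 0.0j
for j in range(m):
    rhs *= (1 + l1 + j) * (1 + l2 + j)       # (1+l1)_m (1+l2)_m
lhs = 1.0
for n in range(1, m + 1):
    lhs *= 1 + n + n ** 2
assert abs(lhs - rhs) < 1e-8 * abs(lhs)

# Sketch 2: Q iterated n times on f(x) = x gives binomial(x, n+1), and
# Q^(n)[x^2] / Q^(n)[x] = (2x - n)/(n + 2).
def Q(f):
    return lambda x: sum(f(k) for k in range(x))

q1, q2 = (lambda x: x), (lambda x: x * x)
for n in range(1, 5):
    q1, q2 = Q(q1), Q(q2)
    for x in range(n + 2, 11):
        assert q1(x) == comb(x, n + 1)
        assert Fraction(q2(x), q1(x)) == Fraction(2 * x - n, n + 2)
```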
ABSTRACT We consider a notion of excess curvature, which is simple in definition but quickly produces hard problems. We compare the arclength integral of a standard function plotted in a region to that of a flat line, and look at the difference across the entire integration domain. For probability distributions this produces a finite constant which is often unknown. INTRODUCTION If the arclength of a curve f(x) in a region x ∈ [a, b] is given by $$ \mathcal{A}_a^b[f] = \int_a^b \sqrt{1+\left(\frac{\partial f}{\partial x}\right)^2} \; dx $$ and for the curve f(x)=c, for some constant, we have 𝒜ab[c]=b − a, we can consider the _excess_ curvature as the difference 𝒜ab[f]−𝒜ab[c], which is curvature beyond the domain range. As an integral this is simply $$ \mathcal{B}_a^b[f] = \int_a^b \sqrt{1+\left(\frac{\partial f}{\partial x}\right)^2} -1 \; dx $$ the reason for doing this is to extend to an infinite (a → −∞, b → ∞) or semi-infinite domain, where the individual 𝒜 terms diverge, but the difference is convergent. We can then remove the limits a and b, as long as we are clear on the support being integrated over. EXAMPLES Consider the unit Gaussian $$ f(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} $$ we then have ℬ[f]=0.069796988349688... which does not immediately have a clear closed form. If we shift the mean of this distribution it has no effect on the final result, as the hump is the same distortion to the line as before. We can get closed forms for a few very simple distributions, for example the unit triangle t(x) and its normalised powers, which give $$ \mathcal{B}[t(x)] = 2\sqrt{2} - 2 \\ \mathcal{B}\left[\tfrac{3}{2}t^2(x)\right] = \sqrt{10}+\tfrac{1}{3}\sinh^{-1}(3) - 2 \\ \mathcal{B}\left[2t^3(x)\right] = \frac{2\sqrt{37}}{3} -2 - \frac{2(1+i)}{3\sqrt{3}}F\left(i\sinh^{-1}\left((1+i)\sqrt{3}\right),-1\right) $$ where F is the elliptic F function, showing how complicated these closed forms get. Because the support of these is always [−1, 1], there is always a term of −2 which is the base curvature of that support. For infinite support it is less clear how this factor will look.
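The quoted Gaussian constant is easy to reproduce numerically; a minimal sketch:

```python
# B[f] = int sqrt(1 + f'(x)^2) - 1 dx for the unit Gaussian; the tails
# beyond |x| = 10 are negligible at this precision.
from mpmath import mp, mpf, sqrt, exp, pi, quad

mp.dps = 20
fp = lambda x: -x * exp(-x ** 2 / 2) / sqrt(2 * pi)
B = quad(lambda x: sqrt(1 + fp(x) ** 2) - 1, [-10, 0, 10])
assert abs(B - mpf("0.069796988349688")) < mpf(10) ** -9
```

To leading order this is $\frac12\int f'^2\,dx = \frac{1}{4\sqrt{\pi}} \approx 0.0705$, which is why the exact value sits just below that.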
For a function which is not a probability distribution, such as f(x)=ex, the excess curvature in the infinite limit is unbounded. But we can find closed forms for the partial integral $$ \mathcal{B}^y_{-\infty}[e^x] = -(y+1) + \sqrt{1+e^{2y}} + \log 2 - \coth^{-1}\left(\sqrt{1+e^{2y}}\right) $$ it is apparent that the term −(y + 1) looks like the support term, and is separable even in this semi-infinite case. On the range [0, ∞), we have $$ \mathcal{B}\left[\frac{e^{-x}}{2}\right] = \frac{\sqrt{5}}{2} - 1 - \sinh^{-1}(2) + \log(4) $$ on the symmetric full support, for which this is a normalised distribution, we can double this as $$ \mathcal{B}\left[\frac{e^{-|x|}}{2}\right] = \sqrt{5} - 2 - 2\sinh^{-1}(2) + 4\log(2) $$ In general it seems on the range [0, ∞), we have $$ \mathcal{B}\left[\frac{a e^{-a x}}{2}\right] = -\frac{1}{a} + \frac{\sqrt{a^4+4}}{2a} - \frac{1}{a}\sinh^{-1}\left(\frac{2}{a^2}\right) + \frac{\log 4}{a} - \frac{2\log a}{a} $$ COMBINATIONS For combinations of unit triangles t(x) we have non-additive effects. $$ t(x) \to (2\sqrt{2}-2) \\ \tfrac{1}{2}(t(x-\delta)+t(x+\delta)) \to (2\sqrt{5} - 4), \quad \delta \ge 1 $$ in between this, there is some kind of overlap function, where the curvature of the combination is different to the pair or the individual. For $\frac{1}{2}(t(x-d)+t(x+d))$: $$ \mathcal{B} = \begin{cases} 2 \left(\sqrt{5}-2\right) & d>1\lor d\leq -1 \\ -2 \left(\sqrt{5}-2\right) d & -1<d\leq -\tfrac{1}{2} \\ 2 \left(\sqrt{5}-2\right) d & \tfrac{1}{2}\leq d\leq 1 \\ 2 \left(\sqrt{2}-1\right) & d=0 \\ 2 \left(\left(\sqrt{5}-2\sqrt{2}\right) d+\sqrt{2}-1\right) & 0<d<\tfrac{1}{2} \\ 2 \left(\left(2\sqrt{2}-\sqrt{5}\right) d+\sqrt{2}-1\right) & -\tfrac{1}{2}<d<0 \end{cases} $$ for three triangles fully separated we have $2\sqrt{10} - 6$, for four we have $2\sqrt{17} - 8$. In general it seems $2\sqrt{n^2+1}-2n$ for n non-overlapping triangles scaled by 1/n. If the triangles are not scaled, then we have additive curvature as expected to give $2n(\sqrt{2}-1)$. We could potentially see this as a form of convolution, excess curvature convolution, where for a pulse f(x) we measure $$ \mathcal{B}[f * f](t) = \int_{-\infty}^\infty \sqrt{1+\left(f'(x)+f'(x-t)\right)^2}-1 \; dx $$ or with averaging $$ \mathcal{B}[f * f](t) = \int_{-\infty}^\infty \sqrt{1+\frac{1}{4}\left(f'(x)+f'(x-t)\right)^2}-1 \; dx $$ WHY DO THIS? Is there an 'energy' associated with curvature? Consider a probability density of a particle or wavepacket, or a deformation in a surface such as a wave. What about constructive and destructive interference? Consider calculus of variations trying to minimise a quantity.
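The general $[0,\infty)$ family can be verified against direct quadrature; a sketch:

```python
# f(x) = a e^{-ax}/2 on [0, inf), so f'(x) = -a^2 e^{-ax}/2; compare the
# numerical excess curvature with the stated closed form.
from mpmath import mp, mpf, sqrt, asinh, log, exp, quad, inf

mp.dps = 20

def excess(a):
    return quad(lambda x: sqrt(1 + (a ** 2 * exp(-a * x) / 2) ** 2) - 1,
                [0, inf])

def closed(a):
    return (-1 / a + sqrt(a ** 4 + 4) / (2 * a) - asinh(2 / a ** 2) / a
            + log(4) / a - 2 * log(a) / a)

for a in (mpf(1) / 2, mpf(1), mpf(2)):
    assert abs(excess(a) - closed(a)) < mpf(10) ** -12
```

Note $\sinh^{-1}(2/a^2) = \coth^{-1}\!\big(\sqrt{a^4+4}/2\big)$, which is how the antiderivative's inverse-hyperbolic term collapses to this form.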
If we take an ensemble of particles, each with Gaussian density, what is the minimum-curvature configuration? For the triangular function above, the minimum comes with (t(x)+t(x − 1))/2. This brings the particles together but does not overlap them, and does not push them too far apart. What is the force field associated with such dynamics? Can we define a potential energy function V(x) or pairwise interaction which has the same minima and gradients? Upgrading to a Gaussian means the tails will interact over distance. For a unit Gaussian, the equilibrium point lies somewhere between a shift of 2.447 and 2.451; another minimum is full separation of the two bodies.
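A coarse scan over the separation makes this concrete; a sketch (the grid here is an illustrative choice, not the bisection used for the quoted 2.447-2.451 bracket):

```python
# Excess curvature of f(x) = (g(x-d) + g(x+d))/2 for a unit Gaussian g,
# scanned over the shift (= 2d) between the two means.
from mpmath import mp, mpf, sqrt, exp, pi, quad

mp.dps = 10

def gprime(u):
    return -u * exp(-u ** 2 / 2) / sqrt(2 * pi)

def B(shift):
    d = shift / 2
    fp = lambda x: (gprime(x - d) + gprime(x + d)) / 2
    return quad(lambda x: sqrt(1 + fp(x) ** 2) - 1, [-10, 0, 10])

vals = [(s, B(mpf(s))) for s in ("2.0", "2.2", "2.45", "2.7", "3.0")]
print(min(vals, key=lambda p: p[1]))   # the text places the argmin near 2.45
```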
Consider a general template to generate sequences (or polynomials) using the inverse Mellin transform and a kernel function ϕ(s) $$ p_k(x) = f(x)\, \mathcal{M}^{-1}[\phi(s)\, q_k(s)](x) $$ here pk(x) and qk(x) are polynomials, and f(x) is a function that cancels out with the generating form from the inverse Mellin transform. This is observed with an example: setting qk(s)=sk, ϕ(s)=Γ(s) and f(x)=ex, we have $$ B'_k(x) = e^x\, \mathcal{M}^{-1}[\Gamma(s)\, s^k](x) $$ where B′k(x) appear to be some form of alternating Bell polynomials, and the coefficients of these polynomials are made up of Stirling numbers of the second kind S₂(n, k) as $$ B'_n(x) = \sum_{k=0}^n (-1)^{n-k} S_2(n,k)\, x^k $$ we also find that $$ \sum_{k=0}^n \frac{(-1)^{n-k} S_2(n,k)}{2^n}\, x^{k/2} = 2\, e^{\sqrt{x}}\, \mathcal{M}^{-1}[\Gamma(2s)\, s^n](x) $$ very interestingly $$ (1+x)^{n+1}\, \mathcal{M}^{-1}[\Gamma(s)\Gamma(1-s)\, s^n](x) = \sum_{k=0}^n (-1)^{n-k-1} A[n,k]\, x^{k+1}, \quad k>0 $$ with A(n, k) the Eulerian numbers. The agreement is off slightly for k = 0. There is a more general form to this, $$ (1+x)^{n+t}\, \mathcal{M}^{-1}\left[\frac{\Gamma(s)\Gamma(t-s)}{\Gamma(t)}\, s^n\right](x) $$ which for t = 1 gives the Eulerian numbers, and for t = 2 is related to A199335. We can even insert t = 1/2, and get a sequence which is related to A185411 (with an additional factor of 1/2n). FIXING THE SIGNS We now consider a modification to the transform to fix the signs; define the inverse-Q transform as $$ p_n(x) = \mathcal{Q}^{-1}[\phi(s)](n,x) = \mathcal{M}^{-1}[\phi(s)\, (-s)^n](-x) $$ where we have chosen the inverse because of the inverse Mellin transforms. Now, with $\mathcal{M}^{-1}[\Gamma(s)](x) = e^{-x}$, we have $$ \mathcal{M}^{-1}[\Gamma(s)](x)\; \mathcal{Q}^{-1}[\Gamma(s)](n,x) = B_n(x) = \sum_{k=0}^n S_2(n,k)\, x^k $$ for Bell polynomials Bn(x) and interpreting 0⁰ as 1, which is common in combinatorics.
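Both polynomial families can be generated symbolically; a sketch, using $\mathcal{M}^{-1}[s\,\phi(s)](x) = -x\frac{d}{dx}\mathcal{M}^{-1}[\phi(s)](x)$ so that $B'_n(x) = e^x(-x\frac{d}{dx})^n e^{-x}$, with the $x \to -x$ substitution giving the ordinary Bell polynomials:

```python
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

x = sp.symbols('x')
for n in range(0, 6):
    expr = sp.exp(-x)
    for _ in range(n):
        expr = -x * sp.diff(expr, x)          # apply (-x d/dx)
    b_prime = sp.expand(sp.exp(x) * expr)     # primed (alternating) Bell
    assert b_prime == sp.expand(
        sum((-1) ** (n - k) * stirling(n, k) * x ** k for k in range(n + 1)))
    # sign-fixed variant: (-s)^n and the argument -x undo the alternation
    bell = sp.expand(sp.exp(-x) * (-1) ** n * expr.subs(x, -x))
    assert bell == sp.expand(
        sum(stirling(n, k) * x ** k for k in range(n + 1)))
```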
It’s still (perhaps) not entirely right, because for ϕ(s)=Γ(s)Γ(1 − s) we have $$ (1-x)^{n+1}\, \mathcal{Q}^{-1}[\Gamma(s)\Gamma(1-s)](n,x) = x\, A_n(x), \quad n>0 $$ relating to Eulerian polynomials; equally one could say $$ \mathcal{Q}^{-1}[\Gamma(s)\Gamma(1-s)](n,x) = \frac{x\, A_n(x)}{(1-x)^{n+1}}, \quad n>0 $$ TABLE OF RELATIONS $$ \begin{array}{c|c|c} \phi(s) & f(x) & \text{Numbers} \\ \hline \Gamma(s) & e^x & \text{Stirling } S_2 \\ \Gamma(s)\Gamma(1-s) & (1+x)^{n+1} & \text{Eulerian numbers} \end{array} $$ we can see that the function f(x) is clearly related to ℳ−1[ϕ(s)], which is exciting because, by assuming qk(s)=sk for all inputs, it links the kernel ϕ(s) directly to a special class of numbers T(n, k). We can ask questions such as: which kernel ϕ(s) produces the binomials?
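The Eulerian row of the table reduces to a classical identity that is easy to confirm; a sketch:

```python
# (1-x)^{n+1} (x d/dx)^n [1/(1-x)] = x A_n(x), building A_n from the
# standard Eulerian recurrence A(n,k) = (k+1)A(n-1,k) + (n-k)A(n-1,k-1).
import sympy as sp

x = sp.symbols('x')

def eulerian(n):
    A = [1]
    for m in range(2, n + 1):
        A = [(k + 1) * (A[k] if k < len(A) else 0)
             + (m - k) * (A[k - 1] if k >= 1 else 0) for k in range(m)]
    return A                                  # A(n,k) for k = 0..n-1

for n in range(1, 6):
    expr = 1 / (1 - x)
    for _ in range(n):
        expr = x * sp.diff(expr, x)
    lhs = sp.expand(sp.cancel((1 - x) ** (n + 1) * expr))
    rhs = sp.expand(sum(a * x ** (k + 1) for k, a in enumerate(eulerian(n))))
    assert lhs == rhs
```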
Consider the poly-Bernoulli numbers. If we take the series expansion for tan(x) but simply replace the Bernoulli numbers with the next order poly-Bernoulli numbers, we reach a complicated series expansion with no clear pattern. We can call this tan(2)(x), defined by $$ \tan^{(2)}(x) = \sum_{n=1}^\infty \frac{(-1)^{n-1} 2^{2n}(2^{2n}-1)B^{(2)}_{2n}}{(2n)!} x^{2n-1} $$ we can also define $$ \tanh^{(2)}(x) = \sum_{n=1}^\infty \frac{2^{2n}(2^{2n}-1)B^{(2)}_{2n}}{(2n)!} x^{2n-1} $$ these are the poorly behaved series, but the analogues of sin and cos may have well behaved series representations. We can assume that $$ \tan^{(2)}(x) = \frac{\sin^{(2)}(x)}{\cos^{(2)}(x)} $$ and also $$ \tanh^{(2)}(x) = \frac{\sinh^{(2)}(x)}{\cosh^{(2)}(x)} $$ and attempt to retain the relationship $$ \cosh^{(2)}(x) = \cos^{(2)}(ix) $$ and so on. We then match up $$ \tan^{(2)}(x) = -\frac{x}{6} - \frac{7x^3}{45} - \cdots = \frac{a_1 x + a_2 x^3 + \cdots}{b_1 + b_2 x^2 + \cdots} $$ for $$ \sin^{(2)} = \sum_{k=1}^\infty a_k x^{2k-1} $$ $$ \cos^{(2)} = \sum_{k=1}^\infty b_k x^{2k-2} $$ we could also try to impose other constraints such as $$ \left(\sin^{(2)}\right)^2 + \left(\cos^{(2)}\right)^2 = 1 $$ in order to derive coefficients. OTHER STRATEGIES It seems a related function $$ \tan_b^{(2)}(x) = \sum_{n=1}^\infty \frac{3^{2n}(3^{2n}-1)B^{(2)}_{2n}}{(2n)!} x^{2n-1} $$ has a very interesting series reversion property, where the denominators of the expansion are the same (?) for both forward and reverse series? The coefficients also bring the coefficient of x to 1 and keep the terms positive... Then we can assume that the two functions are defined with a very special symbol, written n? below (acting like a factorial in the same way?) $$ \frac{a_1}{b_1} = 1 $$ assuming they are both 1? $$ \frac{a_2 b_1 - a_1 b_2}{b_1^2} = \frac{1}{5} $$ $$ a_2 - b_2 = \frac{1}{5} = -\frac{1}{3?} + \frac{1}{2?} $$ $$ \frac{1}{1?} = 1 \\ \frac{1}{1?\,2?} - \frac{1}{3?} = \frac{1}{5} \\ \cdots = \frac{1}{175} $$ $$ \frac{1}{1?} = 1 \\ \frac{1}{2?\,3?}-\frac{1}{2?\,3?} = \frac{1}{5} \\ \frac{1}{2?} - \frac{1}{2?\,3?}-\frac{1}{4?}+\frac{1}{5?} = \frac{1}{175} $$ for $$ \frac{1}{2?\,3?} = \frac{1}{5} $$
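The poly-Bernoulli inputs are computable exactly from Kaneko's closed form, which makes the leading coefficients above reproducible; a sketch:

```python
# B_n^{(k)} = sum_{m=0}^n (-1)^{m+n} m! S2(n,m) / (m+1)^k  (Kaneko), then
# the tan^(2) coefficients (-1)^{n-1} 2^{2n}(2^{2n}-1) B_{2n}^{(2)} / (2n)!.
from fractions import Fraction
from math import comb, factorial

def stirling2(n, m):
    return sum((-1) ** (m - j) * comb(m, j) * j ** n
               for j in range(m + 1)) // factorial(m)

def poly_bernoulli(n, k):
    return sum(Fraction((-1) ** (m + n) * factorial(m) * stirling2(n, m),
                        (m + 1) ** k) for m in range(n + 1))

assert poly_bernoulli(1, 2) == Fraction(1, 4)
assert poly_bernoulli(2, 2) == Fraction(-1, 36)

coeffs = [Fraction((-1) ** (n - 1) * 4 ** n * (4 ** n - 1), factorial(2 * n))
          * poly_bernoulli(2 * n, 2) for n in range(1, 4)]
assert coeffs[0] == Fraction(-1, 6)
assert coeffs[1] == Fraction(-7, 45)
```

This reproduces the $-\frac{x}{6}-\frac{7x^3}{45}-\cdots$ expansion quoted above; note that sign conventions for poly-Bernoulli numbers vary between sources.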